The quality measures industrial complex has run amok for too long. We are all guilty, and that is why it is on all of us to find a better way.
On March 22nd of 2020, in the first year of the pandemic, the Centers for Medicare and Medicaid Services (CMS) announced it was suspending all reporting requirements for healthcare providers participating in its largest quality reporting programs. The long-term implications of this shift have yet to be observed, but the short-term implications were immediately realized: staff from the top to the bottom of healthcare delivery systems were relieved of a significant administrative burden that likely had zero impact on the quality of care being delivered at the time. The quality metrics underlying these programs rely on claims-based or chart-abstracted data sources, which typically imply a years-long lag between the time care is delivered and the time an examination of the quality of that care can be completed. The burden of reporting was suspended; the provider dedication and professionalism on display during the months that followed demonstrated, however, that the desire to provide high-quality care and to improve was not. As our health system copes with the ongoing impacts of the pandemic, it is incumbent on CMS and commercial payers not simply to resume previously suspended quality measure collection efforts, but to recognize that their abrupt absence was seen by many as inconsequential during the health system’s greatest test of its durability. It is now necessary to demand more from our improvement efforts, to recognize that our quality metrics-based programs have failed to deliver the types of improvements we need them to, and to focus on what truly motivates and enables health systems to improve.
I have spent the past two decades assisting public and private payers in designing cost and quality improvement programs, primarily through alternatives to fee-for-service payment arrangements. These programs tend to fall under the familiar labels of pay-for-performance, value-based purchasing (VBP), or larger accountable care constructs like accountable care organizations (ACOs). In attempting to incentivize more desirable levels of performance in health systems, however, nearly all of these reform efforts have lost sight of the importance of learning as a fundamental differentiator in achievement. That is because the majority still focus on implementing updated clinical protocols and models believed to be associated with higher care delivery standards, as opposed to promoting the type of structured learning needed to continuously update care delivery methods ad infinitum, agnostic of patient population or provider specialty.
A program design approach that relies on supposedly accurate metrics for assessing the quality of care delivered, and on a belief that improving those metric values can be accompanied by cost reductions calibrated to the patient populations in question, has proven naive at best and an abject failure at worst. Designing improvement efforts on this logical underpinning leaves no room for a few fundamental truths: providers care about their patients and should be the source of feedback loops within improvement efforts; mistakes happen from the top to the bottom of any delivery system, but improvement is possible given the right supportive structures; and cost and quality metrics should be treated as indicators of potential problems requiring greater forensic exploration, not as accurate portrayals of system behavior.
Incentivizing distraction
Over the past 10 years, CMS has paid commercial vendors over $1.3 billion to develop over 2,300 quality measures used across 34 federal and state programs. These measures are also used in many commercial value-based purchasing arrangements nationwide. The assumption driving this obsession with measures is that some point in the past represents an accurate baseline, or, if projected forward, a counterfactual against which the impacts of an intervention can accurately be measured. Empirical evidence does not support this, however. Rather, mounting evidence shows that increasing the number and specificity of quality measures in pay-for-performance programs has promoted adverse behaviors among providers (e.g., reducing the tracking of events like hospital-acquired infections, diverting needed resources away from improving patient care and toward reporting activities, and driving provider animosity toward and isolation from larger efforts to improve care). Beyond this, developing static baselines is inherently an effort that disregards the exogenous conditions of the delivery environment, and risk adjustment is an infinitely complex effort that will always produce unintended consequences. Reporting on these measures also costs providers an estimated $15.4 billion annually.
While supporting states, I have worked directly with CMS to negotiate the structures underlying several major Medicaid reform efforts that use a combination of increased financial risk and quality measurement. The process for selecting and approving quality measures is nearly identical across all instances: form a committee of improvement and clinical experts within the state to identify a potential measure set, based primarily on what has been used in the past and what CMS is currently using; gain consensus from stakeholders within the state (typically the largest provider and hospital groups and associations); and finally, submit the list to CMS to begin negotiations. Final metric lists are then established through a series of e-mails and phone calls that draw primarily on the experience and knowledge of the two to three senior-most officials assigned to the program application. No standardized review process is used, and no standard scores are assigned to measures; the process amounts to a series of conversations among informed professionals. This time-consuming and unscientific selection process was summarized by the Government Accountability Office in 2019:
CMS does not have procedures to ensure systematic assessments of quality measures under consideration against each of its quality measurement strategic objectives, which increases the risk that the quality measures it selects will not help the agency achieve those objectives as effectively as possible. These procedures, such as using a tool or standard methodology to systematically assess each measure under consideration, could help CMS better achieve its objectives.
CMS reports that it has reached its goal of 90% of payments tied to value-based reimbursement methodologies, nearly all of which rest on a chassis of performance metrics. Not surprisingly, many of these flagship programs, like the Hospital Value-Based Purchasing Program, which uses 20 separate quality metrics to measure improvement, have been found to have no effect on quality of care, patient satisfaction, or mortality.
Losing the importance of learning amidst the reporting noise
Although the flaws in the measure selection process are clear, focusing there is a distraction from the central issue: the use of quality metrics as a basis for motivating and measuring change. The central problem with this approach is that it misses the need to create the capacity to improve. Programs assume that an evidence-based delivery model, if implemented correctly, will result in the favorable movement of an associated quality measure. Not only has this assumption been proven incorrect many times, it also fails to help health systems learn how to identify areas of lagging performance, isolate potential interventions, and systematically evaluate their implementation. If the failure of so many of our most commonly used metrics is so well known, and if frustrations with quality metrics among policymakers, regulators, and providers are so aligned, why is there not a more robust conversation about what will replace them as the basis for future reform efforts?
Dr. Don Berwick, a founder of the Institute for Healthcare Improvement (IHI), often speaks of a need to move into a third era of quality measurement and improvement. The first was a "doctor knows best" era in which trust in providers was absolute and rampant asymmetric information obscured any possibility of objectively measuring the quality of a provider’s care. Today’s second era relies on very little trust and is primarily data-driven: measures can be disaggregated to granular levels, and successes and failures are expressed across dashboards displaying flawed quality metrics maintained by a dizzying network of measure stewards.
The third epoch, the one that lies ahead, will be difficult for us to collectively enter, given that it must once again be fundamentally based on trust. Not the blind trust of the first era, but trust informed by data and transparency, and trust that simultaneously recognizes how much we have to learn about how health systems can make the changes we ask of them. The classic Donabedian model suggests that certain structures lead to certain processes, which in turn lead to certain outcomes. In applying our current models of quality measurement, however, we have fixed the structures (incentives) and the outcomes (the measures), while still having little idea what processes can get health systems from one to the other. A system that does not fix these structures and outcomes, and instead incentivizes systematic improvement itself, would encourage the development of supportive processes through which improvement efforts can take place and refine themselves using continuous empirical results.
Dr. Lara Goitein, a pulmonologist and president of the medical staff at Christus St. Vincent Regional Medical Center in New Mexico, has described her organization’s Clinician-Directed Performance Improvement (CDPI) program. CDPI gives practicing physicians and other clinicians protected time, support, and training to conduct the performance improvement projects they believe are most important for their hospital services. CDPI also provides courses on the theory and methods of quality improvement, covering the type of education providers do not receive during their medical training: how to identify improvement opportunities, define interventions to elicit them, and measure their results. Teams are encouraged to choose their own improvement projects based on a simple question: “What is making you worry about your patients?” Over five years, Christus St. Vincent has completed 37 separate CDPI initiatives, many still using quality measures in meaningful ways but not as the top determinant of success. The flexibility to choose their own initiatives, structure their own interventions, and grade their own success has produced significant gains across the majority of interventions in terms of patient care, cost savings, and provider satisfaction.
The CDPI example is one in which a large health system placed trust in its providers to identify and improve the care they deliver to their patients. By ensuring sufficient access to the science and tools of quality improvement and guaranteeing protected time for providers to learn and improve, the program has shown great promise. Reporting on progress is still an integral part of the program, though reporting focuses on the quality improvement efforts themselves rather than on complex and burdensome metrics.
Management research has shown for years that over-reliance on performance metrics can undermine real improvement when circumstances are complex and rapidly evolving. Human beings are wired to fixate on incentives and achievement, and they are prone to do so at the expense of thinking and learning. In the end, we need to think "it’s not the model, it’s the mindset" when exploring how health systems effectively absorb change and rapidly deploy new constellations of teams capable of executing the newest evidence-based care models. The third era of measurement and improvement needs to treat that mindset as the desired outcome, rather than promoting and measuring the implementation of any one particular delivery model as the ultimate goal of incentive-based reform initiatives.
Charting a path forward focused on learning
The next evidence-based model is always going to be just over the horizon, and that type of innovation is welcome. Incentive programs can help keep our focus on the horizon by emphasizing and paying for learning and structured experimentation. This will allow the next model to be implemented, observed, and improved upon whenever it is relevant for a delivery system, promoting a mindset focused on continual and systematic improvement rather than the capacity to report cumbersome metrics on an artificial timeline. Had our current incentive-based payment system spent the past ten years developing this type of learning capacity, the suspension of reporting requirements would have been a meaningless gesture, because continuous improvement would have remained the core of what each health system was doing as it rapidly refined care protocols in the early days of the pandemic through systematic testing, observation, and refinement.
This evolution will not be simple, but there are three ways that our publicly and commercially funded delivery system reform efforts could immediately begin to move toward a third era of quality measurement and improvement:
My team is preparing to launch a pilot project for a client that incorporates the latter two points above. The VBP pilot will be built on a total cost of care construct, with shared savings earned either by achieving quality measure goals or, failing that, by completing analyses to determine the root causes of missed goals along with rapid-cycle Plan-Do-Study-Act (PDSA) efforts in line with evidence-based quality improvement practice. The model will be largely overseen by organizations controlled by the 1,500 participating primary care physicians themselves and will be supported through various learning opportunities as well as technical assistance.
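To make the pilot's two-path earning structure concrete, the eligibility logic could be sketched roughly as follows. This is a hypothetical illustration only; the names, fields, and thresholds are my own assumptions for exposition, not the actual rules of the program.

```python
# Hypothetical sketch of a two-path shared-savings test: a measure "counts"
# either because its goal was met, or because the group completed a
# root-cause analysis plus at least one rapid-cycle PDSA effort for it.
from dataclasses import dataclass


@dataclass
class MeasureResult:
    name: str
    goal_met: bool
    root_cause_analysis_done: bool = False
    pdsa_cycles_completed: int = 0


def eligible_for_shared_savings(results, min_pdsa_cycles=1):
    """Return True if every measure satisfies one of the two paths:
    (1) the quality goal was achieved outright, or
    (2) a root-cause analysis was completed along with the required
        number of rapid-cycle PDSA efforts."""
    def measure_ok(m):
        return m.goal_met or (
            m.root_cause_analysis_done
            and m.pdsa_cycles_completed >= min_pdsa_cycles
        )
    return all(measure_ok(m) for m in results)


# Example: one goal met, one missed but remediated through analysis + PDSA.
results = [
    MeasureResult("a1c_control", goal_met=True),
    MeasureResult("readmissions", goal_met=False,
                  root_cause_analysis_done=True, pdsa_cycles_completed=2),
]
print(eligible_for_shared_savings(results))  # True
```

The design point the sketch captures is that a missed goal is not an automatic failure: structured learning about why the goal was missed is itself a qualifying path to the incentive.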
Our health system needs the best tools available for it to continually improve and adopt innovative care practices quickly and in a manner that objectively measures their impacts. The shortcomings of the current quality measures regime have been on display for far too long. It is time that incentive-based reform efforts shift away from dictating the implementation of the latest model of care delivery and toward promoting the mindset of systematic improvement.