In Jason Burke's useful and well-written book Health Analytics (John Wiley & Sons, 2013), the author, a veteran health care data analytics consultant, lists the excuses leaders of health care organizations give when planning to use their own data in making business decisions. "The data isn't good enough," inertia, or fear may delay implementation -- and the whole project may come to seem so daunting that organizations conclude the work is best left to consultants (like the author), or simply do nothing at all.
Another, increasingly attractive (and investable) option (if I read the startup/blog/twittersphere/VC rags correctly on this) is for health care organizations to buy (or bundle) "vetted" third-party data. In fact, selling useful third-party data seems to be the main engine of growth for many current plays in digital health, particularly certain EHR companies, which seem to be staking their futures not only on their ability to let doctors compose notes quickly but also on offering actionable intelligence to their subscribers (see, for example, Jonathan Bush of athenahealth's pitch here).
However, with the electronic medical records mandate, many health care provider organizations (even small ones) have gone it alone at this point and are therefore (1) collecting their own data and (2) in a unique position to resist these charms. Namely, a proprietary or custom EHR puts an organization's own data at its disposal. After spending a pretty penny on these systems, and given the Big Data hype, it would be no surprise if some health care organizations were in fact champing at the bit to convert their own proprietary data into actionable information.
This trend presents a headwind to any company in the health care data analytics space. If organizations believe they can go "DIY" with their data, why would they pay for consultants, technology, or expertise when the analysis could be shifted to their own management (ostensibly the reason they hired their MBAs in the first place)?
Here is a summary of the path typical health provider organizations follow when going down this road:
Phase 1: Health care organization focuses on monitoring or improving some key quality metric.
In this phase, the entity focuses on the metric responsible for the most burdensome regulatory or reimbursement-related issue in its business model. The emphasis will be on establishing what this metric is -- a key number that was previously compiled manually, over coffee and stale carrot cake, and that can now be updated automatically, in real time. There will be an obsession with this number, and the entire organization will be tasked with improving it. Analytics per se will prove relatively unimportant, beyond the EHR's usefulness for compiling the results into formal management reports. Mechanisms to fix the metric will be traditional, i.e., breaking heads and cracking whips. No real clinical improvements will be made, but the numbers may improve due to increased reporting, time-limited awareness of the issues, or self-conscious input massaging.
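The "key number" in Phase 1 is usually simple enough to compute directly from an EHR export. A minimal sketch, assuming a hypothetical 30-day readmission rate as the metric and an invented record format (both are illustrative, not from any real EHR):

```python
from datetime import date

# Hypothetical EHR export: one dict per discharge record (invented format).
discharges = [
    {"patient_id": 1, "discharged": date(2023, 1, 5),  "readmitted_within_30d": False},
    {"patient_id": 2, "discharged": date(2023, 1, 9),  "readmitted_within_30d": True},
    {"patient_id": 3, "discharged": date(2023, 1, 12), "readmitted_within_30d": False},
    {"patient_id": 4, "discharged": date(2023, 1, 20), "readmitted_within_30d": True},
]

def readmission_rate(records):
    """The 'key number': share of discharges readmitted within 30 days."""
    if not records:
        return 0.0
    return sum(r["readmitted_within_30d"] for r in records) / len(records)

print(f"30-day readmission rate: {readmission_rate(discharges):.1%}")  # -> 50.0%
```

Re-running this against a live export is what turns the coffee-and-carrot-cake compilation into a real-time dashboard number; the hard part, as the phases below suggest, is everything after that.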
Phase 2: Quality fails to improve on this effort, all but the most diligent whip-crackers grow fed up with organizational inertia, and upper management releases a concerted plan to improve the metric. This involves systemic reorganization, new processes, trainings, and meetings to get the plan into place. Significant resources and time may be consumed at this stage.
Phase 3: The new processes may or may not succeed in changing the metric; finding out usually takes several months of experimentation. If, at the end, there is a significant change, the next phase involves prediction. Can we predict which patients are going to produce a crappy quality-metric outcome? If so, do we prevent them from entering the system altogether (and how do we get this information to the gatekeepers of our system), or, if we are forced to accept these folks, how do we intervene early?
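The Phase 3 prediction question can start far cruder than machine learning. A minimal sketch of flagging patients at intake, with hypothetical features and hand-picked weights standing in for a model actually fit to the organization's own historical data:

```python
# Hypothetical intake features and weights; in practice the weights would come
# from fitting a model (e.g. logistic regression) to historical outcomes.
RISK_WEIGHTS = {"prior_admissions": 0.4, "chronic_conditions": 0.3, "age_over_65": 0.5}

def risk_score(patient):
    """Crude linear risk score for a poor quality-metric outcome."""
    return sum(w * patient.get(feature, 0) for feature, w in RISK_WEIGHTS.items())

def flag_for_early_intervention(patients, threshold=1.0):
    """Surface high-risk patients to the system's gatekeepers at intake."""
    return [p["id"] for p in patients if risk_score(p) >= threshold]

patients = [
    {"id": "A", "prior_admissions": 3, "chronic_conditions": 2, "age_over_65": 1},
    {"id": "B", "prior_admissions": 0, "chronic_conditions": 1, "age_over_65": 0},
]
print(flag_for_early_intervention(patients))  # -> ['A']  (score 2.3 vs. 0.3)
```

The point is not the model's sophistication but the plumbing: getting the score in front of whoever decides admissions, early enough to act on it.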
Phase 4: Emphasis shifts from defining the outcome measure to greater stringency on incoming measures, such as data and interviewer standardization at the point of intake, more reliable instruments, or staff training. The rude awakening here is typically that intervention measures designed to make a real dent in early intervention cost a significant amount, possibly more than the outcome measures justify, or may take substantial time to return their investment.
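The Phase 4 rude awakening is ultimately arithmetic. A back-of-envelope sketch, with every figure invented, of when (if ever) a hypothetical early-intervention program pays for itself:

```python
import math

def months_to_break_even(program_cost_per_month, avoided_events_per_month,
                         savings_per_event, upfront_cost):
    """Months until cumulative savings cover a hypothetical intervention program."""
    net_monthly = avoided_events_per_month * savings_per_event - program_cost_per_month
    if net_monthly <= 0:
        return None  # the program never pays for itself
    return math.ceil(upfront_cost / net_monthly)

# Invented figures: $120k upfront, $5k/month to run, avoiding 2 events/month
# at $8k in savings each.
print(months_to_break_even(5000, 2, 8000, 120000))  # -> 11
```

Run with less flattering assumptions, the function returns None -- which is exactly the case where the intervention costs more than the outcome measure justifies.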
Read carefully, this is the beginning of an outline of how to develop add-on data products for health care organizations. The biggest challenges anyone in the digital health analytics space faces are the growing ease with which organizations can collect their own proprietary data, partnerships for data sharing, open-source data, and the limited return on investment some data products deliver given the realities of implementing processes to change the metrics.