With the full clout of the Centers for Medicare and Medicaid Services currently being brought to bear on healthcare providers to meet high standards for patient data interoperability and accessibility, it would be easy to assume the only reason this goal was not accomplished long ago is simply a lack of will. Interoperable data? How hard can that be? Much harder than you think, it turns out.
To dig into why this is the case, we asked Pat Helland, a principal architect at Salesforce, to speak with James Agnew (CTO) and Adam Cole (senior solutions architect) of Smile CDR, a Toronto, Ontario-based provider of a leading platform used by healthcare organizations to achieve Fast Healthcare Interoperability Resources (FHIR) compliance. They discuss the efforts and misadventures witnessed along the way to a point where it no longer seems inconceivable for healthcare providers to exchange patient records.
Pat Helland: I find it amazing the field of healthcare has so far proved essentially impervious to all efforts to provide for better interoperability among providers. And yet you somehow seem to have maintained the faith. How is that?
James Agnew: The short answer is that healthcare is fundamentally a workflow- and data-driven business. Much—if not most—of the time you spend with any doctor, you are going over details from your medical history, which means repeating things you have already told clinicians dozens or hundreds of times before. Then, after completing your account, the doctor will pore over notes and images taken during earlier visits until finally settling on the next course of action. Which, quite possibly, will involve gathering still more information by way of lab tests, radiology images, or whatever. And then, hopefully, some plan of care will be synthesized for you.
It’s easy to see there is far too much friction throughout the entire process. There’s so much repetition, so much waiting … it just feels like a workflow screaming out for a technological remedy. The fix clearly needs to include more seamless data flows. But that’s not a problem doctors are likely to solve for themselves—nor should it be.
Helland: Obviously, there is some highly sophisticated technology locked away within data silos here. It’s my sense that one of the fundamental reasons for those silos is that the healthcare domain is innovating and evolving so fast that it’s hard to find any solid ground to stand on when it comes to data exchange.
Agnew: This is something I think about a lot, but I have never been able to zero in on any one explanation for why this has proven to be so hard. Certainly, as you suggest, rapid iteration is one of the issues. Healthcare, more than anything, is a constantly evolving science. This means, if you are doing it right, you are constantly absorbing new information and adopting new technologies and treatments while also deprecating older methods of treatment. I agree this must be a big part of the problem.
But there are also at least a couple of other challenges. One has to do with the data itself. There’s an awful lot of it, much of which is quite complicated—and it comes in many different forms. Think about all the things a doctor refers to in developing a treatment decision. This can include massive images that have been analyzed and annotated by specialists. It can include lab tests performed in a multitude of ways, along with all the different variables to consider when coming up with an interpretation. People also use a variety of devices, each with its own data format, to do much of this work. All of which obviously serves to make data exchange more difficult.
The other thing that makes this tough is the fact that healthcare is ultimately all about human workflows, which naturally vary widely from one place to another. Which is to say that solving interoperability perfectly in one location should never be taken to mean the solution is going to translate smoothly to some other location. That is simply because different facilities go about things in different ways. What’s more, even if it were possible to standardize all of this, it certainly isn’t for informatics to dictate to medical practices how they should go about their business. Instead, it’s the technology that needs to evolve to meet the varied needs of those practices.
Helland: But there has been a litany of failed technology remedies over the years. Attempts at government intervention, standards set by committees… all have largely failed. Why is that?
Agnew: The specifics vary from place to place and country to country, but people in the field have long recognized the benefits of making patient healthcare information more readily available. This is why multiple proposals have been raised over the past decade or more to build national EHR [electronic healthcare record] backbones. It’s fair to say everyone now pretty much recognizes the need to enable just this sort of data exchange.
Famously, in Canada billions of dollars were invested—first in creating technology blueprints for how to build a national EHR, and then in efforts to develop interoperability standards, and finally projects to develop and implement records architectures capable of accomplishing all this. The sad fact is we now have very little to show for that investment.
“Somehow, amidst all this data-modeling brilliance, the folks who implement software were forgotten.”
The systems that have enjoyed some success have mostly been those developed essentially as skunk-works projects, where people decided not to follow the lead of plans adopted at the national level and instead rolled their own solutions on a smaller scale before building up from there. Unquestionably, this more incremental approach has led to the greatest success.
Helland: It’s always easier to standardize stuff that works than stuff that does not yet exist. De jure standards follow de facto standards that already have proved effective.
Agnew: I could not agree more, but there is even more to this than just the usual standards-built-by-committee nonsense. A big part of the challenge in Canada’s case is that it had already invested heavily in a technology called HL7 version 3, which was an attempt to build a combination information-model/protocol/data-exchange mechanism capable of serving as all things to all people in healthcare all over the world. To say the least, it had very lofty goals.
And, on paper, it’s a beautiful standard. It really is. They developed their own methodology for the modeling and diagramming capabilities. They came up with special software that could be used to elaborate on data models. And there are brilliant concepts behind all of this. But somehow, amidst all this data-modeling brilliance, the folks who implement software were forgotten. This leads one to believe the HL7 people never really concerned themselves with whether this fancy data model might turn out to be something software developers would actually want to use.
So, it probably will come as no surprise that anyone who tries to work with HL7 version 3 soon learns it’s too complicated to use unless there happen to be unlimited sums of money to throw at solving that problem. That ultimately is the real reason this technology did not pan out in Canada—or in much of the rest of the world, for that matter.
Helland: Now we have this new standard FHIR, which you are advocating. How is that different from HL7? Why are you more optimistic about FHIR?
Agnew: First, let’s make sure the nomenclature is clear. HL7 is a standards development body that’s been responsible for many of the health data interop standards. One of those is called HL7 version 3, and now there is another iteration called FHIR, which has been around for only about a decade, making it much younger than HL7 version 3.
FHIR handles many things quite differently from HL7 version 3, but the fundamental difference is that FHIR is community-first and implementer-first in every respect, and this is perfectly obvious throughout. Another fascinating thing here is that the people responsible for FHIR decided early on to develop it using an iterative model, much as they might have gone about building software.
That’s to say, if somebody wants to add something new to the standard, they will propose it and then it will be documented. Before that goes any further, the process also requires that the proposed addition be brought to what are known as “Connectathon events,” which serve as fabulous testing grounds for people to try out new things and confirm they work at least on a small scale. If so, the new thing then will go on to be tried out by other people at other events.
In fact, before anything in the FHIR standard can be declared normative, it first must make it through several different Connectathons and be shown to have been used in production in multiple countries. So, the idea here was to create a high barrier to entry by requiring repeated demonstrations that the proposed addition not only adds value but also can be broadly implemented.
This is the single most brilliant thing that has been done to ensure FHIR continues evolving as a standard that works and manages to solve real problems.
Helland: Does this also interact with any sort of economic feedback loop?
Agnew: Yes, this really is quite interesting from an economic perspective. Based only on what I have just said, the FHIR initiative sounds like something that ought to fail simply because it forces people to iterate over and over again, which you would expect to deter most companies since it almost guarantees more work. Yet, it has not worked out that way.
I suspect that’s because the process itself does not turn out to be particularly burdensome. And the incentive for buying into it is that this is a process where the company is involved essentially every step of the way and has a hand in how the spec gets implemented from start to finish. You can also be reasonably certain the resulting spec will prove to be quite compatible with the way the company’s own product already works. On balance, I suspect this will be a cost-effective way for software producers to achieve interoperability.
Helland: Is that enough to ensure broad adoption?
Agnew: Maybe not, but I should add that one of the fundamental principles of FHIR is what we call the 80/20 rule. That basically means that nothing gets into the core standard unless it’s going to be useful to 80% of the EHR systems around the world.
Here is a classic example I like to use to explain: If I am building a system that tracks my patients’ eye color for whatever reason, I might propose it be added to the standard so I can better communicate that information to others. But that would be resoundingly rejected simply because nobody in their right mind believes that 80% of the EHR systems around the world collect eye color as a core patient attribute.
Another part of the 80/20 rule provides for an extension model that lets you use the remaining 20% of the model on your own system to include data that addresses problems not provided for by the core FHIR model. Also included is a self-documenting capability that lets you tag on data attributes as well as descriptions, which then can serve as a key that others can use to figure out the relevance of the special data you have chosen to add to your system.
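To make that extension mechanism a bit more concrete, here is a minimal sketch of what the rejected eye-color attribute might look like if carried as a local extension instead. The extension URL and the coding are hypothetical, invented only for illustration; the overall shape (a Patient resource with an "extension" list of URL/value pairs) follows the general FHIR pattern.

```python
import json

# A FHIR Patient resource carrying a hypothetical "eye color" extension.
# The core fields (resourceType, name) are standard FHIR; everything under
# "extension" is locally defined, with the URL acting as the self-documenting
# key that tells other systems what the extra data means.
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Smith", "given": ["Pat"]}],
    "extension": [
        {
            # Hypothetical extension URL; in practice it would resolve to a
            # published definition describing this attribute.
            "url": "https://example.org/fhir/StructureDefinition/eye-color",
            "valueCode": "brown",
        }
    ],
}

print(json.dumps(patient, indent=2))
```

A receiving system that does not care about eye color can simply ignore the extension while still processing the rest of the resource, which is what keeps the core model small without blocking local innovation.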
Experience with this mechanism shows it really does allow for interesting innovation on top of the core FHIR data model, which itself is quite prescriptive about things like nomenclature and vocabulary and how to codify data and units of measure, while also handling all the gory details of clinical data communications. This extension mechanism lends flexibility to the mix by letting app developers add things specific to the types of problems they are addressing and the types of data they need to capture. All this can be handled within a standards-based framework without the need to reinvent the wheel each time. Meanwhile, the ability to continually add things that were not necessarily considered by the original standards developers makes it possible for FHIR to continue growing.
I should also mention that a FHIR construct promoted as “implementation guides” is becoming increasingly popular. Essentially, these are constrained versions of the FHIR spec created for specific purposes or specific jurisdictions—countries, for example. We are starting to see more and more evidence of uptake now. This seems to be encouraging a bit of collaboration where one national government, for example, might notice that another government has added some great innovation, whereupon it then will adapt that for its own implementation guide.
Helland: How do you envision the FHIR standard evolving from this point? Is there some mechanism the community can use to augment the standard, enhance it, or button it up?
Agnew: Actually, our fundamental goal with the 80/20 rule is just to ensure we have a standard that’s approachable, while not being too opinionated in any one direction. Ultimately, the thing that will allow us to continue iterating and evolving is a general understanding that the core standard—or at least many of its core data models—are effectively done already. We are now actually getting to the point where much of the standard will become normative, with the 80% core hopefully becoming permanently fixed in due course. That then ought to provide a stable base people can use as a platform for writing implementation guides.
We often talk about FHIR conceptually as the “Internet of Health.” That’s because, for those of us in the healthcare sphere, it could potentially play much the same role the basic TCP/IP protocol did back in the early days of the Internet. Before TCP/IP existed, people had to agree upon which network protocol to use before they could perform any sort of information exchange. Often, that could even require them to purchase and install specialized hardware. There were some truly fundamental challenges to overcome back before we could rely upon broadly accepted network protocols.
Of course, the introduction of TCP/IP did not magically solve everything, but it did at least mean we finally had a common language. We then were able to start building protocols and technologies and other things on top of that while maintaining this basic underlying ability to communicate.
FHIR holds the same sort of promise for health data. It’s this basic sort of protocol I suspect all sorts of interesting things will be done on top of—even though, on its own, FHIR does not really solve a whole lot. It does, however, provide this basic ability to move data from point A to point B, which is about as fundamental as it gets. But then the apps that are developed to take advantage of that will end up changing so much else. Ultimately, I think these implementation guides will prove to be analogous to the Internet RFCs (requests for comments) that followed TCP/IP once people started to say, “Hey, let’s take this protocol and do something really cool with it.”
I suspect we are now only at the infancy of those sorts of use cases. But hopefully we have at least managed to solve the basic problems that prevented us from getting any further on these fronts previously.
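As a rough illustration of what "moving data from point A to point B" means in FHIR terms, here is a minimal sketch of a RESTful read: an HTTP GET on a resource at a known base URL. The server address and patient ID below are placeholders, and a real deployment would add authorization (typically a SMART on FHIR / OAuth 2.0 flow) on top of this.

```python
import requests

# Placeholder FHIR server base URL and resource ID; substitute a real
# endpoint and an ID that actually exists on it.
BASE = "https://fhir.example.org/baseR4"

# A FHIR "read" is just an HTTP GET on [base]/[resourceType]/[id],
# returning the resource as JSON.
resp = requests.get(
    f"{BASE}/Patient/123",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()

patient = resp.json()
print(patient["resourceType"], patient.get("id"))
```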
A big part of the challenge was that, over decades of independent development, the institutions that needed to interoperate had built information-processing environments that were just about as heterogeneous as they could possibly be. Different software, different data models, different workflows … you name it.
So, given that, how best to go about creating a mechanism that provides everyone in the ecosystem with the basis for not only the sharing of patient records, but also a common understanding of them?
Helland: In pulling together the FHIR standard, what proved to be the hardest problems?
Agnew: At root, healthcare data is very complicated and cannot really be dumbed down all that much. This doesn’t leave us with any shortcuts. More significantly, most healthcare data is walled up inside individual institutions, and there’s a long history of institutions being quite guarded with their data—often because of privacy concerns, but also sometimes just to protect their commercial interests.
Even in those cases where some institutions have decided to exchange data, it’s almost a given they are all running different software that employs different data models. Coming up with a mechanism that lets you solve a problem in one location and then propagate that out to all the others has proven to be very challenging. Hopefully, though, this is the very problem FHIR is ultimately going to solve by providing everyone in the ecosystem with the basis for common understanding.
Helland: Bearing in mind, of course, that common understanding is never entirely common. It always comes down to personal understanding.
Agnew: Exactly. I would also add that healthcare in general has a fundamental problem around basic identity management. Just think about your own experience as a patient. Over the course of my life, I have probably been to five hospitals and have been attended to by 10 different doctors. I know for a fact that every one of those hospitals and medical practices has its own identifier for me. I have also had various forms of insurance coverage over time, and each of those insurers has its own identifier for me. So, there’s healthcare data for me locked up in all these different systems with really no way to tie that together.
In fact, this inability to link the identity of some patient over here with the identity of the same patient over there represents a major challenge for health information management in general. That’s simply because there do not tend to be any consistent healthcare identifiers that stay with people throughout their lives. What’s more, even when people do have consistent identifiers, healthcare providers generally don’t use them to link patient records. It’s a bit insane, to be perfectly honest.
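FHIR at least gives this identity problem a common vocabulary: a Patient resource can carry any number of identifiers, each qualified by the system that issued it. The systems and values in the sketch below are invented for illustration, and the hard part, deciding that these records all describe the same person, is still left to record-linkage logic or a master patient index.

```python
# One person as seen by three different organizations. Each identifier is
# qualified by the issuing "system", so records can at least be compared,
# but nothing here automatically proves they describe the same human being.
# All URLs, identifiers, and demographics are made up for illustration.
patient = {
    "resourceType": "Patient",
    "identifier": [
        {"system": "https://hospital-a.example.org/mrn", "value": "MRN-0042"},
        {"system": "https://clinic-b.example.org/patients", "value": "P-99881"},
        {"system": "https://insurer-c.example.org/members", "value": "MEM-77120"},
    ],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-01-01",
}

def rough_match(a: dict, b: dict) -> bool:
    """Crude demographic comparison; real record linkage is far more involved."""
    return (
        a.get("birthDate") == b.get("birthDate")
        and a["name"][0]["family"] == b["name"][0]["family"]
    )
```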
Helland: If the healthcare industry continues to be either unable or unwilling to clean this up, is there anything that can be done to cut through the obstacles?
Agnew: For about a decade now, much of humankind has been walking around with super powerful touchscreen computers in their pockets. Certainly, in the healthcare space, we have not managed to take much advantage of that. This is quite a contrast to the banking industry, for example. I simply cannot imagine physically entering a bank anymore since anything I would ever want to do in a bank can now be done on my phone.
I can think of at least two ways that smartphones could also be used to fundamentally transform healthcare, and we are now starting to see evidence of progress on both fronts. One, obviously, is as an interoperability mechanism, which is an area where healthcare has always been challenged since you can never be sure which piece of information is going to prove useful—never mind where or when that might happen.
As an example, I might be allergic to some medications, and, if you were to ask me, I would gladly tell you which ones those are. My doctor could tell you as well. But what happens if I get hit by a bus tomorrow and land in the emergency room at a hospital I’ve never been to? The folks in that ER are not going to have any of that information on hand.
To deal with these sorts of situations, national EHR systems have traditionally been built around huge central repositories into which every possible bit of patient data has been squeezed. We try to give everyone access to query that huge central data bucket. That way, should I show up at this particular hospital in the middle of the night, the people in that emergency room will be able to pull whatever information they need about me out of that bucket. It’s a model that works, and it’s a model we all believe in and often participate in.
But I wonder how the prevalence of smartphones might end up changing that. It’s not unrealistic for me to hold an almost complete copy of my medical record on my phone—maybe not gigabytes of medical images but otherwise all the salient points. Anything that any medical professional would need to know about me could easily fit on my smartphone. So, assuming the right apps are in place, I should be able to download and aggregate that information from whichever hospitals I have visited. I should even be able to select the 10 or so attributes I would most want to share with a new physician—including a summary of my allergies and a list of the medications I’m taking.
The idea of being the broker for all that information is really appealing. It clearly would alleviate many of the privacy concerns that currently arise around health data since it means I would be in charge of deciding who gets to see what information about me. And it certainly could not be any more convenient since I would have my medical chart in my pocket wherever I go.
Of course, none of this would be even remotely possible were it not for the existence of enabling data standards. But now, with the advent of applications like Apple Health Records, it’s possible for me to download copies of my healthcare data whenever I like.
Still, the challenge remains of what to do with all that information once I have it on my phone. There are not yet any great mechanisms for controlling what I share with someone else, but you would have to assume that’s coming.
Helland: Yet, that assumes people actually want to walk around with their medical histories in their pocket. And do the providers want to make all this data accessible to patients? Where are the friction points, and how does this affect what you are trying to do with FHIR?
Agnew: This can be contentious. To that point, one of the big trends in healthcare over the past decade has had to do with the emergence of patient portals—by which I mean web-based mechanisms that allow patients to log in and look at their own data. Technologically, this is quite straightforward, but I have been involved in several patient-portal projects where I think all of us were taken aback by just how much resistance we encountered from certain camps, even as others gave us their full support.
This all requires a different mindset on the part of the clinicians, in particular, since they need to be a lot more careful about the language they use if they know patients are going to enjoy easy access to those records. And there’s some merit to that concern. That is, if the patient happens to be exhibiting some destructive behaviors, the doctor probably really needs to chronicle that since it could prove useful to the next clinician who tries to treat that person. As you can probably imagine, if doctors know the patient is likely to read those comments, they will probably be a bit less candid. So, I get it. But I think this concern is outweighed by the fact that this information fundamentally belongs to the patient.
“FHIR both creates and reduces friction. That is one of the things that makes this hard since, if it were perfectly obvious FHIR was only going to make things better, there would be no resistance at all. Of course, nothing in the real world actually works like that.”
Helland: Has this been a significant factor in the adoption of patient portals?
Agnew: It absolutely has. Wherever an effort is being made to implement patient portals, you can be sure this debate is taking place.
Now that patients can download their own medical files, it’s a foregone conclusion that many will indeed start walking around with their medical histories in their pockets. This seems empowering and perhaps even vitally important in the event of an emergency room visit while traveling. But that could also come at a cost and put a strain on the ongoing dialogue between patient and physician.
That raises some more interesting questions … and opportunities.
Helland: Now that you have been able to observe FHIR being used out in the wild, what do you believe it does either to create or to reduce friction for healthcare providers?
Agnew: I like that you phrased it that way because FHIR actually both creates and reduces friction. That’s one of the things that makes this hard since, if it were perfectly obvious FHIR was only going to make things better, there would be no resistance at all. Of course, nothing in the real world actually works like that.
So yes, there’s some friction. An obvious example of this would be when you see something in your chart but don’t have the tools to interpret it correctly. I have done this myself, in fact. I have plenty of access to my own healthcare data, and I have looked it over many times even though, of course, I have no medical training. Occasionally, I will see a number from a lab test along with something that indicates it’s an abnormal value. Naturally, this always causes me to panic a little. I generally try to resist the urge to call my doctor right away, but I really do want to know more about what that abnormality signifies.
Of course, what I don’t understand, since I’m not a physician, is that a value a bit over the typical reference range is not at all unusual for a male older than 40, meaning there’s really nothing for me to be all that worked up about. A lot of this simply has to do with the sort of contextual understanding that we, as laypeople, typically lack. I don’t doubt that healthcare providers get lots of unnecessarily panicky phone calls, emails, and text messages as a consequence. So, this surely factors into the reservations many physicians voice about sharing their medical chart entries with patients—especially given that patients might not only misinterpret the information, but actually go so far as to make some poor decisions on the basis of what they understand their situation to be.
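This is exactly the kind of contextual gap a patient-facing app could start to close. The sketch below compares a FHIR Observation's value against the reference range the lab supplied; the test, numbers, and units are invented for illustration, and real interpretation obviously needs clinical input (age- and sex-adjusted ranges, for instance), which is why the output offers context rather than advice.

```python
# A lab result as a simplified, invented FHIR Observation. Labs commonly
# report the reference range they used, which is what a consumer app can
# lean on to soften a scary "abnormal" flag with some context.
observation = {
    "resourceType": "Observation",
    "code": {"text": "Prostate specific antigen"},
    "valueQuantity": {"value": 4.3, "unit": "ng/mL"},
    "referenceRange": [{"low": {"value": 0.0}, "high": {"value": 4.0}}],
}

def describe(obs: dict) -> str:
    value = obs["valueQuantity"]["value"]
    unit = obs["valueQuantity"]["unit"]
    rng = obs["referenceRange"][0]
    low, high = rng["low"]["value"], rng["high"]["value"]
    name = obs["code"]["text"]
    if value > high:
        pct = 100.0 * (value - high) / high
        return (f"{name}: {value} {unit} is about {pct:.0f}% above the reference "
                f"range ({low}-{high} {unit}). Mildly elevated values can be common; "
                f"this is context to discuss with your clinician, not a diagnosis.")
    return f"{name}: {value} {unit} is within the reference range ({low}-{high} {unit})."

print(describe(observation))
```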
Still, on balance, I think the advantages of enabling patients to arrive at appointments armed with better information about their condition outweigh the potential disadvantages. In fact, I think you would have a hard time finding a physician who would discourage patients from becoming better educated and preparing intelligent questions if only because smarter patients generally make for shorter, crisper discussions.
As patients gain the ability to download much of their medical history onto their mobile phones, I have to believe apps will come along that help them better interpret that data. As that continues to play out, doctors are likely to find they have more time for treating patients since they will be spending less time conferring with them. And that, I think, addresses the biggest challenge doctors face right now—namely, the limits on what can be accomplished in the course of any given day. So, if there’s anything they really want from technology, it’s the added efficiency that will allow them to see more patients each hour.
Adam Cole: With this in mind, then, is there anything about FHIR you now wish could be changed?
“I would think the net benefit to the overall healthcare system of a more progressive model focused on innovation and patient engagement ought to prove profound over the mid- to long-term.”
Agnew: One of the things that drew me to FHIR in the first place was its utter simplicity. For the longest time, it was absolutely realistic for someone who spent enough time looking at the standard to know it inside and out. In fact, for a time, I considered myself to be knowledgeable about every corner of the spec. Those days are sadly long gone now, and I think I had always hoped things would remain as simple as they had been.
That seems really innocent in retrospect, since I can now see it’s become truly impossible for any one person to know the whole standard. The healthcare field is broad enough as it is, and many more problems remain to be solved, so the field is only going to continue expanding. Which is to say it was probably inevitable the standard would grow to a point where no one person could possibly have full knowledge of it. Yet I still feel it’s unfortunate we had to take on this much added complexity.
Helland: Even as it stands, there seem to be several broad categories of healthcare information to account for. First, you have got the raw data collected during patient visits, as well as from subsequent tests and examinations. Then there’s the knowledge derived from analyzing that. Naturally, billing also needs to be linked to all that. Then there are those things you want to do moving forward to improve workflows, planning, process, and all the rest of it. How does FHIR provide for that?
Agnew: FHIR is spot on when it comes to accounting for all the things that have already happened—the doctor’s notes, the test results, and so forth. It also does a good job with the derived knowledge, although that’s a bit more challenging. Derived knowledge can take many different forms depending on who is deriving it and how it’s being derived. Still, while FHIR was not exactly designed for that, it does manage to get the job done.
Where things become really challenging is when you start to layer workflow on top. FHIR does come with some building blocks for workflow, but this gets into how humans go about doing things, which can get messy. And yet, so much of healthcare revolves around workflows. There are many different facets to caring for a patient, and that generally means not only that certain things need to happen but also that they need to happen in some particular sequence.
FHIR can help with some of that, but ultimately there’s no data model in the world capable of handling all of that for you. Other solutions also come into play. Certainly, there are some people who already are working on this in BPM+ [Business Process Management Plus Health], while others are tackling it using CQL [Clinical Quality Language]. In all likelihood, work being done on both of these fronts will also end up being part of the solution.
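For a flavor of the workflow building blocks FHIR does provide, here is a minimal sketch using the ServiceRequest and Task resources, which roughly model "please do this" and "here is the state of doing it." The sequencing logic around them, who acts when, how to escalate, and so on, is exactly the part FHIR leaves to layers such as BPM+ or CQL-driven logic; the IDs, references, and orchestration rule below are invented for illustration.

```python
# "Order a lab test, then track whether it has actually been done": a two-step
# workflow FHIR can represent with ServiceRequest (the order) and Task (the
# work item tracking its fulfilment).
service_request = {
    "resourceType": "ServiceRequest",
    "id": "lab-order-1",
    "status": "active",
    "intent": "order",
    "code": {"text": "Hemoglobin A1c"},
    "subject": {"reference": "Patient/123"},
}

task = {
    "resourceType": "Task",
    "status": "requested",          # later: accepted -> in-progress -> completed
    "intent": "order",
    "focus": {"reference": "ServiceRequest/lab-order-1"},
    "for": {"reference": "Patient/123"},
}

def next_step(t: dict) -> str:
    """The orchestration decision that FHIR itself does not make for you."""
    if t["status"] in ("requested", "accepted", "in-progress"):
        return "waiting on the lab; escalate if it stays here too long"
    if t["status"] == "completed":
        return "result available; route it to the ordering clinician"
    return "order was rejected or cancelled; notify the care team"

print(next_step(task))
```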
There’s clearly a lot on the line here, and for FHIR to truly transform healthcare, it will have to be part of whatever is done to account for workflows. We know, at minimum, that FHIR is going to be what is used to model healthcare information. We also face the challenge of finding better ways to model our vocabulary for codifying information. What’s more, we need to find better ways to model our workflows—not only to ensure that things happen in the right order but also to see to it that anything that needs to happen actually does happen. All these standards will have to play well together if we are to change the way things are handled in healthcare.
Helland: All right, so it sounds like everyone should want this to happen, but I will bet that’s not the case. Who is not rooting for these efforts to succeed? And why is that?
Agnew: The obvious answer is someone who has a profitable business model that works just fine the way things currently stand. In all likelihood, that’s a person who is going to be resistant to changing the environment. As an example, we are involved with some U.S. organizations working on payer-to-payer data exchanges, where the fundamental idea is: If I’m moving from one insurance provider to another, I should have a simple, seamless mechanism that lets me port all my information from my previous payer to my new payer. Who doesn’t want that to work? Well, it might be someone who works for my previous service provider who really doesn’t want to make it easy for me to leave.
Cole: With that said, I would think the net benefit to the overall healthcare system of a more progressive model focused on innovation and patient engagement ought to prove profound over the mid- to long-term.
Helland: Can we get there without governments forcing all the different parties to share data?
Agnew: I would like to say the right things would eventually happen without the need to resort to government coercion, but there’s shockingly little real-world evidence of healthcare data being liberated without some sort of government fix first being imposed. So, government mandates probably will ultimately prove to be part of the equation.
For all the resistance that might be encountered, most providers will find that liberating all that data ends up allowing them to achieve higher profitability through greater efficiency. In part, that will be because patients will come in for visits better informed about their situations and thus will be less likely to take up as much physician time. It also could lead people to take better care of themselves and manage their conditions more effectively simply because they learn how to do that.
In many parts of the world—including both Canada and the U.S.—we are now seeing a move toward what’s referred to as “value-based care,” a model that gives providers an incentive to keep people healthy by essentially saying: “Here’s a large roster of patients you are now responsible for. It doesn’t matter how many you see on any given day. Instead, you will be compensated for taking care of all these people.” With this, the incentives for healthcare providers change dramatically. Suddenly, it’s in their best interest to have patients show up just as rarely as possible since the healthcare organization will be paid the same for a patient’s care regardless of whether the person comes in for 100 visits or none at all. Which is to say healthcare organizations will soon realize it’s in their best interest to do everything possible to keep their patients healthy.
So this, I’d suggest, is where data liberation becomes a matter of better economics. For example, if a patient has some sort of condition that requires dedicated management, it makes sense to require regular lab tests that can inform better decisions both on the part of the provider and the patient. This is where apps providing better ongoing communications between doctor and patient can make a profound difference in terms of better managing chronic conditions in a cost-effective manner. Which is why I think reimagining how we pay for healthcare has the potential to help everyone in the loop.
Helland: What comes next?
Agnew: All these APIs currently under development are going to enable some truly incredible things. We don’t know exactly what that means yet, but it could lead to a TCP/IP moment like the one we were talking about earlier, where everything just comes together in a flash.
I often think about eBay in this light since, for me, its emergence as a Web phenomenon was a seminal moment in the development of the Internet. The Internet had been with us for quite some time before eBay came along. It gave us universal email, which was good; it gave us the web, which led people to invent all sorts of wonderful things; and search engines were starting to become a thing. Clearly, society at large was already getting a good feeling for the power of the Internet.
But it was not until I saw eBay that my eyes were really opened to what the Internet could deliver, because that’s where all the considerable power of the Internet was brought to bear and put on display in a single instance. Suddenly, here was this incredible global marketplace where I could look for anything I could dream of wanting and essentially count on finding someone somewhere in the world who was offering that very thing and was willing to negotiate a price for it. That’s when it all came together for me. For all the cool websites that had come along previously, that’s when I first fully recognized the sort of power the Internet had to offer.
Healthcare hasn’t yet had its eBay moment, but I have complete confidence that’s coming since someone is sure to figure out how to capitalize on this incredible network of information to provide an advantage that will become immediately obvious to everyone. And when that happens, the transformative moment will have finally arrived.