When people with high blood pressure start taking thiazide diuretics as treatment, they are warned about possible heart-related side effects, including palpitations, fainting, and even sudden death. Patients taking a certain class of antidepressants face similar risks. But what if they are taking both drugs together? Would the bad effects be more likely? No one knew.
Not, that is, until researchers at Stanford University used data-mining techniques to pore through a database of side effects. They discovered that people taking both a thiazide diuretic and a selective serotonin reuptake inhibitor, such as Prozac, were about one-and-a-half times as likely as people taking either drug separately to show a prolonged QT interval, a measurement in cardiology that increases the risk of those heart problems. In mining the database, Russ Altman, professor of medicine and biomedical informatics research, and Nicholas Tatonetti, who has since earned his Ph.D. in biomedical informatics, found 46 additional drug pairs that interacted to cause previously unknown side effects beyond those of either drug alone.
Clinical trials do not test for drug interactions; it would be costly and impractical to test every drug against every other drug. And although the trials can identify some side effects, other side effects do not show up until a medication is given to a larger population of patients over a longer period of time.
To keep tabs on unexpected complications from already approved drugs, the U.S. Food and Drug Administration (FDA) created the Adverse Events Reporting System (AERS), a database of more than four million negative events reported since 1969. Every three months, the FDA releases a new batch of event reports, which statisticians can use to look for previously unrecognized drug-related problems. Pharmaceutical companies are required to report bad effects associated with their medications, and health professionals and patients can also submit reports.
"This database from the FDA is very large and on first blush looks like an amazing resource," says Altman. But teasing out a relationship between a particular drug and an adverse event can be challenging. "People are on multiple medications, they have multiple diseases, and they have multiple side effects, and establishing one-to-one correspondence between those is really daunting."
The reports are "spontaneous," meaning they are not standardized and are based on individual judgments about symptoms someone noticed and deemed significant. The data is statistically noisy, full of biases and confounding factors that may not be easily identifiable. One well-known bias, for instance, is what Tatonetti calls "the Vioxx effect": when a link was discovered between the painkiller Vioxx and heart attacks, the resulting publicity prompted people using Vioxx to report more heart-related symptoms, which made the background rate for those symptoms seem greater than normal, thereby masking the drug's real effects. There are also symptoms that might be associated with a drug but are not caused by it. Someone taking a medication for diabetes, for instance, could have symptoms caused by the underlying disease, though an algorithm would only notice the association between the symptoms and the drug, and could incorrectly conclude the drug was causing the problem. Someone using an arthritis medicine might report complications that are the result of being elderly, not of the medication. Modern signal-detection algorithms try to account for biases, but have not addressed all the possible sources, the researchers say. "There are a lot of scientific computational challenges to this database," says Robert O'Neill, director of the Office of Biostatistics in the FDA's Center for Drug Evaluation and Research.
One way to separate the effects of a drug from the effects of related factors is to remove such covariates from the sample. But the database often does not list potential confounding factors such as age, sex, or underlying disease. Existing methods that control for confounding factors by creating subsets according to the covariates cannot work when the covariates are unknown.
But Tatonetti realized that in many cases he could figure out what those covariates were based on the combination of drugs that patients were taking and the set of symptoms they described. If a patient is on a cholesterol-lowering drug, for instance, he is likely to have a high-fat diet. A patient taking antidepressants is somewhat more likely to be female. If the patient is using birth control pills, she is definitely female, whereas someone taking a prostate medication is clearly male. Tatonetti also grouped people by which drugs they were taking in addition to the drug being investigated, and discarded side effects known to be caused by those other drugs.
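The idea of reading latent covariates off a patient's drug list can be sketched in a few lines. This is an illustrative toy, not the paper's method; the drug names and the implication table are assumptions for the example.

```python
# Hypothetical sketch of inferring unreported covariates from co-prescribed
# drugs. The DRUG_IMPLIES mapping is invented for illustration.
DRUG_IMPLIES = {
    "oral_contraceptive": {"sex": "female"},
    "finasteride":        {"sex": "male"},       # a prostate medication
    "atorvastatin":       {"diet": "high_fat"},  # cholesterol-lowering drug
}

def infer_covariates(drugs):
    """Merge the covariates implied by each drug in a patient's report."""
    covariates = {}
    for drug in drugs:
        covariates.update(DRUG_IMPLIES.get(drug, {}))
    return covariates

report = ["oral_contraceptive", "atorvastatin"]
print(infer_covariates(report))  # {'sex': 'female', 'diet': 'high_fat'}
```

Inferred covariates like these can then stand in for the missing age, sex, and disease fields when building matched comparison groups.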
For his data source, Tatonetti gathered more than 1.8 million reports in the AERS database from 2004 to early 2009, along with another 300,000 reports from a similar Canadian database. He also used a database of known side effects and drug indications mined from medications' FDA labels. And he included information about the biological targets the drugs were aimed at.
In the end, for each drug the researchers wanted to study, they wound up with two groups: one taking that drug and one that matched the first group in as many other ways as possible, except for that one drug. For each drug they studied, their search found an average of 329 bad reactions that were associated with the drug but that were not listed as known side effects. They then applied the same method to find drug interactions, comparing groups that were on only Drug A, only Drug B, or both. To validate their prediction that the combination was causing a side effect, they looked at lab test results from the Stanford hospital system's electronic health records for people on those drugs, and found 47 combinations that seemed to cause problems.
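The three-way comparison for interactions can be illustrated with a toy calculation: the event rate among reports on the combination is compared against the rate among reports on either drug alone. This is a simplified stand-in for the study's actual statistics; the counts below are invented to echo the roughly one-and-a-half-fold signal described above.

```python
# Illustrative sketch (not the paper's method): flag a possible interaction
# when an event is reported more often on the drug combination than on
# either drug alone. Reports are dicts with a set of event terms.
def event_rate(reports, event):
    """Fraction of reports that mention the given event."""
    return sum(1 for r in reports if event in r["events"]) / len(reports)

def interaction_signal(only_a, only_b, both, event):
    """Ratio of the combination's event rate to the higher single-drug rate."""
    base = max(event_rate(only_a, event), event_rate(only_b, event))
    return event_rate(both, event) / base

# Toy data: QT prolongation shows up more often on the combination.
only_a = [{"events": {"qt_prolongation"}}] * 2 + [{"events": set()}] * 8
only_b = [{"events": {"qt_prolongation"}}] * 2 + [{"events": set()}] * 8
both   = [{"events": {"qt_prolongation"}}] * 3 + [{"events": set()}] * 7

print(round(interaction_signal(only_a, only_b, both, "qt_prolongation"), 2))  # 1.5
```

A ratio well above 1 is only a hypothesis, which is why the researchers then checked candidate combinations against hospital lab records.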
The researchers' hypotheses are a valuable step in discovering unknown health problems. With the hypotheses in hand, drug companies can reexamine their clinical trial data to see if they can verify the problem or even run additional trials. Regulatory agencies can warn doctors to watch for new side effects, or even pull drugs from the market. O'Neill says that by identifying side effects common to a class of drugs, such surveillance could help drug developers screen out failing drug candidates sooner, before they devote too much effort to their development. (See the "Informatics, Drugs, and Chemical Properties" sidebar on this page.)
Niklas Noren, chief science officer at the Uppsala Monitoring Centre in Uppsala, Sweden, which monitors drug safety for the World Health Organization, calls the Stanford approach "interesting and innovative." He has used a different statistical method to correct for the Vioxx effect and for false associations between drugs and symptoms; his approach does not include information about drug indications, but he says it could. "To me their attempt to directly control for this important bias is a key contribution to the field," he says. Noren has also used reports that contain covariate information such as age, sex, time period, and country to sort patients into matched groups that can be compared, but that approach detects only associations in those subsets and cannot reveal syndromes: groups of drug interactions that tend to be reported together. One of the big questions in the field, he says, is how mining these spontaneous reporting databases, like AERS and similar European systems, fits with the growing use of electronic medical records (EMRs). The value of EMRs is that they have much more detailed information about patients, but they may lack the power to detect rare events. "Spontaneous reporting clearly has a role," he says. "What we need to figure out now is how to use each type of data in the best way."
Rave Harpaz, a research scientist at Stanford's Center for Biomedical Informatics Research but not involved with Altman's work, is looking for the best ways to combine information from multiple sources, using not only AERS and EMRs, but also mining medical literature for clues, picking up patient reports of symptoms from social networks, and adding basic information from biology and chemistry. "There's many ways to combine these data sources, depending on what you want to achieve," Harpaz says.
The advantage of these large and growing datasets is that researchers can use different approaches to amplify signals from rare or hidden events that might otherwise go unnoticed. With enough data, Altman explains, researchers can discard some of it as they remove some of the noise, but still have plenty to work with. "You can throw away lots of it, and as long as you maintain statistical significance, you can still get useful answers."
Harpaz adds that it is possible for a weak signal to "borrow" statistical significance from other sources. For instance, with 14,000 different codes for events in the FDA's coding dictionary, the same event might wind up being described by several different terms, with no single term showing up often enough to appear statistically significant. "If you group all these terms together into a sort of hyperterm or hyperevent, you might be able to find that adverse drug event," Harpaz says. Adds Altman, "Multiple pieces of weak data, when combined, can equal one very strong piece of evidence."
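Harpaz's "hyperterm" idea amounts to mapping synonymous coding terms onto one label and summing their counts. A minimal sketch, with invented term names and counts (the real coding dictionary has some 14,000 terms):

```python
# Sketch of pooling synonymous event terms into a single "hyperevent" so a
# weak signal can borrow significance. The SYNONYMS table is illustrative.
from collections import Counter

SYNONYMS = {
    "myocardial_infarction": "cardiac_event",
    "heart_attack":          "cardiac_event",
    "mi_acute":              "cardiac_event",
}

def pool_counts(term_counts, synonyms):
    """Map each raw coding term to its hyperterm and sum the counts."""
    pooled = Counter()
    for term, n in term_counts.items():
        pooled[synonyms.get(term, term)] += n
    return pooled

raw = {"myocardial_infarction": 4, "heart_attack": 3, "mi_acute": 2, "rash": 5}
print(pool_counts(raw, SYNONYMS))  # Counter({'cardiac_event': 9, 'rash': 5})
```

No individual cardiac term here occurs often enough to stand out, but the pooled count of nine might.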
The FDA, too, wants to move beyond what O'Neill calls the passive surveillance of the AERS database to active surveillance, in which it combs through electronic records held by healthcare providers such as the U.S. Department of Veterans Affairs and Kaiser Permanente. The agency is in the early stages of setting up what it calls the Sentinel Initiative to accomplish just that.
Still, O'Neill cautions that data-mining algorithms alone cannot prove that particular drugs cause particular side effects. They can provide clues that need to be checked in other ways. "Out of every 100 things you find, there may be 10 that are worth pursuing," O'Neill says. "How to separate the wheat from the chaff, that's the trick."
Russ Altman @ PMWC 2012: Data Mining EMR's, http://www.youtube.com/watch?v=XPBrCYaV050, Feb. 24, 2012.
Harpaz, R., DuMouchel, W., Shah, N.H., Ryan, P., and Friedman, C.
Novel data mining methodologies for adverse drug event discovery and analysis, Clinical Pharmacology and Therapeutics, May 2, 2012.
Hopstadius, J., and Noren, G.N.
Robust discovery of local patterns: subsets and stratification in adverse drug reaction surveillance, 2012 ACM SIGHIT International Health Informatics Symposium, Miami, FL, Jan. 28-30, 2012.
Lounkine, E., et al.
Large-scale prediction and testing of drug activity on side-effect targets, Nature 486, 7403, June 10, 2012.
Tatonetti, N.P., Ye, P.P., Daneshjou, R., and Altman, R.
Data-driven prediction of drug effects and interactions, Science Translational Medicine 4, 125, March 14, 2012.
©2012 ACM 0001-0782/12/10 $15.00
The Digital Library is published by the Association for Computing Machinery. Copyright © 2012 ACM, Inc.