
Looking Within

Amanda Randles on how modeling blood flows inside the human body can save, and improve, lives.


What does your blood look like as it circulates through your body? The answer depends not just on the structure of your cells, but also on the geometry of your vascular system. Medical images like Computed Tomography (CT) scans, Magnetic Resonance Images (MRIs), and angiograms (which show blood flow through arteries, veins, or the heart) can give you a snapshot, but they cannot predict what will happen in the future, or how that might affect your health.

2023 ACM Prize recipient Amanda Randles, a professor of Biomedical Sciences at Duke University’s Pratt School of Engineering, is working to change that. Her work focuses on using supercomputers to model the way blood flows through the human circulatory system. The goal: to give doctors the information they need to predict and prevent disease. Here, she talks about how the work has evolved—and where it’s headed in the future.

You led the development of HARVEY, a massively parallel circulatory simulator that produced the first simulation of the coronary arterial tree at the cellular level for an entire heartbeat. Can you describe how it works?

The process starts with a medical image. For CT scans and MRIs, we use commercial software to segment the image and create a triangulated mesh file that goes into HARVEY. For angiograms, which are two-dimensional images, we worked with a team in Denver to create software that reconstructs the 3D geometry from two separate images for use by HARVEY.

Once we have the triangulated mesh file, we run a blood flow simulation using a lattice Boltzmann method. This enables us to put a regular Cartesian grid across the file and answer questions like, “What grid points are inside and outside the mesh? What’s a fluid node? What’s a wall node? What’s an inlet and outlet?” We keep in memory only the grid points that fall inside the mesh, because it’s much more efficient.

At every single grid point, we solve the fluid dynamics equations, and then it’s basically a stencil application. Each grid point has a set number of neighbors, and at each time step, you communicate with those neighbors to see what part of the fluid has moved from this grid point to the other grid point. So, we do that for the fluid side, and then we can also capture fluid-structure interactions with red blood cells, cancer cells, and so on inside the geometry.
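The scheme she describes can be sketched in miniature. The following is an illustrative 2D lattice Boltzmann (D2Q9) toy, not HARVEY itself: each grid point holds nine “populations,” one per discrete velocity, and at every time step it exchanges them with a fixed set of neighbors (the stencil), with simple bounce-back at wall nodes. The block obstacle and all parameters are invented for illustration.

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities per grid point (the fixed set
# of neighbors each node communicates with at every time step).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)   # lattice weights
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])    # index of each reversed velocity

def equilibrium(rho, ux, uy):
    """Local equilibrium distribution, truncated to second order in velocity."""
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2
                                     - 1.5 * (ux**2 + uy**2))

def step(f, solid, tau=0.6):
    """One collide-and-stream update with bounce-back at wall nodes."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau        # BGK collision
    for i in range(9):                                  # streaming: move each
        f[i] = np.roll(f[i], shift=c[i], axis=(0, 1))   # population to a neighbor
    f[:, solid] = f[opp][:, solid]                      # bounce-back at walls
    return f

# Tiny periodic channel with a made-up solid block in the middle.
nx, ny = 32, 16
solid = np.zeros((nx, ny), dtype=bool)
solid[12:20, 6:10] = True
f = equilibrium(np.ones((nx, ny)), np.full((nx, ny), 0.05), np.zeros((nx, ny)))
for _ in range(100):
    f = step(f, solid)
```

Collision, streaming, and bounce-back each conserve mass exactly, which makes total density a useful sanity check when scaling a stencil code like this across many processors.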

It’s a resource-intensive process, but it enables researchers to do powerful things like create simulations and tools that help clinicians address issues like heart disease and cancer.

Data is the big problem now. We’re constantly out of storage. A lot of what we’re trying to enable is the identification of novel biomarkers that indicate, for instance, that someone is going to have a heart attack. But we don’t know what that marker is, so we don’t know what information we can throw away. If you’re trying to decide whether or not someone needs a stent, that’s based on fractional flow reserve, which is just a pressure measurement. You might not even need a 3D model—1D might be enough. But if you want to look at wall shear stress or any number of longer-term issues, suddenly you need much more data.
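The fractional flow reserve she mentions reduces to a simple pressure ratio. A toy calculation follows; the pressure values are invented, and the 0.80 cutoff is a commonly cited clinical threshold rather than something stated in the interview.

```python
# Fractional flow reserve (FFR): mean pressure distal to a narrowing
# divided by mean aortic pressure, measured (or simulated) under
# hyperemia. Values at or below ~0.80 are commonly taken to indicate a
# stent may be warranted. All numbers here are illustrative.
def fractional_flow_reserve(p_distal_mmhg, p_aortic_mmhg):
    return p_distal_mmhg / p_aortic_mmhg

ffr = fractional_flow_reserve(p_distal_mmhg=71.0, p_aortic_mmhg=95.0)
print(f"FFR = {ffr:.2f}; below stenting threshold: {ffr <= 0.80}")
```

Because the quantity is a single scalar ratio, a reduced-order 1D model can often supply it, which is exactly the point she makes about not always needing the full 3D simulation.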

HARVEY runs on supercomputers, and you’ve worked hard to optimize its design for each system’s capabilities. How has that task changed over the years?

When we built HARVEY, everything was CPU-based. Pretty quickly, we had things running on GPUs, and then in the last few years, we’ve started to care more about portability. We have a HIP version. We have a SYCL version. We’ve also been looking at Kokkos.

Also, as the speed of computing increases, we constantly have to reevaluate the trade-off of what we need to store versus what we can just compute again.
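That trade-off can be framed as a back-of-the-envelope cost comparison: storing a snapshot only pays off while reading it back is cheaper than regenerating it. All bandwidths and timings below are hypothetical.

```python
# Hypothetical store-vs-recompute cost model. As compute gets faster
# relative to I/O, the balance tips toward recomputing.
def should_store(bytes_per_snapshot, read_bw_bytes_per_s, recompute_seconds):
    read_seconds = bytes_per_snapshot / read_bw_bytes_per_s
    return read_seconds < recompute_seconds

# A 40 GB snapshot read back at 2 GB/s (20 s) vs. 5 s to recompute:
print(should_store(40e9, 2e9, recompute_seconds=5.0))   # recomputing wins
```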

You’ve also been working to develop cloud-based applications.

We’ve been doing a lot more work with the cloud, because not every clinic or hospital is going to own a supercomputer, let alone a DOE-scale supercomputer. So we’re trying to figure out whether we could develop new cloud-based algorithms, for instance, or whether we could split things into smaller chunks that could be run in the cloud.

Speaking of splitting things up, how do you and your colleagues in the Randles Lab manage your time in terms of pursuing new clinical and biomedical applications versus solving those more fundamental challenges of scalability and resource allocation?

We have a lot of different people in my lab, and four project group meetings that focus on computer science, biomedical applications, fluid structure interaction, and longitudinal hemodynamic mapping framework development and application. For the students, it’s a great opportunity, because they really are exposed to each piece. You may not be the person who’s writing MPI code, but you have to be able to debug it, because you need to know what’s going on. Increasingly, we have clinicians on student thesis committees to make sure we’re asking the right questions and can translate our research results back to that world. We’ve been very lucky in finding clinical collaborators who are engaged enough to learn about computational models and can meet us in the middle. One of our long-time collaborators at Brigham and Women’s Hospital, Jane Leopold, even learned LaTeX to be able to write papers with us.

You’ve also just helped launch a new interdisciplinary center, the Center for Computational Digital Health and Innovation.

A lot of what we’re creating in the center is the infrastructure and the tools to bridge data silos and connect people who are working on similar projects or using the same tools. The members come from my lab and the School of Medicine, and the idea is to bring people together, get them talking, and give them access to pilot funding and seminars and things like that.

Can you tell us about your work on wearables? I understand that you’ve gotten some really exciting results in terms of what you’ve been able to model and capture.

Wearables represent one of our major areas of focus. Before, we looked at diagnostics that could be improved based on single-heartbeat metrics. Now, we’re trying to figure out if we can drive flow simulations from wearables to see how things change over time. Can we get a picture of someone’s condition when they leave the hospital, monitor them remotely, and use that information to improve their treatment?

Of course, understanding what signals you even need to capture from a wearable to make a correct fluid simulation is still a challenge.

Are you designing your own devices or using off-the-shelf products?

We have some projects where we might need to make custom sensors, but in general, I’d like to use as many commercially available devices as possible, because it increases the usability.

So we’re using Apple Watches and Fitbits, and we’re trying to see how far we can get with non-clinical-grade measurements, because those small nuances may not even translate into a change in fluid flow. My students are doing a lot of uncertainty quantification to understand things like, if your heart rate is 55 versus 56, does that actually change your flow? How much do changes in a metric like cardiac output matter based on your geometry, your blood flow, and other patient-specific issues?
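A toy version of that uncertainty propagation, not the lab’s actual pipeline: treat the wearable’s heart-rate reading as noisy, push samples through a simple cardiac-output relation (cardiac output = heart rate × stroke volume), and see how much the output spreads. The stroke volume and noise level are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def cardiac_output_lpm(hr_bpm, stroke_volume_ml=70.0):
    """Cardiac output in liters/minute from heart rate and stroke volume."""
    return hr_bpm * stroke_volume_ml / 1000.0

# Pretend the wearable reports 55 bpm with +/- 2 bpm (1 s.d.) sensor noise.
hr_samples = rng.normal(loc=55.0, scale=2.0, size=10_000)
co = cardiac_output_lpm(hr_samples)
print(f"cardiac output = {co.mean():.2f} +/- {co.std():.2f} L/min")
```

In a real patient-specific model the mapping from heart rate to flow is nonlinear and geometry-dependent, which is why the Monte Carlo samples would be pushed through the full simulation rather than a one-line formula.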

That sounds like another interesting set of data challenges.

It took us decades to get to the point where we could model a single heartbeat—and that still requires an hour on a supercomputer. Extending to a six-month timeframe amplifies the need for new algorithms, data analytics, and machine learning, so there’s definitely a lot to do!


Looking Within

Biosensing takes medical diagnostics to a deeper level.

Understanding what is taking place inside the human body is at the heart of medicine. For centuries, scientists have looked for better ways to detect problems, conditions, and diseases ranging from high blood pressure to cancer.

An array of sensing devices—from thermometers and blood pressure cuffs to pulse oximeters—has made it easier to monitor vital signs and detect potential problems. Now, researchers are taking sensing to the molecular level using nanotechnology and synthetic biology: they are developing biosensors that boldly go where medicine has not gone before.

“It’s becoming possible to design sensors that deliver a specific piece of information. Biomedical diagnostics is expanding into many applications. Researchers are discovering ways to detect biomarkers even at a single-cell level,” says Gabe Kwong, an associate professor in the Department of Biomedical Engineering at the Georgia Institute of Technology (Georgia Tech).

These emerging systems will spot infections, detect cancerous tumors, and identify the presence of other chronic diseases, which should translate into earlier interventions and better treatments that save lives. “Living biosensors are a promising way to improve our diagnostics abilities,” says Robert Cooper, a staff research associate in the Biocircuits Institute at the University of California, San Diego (UCSD).

A New View

Using biomarkers to diagnose and treat conditions is not new. At a basic level, blood and urine tests offer insights into events taking place inside the human body. Yet, for all the progress that has taken place in medicine over the last century, there is still a long way to go regarding the diagnosis and treatment of many conditions.

Part of the problem is that detecting the presence of specific molecules or cells can be extraordinarily difficult. For example, a standard 10-milliliter blood draw may contain only a handful of cancer molecules, which could evade detection during a test. “It’s difficult to find the signal among the noise—particularly when the cancer is embedded in a complex organ like a liver or a lung,” Kwong says.
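The sampling problem Kwong describes can be made concrete with a back-of-the-envelope Poisson calculation. The cell counts and volumes below are invented for illustration only.

```python
import math

# If a 10 mL draw carries ~5 tumor cells (or biomarker molecules) on
# average and the assay processes only 1 mL, the count in the aliquot is
# roughly Poisson-distributed with mean 0.5, so the test frequently sees
# nothing at all.
def p_miss(cells_per_draw, draw_ml=10.0, assay_ml=1.0):
    mean_in_aliquot = cells_per_draw * assay_ml / draw_ml
    return math.exp(-mean_in_aliquot)   # Poisson probability of zero cells

print(f"{p_miss(5):.0%} chance the 1 mL aliquot contains no tumor cells")
```

Even this idealized model ignores the biological “noise” Kwong mentions; in practice, healthy cells shedding similar molecules make the detection problem harder still.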

This has implications for early detection, as well as for monitoring treatment. For instance, immune checkpoint blockade (ICB) drugs are now used to treat an array of cancers. Although they are far more effective at treating malignancies than conventional treatments, only about 25% of patients benefit from the drugs, which work by blocking proteins that prevent the immune system from attacking cancer cells. Worse, many of the gains are only temporary.

Consequently, researchers are focusing on developing a new class of biosensors and related treatment methods that function at a far more sophisticated level, using probiotics, synthetic biology, and specific gene editing tools such as CRISPR to design both organic and inorganic biomarkers. “It’s possible to engineer agents, such as bacteria, to accomplish specific tasks,” Cooper explains.

“Biosensors can detect subtle chemical reactions and produce readouts, whether it’s blood glucose level or a malignancy,” says Kwong. “The body sheds cells in blood and urine that can indicate the presence of cancer cells or other conditions. The goal is to detect these biomarkers using various instruments.”

Desperate Measures

Kwong is at the forefront of this revolution. He and a team of researchers at Georgia Tech have developed synthetic biosensors that detect whether ICB therapy is working through non-invasive urinalysis. The technique sidesteps the need for a painful biopsy or a computed tomography (CT) scan, which can produce inconclusive or misleading results. Instead, it relies on detecting a high presence of proteins that T cells emit after taking an ICB drug.

The nanotechnology utilizes biosensors that attach to the ICB drug. After an injection, they travel through the body to the tumor site, where proteases from both T cells and tumor cells trigger a signaling agent that is released into the urine. The resulting reading—in some cases aided by artificial intelligence and machine learning techniques that can spot patterns that evade humans—determines whether or not a patient is responding to the therapy.

“Synthetic biology allows us to custom-design sensors for highly specific needs,” Kwong says. What is more, it is possible to manipulate the size, shape, color, and other characteristics of materials to achieve different results—or measure different things. “As you shrink different materials, say to 10 or 100 nanometers, you see different emergent properties.” This may lead to changes in color or other characteristics that an analysis can identify.

Scientists at UCSD and in Australia also are pushing the boundaries of biosensing. They use a gene-swapping technique they have dubbed CATCH to identify the presence of colon cancers in live organisms.

“The prototype biosensors turn on antibiotic resistance as an output, which is fairly easy to detect,” Cooper says. “You spread a sample on a petri dish with the antibiotic and you count how many cells grow.” However, before clinical use, they plan to replace antibiotic resistance with a safer output signal, such as a fluorescent molecule detectable in urine.

So far, the group has tested the technique successfully on mice, and they hope to expand it to humans within a few years. “It also opens the possibility of more highly targeted treatment methods,” Cooper says.

Meanwhile, a group at Cornell University has developed silica-hybrid nanoparticles (‘C-dots’) that deliver both positron emission tomography (PET) and optical imaging contrast in the same platform using fluorescent materials. A group at Columbia University has engineered probiotic bacteria that colonize tumors, making them easier to detect. The team designs gene circuits that control the behavior of living cells to sense and respond to their environments in real time.

Sensing Success

The future of biosensing looks bright, though Kwong, Cooper, and others say that it will take several years for the technology to play out and become practical for humans. “There are a number of important problems to solve before these technologies can be tested on humans. It’s essential to know that these technologies work and that they are safe outside a controlled environment,” Cooper says.

Eventually, Kwong says, next-generation nanosensors and biosensing could be used to detect and treat a wide range of conditions and diseases. “The technology could transform early detection and preventative medicine,” he says. In addition, biosensing could revolutionize areas outside of medicine, including environmental tracking, agriculture, and food safety.

“We are approaching an era where it’s possible to engineer cells as sensors that deliver early information about various conditions and diseases,” Kwong concludes. “The result will be more informed decision making and healthier lives.”

Samuel Greengard is an author and journalist based in West Linn, OR, USA.
