What does your blood look like as it circulates through your body? The answer depends not just on the structure of your cells but also on the geometry of your vascular system. Medical images like Computed Tomography (CT) scans, Magnetic Resonance Images (MRIs), and angiograms (which show blood flow through arteries, veins, or the heart) can give you a snapshot, but they cannot predict what will happen in the future, or how that might impact your health.
2023 ACM Prize recipient Amanda Randles, a professor of Biomedical Sciences at Duke University’s Pratt School of Engineering, is working to change that. Her work focuses on using supercomputers to model the way blood flows through the human circulatory system. The goal: to give doctors the information they need to predict and prevent disease. Here, she talks about how the work has evolved—and where it’s headed in the future.
You led the development of HARVEY, a massively parallel circulatory simulator that produced the first simulation of the coronary arterial tree at the cellular level for an entire heartbeat. Can you describe how it works?
The process starts with a medical image. For CT scans and MRIs, we use commercial software to segment the image and create a triangulated mesh file that goes into HARVEY. For angiograms, which are two-dimensional images, we worked with a team in Denver to create software that reconstructs the 3D geometry from two separate images for use by HARVEY.
Once we have the triangulated mesh file, we run a blood flow simulation using a lattice Boltzmann method. This enables us to put a regular Cartesian grid across the file and answer questions like, “What grid points are inside and outside the mesh? What’s a fluid node? What’s a wall node? What’s an inlet and outlet?” We only keep in memory anything that’s in the mesh, because it’s much more efficient.
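As a rough illustration of that setup step, here is a minimal sketch, not HARVEY's actual code: it lays a regular Cartesian grid over a geometry, classifies each point as fluid or wall, and stores only the in-mesh points in a sparse map. The unit-sphere test is a stand-in for ray-casting against a real triangulated mesh.

```cpp
// Sketch only: classify Cartesian grid points against a geometry and keep
// only in-mesh nodes in memory (sparse storage), as described above.
#include <cstdint>
#include <unordered_map>

enum class Node : std::uint8_t { Fluid, Wall };

// Stand-in inside/outside test: a unit sphere instead of a patient-specific
// triangulated mesh (a real code would ray-cast against the surface).
bool insideMesh(double x, double y, double z) {
    return x * x + y * y + z * z < 1.0;
}

std::unordered_map<long, Node> classifyGrid(int n, double h) {
    std::unordered_map<long, Node> nodes;  // outside points are never stored
    for (int k = 0; k < n; ++k)
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                double x = i * h - 1.0, y = j * h - 1.0, z = k * h - 1.0;
                if (!insideMesh(x, y, z)) continue;
                // A fluid node with any outside neighbor is tagged as wall.
                bool nearWall =
                    !insideMesh(x + h, y, z) || !insideMesh(x - h, y, z) ||
                    !insideMesh(x, y + h, z) || !insideMesh(x, y - h, z) ||
                    !insideMesh(x, y, z + h) || !insideMesh(x, y, z - h);
                long id = (static_cast<long>(k) * n + j) * n + i;
                nodes[id] = nearWall ? Node::Wall : Node::Fluid;
            }
    return nodes;
}
```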
At every single grid point, we solve the fluid dynamics equations, and then it’s basically a stencil application. Each grid point has a set number of neighbors, and at each time step, you communicate with those neighbors to see what part of the fluid has moved from this grid point to the other grid point. So, we do that for the fluid side, and then we can also capture fluid-structure interactions, modeling structures like red blood cells and cancer cells inside the geometry.
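The streaming half of a lattice Boltzmann update is exactly that kind of stencil: each node hands one distribution component to each neighbor. A toy version on a dense periodic grid, using the widely used D3Q19 velocity set, might look like the following; this is an illustrative sketch, not HARVEY source. In a distributed run, neighbors that live on another rank become MPI halo exchanges, but the communication pattern is the same.

```cpp
#include <array>
#include <vector>

// D3Q19 discrete velocity set common in lattice Boltzmann codes:
// rest vector, 6 axis-aligned vectors, and 12 diagonal vectors.
constexpr int Q = 19;
constexpr std::array<std::array<int, 3>, Q> c = {{
    {0,0,0},
    {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1},
    {1,1,0}, {-1,-1,0}, {1,-1,0}, {-1,1,0},
    {1,0,1}, {-1,0,-1}, {1,0,-1}, {-1,0,1},
    {0,1,1}, {0,-1,-1}, {0,1,-1}, {0,-1,1}
}};

// One streaming step: every node pushes distribution component q to the
// neighbor one lattice spacing away in direction c[q]. Periodic wrap is
// used here for brevity; a real code applies boundary conditions instead.
void stream(const std::vector<double>& fOld, std::vector<double>& fNew,
            int nx, int ny, int nz) {
    auto idx = [&](int i, int j, int k, int q) {
        return ((static_cast<long>(k) * ny + j) * nx + i) * Q + q;
    };
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i)
                for (int q = 0; q < Q; ++q) {
                    int ii = (i + c[q][0] + nx) % nx;
                    int jj = (j + c[q][1] + ny) % ny;
                    int kk = (k + c[q][2] + nz) % nz;
                    fNew[idx(ii, jj, kk, q)] = fOld[idx(i, j, k, q)];
                }
}
```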
It’s a resource-intensive process, but it enables researchers to do powerful things like create simulations and tools that help clinicians address issues like heart disease and cancer.
Data is the big problem now. We’re constantly out of storage. A lot of what we’re trying to enable is the identification of novel biomarkers that indicate, for instance, that someone is going to have a heart attack. But we don’t know what that marker is, so we don’t know what information we can throw away. If you’re trying to decide whether or not someone needs a stent, that’s based on fractional flow reserve, which is just a pressure measurement. You might not even need a 3D model—1D might be enough. But if you want to look at wall shear stress or any number of longer-term issues, suddenly you need much more data.
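For context, fractional flow reserve really does reduce to a ratio of two pressures, which is why a full 3D model can be overkill for that one question. A minimal sketch:

```cpp
// Fractional flow reserve: pressure distal to a narrowing divided by
// aortic pressure, measured under hyperemia. Values at or below roughly
// 0.80 are a commonly cited threshold when weighing a stent.
double fractionalFlowReserve(double pDistal, double pAortic) {
    return pDistal / pAortic;
}
```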
HARVEY runs on supercomputers, and you’ve worked hard to optimize its design for each system’s capabilities. How has that task changed over the years?
When we built HARVEY, everything was CPU-based. Pretty quickly, we had things running on GPUs, and then in the last few years, we’ve started to care more about portability. We have a HIP version. We have a SYCL version. We’ve also been looking at Kokkos.
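To give a flavor of what a portability layer buys you, here is a minimal Kokkos sketch; it is illustrative only, not HARVEY code, and the halving loop is a stand-in for real per-node work. The same loop body compiles to CPU threads, CUDA, or HIP depending on the configured backend.

```cpp
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1'000'000;
        Kokkos::View<double*> u("velocity", n);  // allocated in device memory
        // One portable kernel launch; the backend decides where it runs.
        Kokkos::parallel_for("relax", n, KOKKOS_LAMBDA(const int i) {
            u(i) = 0.5 * u(i);                   // stand-in for a collision step
        });
        Kokkos::fence();
    }
    Kokkos::finalize();
}
```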
Also, as the speed of computing increases, we constantly have to reevaluate the trade-off of what we need to store versus what we can just compute again.
You’ve also been working to develop cloud-based applications.
We’ve been doing a lot more work with the cloud, because not every clinic or hospital is going to own a supercomputer, let alone a DOE-scale supercomputer. So we’re trying to figure out whether we could develop new cloud-based algorithms, for instance, or whether we could split things into smaller chunks that could be run in the cloud.
Speaking of splitting things up, how do you and your colleagues in the Randles Lab manage your time in terms of pursuing new clinical and biomedical applications versus solving those more fundamental challenges of scalability and resource allocation?
We have a lot of different people in my lab, and four project group meetings that focus on computer science, biomedical applications, fluid structure interaction, and longitudinal hemodynamic mapping framework development and application. For the students, it’s a great opportunity, because they really are exposed to each piece. You may not be the person who’s writing MPI code, but you have to be able to debug it, because you need to know what’s going on. Increasingly, we have clinicians on student thesis committees to make sure we’re asking the right questions and can translate our research results back to that world. We’ve been very lucky in finding clinical collaborators who are engaged enough to learn about computational models and can meet us in the middle. One of our long-time collaborators at Brigham and Women’s Hospital, Jane Leopold, even learned LaTeX to be able to write papers with us.
You’ve also just helped launch a new interdisciplinary center, the Center for Computational Digital Health and Innovation.
A lot of what we’re creating in the center is the infrastructure and the tools to bridge data silos and connect people who are working on similar projects or using the same tools. The members come from my lab and the School of Medicine, and the idea is to bring people together, get them talking, and give them access to pilot funding and seminars and things like that.
Can you tell us about your work on wearables? I understand that you’ve gotten some really exciting results in terms of what you’ve been able to model and capture.
Wearables represent one of our major areas of focus. Before, we looked at diagnostics that could be improved based on single-heartbeat metrics. Now, we’re trying to figure out if we can drive flow simulations from wearables to see how things change over time. Can we get a picture of someone’s condition when they leave the hospital, monitor them remotely, and use that information to improve their treatment?
Of course, understanding what signals you even need to capture from a wearable to make a correct fluid simulation is still a challenge.
Are you designing your own devices or using off-the-shelf products?
We have some projects where we might need to make custom sensors, but in general, I’d like to use as many commercially available devices as possible, because it increases usability.
So we’re using Apple Watches and Fitbits, and we’re trying to see how far we can get with non-clinical-grade measurements, because those small inaccuracies may not even translate into a change in fluid flow. My students are doing a lot of uncertainty quantification to understand things like, if your heart rate is 55 versus 56, does that actually change your flow? How much do changes in a metric like cardiac output matter based on your geometry, your blood flow, and other patient-specific issues?
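A toy version of that kind of uncertainty propagation, not the lab's actual pipeline, might sample an assumed one-beat-per-minute sensor error and push it through the textbook relation cardiac output = heart rate × stroke volume to see how much the output quantity spreads:

```cpp
#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(42);
    // Assumed wearable reading: 55 beats/min with +/-1 beat/min sensor noise.
    std::normal_distribution<double> heartRate(55.0, 1.0);
    const double strokeVolume = 70.0;  // mL/beat, nominal textbook value

    const int samples = 100000;
    double sum = 0.0, sumSq = 0.0;
    for (int s = 0; s < samples; ++s) {
        // Cardiac output = heart rate * stroke volume, converted to L/min.
        double co = heartRate(rng) * strokeVolume / 1000.0;
        sum += co;
        sumSq += co * co;
    }
    double mean = sum / samples;
    double sd = std::sqrt(sumSq / samples - mean * mean);
    std::cout << "cardiac output ~ " << mean << " +/- " << sd << " L/min\n";
}
```

In a real study the model between the wearable signal and the flow quantity is the expensive simulation itself, which is exactly why knowing whether a one-beat difference matters is worth quantifying before burning supercomputer hours.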
That sounds like another interesting set of data challenges.
It took us decades to get to the point where we could model a single heartbeat—and that still requires an hour on a supercomputer. Extending to a six-month timeframe amplifies the need for new algorithms, data analytics, and machine learning, so there’s definitely a lot to do!