
Communications of the ACM

ACM Careers

Deep Learning Helps Argonne Create Cancer Research Models


Deep neural networks can be used to model a complex nonlinear relationship between the properties of drugs and tumors in order to predict treatment response, the researchers write.

Argonne researchers have created a neural architecture search that automates the development of deep-learning-based predictive models for cancer data.

While growing volumes of collected data and ever-larger scales of computing power are helping to improve the understanding of cancer, further development of data-driven methods for the disease's diagnosis, detection, and prognosis is needed. There is a particular need for deep-learning methods — that is, machine learning algorithms capable of extracting scientific insight from unstructured data.

Researchers from the U.S. Department of Energy's Argonne National Laboratory have made strides toward accelerating such efforts by presenting a method for the automated generation of neural networks.

As detailed in "Scalable Reinforcement-Learning-Based Neural Architecture Search for Cancer Deep Learning Research," to be presented at the SC19 supercomputing conference, the researchers used resources from the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, to establish a neural architecture search (NAS) that, for a class of representative cancer data, automates the development of deep-learning-based predictive models.

"What we are doing in this work is, instead of designing one neural network, we design another neural network that constructs more neural nets and can train them," says Prasanna Balaprakash, Argonne computer scientist and the lead author of the paper. "Based on the accuracy that is achieved, this neural network slowly learns how to build better neural networks."

In other words, it's artificial intelligence for artificial intelligence, he says.

The NAS identifies deep neural network architectures with fewer trainable parameters, shorter training time, and accuracy matching or surpassing that of their manually designed counterparts.
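The reinforcement-learning loop Balaprakash describes — a network that learns, from the accuracy each candidate achieves, how to build better networks — can be sketched in miniature. The toy search below is purely illustrative (the search space, the surrogate reward, and all names are this article's assumptions, not the paper's method): a "controller" keeps a preference weight for each architectural choice, samples candidate architectures, and reinforces the choices that score well under a reward that favors accuracy while penalizing parameter count, mirroring the paper's goal of accurate-yet-small models.

```python
import math
import random

# Toy RL-style neural architecture search (illustrative only).
SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
}

def reward(arch):
    # Stand-in for "train the candidate network and measure accuracy":
    # a made-up score that rises with capacity but penalizes parameter
    # count, so small accurate architectures win.
    params = arch["layers"] * arch["units"] ** 2
    accuracy = 1.0 - 1.0 / (1 + arch["layers"] * arch["units"])
    return accuracy - 1e-5 * params

def search(steps=500, lr=0.1, seed=0):
    rng = random.Random(seed)
    # One preference weight per option, per architectural decision.
    prefs = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}
    best_arch, best_r = None, float("-inf")
    for _ in range(steps):
        # Sample one option per decision, biased by learned preferences.
        idx = {k: rng.choices(range(len(v)),
                              weights=[math.exp(p) for p in prefs[k]])[0]
               for k, v in SEARCH_SPACE.items()}
        arch = {k: SEARCH_SPACE[k][i] for k, i in idx.items()}
        r = reward(arch)
        if r > best_r:
            best_arch, best_r = arch, r
        # REINFORCE-style update: push preferences toward high-reward picks.
        for k, i in idx.items():
            prefs[k][i] += lr * r
    return best_arch

print(search())
```

In this toy setting the controller converges on a small architecture because the reward explicitly trades parameter count against accuracy — the same trade-off the Argonne NAS discovers automatically at scale.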

The applications of deep neural networks extend far beyond simple classification. Complex tasks for neural networks used in cancer research might include determining how different combinations of drugs interact with each other and their impact on tumor cells. But constructing these networks to achieve such complexity can prove a cumbersome and time-consuming task, one often designed through trial and error.
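As a concrete, if greatly simplified, illustration of the drug-combination question, pharmacology has classical baselines against which interactions are judged; one is Bliss independence (a standard textbook model, not the researchers' neural approach). If two drugs act independently, their expected combined effect follows directly from their individual effects, and measured deviations from that expectation indicate synergy or antagonism — the kind of nonlinear interaction a deep network would be trained to capture.

```python
def bliss_expected(effect_a, effect_b):
    """Expected combined effect of two drugs under Bliss independence.

    Effects are fractional responses in [0, 1] (e.g., fraction of tumor
    cells killed).  A measured combination effect above this baseline
    suggests synergy; below it, antagonism.
    """
    return effect_a + effect_b - effect_a * effect_b

# Two drugs that each kill 40% of cells are expected to kill 64%
# together if they act independently: 0.4 + 0.4 - 0.16 = 0.64.
print(bliss_expected(0.4, 0.4))  # 0.64
```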

Moreover, cancer research and discovery draw on numerous and diverse types of datasets, many of which are unstructured and contain neither images nor text. Against this multitude, the manual design of neural networks threatens to become a bottleneck, as researchers invest considerable time and energy matching a given dataset with the right type of neural network to construct predictive models.

"If we automate this coupling process," Balaprakash says, "scientists will have the ability to build a model in a matter of hours as opposed to spending weeks working on manual development. Building a model will then no longer risk becoming a bottleneck, and the faster build rate means that we can devote more time to pushing scientific boundaries with predictive modeling and so on."

The differences between automated and manual design are immediately striking. Whereas the manually designed model featured some 10 million parameters, the machine-designed model achieved the same degree of accuracy with just one million. A model with fewer parameters is typically more robust than a many-parameter model and has the added advantage of requiring less training data.
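Parameter counts like those quoted above are easy to reason about for fully connected networks: each layer contributes a weight matrix plus a bias vector, so the count grows with the product of adjacent layer widths. A minimal helper (illustrative; the layer sizes shown are not from the paper) makes the arithmetic explicit:

```python
def dense_param_count(layer_sizes):
    # Trainable parameters in a fully connected stack: for each
    # consecutive pair of layers, a weight matrix (n_in * n_out)
    # plus a bias vector (n_out).
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Example: a 100 -> 50 -> 10 network.
# (100*50 + 50) + (50*10 + 10) = 5050 + 510 = 5560 parameters.
print(dense_param_count([100, 50, 10]))  # 5560
```

Because training cost scales with parameter count, a tenfold reduction in parameters translates directly into faster training and lower data requirements, as the quoted results reflect.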

"Using the ALCF's Theta system, we showed that with automated design, training improved by a factor between three and four, which means that we can train this model four times faster than we could this bulkier model that has far more parameters," Balaprakash says. "That we can train it faster means that we can build models faster; this, in turn, means that we can push the drug-discovery pipeline faster."

Indeed, NAS will soon begin to produce impacts in the domain of cancer research.

"Neural architecture search stands to dramatically accelerate our ability to build robust machine learning models," says Rick Stevens, Argonne's Associate Laboratory Director for Computing, Environment and Life Sciences, and a co-author of the paper. "We plan to soon integrate this technology in our exascale-optimized CANDLE [Cancer Distributed Learning Environment] workflow framework and are eager to see these results."

The CANDLE framework enables billions of virtual drugs to be screened individually and in numerous combinations while predicting their effects on tumor cells.

The paper's broader ramifications for data science could prove to be highly consequential. Accordingly, plans for the developed NAS are not limited to cancer studies. The researchers are already looking to apply their method to other scientific disciplines, including weather forecasting, fluid dynamics, materials science, and chemistry.

Weather forecasting is accomplished via computationally expensive simulations — running on manually designed models — that account for a vast array of parameters. The researchers, however, harnessed their NAS to generate a weather forecasting model roughly one-tenth the size of its human-built counterpart, with a corresponding tenfold speedup.

Materials science and chemistry, meanwhile, stand to benefit from NAS due to their reliance on unstructured data wherein molecules and other constituents are represented by graphs.

With its implementation in the CANDLE workflow framework imminent and the exascale era looming, it is only a matter of time before the effects of NAS are felt.

"NAS and hyperparameter optimization will be vital to productive data-driven scientific discovery and a key workload on current and future supercomputers, including Argonne's forthcoming Aurora exascale system," says Argonne computer scientist and ALCF Data Science Group Lead Venkat Vishwanath, a co-author of the paper. "The lab is helping to pave the way for these methods through the development of DeepHyper, a scalable package for NAS and hyperparameter optimization, and the Balsam workflow service, which enables effective execution of DeepHyper on massively parallel systems."

Other co-authors of the SC19 paper are Romain Egele, Misha Salim, Stefan Wild, Fangfang Xia, and Tom Brettin.

This work was supported by the U.S. DOE Office of Science.
