
Can AI Learn to Forget?

Specialized techniques may make it possible to induce selective 'amnesia' in machine learning models.
[Illustration: broom sweeps binary characters]

Machine learning has emerged as a valuable tool for spotting patterns and trends that might otherwise escape humans. The technology, which can build elaborate models based on everything from personal preferences to facial images, is used widely to understand behavior and make informed predictions.

Yet for all the gains, there is also plenty of pain. A major problem associated with machine learning is that once an algorithm or model exists, expunging individual records or chunks of data is extraordinarily difficult. In most cases, it is necessary to retrain the entire model—sometimes with no assurance that the retrained model will not continue to incorporate the suspect data in some way, says Gautam Kamath, an assistant professor in the David R. Cheriton School of Computer Science at the University of Waterloo in Canada.
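
To make that baseline concrete, the following is a minimal sketch in Python of the brute-force approach described above: drop the offending records and retrain the whole model from scratch, paying the full training cost on every deletion request. The dataset, the logistic-regression model, and the record IDs are purely illustrative assumptions.

```python
# Minimal sketch of the brute-force baseline: to "forget" records,
# drop them from the training set and retrain the whole model.
# The dataset, model choice, and ID scheme are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_without(X, y, record_ids, ids_to_forget):
    """Return a fresh model trained on everything except ids_to_forget."""
    keep = ~np.isin(record_ids, list(ids_to_forget))
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])   # full retraining: cost scales with the whole dataset
    return model

# Usage: every deletion request pays the full training cost again.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
ids = np.arange(1000)
model = retrain_without(X, y, ids, ids_to_forget={17, 404})
```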

The data in question may originate from system logs, images, health records, social media sites, customer relationship management (CRM) systems, legacy databases, and myriad other places. As right-to-be-forgotten mandates appear, fueled by the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), organizations find themselves coping with potential minefields, including significant compliance penalties.

Not surprisingly, completely retraining models is an expensive and time-consuming process, one that may or may not address the underlying problem of making sensitive data disappear or become completely untraceable. What’s more, there frequently is no way to demonstrate that the retrained model has been fully corrected, or that it remains entirely accurate and valid.

Enter machine unlearning. Using specialized techniques—including slicing databases into smaller chunks and adapting algorithms—it may be possible to induce selective ‘amnesia’ in machine learning models. The field is only beginning to take shape. “The goal is to find a way to rebuild models on the fly, rather than having to build an entirely new model every time the data changes,” says Aaron Roth, a professor of computer and information science at the University of Pennsylvania.


Breaking the Model

What makes machine learning so appealing is its ability to slice through myriad data points and spot complex relationships that frequently extend beyond human cognition. However, once a model exists, altering or deconstructing it can prove daunting, if not impossible, because there typically is no way to know where a specific data point resides within a model, or how it directly impacts the model.

“In many cases, particularly when a person or situation is an outlier, the model will likely memorize a particular piece of data because it doesn’t have enough examples of the data to otherwise make a prediction,” says Nicolas Papernot, an assistant professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto.

Because there is no way to apply selective amnesia, data scientists must typically retrain and rebuild models from scratch every time there is a need to remove a data element. Not surprisingly, the process can be long, complex, and potentially expensive—and it is likely to be repeated every time an error appears or a right-to-be-forgotten request arrives. “Today, there’s no simple and straightforward way to simply remove the individual data but leave the algorithm intact,” Papernot says.

In addition, today’s data privacy tools do not solve the underlying problem. For example, federated learning, an artificial intelligence (AI) technique, trains algorithms across multiple edge devices or servers holding local data samples. This can keep sensitive data from winding up in a central database, but it cannot do anything to remove data a model has already absorbed. Data tokenization substitutes sensitive data elements with surrogate values that carry no meaning on their own, but it runs into the same problem. What is more, data anonymization tools often strip out elements necessary to train models, or they introduce noise that can distort the training process. As Roth puts it, “Privacy techniques and data deletion don’t necessarily arrive at the same place.”
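
As a rough illustration of the tokenization point, the toy sketch below swaps sensitive values for opaque tokens before any model sees them. The field names, token format, and vault class are hypothetical; the point is that the mapping still has to live somewhere, and deleting a vault entry does not undo anything a model has already learned.

```python
# Toy illustration of data tokenization: replace sensitive values with
# opaque tokens before training. The token vault keeps the mapping, so
# deleting a vault entry does not remove what a trained model has learned.
# Field names and the record below are hypothetical.
import secrets

class TokenVault:
    def __init__(self):
        self._forward = {}   # sensitive value -> token
        self._reverse = {}   # token -> sensitive value

    def tokenize(self, value: str) -> str:
        if value not in self._forward:
            token = "tok_" + secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

vault = TokenVault()
record = {"name": "Jane Doe", "email": "jane@example.com", "visits": 12}
training_row = {k: vault.tokenize(v) if k in ("name", "email") else v
                for k, v in record.items()}
print(training_row)   # sensitive fields replaced; 'visits' left intact
```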

Differential privacy, which limits how much any single record can influence a model by adding carefully calibrated noise, is also insufficient for solving the problem of unlearning, Roth says. It can provide guarantees in a single case, or a handful of cases, in which someone requests removal from a database, even without any retraining. However, as a growing sequence of deletion requests arrives, those guarantees quickly unravel. “Slowly and surely, as more people ask for their data to be removed, even [models that incorporate privacy protections] quickly start looking different from what would have resulted from retraining,” he says.
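
A small numerical sketch, under simplifying assumptions (a bounded-value mean, an illustrative epsilon, and random data), conveys the intuition behind Roth’s point: a statistic released once with Laplace noise still looks plausible after a few records are deleted, but drifts visibly away from what honest recomputation would give as deletions accumulate.

```python
# Sketch of the point above, under simplifying assumptions: publish a
# differentially private mean once, then compare it with what honest
# recomputation would give as more records are deleted.
# Epsilon, the data, and the deletion order are all illustrative.
import numpy as np

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=10_000)   # values bounded in [0, 1]
epsilon = 0.5
sensitivity = 1.0 / len(data)               # max influence of one record on the mean

# Laplace mechanism: the published statistic already "hides" any one record.
released_mean = data.mean() + rng.laplace(scale=sensitivity / epsilon)
noise_scale = sensitivity / epsilon

for n_deleted in (1, 10, 100, 1000):
    true_mean_after = data[n_deleted:].mean()   # mean if we honestly recomputed
    drift = abs(released_mean - true_mean_after)
    print(f"deleted={n_deleted:5d}  drift={drift:.5f}  noise scale={noise_scale:.5f}")
# For a handful of deletions the drift is buried in the noise; after many,
# the stale released value no longer looks like the retrained answer.
```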

The inability to validate specific data removal within a model that uses anonymization and differential privacy techniques is more than a theoretical problem; it has serious consequences. Security researchers have repeatedly demonstrated an ability to extract sensitive data from supposedly generalized algorithms and models, says Kamath. One high-profile example occurred in 2020, when a group of researchers found that the large language model GPT-2 could be manipulated into reproducing portions of its training data, including personally identifiable information and copyrighted text.a


Selective Memory

Amid shifting attitudes, social values, and privacy laws, there is a growing recognition that more advanced methods for machine unlearning are needed. Yet researchers continue to struggle with a few key barriers, including understanding how each data point impacts a machine learning model and how randomness, also referred to as stochasticity, affects the training process. In some cases, relatively minor changes in data input generate significantly different results—or raise questions about the basic validity of a machine learning model.


One method that has garnered considerable attention appeared in 2019, when Papernot and a group of researchers at the University of Toronto and the University of Wisconsin-Madison presented the idea of segregating machine learning data into multiple discrete components. By establishing numerous chunks of data—think of them as mini-databases that contribute to the larger database—it is possible to conduct retraining only on the specific component affected by a removal, and then plug that component back into the full model. This produces a fully functional machine learning model again.

The group called the method Sharded, Isolated, Sliced, and Aggregated (SISA). It argued that the framework could be used with minimal changes to existing machine learning pipelines. “First, we divide the training data into multiple disjoint shards such that a training point is included in one shard only; shards partition the data,” the authors noted. “Then, we train models in isolation on each of these shards, which limits the influence of a point to the model that was trained on the shard containing the point.” After the shards are combined, it’s possible to successfully remove data elements. “When a request to unlearn a training point arrives, we need to retrain only the affected model. Since shards are smaller than the entire training set, this decreases the retraining time to achieve unlearning,” they said.
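
The sketch below illustrates that sharding-and-aggregation idea in a few dozen lines of Python. The dataset, the decision-tree base learner, and majority voting are illustrative choices rather than the paper’s exact setup, and the real framework also slices and checkpoints within each shard.

```python
# Hedged sketch of the sharding idea described above: disjoint shards, one
# model per shard, majority-vote aggregation, and unlearning by retraining
# only the shard that held the deleted point. Dataset and base model are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class ShardedEnsemble:
    def __init__(self, n_shards=5, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)
        self.shards = {}   # shard id -> list of global training indices
        self.models = {}   # shard id -> model trained only on that shard

    def fit(self, X, y):
        self.X, self.y = X, y
        indices = self.rng.permutation(len(X))
        for s, idx in enumerate(np.array_split(indices, self.n_shards)):
            self.shards[s] = list(idx)
            self._train_shard(s)

    def _train_shard(self, s):
        idx = self.shards[s]
        model = DecisionTreeClassifier(random_state=0)
        model.fit(self.X[idx], self.y[idx])
        self.models[s] = model

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models.values()])
        # Majority vote across the per-shard models.
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

    def unlearn(self, index):
        # Only the shard containing `index` is retrained; the rest stay as-is.
        for s, idx in self.shards.items():
            if index in idx:
                idx.remove(index)
                self._train_shard(s)
                return s

# Usage: forget training point 123 by retraining one shard, not the ensemble.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 8))
y = (X[:, :2].sum(axis=1) > 0).astype(int)
ens = ShardedEnsemble(n_shards=5)
ens.fit(X, y)
retrained_shard = ens.unlearn(123)
```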

The research group tested the SISA framework on more than a million images, and found that the technique worked. Typical speed improvements ranged from 2.45x to 4.63x for unlearning tasks. What is more, “The method reduces training even when changes are requested across the training set. It introduces a more practical approach to dealing with the problem,” Papernot explains. Most importantly, “You can demonstrate to the user that the unlearned model is what you could have obtained in the first place had you never learned about the user data.” The group also proposed model checkpointing, in which the learner builds and stores dozens or even hundreds of discrete models, with certain data points excluded.
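
A companion sketch, assuming a base model that supports incremental training (scikit-learn’s SGDClassifier here, purely for illustration), shows how checkpoints taken as training proceeds through slices of data let the learner replay from just before the affected slice rather than starting over.

```python
# Illustrative sketch of checkpointing, assuming an incrementally trainable
# model: save a copy of the model before each slice, so a deletion in slice k
# only requires replaying slices k onward from the stored checkpoint.
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_with_checkpoints(slices, classes):
    model = SGDClassifier(random_state=0)
    checkpoints = []                       # checkpoints[k] = model state before slice k
    for Xs, ys in slices:
        checkpoints.append(copy.deepcopy(model))
        model.partial_fit(Xs, ys, classes=classes)
    return model, checkpoints

def unlearn_from_slice(slices, checkpoints, k, drop_idx, classes):
    """Remove row drop_idx from slice k, then replay from the saved checkpoint.
    (Checkpoints after slice k become stale and would be refreshed in practice.)"""
    Xs, ys = slices[k]
    slices[k] = (np.delete(Xs, drop_idx, axis=0), np.delete(ys, drop_idx))
    model = copy.deepcopy(checkpoints[k])
    for Xs, ys in slices[k:]:
        model.partial_fit(Xs, ys, classes=classes)
    return model

# Usage with hypothetical data split into three slices.
rng = np.random.default_rng(3)
X = rng.normal(size=(600, 4))
y = (X[:, 0] > 0).astype(int)
slices = [(X[i:i + 200], y[i:i + 200]) for i in range(0, 600, 200)]
classes = np.array([0, 1])
model, ckpts = train_with_checkpoints(slices, classes)
model = unlearn_from_slice(slices, ckpts, k=1, drop_idx=7, classes=classes)
```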

While the concept is promising, it has limitations, the authors admit. For example, because each shard holds less data, the models trained on it see fewer examples, and a lower-quality outcome is likely. In addition, the technique does not always work as billed.

When a group of researchers at Stanford University, Harvard University, and the University of Pennsylvania examined the approach, they discovered that, under certain conditions, particular sequences of data removal requests caused the framework’s deletion guarantees to fail. That is because the SISA researchers assumed deletion requests were made independently of the actual machine learning models. “But this will not be the case if, for example, people delete their data in response to what the models reveal about them,” Roth says. “When this happens, we have a concrete demonstration that the deletion guarantees of previous work fail.”

Roth, who was a member of this research team, says that while the approach does not always work as is (his group ultimately found a fix for the deletion problem), it is among a growing arsenal of machine unlearning techniques.

Meanwhile, the Stanford, Harvard, and Penn researchers have also explored the idea of developing data deletion algorithms directly linked to machine learning algorithms—with specific characteristics designed entirely for maintaining data integrity and the validity of the overall model.b


Rethinking Machine Learning

For now, machine unlearning remains in the nascent stages. However, as researchers and data scientists gain insight into how removing data impacts an overall model, real-world tools to manage the task should begin to appear, Papernot says. The goal is to produce machine learning frameworks and algorithms that allow data scientists to delete a record or individual data point and wind up with a valid model that has completely unlearned the data in question.


Says Papernot: “Right now, we’re simply reacting to a problem and taking a post hoc perspective. … We want to get to the point where we have confidence the model is accurate without the data ever being inserted.”

Further Reading

Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C.A., Jia, H., Travers, A., Zhang, B., Lie, D., and Papernot, N.
Machine Unlearning, 42nd IEEE Symposium on Security and Privacy, December 2020. https://arxiv.org/pdf/1912.03817.pdf

Carlini, N., Tramèr, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T., Song, D., Erlingsson, Ú., Oprea, A., and Raffel, C.
Extracting Training Data from Large Language Models, June 15, 2021. https://arxiv.org/pdf/2012.07805.pdf

Sekhari, A., Acharya, J., Kamath, G., and Suresh, A.T.
Remember What You Want to Forget: Algorithms for Machine Unlearning, July 22, 2021. https://arxiv.org/pdf/2103.03279.pdf

Gupta, V., Jung, C., Neel, S., Roth, A., Sharifi-Malvajerdi, S., and Waites, C.
Adaptive Machine Unlearning, June 9, 2021. https://arxiv.org/pdf/2106.04378.pdf

Prabhu, V.U. and Birhane, A.
Large Image Datasets: A Pyrrhic Win for Computer Vision? July 27, 2020. https://arxiv.org/pdf/2006.16923.pdf
