
Communications of the ACM

ACM News

AI Helps Craft New Beers


Artificial intelligence could help brewers predict the attributes of a beer from a new recipe before it is actually produced.

Credit: Workhorse Brewing

Creating new beers has typically relied solely on human expertise, but artificial intelligence (AI) could soon be harnessed to assist in beer development.

"(AI) can help people to be more creative," says Marc Bravin, a researcher at Lucerne University of Applied Sciences and Arts in Switzerland. "Most people have a (specific) mindset and don't think about other ingredients that they could use."

AI could also help brewers predict the attributes of a beer from a new recipe before it is actually produced. Its ultimate color, alcohol content, and bitterness, for example, could be gleaned from the choice of ingredients. "If it's not as bitter as you would like, you can make adjustments before you start," says Ellyn Ayton, a data scientist at the Pacific Northwest National Laboratory in Richland, WA.

Machine learning techniques are being investigated to help create novel beers. In recent work, Ayton and her colleagues aimed to model the relationship between a beer recipe and various attributes. In one trial, they focused on beer type, to see if their system could classify a recipe as an ale, lager, or wheat beer. In another task, their model aimed to categorize beers into one of 81 specific types, such as American IPA (India Pale Ale) or Dry Stout. In a third experiment, the model tried to predict ranges for 10 different attributes, such as bitterness units or color.

The team used two different deep learning models to compare the predictions generated. The first, called a Deep Neural Network (DNN), is simple and widely used, and learns the features of individual ingredients in a recipe. The second, called Long Short-Term Memory (LSTM), is more complex, as it learns from sequences of ingredients. "(LSTM models) work really well with text data that is sequential like a recipe would be or like a sentence, where you read it from left to right," says Ayton.
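The distinction between the two models comes down to how each one sees a recipe. A minimal sketch, with an invented five-ingredient vocabulary (not the researchers' actual features): a DNN typically consumes an order-free "bag of ingredients" count vector, while an LSTM reads the ingredients as an ordered sequence of tokens.

```python
# Hypothetical ingredient vocabulary for illustration only.
VOCAB = ["pale_malt", "crystal_malt", "cascade_hops", "saaz_hops", "ale_yeast"]

def bag_of_ingredients(recipe):
    """DNN-style input: a fixed-length count vector; ordering is lost."""
    return [recipe.count(ing) for ing in VOCAB]

def as_sequence(recipe):
    """LSTM-style input: a list of integer token ids; ordering is preserved."""
    return [VOCAB.index(ing) for ing in recipe]

ipa = ["pale_malt", "crystal_malt", "cascade_hops", "cascade_hops", "ale_yeast"]
reordered = ["cascade_hops", "ale_yeast", "pale_malt", "cascade_hops", "crystal_malt"]

# The bag representation cannot tell the two orderings apart...
assert bag_of_ingredients(ipa) == bag_of_ingredients(reordered)
# ...but the sequence representation can, which is what an LSTM exploits.
assert as_sequence(ipa) != as_sequence(reordered)
```

This is why LSTMs suit recipe data: the order in which ingredients appear carries information that a bag-of-ingredients vector throws away.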

A collection of over 200,000 beer recipes shared by homebrewers on a publicly available website was used as data for the experiments. The two deep learning models were trained with 70% of the recipes, while the rest was retained for testing purposes afterwards.
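The 70/30 split described above can be sketched with the standard library alone; the recipe entries below are placeholders, not the actual dataset.

```python
import random

# Stand-ins for the ~200,000 scraped homebrew recipes.
recipes = [f"recipe_{i}" for i in range(200_000)]

rng = random.Random(42)        # fixed seed so the split is reproducible
rng.shuffle(recipes)           # shuffle before splitting to avoid ordering bias

cut = int(len(recipes) * 0.7)  # 70% for training
train, test = recipes[:cut], recipes[cut:]

assert len(train) == 140_000 and len(test) == 60_000
```

Holding the 30% back means the models are scored on recipes they never saw during training.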

Ayton and her colleagues found the LSTM model performed best in all three tasks. In the complex classification task with 81 beer types, it demonstrated accuracy of about 34%, which is quite low. However, the team was able to use visualization techniques to confirm that the model was learning meaningful structures in the recipes. "Either the model is having a hard time distinguishing classes from each other or it needs more training data," says Ayton. "We didn't have a lot of examples for some of (the beer types) so that could be where the model was failing."

Using a new deep learning method called a transformer could help obtain better results. It has been shown to outperform LSTM models in other tasks that use text data. "I think that would definitely help boost our performance," says Ayton.

Deep learning models could be useful tools for breweries, according to Ayton, where it often takes three months or more to develop a new beer. In addition to predicting the attributes of a beer, such models could be used in reverse to generate recipes based on desired attributes. "I think it would help with the planning process, which I know is very complicated for breweries," she says.

Another group investigated whether deep learning could generate new beer recipes. Having previously used AI to adapt cooking recipes, Bravin and his colleagues decided to see if a similar approach could be applied to create new beers. To their knowledge, it hadn't been done before starting from a partial or empty list of ingredients. "On a Friday afternoon, when we were drinking a beer at our institute, we had the idea that if it works well for cooking recipes, why shouldn't we try it out with beer?" says Bravin.

The team collected over 150,000 beer recipes from both professional and hobby brewers around the world, which were featured on the same publicly available website. The set of recipes, which contained information such as ingredient amounts and processing steps, was then refined to a training and testing set of over 65,000 recipes.

Bravin and his colleagues then trained a transformer deep learning model with the data. Like an LSTM, this type of model is sequential, so it's possible to limit which hops would be picked based on the choice of ferment, for example. The team worked with a brewery to get advice. "We asked them how they would design a recipe and they said, 'we first pick the ferments and then the hops that match these ferments'," says Bravin. "So that's why we modelled it the same way."
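The "ferments first, then matching hops" ordering can be caricatured as constrained sequential sampling: each earlier choice narrows what may come next. The compatibility table and ingredient names below are entirely hypothetical, and a real transformer learns such constraints from data rather than from a lookup table.

```python
import random

# Invented compatibility table: which hops plausibly pair with each ferment.
HOPS_FOR_FERMENT = {
    "pale_malt":  ["cascade", "citra", "saaz"],
    "wheat_malt": ["saaz", "hallertau"],
}

def generate_recipe(rng):
    """Sample a ferment first, then a hop conditioned on that ferment."""
    ferment = rng.choice(list(HOPS_FOR_FERMENT))
    hop = rng.choice(HOPS_FOR_FERMENT[ferment])  # choice limited by the ferment
    return ferment, hop

rng = random.Random(0)
ferment, hop = generate_recipe(rng)
assert hop in HOPS_FOR_FERMENT[ferment]
```

Ordering the generation this way mirrors how the brewery said it designs recipes by hand.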

The model was able to generate 10,000 new beer recipes. To assess their novelty, the researchers compared them with a held-out test set, as well as with the recipes on which the model was trained. "This gave us an indicator of whether our model just completely overfitted and simply copied all the recipes in the training set, or if it always came up with new recipes," says Bravin.
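One plausible way to score such novelty, sketched below as a sanity check against overfitting: compare each generated recipe's ingredient set with every training recipe and keep the highest Jaccard similarity. A verbatim copy scores 1.0; lower scores mean more novelty. The similarity measure and the toy data are assumptions, not necessarily what the team used.

```python
def jaccard(a, b):
    """Overlap of two ingredient sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def max_similarity(generated, training_set):
    """Similarity of a generated recipe to its nearest training recipe."""
    return max(jaccard(generated, t) for t in training_set)

training = [
    {"pale_malt", "cascade", "ale_yeast"},
    {"wheat_malt", "saaz", "wheat_yeast"},
]

copy  = {"pale_malt", "cascade", "ale_yeast"}  # verbatim training recipe
novel = {"rye_malt", "citra", "ale_yeast"}     # mostly unseen ingredients

assert max_similarity(copy, training) == 1.0   # a pure copy: overfitting
assert max_similarity(novel, training) < 0.5   # comfortably novel
```

Averaging such scores over all 10,000 generated recipes, and comparing against the same score computed for test-set recipes, gives the kind of novelty indicator the quote describes.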

On average, the recipes generated were more novel than those in the test set. The researchers also had a professional brewer evaluate the feasibility of producing the recipes, based on various factors such as cost of ingredients and technical aspects. Just under a third of the AI-generated recipes were considered to be fit for production.

The ultimate test, however, was to brew a beer whose recipe was created with deep learning. A local micro-brewery used the team's recipe generator to create a recipe for an India Pale Ale (IPA) beer. They then produced the beer, aptly called Deeper, which had a grapefruit-like flavor. "I think the majority of people really liked it," says Bravin.

Although Bravin is satisfied with the model as an initial prototype, he thinks there is room for improvement. For instance, to simplify the task, the model was trained only on some types of ingredients in a beer. It was therefore able to select different types of hops, for example, but ignored varieties of yeasts, which would be up to the brewer to choose. "It would be really interesting to add more components of a recipe and check whether the model still performs well," he says.

Bravin thinks that such models could be adopted by microbreweries to speed up the innovation process. However, he continues to believe input from professional brewers is required. "Sometimes our model outputs something that experts tell us can never work," he says. "I think there's a long way until outputting perfect recipes that everyone can brew at home in their bathtub."  

Sandrine Ceurstemont is a freelance science writer based in London, U.K.


