
Communications of the ACM

ACM Opinion

Self-Supervised Learning and Large Language Models


[Image: headshot of Stanford University PhD student Alex Tamkin]

Credit: Alex Tamkin

In an interview, Stanford PhD candidate Alex Tamkin discusses his research, which focuses on understanding, building, and controlling pre-trained machine learning models, particularly in domain-general and multimodal settings.

Interview topics include viewmaker networks, the opportunities and risks of foundation models, the impacts of large language models, research culture, and scientific communication.

From The Gradient
