News

Who is Liable When AI Goes Wrong?

Increasingly capable generative AI tools have created a gap between the technology's power and what many users understand about the legal liabilities of its use.

News

The Collapse of GPT

When models are trained on generated data, tail events start to be forgotten and probability mass concentrates on the most likely outputs, eventually causing the model to fail.
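A minimal sketch of the feedback loop described above (my illustration, not code from the article): a simple Gaussian "model" is repeatedly re-fit to samples drawn from the previous generation's fit. Finite samples under-represent tail events, so the fitted spread tends to shrink and probability mass concentrates around the mode, a toy version of model collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=50)   # small "real" dataset

for gen in range(1, 51):
    mu, sigma = data.mean(), data.std()    # fit a Gaussian "model" to current data
    data = rng.normal(mu, sigma, size=50)  # next generation trains only on generated data
    if gen % 10 == 0:
        print(f"generation {gen:2d}: fitted sigma = {sigma:.3f}")
```

Over successive generations the fitted sigma typically drifts downward, so rare (tail) values become ever less likely to appear in later training sets.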

News

Bringing AI to the Edge

Privacy, performance, and security benefits have everyone from academic computer scientists to technology giants racing to develop more efficient ways of pulling AI out of the cloud and closer to users.

Opinion

The Pollution of AI

AI systems developed without accountable explainability as a primary consideration could have a polluting effect, nudging people toward superficial and therefore dogmatic thinking.

News

A Rewarding Line of Work

Sutton and Barto developed reinforcement learning, a machine learning method that trains agents by offering them rewards in the form of numerical values.
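A minimal sketch of the idea (my example, not from the article): tabular Q-learning on a five-state corridor. The agent starts at state 0, can move left or right, and receives a numerical reward of +1 only on reaching state 4; learning is driven entirely by that reward signal rather than by labeled examples.

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)        # positions 0..4; actions: move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < epsilon else max(range(2), key=lambda i: Q[s][i])
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value estimate toward reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("learned action per state:",
      ["left" if Q[s][0] > Q[s][1] else "right" for s in range(N_STATES - 1)])
```

After a few hundred episodes the learned policy prefers "right" in every state, since moving right reaches the reward sooner.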
