The news archive provides access to past news stories from Communications of the ACM and other sources by date.
Two papers show that large language models, including ChatGPT, can pass the U.S. Medical Licensing Examination (USMLE).
Scientists in the U.S. and China have adopted a vertical framework to produce high-performance organic electrochemical transistors.
Students at Henderson High School in Tennessee constructed a robotic hand for classmate Sergio Peralta, whose right hand was not fully formed.
A TechBrief released by ACM's global Technology Policy Council warns that the spread of algorithmic systems carries with it unaddressed risks.
The DOJ said it saved victims, including hospitals, schools, and infrastructure operators worldwide, from paying $130 million in ransom.
"...it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom."
Simson Garfinkel says the focus on quantum attacks may distract us from more immediate threats.
Researchers at NewsGuard, which monitors and studies online misinformation, found that OpenAI's online chatbot ChatGPT can be used to generate propaganda and misinformation.
Researchers at Harvard Medical School created large-scale two-dimensional and three-dimensional spatial maps of colorectal cancer that layer molecular information on top of histological features.
Researchers have developed a technique for assigning workplaces to individuals in synthetic populations.
Researchers at Göttingen University have developed a new method for X-ray color imaging.
Data allegedly from ODIN Intelligence illuminates its software and customers. The company says it's working with law enforcement to investigate.
IARPA scientists are taking up the nascent field of cyber psychology to predict and counter hacker behavior.
To power the 3D-printed, multiplexing system, just shine light on it.
Service was impacted in the Americas, Europe, Asia Pacific, the Middle East, and Africa; however, China was unaffected, per CNN.
Google Research spinout Osmo wants to find substitutes for hard-to-source aromas. The tech could inspire new perfumes—and help combat mosquito-borne diseases.
Researchers collaborated on the construction of remote-controlled electronic biological robots powered by organic muscles.
Human artists and artificial intelligence companies are disputing intellectual property rights over generative AI in a landmark legal case.
Appliance manufacturers LG Electronics and Whirlpool are trying to entice customers who have not connected their "smart" appliances to the Internet to embrace the technology.
The Justice Department accuses Google of subverting competition in Internet advertising technologies through serial acquisitions and anticompetitive auction manipulation.
Researchers developed an online portal of fish sound information and a catalogue of recordings, featuring data on 989 fish species found to actively produce sounds.
Researchers at Georgetown University, OpenAI, and the Stanford Internet Observatory issued a report warning about the potential misuse of OpenAI's ChatGPT chatbot by propagandists.
Polish security researcher Dawid Potocki discovered a vulnerability in MSI motherboards stemming from the Secure Boot default setting for "Image Execution Policy" having been changed to "Always Execute."
It took researchers in Belgium just an hour to crack the SIKE cryptographic algorithm.
The deal will give a boost to Microsoft's Azure cloud, while providing OpenAI with additional specially designed supercomputers to run its complex AI models and fuel its research.
ChatGPT and similar AI systems are being used in realms beyond education, but classrooms seem to be where fears about the bot's misuse — and ideas to adapt alongside evolving technology — are playing out first.
A Swedish quantum computer is to become more widely available.
Facebook's rules, the board acknowledged, are "extensive and confusing" and "often convoluted and poorly defined," requiring bizarre, subjective content moderation assessments.
"They use AI to rewrite the intros every two weeks or so because Google likes updated content. Eventually it gets so mangled that about every four months a real editor has to look at it and rewrite it."
Explanations of how deep learning and large language models actually work often emphasize their incomprehensibility, or a model's explainability or interpretability, or the lack thereof.