Communications of the ACM

ACM Careers

GitHub Copilot 'Highly Likely' to Introduce Bugs and Vulnerabilities


[Image: a computer bug on a circuit board. Credit: Getty Images]

Researchers from New York University found that nearly 40% of the code suggestions produced by GitHub's Copilot code-generation tool are flawed from a security standpoint.

Developed by GitHub in collaboration with OpenAI, and currently in private beta testing, Copilot leverages artificial intelligence to make relevant coding suggestions to programmers as they write code.

In their analysis, the researchers asked Copilot to generate code in scenarios relevant to common software security weaknesses. Reviewing the results, they found that almost 40% of the suggestions were vulnerable in one way or another. The researchers theorize that the vulnerable output may stem from buggy code in the GitHub repositories used as training data.
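The weaknesses studied in the paper are drawn from MITRE's Common Weakness Enumeration (CWE) list; SQL injection (CWE-89) is a classic example of the kind of pattern a model can absorb from insecure training code. The following Python sketch is hypothetical and not taken from the study; it contrasts a string-interpolated query of the sort an autocomplete tool might suggest with the parameterized alternative:

```python
import sqlite3

# Set up an in-memory database with a single user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def get_email_vulnerable(username):
    # Insecure pattern common in public code: string interpolation
    # builds the query, so crafted input can rewrite the SQL (CWE-89).
    query = "SELECT email FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchone()

def get_email_safe(username):
    # Parameterized query: the driver binds the value separately,
    # so input cannot change the query's structure.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (username,)
    ).fetchone()

print(get_email_safe("alice"))  # → ('alice@example.com',)
# The vulnerable version lets an attacker match every row despite
# supplying a nonexistent username:
print(get_email_vulnerable("nobody' OR '1'='1"))  # → ('alice@example.com',)
```

Both functions behave identically on benign input, which is exactly why such flaws survive casual review, whether the code came from a human or a model.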

They describe their work in "An Empirical Cybersecurity Evaluation of GitHub Copilot's Code Contributions."

From TechRadar

