
Toward Effective AI Support for Developers

A survey shows how developers view AI, how they want to use it, and what concerns them about adopting it.


Years of software engineering and product development have taught us that the best way to build products that delight customers is to talk to customers. Talking to actual customers provides important insights into their challenges and into what they love. This leads to innovative and creative ways of solving problems (without creating new ones) and guards against ruining the workflows customers already enjoy.

And yet, the emergence of artificial intelligence (AI) has many leaders forgetting these lessons in a rush to create new AI-driven development tools, often without consulting actual developers. Our research is meant to help close that gap and give companies, product teams, and fellow practitioners insights into the opportunities and concerns that developers have with using AI in their work. Armed with this information, product teams and leaders can make better product decisions and communicate more effectively about the changes happening around them.


Much of the existing literature focuses on the impact and efficacy of AI-driven development tools—such as GitHub Copilot, powered by OpenAI’s Codex [3]—from a performance-centric perspective, such as the relevance of the code generated by GitHub Copilot [11,12] or the perceived increase in developer productivity [9]. While some research explores how developers engage with such tools [1,8], the scope is limited. Our approach seeks to invert the lens and prioritize the voices of the developers.

While these tools and studies have merit, there is a need to understand what the developers want instead of what we think they want. The workflow of developers is multifaceted. Their responsibilities range from application development—planning, building, testing, and so on—to tasks such as managing communications with team members and searching for career development opportunities. Thus, we conducted a survey that focuses on directly interacting with developers, from new hires to seasoned professionals. The survey questions offer insight into developers’ perspectives on how they view AI, how they want to use it, and what their top concerns are in adopting it.


While some of the survey results align with current expectations, others deviate from them. Acknowledging these findings and accounting for them in R&D will make AI adoption guided rather than speculative. This approach also provides insight into why some teams may be struggling to drive adoption of AI tools within their organizations.

The first section of this article outlines the details of the survey. The second section discusses some of the areas of AI that excite developers the most. The third talks about their concerns. The article ends with a discussion about what organizations and leaders can do to address these concerns.

Methodology

We conducted our survey from April 4–14, 2023, to gather the perspectives of software developers in the realm of AI. The survey was designed to answer two questions:

  • What aspects of their job would developers be most excited about AI helping with?

  • What worries developers most about integrating AI into their workflows?

Following is the methodology used to conduct and analyze the survey:

  1. Survey platform. The survey was conducted using Microsoft Forms, an online platform that facilitates the creation of shareable forms suitable for capturing respondent feedback.

  2. Sample selection. From a pool of 3,000 randomly chosen invitees, a total of 791 responses were garnered (a 26% response rate). While the selection process was primarily aimed at software developers, a marginal number of software development leads and program managers were inadvertently included because of the targeting algorithm. Those 54 responses were excluded from the results.

  3. Demographics. All respondents were employees of Microsoft, specifically the Cloud + AI division. To ensure unbiased results, members from the Developer Division team, which actively contributes to AI-enabled development tools, were excluded. Furthermore, the survey focused solely on U.S. employees, explicitly excluding partner-level and above developers.

  4. Survey structure. The survey consisted of nine questions:

    • Seven core questions related to AI, plus two supplementary questions asking whether participants wished to receive the survey results and whether they wanted to enter a sweepstakes.

    • Five of the core questions followed a selection-list format, while the remaining two were open-text questions, giving respondents the freedom to articulate their views.

    • The average response time was 10 minutes.

    • Options within the selection-list questions were displayed in randomized order for each participant, and participants were allowed between one and three selections. This forced them to prioritize their responses and prevented them from picking most or all of the answers.

  5. Incentives. To encourage participation, respondents were presented with an opportunity to win one of 50 gift cards, each worth $50. The winners of these gift cards were selected through a random sweepstakes drawing.

Table 1 lists the survey items, excluding the questions about eligibility or entering the optional sweepstakes.

Table 1. Survey items.

Question: How long have you worked in your current role?
Item type: Single choice.
List options:
  • Less than 6 months
  • 6 months – 1 year
  • 1 – 3 years
  • 3 – 7 years
  • More than 7 years

Question: Which of the following aspects of your job would you most like to see AI tools help with?
Item type: Selection list. Must select at least one item and at most three items. Randomized order.
List options:
  • Authoring new code and/or refactoring existing code
  • Tracking and managing your work items
  • Prioritizing your tasks
  • Managing your calendar
  • Assistance getting into a state of flow
  • Helping you relax and/or reduce stress
  • Generating unit, integration and functional tests
  • Debugging your code
  • Performing root cause analysis for bugs and incidents
  • Identifying and fixing compliance requirements
  • Helping you identify and consume learning/training material
  • Writing documentation
  • Parsing your email for action items and/or responding to emails on your behalf
  • Analyzing your code for defects, vulnerabilities or optimizations
  • Evaluating your performance to help you identify growth opportunities
  • Policy management
  • Managing/acquiring permissions
  • Clarifying requirements
  • Other

Question: Please describe how you envision AI helping with what you selected in the previous questions?
Item type: Open text. Optional.

Question: What worries you the most about integrating AI into your daily workflows?
Item type: Single choice. Randomized order.
List options:
  • Automating away your job
  • Increasing bias in the workplace
  • Introducing defects or vulnerabilities into your work
  • Monetary costs to deploy within your team/organization
  • Impact on climate change
  • Being more gimmicky than helpful
  • Having to change the workflows you have established and are used to
  • Diminishing connections with your peers
  • Causing your skills to atrophy
  • The time it will take to learn how to use new AI tools
  • Automation causing compensation to decrease
  • Other

Question: Is there anything else you would like to share about your thoughts on the future of AI in software engineering?
Item type: Open text. Optional.

What Aspects of Their Work Would Developers Be Most Excited about AI Helping Them with?

This question offered insights into how to prioritize AI product features to meet the needs of developers. Responses were spread across 17 distinct options (plus an “Other” option). Most developers (96%) chose a core development activity as at least one of their top three. Options meant to reduce bureaucratic toil (for example, parsing email or tracking and managing work items) were chosen by 37% of developers, and 25% chose a well-being activity (for example, helping to relax and/or reduce stress) as one of their top three. It should be noted that newly hired developers (those in a role for less than six months) were more likely to choose a well-being area as one of their top three (by 13%).

The next section discusses the top findings in detail. For each finding, we have selected open-text responses representative of the key themes we heard from developers. Figure 1 shows the options provided for the first of the two questions and the percentage of respondents who selected each one.

Figure 1.  Responses to question about desire for AI help.

Note that the percentages sum to more than 100%. This was expected because each respondent could select from one to three options.
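To make the arithmetic concrete, the following minimal sketch (with made-up responses, not the actual survey data) shows how per-option percentages are computed when each respondent contributes up to three selections, and why those percentages can sum to well over 100%.

```python
from collections import Counter

# Illustrative multi-select responses; each respondent chose one to three options.
# (Made-up data for the sake of the example, not the actual survey responses.)
responses = [
    ["Generating tests", "Writing documentation"],
    ["Generating tests", "Analyzing code", "Root-cause analysis"],
    ["Writing new code"],
    ["Generating tests", "Writing documentation", "Writing new code"],
]

counts = Counter(option for chosen in responses for option in chosen)
n = len(responses)

# Each percentage is computed over respondents, so the column can exceed 100%.
for option, count in counts.most_common():
    print(f"{option}: {count / n:.0%}")
print(f"Sum of all percentages: {sum(counts.values()) / n:.0%}")
```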

Generating unit, integration, and functional tests (selected by 44% of respondents).  Software testing is a critical part of software development that ensures application reliability and performance and can help prevent costly production defects. Testing can validate that a product meets its requirements and provide stakeholders with confidence in the quality of the product. While the value of testing may be high, it is often a challenging activity that many developers may find less exciting than core feature work. Moreover, testing, which was once a separate role altogether (and still is in some organizations), has now been folded into the core development tasks of a software developer. Essentially, developers are now doing what was once two jobs.

“Unit testing is a monotonous process. It would be great if AI can auto generate these cases.”

It is unsurprising, then, that the top task that developers responded they are excited to delegate to AI-powered tools was writing tests. This would not just alleviate the “monotony” but could also result in higher-quality tests and, by extension, higher-quality products. Anything that AI can do to ease the burdens of testing can improve both the developer experience (DevEx) and customer outcomes.

“The time it takes to develop a feature is about the same as the time it takes to test a newly developed feature, and sometimes even more time is spent on testing. We need to come up with more comprehensive test cases, but occasionally we still miss some corner cases. Moreover, most of the testing code is regular and predictable, so I think AI can help complete the testing code after the feature code is finished.”
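The “regular and predictable” testing code this respondent describes is exactly the kind of output an assistant could draft for human review. The sketch below is purely illustrative: a hypothetical helper function and the table-driven pytest cases, including a corner case, that a tool might propose.

```python
import pytest


def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())


# The kind of routine, table-driven tests an assistant might draft for review.
@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("hello   world", "hello world"),       # repeated spaces
        ("  padded  ", "padded"),                # leading/trailing whitespace
        ("tabs\tand\nnewlines", "tabs and newlines"),
        ("", ""),                                # corner case: empty input
    ],
)
def test_normalize_whitespace(raw: str, expected: str) -> None:
    assert normalize_whitespace(raw) == expected
```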

Analyzing code for defects, vulnerabilities, or optimizations (selected by 42% of respondents).  Writing unit, integration, and functional tests primarily focuses on validating the functional correctness and expected behavior of software. Analyzing code for defects, vulnerabilities, or optimizations, however, examines code for its security and performance characteristics. The two activities are not only complementary but also repetitive, making it unsurprising to see them mentioned by a similar number of respondents.

Because of the vast historical context available to them, AI tools are well positioned to help in the detection and mitigation of code vulnerabilities, which might give developers more confidence in their capabilities. Runtime errors (for instance, null pointer exceptions) and security vulnerabilities (for example, buffer overflows) are patterns that AI could theoretically identify and “shift left” into the code-writing portion of the development workflow.

“I think AI is good at doing closed-loop tasks and surfacing common insights from data it has seen. For example, code vulnerabilities and optimizations have been written and discussed online for decades, and they are plentiful in every codebase. I trust AI recommendations there, although I would thoroughly review any suggested changes.”
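As a rough illustration of what “shifting left” could look like, the sketch below flags one such well-documented pattern (SQL queries assembled by string formatting, a classic injection risk) while the code is still being written. The rule, function name, and snippet are hypothetical; a real assistant would draw on far broader context.

```python
import ast

RISKY_CALLS = {"execute", "executemany"}


def flag_string_built_sql(source: str) -> list[int]:
    """Return line numbers where an execute()-style call receives a query
    built with f-strings, %-formatting, or string concatenation."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr in RISKY_CALLS and node.args:
                query = node.args[0]
                if isinstance(query, (ast.JoinedStr, ast.BinOp)):  # f-string, +, or %
                    findings.append(node.lineno)
    return findings


snippet = '''
def load_user(cursor, user_id):
    cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")  # injectable
'''
print(flag_string_built_sql(snippet))  # -> [3]
```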

Pair programming has been a way for developers to get assistance in finding optimizations and reducing defects in their code. The age of hybrid work has made such activities more challenging, however, and some developers acknowledged that AI can help fill that gap.

“While writing the code, if someone [AI] tells me of potential issues with the code, that would be like pair programming with someone.”

Developers can also envision using AI to review and improve code before sharing it with their peers for human review.

“It would be absolutely awesome to have AI check code prior to asking peers for code reviews.”

Writing documentation (selected by 37% of respondents).  Typically, 60–70% of the software development life cycle (SDLC) is spent maintaining the software [5]. When a developer revisits a code snippet, documentation plays a key role in understanding the code, its design decisions, and its usage. As important as it is, documentation is a cumbersome and often-ignored process. In addition, with multiple developers working on a codebase, the code changes quickly, which requires the documentation to be updated at a similar pace. This creates tension between developers’ need for documentation and their dislike of creating it.

“As a developer, we hate having no document to help us understand the code better; at the same time, as developers ourselves, we hate to write documentation. If AI tools can help solve this mystery, that will help anyone :)”

Note that automated documentation of code requires context awareness, which makes developers skeptical of completely relying on AI without human proofreading. This point might help to standardize guidelines for whether documentation should be generated when code is written or when it is consumed/updated.

“If AI is able to create documentation from previously written code, that would make onboarding onto unfamiliar code a lot easier. I see AI doing the ‘busy work’ with documentation writing, but software engineers still need to read through the generated documentation and make edits to ensure accuracy.”
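A minimal sketch of that division of labor might look like the following: an assistant drafts a docstring for an undocumented helper, and the engineer reviews and corrects it before it ships. The function and the draft wording are hypothetical.

```python
def rebalance(portfolio: dict[str, float], target: dict[str, float]) -> dict[str, float]:
    """Return the trade per asset (positive = buy, negative = sell) needed to
    move the current `portfolio` weights to the `target` weights.

    Draft generated by an assistant; a human reviewer should confirm the sign
    convention and how assets missing from `target` are treated.
    """
    assets = set(portfolio) | set(target)
    return {asset: target.get(asset, 0.0) - portfolio.get(asset, 0.0) for asset in assets}
```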

Performing root-cause analysis for bugs and incidents (selected by 31% of respondents).  Root-cause analysis (RCA) is integral to improving software quality because it embodies the principle of learning from errors to prevent future occurrences. The integration of AI into RCAs can reduce the burden on developers by getting them started on their way to uncovering the root issues. It can automate data analysis, pattern recognition, and fault localization, enabling developers to concentrate on strategic problem-solving.

“I would like tools to auto-analyze and identify the root cause of [service incidents] or at least get me close. Run queries, find where the faults happened, add that to the ticket.”
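The fault-localization step this respondent describes (run queries, find where the faults happened, attach the findings to the ticket) can be sketched in a few lines. The log format, field names, and the idea of pasting the summary into a ticket are assumptions made for illustration.

```python
import re
from collections import Counter

ERROR_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>ERROR|FATAL) (?P<service>\S+) (?P<msg>.*)$")


def summarize_errors(log_lines: list[str]) -> dict:
    """Suggest a starting point for root-cause analysis: the most frequent
    error signature and when it was first seen."""
    errors = [m.groupdict() for line in log_lines if (m := ERROR_LINE.match(line))]
    if not errors:
        return {"summary": "no errors found"}
    by_signature = Counter((e["service"], e["msg"]) for e in errors)
    (service, msg), count = by_signature.most_common(1)[0]
    return {
        "first_seen": errors[0]["ts"],
        "suspect_service": service,
        "signature": msg,
        "occurrences": count,
    }


log = [
    "12:00:01 INFO api request ok",
    "12:00:02 ERROR payments timeout calling card-gateway",
    "12:00:03 ERROR payments timeout calling card-gateway",
    "12:00:04 ERROR checkout upstream dependency failed",
]
print(summarize_errors(log))  # a summary like this could be attached to the incident ticket
```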

Incorporating AI into RCAs aligns with the modern developer’s dual mandate: to create and maintain robust software. While AI’s role in RCAs is an exciting prospect, it should be implemented with the understanding that it supplements, rather than replaces, the nuanced judgment of experienced developers.

Writing new code and/or refactoring existing code (selected by 25% of respondents).  Writing new code often equates to a sense of accomplishment for developers, making them feel both productive [2] and satisfied with their day’s contributions [4,6]. This act of creation is not only pivotal for their personal growth and mastery, but also presents opportunities to innovate, experiment with new technologies, and address unique challenges.

Survey respondents said they imagine AI tools such as GitHub Copilot could enhance the code-writing experience by reducing the time spent on boilerplate code, making it easier to interact with unfamiliar APIs, and even changing the entire developer experience of writing code.

Reducing boilerplate: “It’s fantastic for boilerplate, or roughly sketching out a framework.”

Learning how to use new APIs: “I often have to look up usage of APIs that I am not familiar with. I do not want to have to sift through [documentation].”

Changing the developer experience: “Instead of writing code, I should be able to talk to the AI using voice, describe what I want.”
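To make the boilerplate theme above concrete, here is the kind of scaffold an assistant might produce from a one-line request such as “a CLI that counts rows per value of a CSV column,” leaving the developer to refine the details. The request and the resulting script are hypothetical.

```python
import argparse
import csv
from collections import Counter


def main() -> None:
    parser = argparse.ArgumentParser(description="Count rows per value of a CSV column.")
    parser.add_argument("path", help="path to the CSV file")
    parser.add_argument("column", help="column name to group by")
    args = parser.parse_args()

    with open(args.path, newline="") as handle:
        counts = Counter(row[args.column] for row in csv.DictReader(handle))

    for value, count in counts.most_common():
        print(f"{value}: {count}")


if __name__ == "__main__":
    main()
```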

When describing the ideal experience with AI-assisted coding tools, some developers talked about wanting to focus on the big picture versus dealing with all the syntactical implementation details.

“I envision this to be as intuitive and seamless as if I had a person next to me say, ‘Hey, you missed something; you should consider doing this…’ It would be great to be able to focus on the bigger picture of making code changes and letting my ‘copilot’ worry about the implementation details.”

AI-driven development tools are poised to revolutionize the code-writing experience, making the development process not only more efficient but also more intuitive, allowing developers to channel their focus on overarching vision and innovation.

Key Takeaways from How Developers Want to Use AI

The survey revealed a hierarchy of developer expectations for AI integration, reflecting a desire for AI to tackle tasks that range from repetitive to complex.

  1. Automating routine tasks. The majority of developers (96%) anticipated that AI would alleviate the tedium of routine tasks, such as generating tests and documentation. These tasks, while essential, are often seen as monotonous and distract from the more creative aspects of development. AI’s potential to enhance these areas could significantly boost DevEx and product quality.

  2. Streamlining administrative duties. A significant portion (37%) hoped AI could simplify administrative overhead, such as email parsing and task management. These duties, while not core to development, consume substantial time and are ripe for AI’s organizational capabilities.

  3. Enhancing well-being and efficiency. Fewer developers (25%) prioritized AI for well-being activities, such as stress reduction, indicating a preference for AI to focus on enhancing job efficiency over personal management tasks.

Note that developers in each category, bucketed by years of experience, selected options at roughly the same rates; that is, a seasoned professional was as likely to select “Generating unit tests” as a new hire. The exceptions were “Evaluating performance to help identify areas of growth,” which almost 20% of new hires selected versus only 4–5% in the other categories, and “Clarifying requirements,” which roughly 9–11% of those new in their careers selected versus a mere 1–6% in the more senior categories.

“AI should feel like pair programming—sort of like Ironman and Jarvis. Instead of writing code, I should be able to talk to the AI using voice, describe what I want, and the AI should be able to write the code, analyze it, optimize it, help me test and debug it, and then help me create and manage the PRs to get the code into the repo. My job should be on creativity and NOT limited by how fast I can type. Make us more productive so that we can deliver better features faster, giving us an edge on our competition.”

What Worries Developers the Most about Integrating AI into Their Workflows?

The rapid pace of growth and adoption of AI in software development has garnered significant attention. Its promise to automate away many mundane tasks is widely acknowledged as transformative. This goes hand in hand, however, with the skepticism that many harbor toward it.

“I’m curious to see how it all plays out. I’m sensing some personal burnout of the ‘AI to help you x’ of everything that seems to have swept the company. The hype is very high. In what is typical of a giant company, we will build some good uses and some bad uses for AI; hopefully the bad use cases don’t turn off users.”

To understand the primary concerns developers hold regarding the integration of AI into their daily routines, they were presented with a selection of possible apprehensions and asked to identify the one they found most significant. The following section explores a selection of these concerns in more detail. As in the previous section, each concern is supported by representative open-text responses. Figure 2 shows the list of concerns provided for this question, along with the corresponding percentage of respondents who selected each one.

Figure 2.  Responses to question about AI concerns.

Being more gimmicky than helpful (selected by 29% of respondents).  The skepticism around AI in software development often stems from the perception that AI’s current abilities are overstated or fail to deliver on their promises. While AI is impressive in demos, developers are concerned that AI tools might not be able to handle the complexity and variety of real-world programming.

“To me it seems more like a gimmick. In order for me to let something external manage my calendar, prioritizing tasks, or refactoring code that is used in production services impacting millions of our customers, it would need to prove itself over a long period of time, and that at the very least seems too far away in the future.”

This skepticism may explain why some organizations have struggled to convince their developers to adopt the AI tools being made available to them. Addressing developers’ skepticism requires a multifaceted approach. AI tool developers need to ensure that their offerings are not just technologically advanced but also transparent and understandable. This can be achieved by accompanying AI tools with documentation and explanatory frameworks, or by integrating explainable AI [10] into the tools. By demonstrating consistent and reliable performance in diverse real-world scenarios over time, these tools (and toolmakers) can gradually earn the trust of developers. This openness not only builds credibility but also empowers engineers to engage critically with AI recommendations, much as they would with advice from human colleagues.

“I think it’s important in the long run to teach engineers why Copilot is recommending its particular way to do things. I could see engineers getting complacent in not trying to understand how the problem is getting solved. Just like how copy/pasting from stack overflow without analysis can be a bad thing, this could come up with the same result.”

Introducing defects or vulnerabilities into your work (selected by 21% of respondents).  While many fear that AI may be more gimmicky than practical, developers harbor a deeper concern: the risk that AI could actively deteriorate the quality of work by introducing defects or vulnerabilities. This fear extends beyond the frustration of unmet expectations—it encompasses the potential for AI to undermine the integrity and safety of software systems.

“I am worried that AI will create answers that give the appearance of correctness but are not actually correct.”

This sentiment intensifies the skepticism previously highlighted, magnifying the caution developers exercise toward embedding AI into their workflows. It underscores the need for transparency from AI, comprehensive training for emergent tools, and the crucial role of human oversight. Ensuring that AI aids rather than hinders development hinges on this symbiotic relationship between human expertise and artificial intelligence.

Automating away your job (selected by 10% of respondents).  Some concerns about AI involve its effect on job security, such as job displacement or augmentation. Somewhat surprisingly, only 10% of developers in this survey identified job displacement as their top concern. Moreover, only 5% chose “Automation causing compensation to decrease.” Still, a section of the developer population expresses unease about the potential for intelligent automation to encroach upon territories traditionally reserved for human expertise.

“I feel like Blockbuster and Netflix. [AI] is about to replace me.”

The rapid advancement of AI capabilities is a sign of great progress, but to some it provokes the idea that their roles could be significantly altered.

“If AI is capable of writing, debugging, and documenting code based on an input prompt of requirements, then will human software engineers be relegated to only on-call and service-engineer tasks? This is the least enjoyable and problem-solving aspect of the job for myself and many others.”

This survey shines a light on a pivotal discussion point for leaders and organizations: the apprehension of job displacement due to AI. It is incumbent on leadership to address these concerns proactively, not just through dialogue but by fostering a culture that embraces change. Initiatives such as comprehensive training programs that upskill employees to work alongside AI will be crucial. Leaders also must communicate the value of human insight and creativity—qualities that AI cannot replicate—by ensuring team members understand their evolving, irreplaceable role in a technologically augmented landscape.

Increasing bias in the workplace (selected by 5% of respondents).  Developers also say that AI has the potential to exacerbate bias in the workplace because of its reliance on historical data that may reflect existing biases.

“I am worried about the bias that exists within AI structures that can adversely affect minorities in the software engineering industry. If AI is not designed with everyone in mind, it will end up having bias against people that were not considered during its development, and it is usually unlikely that the software will be fixed afterwards as it is typically deemed too much effort for a smaller population group.”

Bias throughout history is irrefutable, so it is important to ensure we are not using AI to generate output based on such bias. Avoiding this outcome requires guardrails: including diverse data, improving transparency, and laying out ethics guidelines and fairness metrics. Microsoft’s resources on responsible AI [7] serve as an example of one organization’s approach to this.

Other Developer Concerns Regarding AI

This article has discussed a few of the top concerns that developers have regarding AI adoption, although this does not diminish the value of the concerns chosen by only a few respondents, such as concerns regarding AI’s impact on the environment. It could be that only 1% of developers selected it as their top concern not because of a lack of concern but because of a general lack of awareness, or because the survey design restricted them to selecting only one option. Leaders and organizations should not only explicitly address the major concerns of developers but also create awareness of these less frequently cited concerns and proactively take steps to alleviate them.

“It is a very powerful tool, there is no doubt about that. The potential is massive. It is important to take a proactive approach toward the risks involved, rather than the reactive approach that is common with such rapidly expanding technologies. At least, a mix of the two.”

Future of AI in the Tech Industry

There is a great deal of excitement around AI, and just as much cynicism. This survey highlights both, giving organizational leaders the opportunity to take steps to address them. How can organizations and leaders help address the asks and alleviate the concerns surrounding AI? This section lists some of the ways they can.

Education and awareness.  There are three aspects of educating employees about AI: How do AI models work? How can AI tools/APIs be employed in existing services/technologies? How do you equip leaders with the skills to guide their teams through the technological transformation? Mitigating skepticism is not just about demystifying AI; it is also about preparing leaders to navigate the cultural shifts it brings. Organizing AI bootcamps, promoting online courses, and offering leadership workshops that focus on change management in the context of AI can help build a well-informed and adaptable workforce.

Transparency.  This again has multiple dimensions: being transparent when integrating AI into services (where and which parts are integrated, and so on), and leveraging explainable AI [10], that is, explaining the results of AI to developers rather than asking them to accept AI outputs at face value. This allows the developer to reason about the AI’s output and make a sound decision to accept or reject the results. It can also help provide feedback to mitigate or suppress biases learned from historical data.

Ethical considerations.  There are so many facets to this concern that, at first, it can feel overwhelming. They vary from the impact of AI on the environment to the datasets on which it trains. To drive toward a solution, the first step is to understand and acknowledge the problem. These concerns can be addressed by issuing ethics guidelines for each scenario and establishing an ethics committee that ensures the guidelines are followed.

Human-in-the-loop.  AI being more of a gimmick than a useful aid was reported as a top concern by many respondents. Rather than completely delegating a task to AI or completely delegating it to a human, a middle ground can be reached by employing AI with a human in the loop, especially for critical tasks.
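One common shape for that middle ground is a simple approval gate: the AI output is treated as a proposal that a human must explicitly accept before anything is applied. The sketch below assumes a hypothetical suggestion object and patch-applying helper.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    file: str
    description: str
    patch: str


def apply_with_human_in_the_loop(suggestion: Suggestion) -> bool:
    """Show an AI-proposed change and apply it only on explicit approval."""
    print(f"AI suggestion for {suggestion.file}: {suggestion.description}")
    print(suggestion.patch)
    if input("Apply this change? [y/N] ").strip().lower() != "y":
        print("Suggestion discarded; no changes made.")
        return False
    # apply_patch(suggestion) would go here in a real tool (hypothetical helper).
    print("Change applied after human review.")
    return True
```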

Conclusion

The journey of integrating AI into the daily lives of software engineers is not without its challenges. Yet, it promises a transformative shift in how developers can translate their creative visions into tangible solutions. As we have seen, AI tools such as GitHub Copilot are already reshaping the code-writing experience, enabling developers to be more productive and to spend more time on creative and complex tasks. The skepticism around AI, from concerns about job security to its real-world efficacy, underscores the need for a balanced approach that prioritizes transparency, education, and ethical considerations. With these efforts, AI has the potential not only to alleviate the burdens of mundane tasks, but also to unlock new horizons of innovation and growth.

Acknowledgments

We would like to thank all the study participants and research reviewers for their valuable feedback and insights.

    References

    • 1. Barke, S., James, M.B., and Polikarpova, N. Grounded Copilot: How programmers interact with code-generating models. Proceedings of the ACM on Programming Languages 7, Article 78 (2023), 85–111; https://bit.ly/4dfVJRE
    • 2. Beller, M., Orgovan, S., Buja, S., and Zimmermann, T. Mind the gap: On the relationship between automatically measured and self-reported productivity. IEEE Software 38, 5 (2020); https://bit.ly/3MQ3Yt6
    • 3. Finnie-Ansley, J. et al. The robots are coming: Exploring the implications of OpenAI Codex on introductory programming. In Proceedings of the 24th Australasian Computing Education Conf. (2022), 10–19; https://bit.ly/4e7VNEe
    • 4. Forsgren, N. et al. The SPACE of developer productivity: There’s more to it than you think. ACM Queue 19, 1 (2021), 20–48; https://bit.ly/4e5D78h
    • 5. Gradišnik, M., Beranič, T., and Karakatič, S. Impact of historical software metric changes in predicting future maintainability trends in open-source software development. Applied Sciences 10, 13 (2020), 4624; https://bit.ly/3Xx5rJL
    • 6. Meyer, A., Barr, E., Bird, C., and Zimmermann, T. Today was a good day: The daily life of software developers. IEEE Transactions on Software Engineering 47, 5 (2019); https://bit.ly/3XuhXcL
    • 7. Microsoft AI. Empowering responsible AI practices (2024); https://bit.ly/4gr8nQC
    • 8. Mozannar, H., Bansal, G., Fourney, A., and Horvitz, E. Reading between the lines: Modeling user behavior and costs in AI-assisted programming. arXiv (2022); https://bit.ly/3Xw8Acr
    • 9. Peng, S., Kalliamvakou, E., Cihon, P., and Demirer, M. The impact of AI on developer productivity: Evidence from GitHub Copilot. arXiv (2023); https://bit.ly/3Xxirie
    • 10. Ribeiro, M.T., Singh, S., and Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD Intern. Conf. on Knowledge Discovery and Data Mining (2016), 1135–1144; https://bit.ly/3ZsD021
    • 11. Vaithilingam, P., Zhang, T., and Glassman, E.L. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In Extended Abstracts of the Conf. on Human Factors in Computing Systems (2022), 1–7; https://bit.ly/3XMDoqU
    • 12. Ziegler, A. et al. Productivity assessment of neural code completion. In Proceedings of the 6th ACM SIGPLAN Intern. Symp. on Machine Programming (2022), 21–29; https://bit.ly/3zpYa6h
