Practice
Computing Applications

DevEx in Action

A study of developer experience and its tangible impact.


Somewhere, right now, a software developer is pulling open a ticket from the project backlog, excited by the prospect of working on something new. As the developer begins reading through the description of the task, their laptop is suddenly flooded with alerts from the team’s production error-tracking system, disrupting the developer’s ability to focus. Eventually, returning to the task at hand, the developer studies the requirements described in the ticket. Unfortunately, the task lacks context and clarity, so the developer asks for help, which will take days to resolve.

Meanwhile, the developer checks a previous task, which has been stuck in the queue awaiting approval for several days. The tests and builds repeatedly flake out, halting the progress of reviewers each time they attempt to verify the changes. As the developer hops from task to task, hoping to immerse themselves in some deep work, they realize today’s experience falls short of what they need to do their best work.

For many professional software developers, this anecdote is all too similar to their daily experience. Friction is abundant, the development lifecycle is riddled with red tape, and successful delivery of code to production is a frustratingly infrequent event. Even worse, the problems keep compounding. Developers look on helplessly as upper management fails to intervene, leading to stalled velocity and the departure of top engineers.

How is it that organizations end up in this predicament?

Today, developer experience (DevEx) is garnering increased attention at many software organizations as leaders seek to optimize software delivery amid the backdrop of fiscal tightening and transformational technologies such as AI. Intuitively, there is acceptance among technical leaders that good developer experience enables more effective software delivery and developer happiness. Yet, at many organizations, proposed initiatives and investments to improve DevEx struggle to get buy-in as business stakeholders question the value proposition of improvements. “What is developer experience?” many of them challenge. “And why does it matter?”

Why DevEx Matters

DevEx encompasses how developers feel about, think about, and value their work.11 Why does this matter? First, developers build software and use engineering systems every day, so they are perfectly positioned to give critical insights into how well systems and processes work. For example, is it easy and intuitive to write code and ship it to customers? Or is it confusing and full of manual steps, which are prone to mistakes and outages? Sure, you could argue that in both cases, developers are writing code and shipping software, but the environment and circumstances are very different, and they can be a leading indicator of the quality, reliability, maintainability, and even security of the systems.

DevEx is also important because of its impacts on development.

That may seem obvious because many organizations around the world, from startups to not-for-profits to large enterprises, employ developers to write software for customers, improve internal tools, or automate complex processes. But there is a difference between simply writing code and writing code in an environment that is optimized for it. Such environments are efficient, effective, and conducive to well-being, and rely on the right mix of tools, practices, processes, and social structures. These environments help developers:

  • Get into the flow and minimize interruptions so they can focus and solve complex tasks.

  • Foster connections and collaborations so they and their teams can be creative when it matters most.

  • Receive high-quality feedback so they can make progress.

Considering DevEx in this light shows that development is about so much more than just writing code. It is a socio-technical process that aids developers’ work while contributing to broader team performance and organizational missions and cultures. We are not aware of empirical investigations into the impacts of DevEx; there is a need to study outcomes of developer work and the work design that supports it.

Thus, the goal of the research described here is to answer this question: How does DevEx impact individual developers, as well as their teams and organizations? Spoiler alert: An improved developer experience has positive outcomes, and not just for developers; it also improves team and organization outcomes. For example, we find that a better developer experience can improve productivity, learning, innovation, profitability, and more.

DevEx as Work Design

Our research is based on work design theory (WDT)22 for two reasons. First, the theory considers the outcomes of work along many dimensions. Research in WDT has found important outcomes and implications for individual contributors, teams, organizations, and society. Our previous research7 also found that by improving the work environment and work design of developers, performance outcomes improve for individuals (for example, reduced burnout), teams (for example, better software delivery), and organizations (for example, better customer and organizational metrics).

Second, our investigation is grounded in WDT because its conceptualization of work is complex enough to account for the work practices of software developers today. WDT views work as both an assigned job—that is, a group of tasks formally assigned to an individual—as well as “emergent, social, and sometimes self-initiated activities.”23 Software developers’ work includes assigned tasks (such as items assigned during a sprint), as well as activities that emerge (such as reactive work to fix bugs, self-initiated creative work, and social activities to collaborate and improve processes).

This study uses WDT to do two things: First, it expands on our previous work to investigate broader outcomes at the individual, team, and organizational levels. Second, it explores the work design of developers—with a focus on DevEx—that positively influences those outcomes.

Outcomes.  When considering the outcomes of development work or the developer experience, many researchers and practitioners think first of productivity.8,21 In our years of experience, however, we have seen that improvements in developers’ work go far beyond personal productivity for individual contributors,16 extending to team and organizational outcomes.7,11 This investigation considers outcomes at the developer, team, and organizational levels, an approach supported by WDT.23

Developer outcomes are those that benefit an individual developer. Prior WDT research shows that improved work design positively influences job performance, creativity,22 and learning5—three outcomes investigated in this study.

Team outcomes are those that can benefit an individual developer but more likely accrue at the team level of work and are therefore operationalized and studied at this level. WDT also shows that outcomes such as quality benefit teams.22 In the context of DevEx, we want to capture how work design can impact the quality of the system the team works in, so we capture this as code quality and technical debt.

Organization outcomes benefit a worker’s employer. While developer and team outcomes likely accrue to the organization, investigating the impacts of work improvements specific to the organization is still important. This is because it can demonstrate the relationship of DevEx to the organization’s mission, explain the value of DevEx to the organization, and provide evidence that can help justify and advocate for investments in DevEx initiatives. Indeed, prior WDT research has shown that improvements in work design impact organizations. Many of these outcomes are top of mind for business leaders, including retention and innovation.22 Prior research has also shown that improvements in developer work positively affect an organization’s profitability and its ability to achieve goals.7 These measures of organization outcomes—retention, innovation, profitability, and ability to meet goals—are measured in this study.

Developer experience.  Based on our prior work, here we present a model for understanding and measuring DevEx through three dimensions that have been found to impact developer experience: flow state, feedback loops, and cognitive load.21 This section defines each of these dimensions and describes how they are supported by WDT. Hypotheses are shown as Hn, meaning Hypothesis 1, 2, or 3, followed by the hypothesis we are testing.

Flow state, often described as “being in the zone,” is a mental state in which a person is fully immersed in work and has feelings of energized focus, full involvement, and enjoyment.4 Achieving and supporting flow state occurs through environmental settings (for example, quiet rooms), tooling (for example, focus mode in tools), and personal or team practices (for example, designating blocks of time for deep work). Similarly, prior research in WDT found that novel work, as well as aspects of the work environment that support focus (such as discretion in scheduling and work areas free from noise), influences work-related outcomes.5 We measure flow state in the following ways: satisfaction with the amount of time engaged in deep work, frequency of interruptions, and tasks that hold the developer’s interest.

Based on prior developer research and WDT, we posit that flow state will have positive outcomes in the developer context and that these outcomes will come at three levels: developer, team, and organization. Stated formally:

H1—Flow state positively impacts (a) developer, (b) team, and (c) organization outcomes.

For clarity, we expand this first hypothesis statement:

  • Hypothesis 1a: Flow state positively impacts developer outcomes.

  • Hypothesis 1b: Flow state positively impacts team outcomes.

  • Hypothesis 1c: Flow state positively impacts organization outcomes.

The remaining hypotheses use the same notation and have a similar structure.

A feedback loop occurs when part of the system is used as input to another part of the system.6 In the context of work and software development, the speed and quality of the information in the feedback loop are also important.21 Prior research in WDT has found accurate and timely feedback supports outcomes such as personal performance and catching errors.1,5 In this study, we measure feedback loops as the time to get code changes approved and the frequency of getting a question answered quickly.

We therefore hypothesize that feedback loops support outcomes at three levels: developer, team, and organization:

H2—Feedback loops positively impact (a) developer, (b) team, and (c) organization outcomes.

Cognitive load is the amount of information that working memory can process at one time, and managing it aids problem solving and learning.25 In the context of DevEx, cognitive load is the amount of mental processing required for a developer to complete a task.21 Cognitive load theory describes three types of cognitive load: intrinsic load is the inherent effort or difficulty required to do a task; extraneous load arises from the way information is presented, which can be modified to be more or less intuitive; and germane load is the effort devoted to building and automating schemas (mental models).25

Prior research in WDT has tested environmental and work characteristics that are well-aligned with cognitive load and that contribute to work outcomes. For example, one previous study found that easy-to-understand tasks (intrinsic) and well-designed information flows (germane) contribute to outcomes.5 Another investigation, which was a large meta-analytic study, found that job complexity (intrinsic) and factors supporting information processing (germane) contributed to outcomes.15

Our research measures cognitive load as the ease of deploying changes, how easy it is to understand code, and how intuitive it is to work with processes and developer tools.

Stated formally, we hypothesize that a low cognitive load supports better outcomes for developers, development teams, and their organization:

H3—Low cognitive load positively impacts (a) developer, (b) team, and (c) organization outcomes.

The proposed research model is presented in Figure 1.

Figure 1.  Proposed model.

Measurements and Data

A cross-sectional survey was created using items that were either previously validated in the literature or developed with subject-matter experts and refined over three years; this refinement included pilot data collection, item iteration and analysis, and periodic revisions based on expert feedback and statistical analysis. Sources for all survey items are noted in the accompanying table, which lists the items for each construct. Details for items include the mean and standard deviation (SD); details for constructs include composite reliability (CR) and average variance extracted (AVE). Response options and sources appear at the bottom of the table.

Table.  Survey items and descriptive statistics.
Construct: Flow state (CR = 0.776, AVE = 0.542)
  • "I have a significant amount of time for deep work in my workdays." [1,8] Mean (SD): 3.383 (0.845); Loading: 0.826
  • "In a typical week, how often are you interrupted to work on something else that was unplanned or suddenly requested?" [2,8] Mean (SD): 3.826 (1.087); Loading: 0.557
  • "The coding tasks I work on are more engaging than boring." [1,10] Mean (SD): 3.580 (0.871); Loading: 0.796

Construct: Feedback loops (CR = 0.715, AVE = 0.558)
  • "How often does it take more than 10 minutes to obtain an answer to an internal technical question (that is, about code, a system, or the domain you are working in)?" [3,8] Mean (SD): 2.799 (1.309); Loading: 0.793
  • "Approximately what percentage of the code reviews you request are completed within 4 business hours?" [4,8] Mean (SD): 2.895 (1.412); Loading: 0.698

Construct: Cognitive load (CR = 0.820, AVE = 0.534)
  • "For the primary team you work on, how would you rate the ease of deploying changes?" [5,8] Mean (SD): 3.735 (0.858); Loading: 0.728
  • "How often can you easily understand the code you work with?" [1,8] Mean (SD): 3.827 (0.788); Loading: 0.648
  • "In general, the processes I need to follow to do my work are easy for me to figure out." [1,11] Mean (SD): 3.607 (0.841); Loading: 0.759
  • "In general, the developer tools I have are intuitive and easy to use." [1,11] Mean (SD): 3.689 (0.854); Loading: 0.780

Construct: Developer impacts (CR = 0.825, AVE = 0.614)
  • "I learned new skills related to my work in the past month." [6,9] Mean (SD): 3.922 (0.995); Loading: 0.670
  • "I have felt very productive over the past month." [6,12] Mean (SD): 3.680 (0.990); Loading: 0.816
  • "I have been creative in my work in the past month." [6,9] Mean (SD): 3.635 (0.993); Loading: 0.852

Construct: Team impacts (CR = 0.790, AVE = 0.660)
  • "How would you rate the quality of the code you work on?" [5,8] Mean (SD): 3.584 (0.865); Loading: 0.945
  • "How often does technical debt impact your ability to complete new work?" [1,8] (reverse coded) Mean (SD): 2.826 (0.917); Loading: 0.653

Construct: Organization impacts (CR = 0.823, AVE = 0.545)
  • "How often do you look for jobs at other companies? (Again, this question is private and only visible to the research team.)" [7,9] Mean (SD): 4.142 (1.024); Loading: 0.607
  • "My company culture supports innovation." [6,13] Mean (SD): 3.795 (0.999); Loading: 0.869
  • "My organization achieves its goals." [6,14] Mean (SD): 3.890 (0.828); Loading: 0.830
  • "My organization is profitable." [6,15] Mean (SD): 3.763 (0.913); Loading: 0.605

Response scales:
  [1] 1=Never, 2=Rarely, 3=Sometimes, 4=Very Often, 5=Always
  [2] 1=At least once every couple of hours, 2=At least once per day, 3=At least once every two days, 4=At least once per week, 5=Less than once per week
  [3] 1=At least once every two days, 2=At least once per week, 3=At least once every two weeks, 4=At least once per month, 5=Less than once per month
  [4] 1=0-20%, 2=21-40%, 3=41-60%, 4=61-80%, 5=81-100%
  [5] 1=Very Bad, 2=Bad, 3=Acceptable, 4=Good, 5=Very Good
  [6] 1=Strongly Disagree, 2=Disagree, 3=Undecided, 4=Agree, 5=Strongly Agree
  [7] 1=Every day, 2=Every week, 3=Every month, 4=Every few months, 5=I never look for jobs at other companies

Sources:
  [8] Refined and adapted over three years; contact third author for details.
  [9] Based on experience and internal surveys; contact first author for details.
  [10] Adapted from Magyaródi et al. (2013)
  [11] Adapted from Morrison et al. (2014)
  [12] Adapted from Murphy-Hill et al. (2019)
  [13] Adapted from Meijer (2019)
  [14] Adapted from Theriou et al. (2017)

Data collection was done via a Web-based survey; this was administered by DX, a company that offers a developer experience platform. The participants were developers at companies that were DX customers. Among these customers, developers are regularly surveyed regarding their DevEx (referred to in the following as the regular survey). Immediately upon completing the regular survey, developers were invited to participate in our study (the research survey).

Completing both surveys was optional, although only developers who had completed the regular survey were invited to complete the research survey. No questions were duplicated between the two surveys. Across DX’s portfolio, the completion rate for the regular survey is greater than 90%, and it takes a median of 10 minutes to complete.

During the five weeks of data collection, 2,213 participants were invited to take the research survey and 219 completed it, a response rate of 9.9%. The completion rate for participants who viewed the research survey was 87%, and the median time to complete it was 2.5 minutes. Because the research survey followed the regular survey and was voluntary, the low response rate could be a result of time constraints, which are common for developers in enterprise settings.20 Of those who completed the survey, 170 (77.6%) were from a company whose primary business is technology, and 200 (91.3%) worked for a company with more than 500 employees (our cut-off for a medium or large company). Because of privacy concerns, the research team did not collect participant demographics such as gender, age, or years of experience.
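The sample percentages above are simple ratios; as a quick check (plain Python, with the counts taken from the text):

```python
# Counts reported in the text
invited, completed = 2213, 219  # research-survey invitations and completions
tech, large = 170, 200          # primarily-tech companies; companies with 500+ employees

print(round(100 * completed / invited, 1))  # 9.9  (response rate, %)
print(round(100 * tech / completed, 1))     # 77.6 (primarily tech, %)
print(round(100 * large / completed, 1))    # 91.3 (500+ employees, %)
```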

Analysis and Model Results

The proposed research model was tested using partial least squares (PLS) analysis, chosen for three reasons:2,3 It is well suited for exploratory analysis and theory building; it does not require assumptions of multivariate normality; and it works well with small-to-medium sample sizes. Compared with covariance-based structural equation modeling (CBSEM), which is more concerned with model fit, PLS is particularly well suited for predictive models,3 an ideal fit when modeling real-world outcomes. In addition, our proposed model contains nine independent variables and nine dependent variables, plus two control variables (explained later), making it a complex model; CBSEM may show poor model fit simply because of this complexity.3

A rule of thumb for determining adequate sample size in PLS analysis is 10 times the largest number of structural paths directed at any construct in the model.9 In this study, the largest structural equations involve the developer experience constructs, each of which has three paths to the outcomes. Our sample size of 219 far exceeds the resulting minimum of 30.
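The 10-times rule of thumb reduces to a single multiplication, following the article's computation; the helper name below is ours:

```python
def min_sample_size_10x(max_structural_paths):
    # PLS rule of thumb: 10 x the largest number of structural paths
    # associated with any single construct in the model.
    return 10 * max_structural_paths

# Each DevEx construct directs three paths at the outcomes,
# so the minimum is 30; the study's n = 219 comfortably exceeds it.
print(min_sample_size_10x(3))  # 30
```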

We conducted our analysis using SmartPLS 4.24 Consistent with prior research using PLS techniques, model analysis includes two stages:3 assessment of the measurement model and structural model.

In the assessment of the measurement model, convergent validity was established with three criteria:10 each item loaded on its respective construct, with no loading below the 0.50 cutoff (appropriate for exploratory research);12 the CR of all constructs exceeded 0.70, confirming reliability;2 and the AVE of all constructs was greater than 0.50. Discriminant validity was confirmed using the heterotrait-monotrait ratio of correlations.13 Thus, our measures exhibit good psychometric properties; the items and descriptive statistics are shown in the table.

Like linear regression, PLS assesses the significance of relationships between constructs and provides R2 values. These values indicate the proportion of variance in the dependent variable that can be explained by the independent variables. Furthermore, path coefficients and their significance can be used to assess the strength and importance of the proposed relationships between constructs. Together, the R2 values and path coefficients provide insights into how well the data supports the hypothesized model.
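As a toy illustration of what R² captures (this is not the study's model or data), the following pure-Python sketch fits a one-predictor least-squares line to made-up numbers and computes the proportion of variance explained:

```python
# Hypothetical data: predictor x and outcome y (illustrative only)
x = [1, 2, 3, 4, 5]
y = [1.2, 1.9, 3.2, 3.8, 5.1]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Ordinary least-squares slope and intercept
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
alpha = my - beta * mx
pred = [alpha + beta * xi for xi in x]

# R^2 = 1 - (residual sum of squares / total sum of squares):
# the share of variance in y explained by x
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # ≈ 0.987 for this toy data
```

PLS reports the analogous quantity for each dependent construct, alongside path coefficients and their significance.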

When testing the model, we included two control variables: organization size and industry. Based on the types of companies in the sample, these were reduced to two binary values and included as dummy variables. Organization size was coded as small (fewer than 500 employees) or not small (500 or more employees); industry was coded as primarily tech or not primarily tech. The analysis showed the control variables were not significant. The results of the hypothesis testing are presented in Figure 2 and summarized here:

  • H1 states that flow state positively impacts developer, team, and organization outcomes. This hypothesis is fully supported.

  • H2 states that feedback loops positively impact developer, team, and organization outcomes. This hypothesis is partially supported; feedback loops influence team outcomes but not developer or organization outcomes. This finding is discussed in more detail later in this article.

  • H3 states that cognitive load positively impacts developer, team, and organization outcomes. This hypothesis is fully supported.
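The control-variable coding used in the model test above (organization size and industry reduced to binary dummies) can be sketched as follows; the function and category names are ours:

```python
def code_controls(employees, industry):
    """Return (small, tech) dummy variables for one hypothetical respondent."""
    # 1 = small organization (fewer than 500 employees), 0 = not small
    small = 1 if employees < 500 else 0
    # 1 = primarily tech, 0 = not primarily tech
    tech = 1 if industry == "tech" else 0
    return small, tech

print(code_controls(120, "tech"))     # (1, 1)
print(code_controls(5000, "retail"))  # (0, 0)
```

Note that 500 employees falls on the "not small" side of the article's cut-off.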

Figure 2.  Emergent model.

IPMA: Making an impact.  Because of its focus on maximizing prediction of dependent variables, PLS provides additional insight into items that may have outsized impact. We conducted an importance-performance map analysis (IPMA) on the research model to identify items that could give teams and organizations additional insight. In summary, this analysis identifies items that have a high impact and performance relative to the dependent variable under consideration. To improve developer outcomes, deep work and engaging work have the biggest potential impact. To improve organizational outcomes, several items have the potential for big impact: deep work, engaging work, intuitive processes, and intuitive developer tools. We could not run the analysis for team outcomes based on the emergent model.
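In an IPMA, the "performance" axis is conventionally an indicator's mean rescaled to 0–100, while "importance" is its total effect on the target outcome. A sketch of the rescaling step, applied to the deep-work item mean from the table; the helper name is ours:

```python
def ipma_performance(mean, scale_min=1, scale_max=5):
    # Rescale a Likert-item mean to 0-100 for the IPMA performance axis
    return 100 * (mean - scale_min) / (scale_max - scale_min)

# Deep-work item mean from the table: 3.383 on a 1-5 scale
print(round(ipma_performance(3.383), 1))  # 59.6
```

High-importance, lower-performance items are the natural intervention targets, which is how deep work and engaging work surface in this analysis.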

Note these results are based on our research context, but they provide clues that can offer actionable insights for teams and organizations. For example, if you are hoping to improve developer outcomes such as productivity, learning, and creativity, think about ways that you can provide opportunities for deep work; these can include strategies such as encouraging both focus time for individual developers and coordinated focus time among teams, like days with few or no meetings. You can also look for opportunities to create engaging work and for learning, such as hack days.

The analysis shows that deep and engaging work provide outsized support for organizational outcomes such as innovation, retention, profitability, and broader organizational goals. Intuitive processes and tools also support these goals. Organizations can look for ways to streamline and clarify their processes, which has been found to be impactful in other research, or to provide access to intuitive, easy-to-use developer tools. Previous research has found that inefficient work processes are a top challenge for developers,14 and improvements in processes and tools are a driver of outcomes.7

Alternative models.  We also considered an alternative model supported by WDT. One reading of the theory allows for the items in team outcomes—technical debt and code quality—to be reframed as environmental factors that moderate developer and organization outcomes.22 That is, they could either attenuate (reduce) or amplify (increase) the impact of DevEx factors on outcomes. A test of this model found that “team outcomes as environmental moderators”—specifically, our operationalization of technical debt and code quality—were not significant. Therefore, we do not include details of the results. Other environmental factors could be relevant. Also note that the control variables were not significant in this analysis.

Another view of impact: Likelihood analysis.  To put these results in perspective, we consider which outcomes might be expected when specific DevEx interventions are put in place. Following are the statistical results observed in this likelihood analysis, broken down by the three dimensions of DevEx: flow state, cognitive load, and feedback loops.

Flow state:

  • Developers who have a significant amount of time carved out for deep work feel 50% more productive compared with those lacking in dedicated time. Granted, it’s not always easy for developers to reserve blocks of time on their calendars, especially if they work on teams distributed across time zones. But dedicating time to deep work is a practice that pays high dividends in terms of allowing developers to be truly productive. Encouraging developers and teams to carve out time to focus is important, and their environment needs to support this practice by minimizing interruptions.

  • Developers who find their work engaging feel they are 30% more productive, compared with those who find their work boring. Rethinking the distribution of tasks among individuals in a team, or teams within an organization, can help here. Are the same developers continually working on less desirable projects and tasks, which could lead to burnout? Are certain teams tasked regularly with activities they find boring or divorced from the company’s mission and customers?

Cognitive load:

  • Developers who report a high degree of understanding the code they work with feel 42% more productive than those who report low to no understanding. It’s an all too familiar pattern when teams need to move fast and overlook making their code clear, simple, or well documented. While that is sometimes necessary, it can really hinder the team’s long-term productivity. Tooling and conventions that help code be understandable within and across teams can futureproof productivity.

  • Developers who find their tools and work processes intuitive and easy to use feel they are 50% more innovative compared with those with opaque or hard-to-understand processes. Unintuitive tools and processes can be both a time sink and a source of frustration—in either case, a severe hindrance to individuals’ and teams’ creativity.

Feedback loops:

  • Developers who report fast code-review turnaround times feel 20% more innovative compared with developers who report slow turnaround times. Code reviews that are completed quickly allow developers and teams to move to their next idea quickly, setting the stage for coming up with the next great thing.

  • Tight feedback loops have another positive outcome: Teams that provide fast responses to developers’ questions report 50% less technical debt than teams whose responses are slow. It pays to document frequently asked developer questions, or to put tooling in place, so developers can quickly find the answers they need and integrate good coding practices and solutions as they write their code, which creates less technical debt.

Discussion

The most important contribution of this study is the evidence it provides that improving DevEx creates positive outcomes for individuals, teams, and organizations.

This is the first study we are aware of that analyzes the statistical relationships between DevEx factors and outcomes at the individual, team, and organization levels. Although prior studies have suggested these relationships, they have not been quantified. The results reported here provide concrete evidence that can empower development teams and leaders to advocate for investment in DevEx.

How to advocate for investment in DevEx.  Most developers know they need good DevEx to do their best work. Furthermore, because so many companies today are software-driven, their ability to be profitable depends on their developers’ ability to be productive and creative, and write and maintain high-quality software, with low technical debt. Even a company’s ability to innovate and be profitable depends on DevEx—because if it’s too hard to do daily work, it’s too difficult to innovate.

Knowing intuitively the importance of DevEx, however, is not always enough to make a compelling case to upper management. When management rightfully asks if DevEx has an impact on business, this study can provide an answer, showing that DevEx affects the performance of individual developers, teams, and organizations. Further, our analysis clearly indicates which factors should be given priority for teams to achieve positive outcomes. This evidence can justify a DevEx initiative, as well as provide actionable insights to guide a DevEx intervention.

Now that you are sold on improving DevEx, how can you convince your organization to buy in? First, have them read this article. Joking aside, here are five steps that can help you advocate for continuous improvement by keeping your arguments grounded in data.

  1. Get data on the current developer experience. Understand what DevEx is like at your organization. For organizations that are just beginning their DevEx journey, this means collecting new data to reveal their biggest pain points, as well as knowing their current abilities to make changes. (You can use or adapt the survey used in this study, included in the table, or a dedicated solution such as DX.) If this is your first time collecting data about DevEx, this becomes your baseline. If you have already been doing some DevEx work, you can integrate this data and update your metrics.

  2. Set goals based on your DevEx data. Use your DevEx data to inform your goals and investments. These can be based on current business priorities, DevEx data, and what you learned from this article. For example, say your organization collected DevEx data last month. It was an exploratory study, so it asked two questions: a net promoter score (NPS) question for internal dev tools (a 1–10 scale asking whether respondents would recommend the tools to others) and an open-text question for feedback about dev tools. Using this data, you see that key challenges (opportunities!) are build times, test flakiness, and gaps in monitoring. Reviewing business priorities, you see that your organization has been making ongoing investments in monitoring, build times, and improving PR processes. Reviewing this research, you see there are three categories of DevEx that are impactful, with deep work and engaging work having a large impact.

    This would put you in a good position to set goals. Any of the items listed in the previous paragraph would be areas in which to start making investments. To get a bit more strategic, you could identify overlaps. One strategic overlap is build times (identified in both your DevEx data and existing business priorities), which could align with feedback loops (supported by this study). Another possible strategic overlap is with monitoring (seen in both your DevEx data and existing business priorities), which could also support feedback loops for teams (a concept supported here).

  3. Set your team up for success. Once you’ve reviewed your data and set your goals, be sure to leverage the mechanisms your company has in place to set you and your team up for success. This can mean setting team objectives and key results (OKRs), or setting a shared OKR with another team to establish shared accountability. Communicate your goal to your team and organization. Revisit and check progress periodically.

  4. Share progress and validate investments. Share results with developers, as well as DevEx and business leaders, to evaluate and discuss the value of your investments. Reflect on which investments delivered impact, as well as what was surprising and what you learned. (Sharing surprises can be especially useful because it highlights the value of having data and acting on it, and the ability to course-correct quickly.) By periodically reassessing the state of DevEx and highlighting improvements, you can increase confidence that your investments are making an impact.

  5. Repeat the process. Go back to step 1 and collect more data. As you do this, reflect on your last experience to update and refine your data collection and interventions. (Remember that except when correcting for errors, adjustments in data collection should usually remain small to allow for comparisons over time.) In general, you should repeat this process of data collection and setting goals (or checking progress on large goals) every three to six months.
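Steps 2 and 5 both involve turning raw survey responses into numbers that can be compared across collection cycles. As a minimal sketch (the scores, wave labels, and scale buckets below are illustrative assumptions, not data from the study), the NPS-style question from step 2 and the periodic comparison from step 5 might look like this:

```python
from statistics import mean

def nps(scores):
    """Net promoter score: % promoters (9-10) minus % detractors (1-6).

    Uses the conventional NPS bucketing, adapted to the 1-10 scale
    described in step 2; passives (7-8) count only toward the total.
    """
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical survey waves, repeated every three to six months (step 5)
# with the question left unchanged so the waves remain comparable.
waves = {
    "Q1": [9, 7, 4, 10, 6, 8],
    "Q3": [9, 9, 7, 10, 8, 8],
}

for period, scores in waves.items():
    print(f"{period}: NPS={nps(scores):+.0f}, mean={mean(scores):.1f}")
# Q1: NPS=+0, mean=7.3
# Q3: NPS=+50, mean=8.5
```

Because step 5 advises keeping data collection stable between waves, the same computation applies to each wave, making period-over-period comparison straightforward.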

Limitations and opportunities for research.  While the results of this research are promising, there are some limitations, which is true of all research. First, this study was conducted among developers working at companies that were already engaging with a DevEx company (DX). This could indicate that these companies have a commitment to improving DevEx, which could bias the results when compared with organizations that do not have a similar commitment or culture. We also invited developers who participated in a DevEx survey, meaning we likely reached a population that cares about developer experience. We see this as a benefit, however, since these developers are likely more reflective about their experience.

Second, our measures were operationalized based on a model of DevEx that focuses on flow, feedback loops, and cognitive load. There may be other dimensions or definitions of DevEx that warrant additional research. In addition, our operationalization of feedback loops focused on only a small subset of developers’ work: answers to questions and code reviews. These are key aspects of how a developer works on a team, which may explain why feedback loops influenced only team outcomes. We therefore highlight the need to explore additional aspects of DevEx, such as feedback loops that come purely from people (conversations or discussions), purely from systems (automated builds or tests), and from people but mediated through systems (code review).

Third, this study was cross-sectional. DevEx is a complex process that unfolds over time, with practices that are mutually reinforcing. Future work should investigate longitudinal relationships among DevEx constructs and their outcomes. Our research strongly suggests that improving DevEx is worth the effort and that the impact of doing so can be measured. We invite you to share how improving and measuring DevEx factors impacts outcomes in your organization.

 

Nicole Forsgren is a partner at Microsoft Research, where she leads the Developer Experience Lab. She is an expert in DevEx, DevOps, and decision making, and is the lead author of the Shingo Publication Award-winning book Accelerate: The Science of Lean Software and DevOps.

Eirini Kalliamvakou is a staff researcher at GitHub Next, where she guides the team’s strategic prototyping efforts, grounding them in user-centric research. Previously at GitHub, she led the Good Day Project and the 2021 State of the Octoverse Report.

Abi Noda is the founder and CEO at DX, where he leads the company’s strategic direction and R&D efforts. Before joining DX, Noda held engineering leadership roles at various companies and founded Pull Panda, which was acquired by GitHub in 2019.

Michaela Greiler is the head of research at DX, focusing on developer productivity and experience. Previously, she worked at the University of Zurich and Microsoft Research, helping to boost developer productivity using engineering data.

Brian Houck is a principal productivity engineer at Microsoft focused on improving the well-being and productivity of Microsoft’s internal developers. Over the past three years, much of his research has centered on how the shift to remote/hybrid work has impacted developers.

Margaret-Anne Storey is a professor of computer science at the University of Victoria and holds a Canada Research Chair in human and social aspects of software engineering. She serves as chief scientist at DX and consults with Microsoft to improve developer productivity.

