
A New Map For Knowledge Dissemination Channels


The landscape for Information Systems (IS) research spreads across a large, diverse, and growing territory with linkages to other fields and traversed by increasing numbers of researchers. There are well over 500 journals for publishing IS-related research (http://lamp.infosys.deakin.edu.au/journals/index.php). Thus, many trails are available to researchers as they contemplate where to place their research so that it has a good chance of yielding strong benefits and of being readily visible to those with an interest in its subject matter. Similarly, there are many options available to searchers as they try to become and remain knowledgeable about subject matter that strikes their varied IS interests. These knowledge searchers include IS practitioners, seeking insights and solutions for the practical problems that they face. Others are IS educators or trainers, searching for up-to-date findings to help them structure and deliver content. Still other searchers are IS researchers, seeking to find their bearings as a basis for launching their future research initiatives.

Both searchers and researchers can benefit from maps that identify and rate available publishing venues and routes. To the extent it is well-founded, such a map helps practitioners, educators, and researchers zero in on the most important forums to monitor for IS knowledge emerging from scholarly investigations. As Figure 1 shows, there are two traditional classes of maps: those created from surveys that gauge/consolidate opinions and those based on points of reference that rate journals on the extent to which they are visited/used/cited. However, the foundations of these traditional classes of maps are not particularly sound. To circumvent the substantial limitations of the opinion survey and journal citation approaches, we adopt a new approach that rates journals based on publishing behaviors of tenured IS researchers from a set of prominent research universities, looking for journals that these researchers collectively tend to emphasize in placing their research contributions.

Tenured IS faculty members at prominent research universities perform/publish research that tends to be important, influential, and of high quality. They tend to succeed in placing the most substantial portions of that work in journals that they recognize as being premier publishing venues for IS research. Every one of these researchers has individually compiled an IS research record that his/her university deems worthy of tenure. Following a brief review and critique of the two traditional approaches to mapping the IS publishing landscape, we develop and illustrate the new approach.


Survey-based Ratings

There are several key parameters to consider when interpreting results from survey-based ratings of journals for IS research. How they are treated in a particular survey influences ratings that result.

Criterion. On what criterion are survey respondents asked to offer their opinions about journals? Is a respondent asked for his/her perception of a journal’s content in terms of its importance, originality, influence, relevance, rigor, or accessibility? The criterion may be less specific, such as “journal’s overall quality.” Because respondent impressions about the meaning of “quality” can differ widely, they are essentially giving opinions based on different, or ill-defined, criteria.

Experience. What experiences do survey respondents have in performing/publishing IS research? In some surveys, opinions across great ranges of experience (such as doctoral student to full professor) are combined with even weighting. A survey may also combine opinions from those at research-intensive universities with those having lower research expectations. Survey opinions that come from administrators may or may not be founded on IS research experience.

Anchoring. Often, surveys ask respondents to rate journals from a pre-specified list, thus anchoring results to that list. Even though “write-in” journals may be allowed, anchor journals are likely to be more highly rated when responses are aggregated into an overall rating. Conversely, using no anchor journals relies entirely on each respondent’s ability to recall journals that an anchor list would have included.

Locale. Surveys can vary in terms of the geographic locales represented by respondents. Results of an opinion survey including only Asia-Pacific respondents can differ from an identical survey of European respondents.

Cutoff. When survey results are reported, there may be a limit on the number of journals included in the rating. In an extreme case, Business Week surveys business school administrators for their opinions about journals and uses a cutoff that recognizes only one IS journal. The Financial Times uses a cutoff of only two IS journals. A difficulty with extremely small cutoffs is that the resultant journals do not represent the full scope of a field. Relying on only two journals is inadequate for appreciating quality research contributions across the diverse and expansive IS landscape.10

Table 1 summarizes how various surveys have treated these five parameters.1,6,7,8,12,13 There are, of course, other points of difference among surveys such as the number of respondents and the response rate. Because of their subjective nature and challenges of handling the survey parameters, ratings of journals for publishing IS research are neither definitive nor straightforward to apply for such tasks as evaluating IS researcher productivity or guiding IS searchers.


Reference-based Ratings

Reference-based approaches to mapping the IS publishing landscape avoid many of the problems of surveys, but have their own limitations. Based on about 15,000 citations, one of the earliest of these ranked hundreds of journals based on the frequencies with which they were cited in reference lists of all papers published over a five-year period in several well-established journals devoted to IS research.3 A different citation measure involves normalizing citation frequencies to compensate for differences in the number of years articles in cited journals had been available for reference.4
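
One simple form such an age adjustment can take (a sketch only; the cited study’s exact normalization may differ) is to divide each journal’s citation count by the number of years its articles have been available for citation:

\[
\widetilde{C}_J = \frac{C_J}{Y_J},
\]

where $C_J$ is the number of citations to journal $J$ observed in the sampled reference lists and $Y_J$ is the number of years journal $J$’s articles have been available during the study window. Under this adjustment, a journal accumulating 150 citations over ten years of availability scores the same as one accumulating 75 citations over five years.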

More recently, citation analysis has been used to rank twenty-seven journals that publish IS research.5 Based on about 15,000 citations to these journals, as tracked during a four-year window by the Social Science Citation Index (SSCI) and the Science Citation Index (SCI), this study uses seven different citation measures to yield seven rankings, plus a composite ranking. Because they were not tracked by SSCI and SCI, some notable IS journals (such as Journal of Management Information Systems) do not appear among the twenty-seven that are ranked. Multi-discipline journals (such as Management Science) are also excluded.

Citation analysis has also been used to help identify “top” journals for purposes of ranking business school faculties. The extent of a school’s publications in these “top” journals determines its rank. One example of such a study uses the Institute for Scientific Information (ISI) impact factors, along with survey results, to construct a list of journals for each of the major business disciplines.11 For a given year, a journal’s ISI impact factor is calculated as the count of citations in that year to papers published in that journal during the two prior years divided by the number of articles that journal published in those two years. The result is highly sensitive to the mix of journal issues included in ISI’s citation database. Also, for disciplines such as IS, where a publication cycle is often more than two years from conception to print, ISI’s two-year window is much too narrow to measure the bulk of impacts.
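
Expressed as a formula (this simply restates the prose definition in symbols), the ISI impact factor of a journal $J$ for year $y$ is

\[
\mathrm{IF}_{J,y} = \frac{C_J(y-1) + C_J(y-2)}{A_J(y-1) + A_J(y-2)},
\]

where $C_J(y-k)$ is the number of citations received in year $y$ by articles that $J$ published in year $y-k$, and $A_J(y-k)$ is the number of articles $J$ published in year $y-k$. For instance, a journal receiving 200 such citations to 100 articles from the two prior years would have an impact factor of 2.0.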

The help page for ISI impact factors states that these measures are sensitive to various other influences such as subject specialty, journal format, and journal tradition. Moreover, it states that ISI impact factors should not be used to compare journals across fields. Because there are IS journals that concentrate in diverse sub-fields (modeling, e-commerce, intelligent systems), journals from reference disciplines that publish IS researchers’ work (computer science, information science), and multidiscipline journals that encompass IS and other fields, adoption of ISI impact factors to try to rate forums for publishing IS research is highly questionable.

Regardless of the citation measure used for rating journals, there are also problematic issues in dealing with semantic differences among references: Some references are essential for development of reported research, some are tangential; some are cited many times in the article; others are cited once; some are cited positively, but some may be disputed. Because all references are not the same, measures based on citations must be interpreted carefully.

One way to try to compensate for limitations of any particular journal rating is to combine several ratings into a single rating.3,9 While this may soften idiosyncrasies of any particular study, limitations of the survey and citation approaches still underlie the derived journal rating.


A New Approach to Rating Journals

Now, consider a new and very different approach to rating journals: directly examine publishing behaviors of IS researchers whose records have been judged by leading research universities as being sufficient to warrant tenure. We can expect the collective publication record of full-time, tenured IS faculty members at a sizable set of leading research universities to be representative of the best IS research and to appear primarily in journals that make the greatest contributions to the IS field.

To operationalize the new approach, several decisions must be made. First, what set of “leading research universities” should be used? Second, what time period should be used for observing their IS faculty members’ journal publishing activities? Third, what cutoff point should be used to keep the numbers of rated journals manageable? Fourth, what measure(s) should be adopted as the basis for determining the relative rating of all journals that meet or exceed the cutoff point? To illustrate, we consider each of these issues in sequence and then apply them to identify the premier journals for publishing IS research and rate the relative importance of each journal.

Ideally, identifying a set of “leading research universities” should be done independently from constructing the rating. The set should include sufficient universities to avoid being highly sensitive to publishing behaviors of tenured IS researchers at any single university. Collectively, included universities’ faculties should be broadly representative of the IS field. Here, we adopt those designated as being the top research-intensive public universities in the 2005 Annual Report of The Center for Measuring University Performance (http://mup.asu.edu/research2005.pdf). Annually, TheCenter compiles a comprehensive set of data on over 600 institutions and analyzes it in terms of nine main criteria to produce research university rankings. Using the 2005 rankings, we focus on the thirty-one leading public research universities exclusive of strictly medical schools (there is a tie for the thirtieth position). As of June 2006, these universities had 106 full-time, tenured IS faculty members. While there may be some IS specialty topics not covered by this group, it is likely that most of the key IS topics are well represented in the collective work of these 106 senior researchers from leading universities.

The time period for observing publishing behaviors needs to be sufficient to capture major developments in the field and to avoid biases due to short-run phenomena. To get a good historical perspective on journals’ relative importance, consistency, and staying power, we tabulate publishing behaviors of the 106 IS researchers since 1980.

Only those journals in which the 106 faculty members have authored at least ten articles are considered here. This threshold gives relatively new journals a chance to appear, but avoids a huge list, most of whose journals have but a few articles by the 106. It turns out that there are 43 journals in which the 106 faculty members have authored at least ten articles.

The fourth decision is concerned with metric(s) to use in rating the journals. One option is the relative publishing breadth for the journals: if a high percentage of tenured IS researchers have authored articles in a journal, it is rated above journals in which smaller percentages have authored articles. Table 2 shows ratings for journals with a publishing breadth of at least 10%. Notice that Communications of the ACM is the only journal with a publishing breadth of over 50%, with three others being fairly close to 50%. In terms of the publishing breadth measure, these are premier journals for IS researchers and searchers. These are followed by five more journals that exceed a 25% publishing breadth. Journals that are close in the breadth measure should be interpreted as being comparable, rather than adhering to a strict ranking (e.g., there is little difference between a breadth of 11% versus 10%).
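
As a minimal sketch of how the publishing breadth measure could be computed (the records below are hypothetical, not the study’s data), breadth is simply the share of the researcher pool that has authored at least one article in a given journal:

from collections import defaultdict

# Hypothetical records: (researcher_id, journal) pairs, one per published article.
articles = [
    (1, "Communications of the ACM"),
    (1, "Decision Support Systems"),
    (2, "Communications of the ACM"),
    (3, "MIS Quarterly"),
]
POOL_SIZE = 3  # size of the researcher pool (106 in the study)

# Collect the distinct authors who have published in each journal.
authors_per_journal = defaultdict(set)
for researcher, journal in articles:
    authors_per_journal[journal].add(researcher)

# Breadth: percentage of the pool with at least one article in the journal.
breadth = {j: 100 * len(a) / POOL_SIZE for j, a in authors_per_journal.items()}
for journal, pct in sorted(breadth.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {pct:.0f}%")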

Another rating option uses each journal’s publishing mode. With this measure, if a journal is the most frequent publication outlet for a high percentage of tenured IS researchers, it is rated above journals that are rarely the mode. As Table 2 shows, the four journals that most often are the publishing modes account for over 50% of the 106 faculty members’ modes. Six journals below the cutoff are modes for at least one faculty member apiece: The Information Society, European Journal of Information Systems, Human Computer Interaction, ACM Transactions on Database Systems, Information Processing and Management, and Electronic Markets.

A third measurement option is publishing intensity (or rate). With this measure, the most important journal outlets for IS research are those with the highest average number of articles per researcher across the set of senior IS scholars at the leading research universities. Table 3 displays ratings of journals for this option. Notice that there are four journals for which the publication intensity across all faculty members exceeds 1.0 and three others that exceed a .75 level of intensity. In terms of the publishing intensity measure, these seven are the very top journals for IS researchers and searchers. They are followed by a cluster of three more journals in the range of one-half to three-fourths of an article per researcher, rounding out the top-10 journals in which the tenured IS scholars at leading research universities have collectively published the greatest concentrations of their journal articles. Journals that are close in the publishing intensity measure should be interpreted as being comparable, rather than adhering to a strict ranking (e.g., an intensity of .17 is little different from .16).

A fourth measure, publishing weight, answers the question: For those tenured IS faculty members who have published in a specific journal, how heavily do they tend to publish there? A relatively heavy weight for a journal suggests that those who do publish there find it to be a particularly appropriate and valuable outlet (i.e., a premier outlet) for their IS research. Table 3 shows journal ratings based on this weight measure. Notice that, within the IS field, some researchers tend to publish heavily in reference-discipline journals. Three of the four journals with the greatest weights (The Information Society, Journal of the American Society for Information Science and Technology, Information Processing and Management) are from the information science field, and the highest-weight journal is from the public administration field. The average weight across all forty-three journals is 2.17. Each of the four highest-intensity journals has a weight well above the mean, while the next cluster of three journals each has a weight shy of the mean. Note that multidiscipline journals (such as Management Science, Decision Sciences, Sloan Management Review) tend to have below-average weights, as do many of the IS specialty journals (such as Expert Systems with Applications, International Journal of Electronic Commerce, Journal of Strategic Information Systems) and relatively new journals (such as Information Systems and E-Business Management, Information Systems Frontiers).
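
The remaining three measures can be sketched in the same style (again with hypothetical counts, not the study’s data): mode asks for which share of researchers the journal is the most frequent outlet, intensity averages a journal’s article count over the entire researcher pool, and weight averages it over only those researchers who have published in that journal.

from collections import Counter, defaultdict

# Hypothetical article counts: counts[researcher][journal] = number of articles.
counts = {
    1: {"Communications of the ACM": 4, "Decision Support Systems": 2},
    2: {"Communications of the ACM": 1, "MIS Quarterly": 3},
    3: {"MIS Quarterly": 2},
}
POOL_SIZE = len(counts)  # 106 in the study

# Mode: share of researchers for whom the journal is their most frequent outlet.
mode_hits = Counter(max(jc, key=jc.get) for jc in counts.values())
mode = {j: 100 * n / POOL_SIZE for j, n in mode_hits.items()}

# Per-journal article totals and number of distinct authors.
totals, authors = defaultdict(int), defaultdict(int)
for jc in counts.values():
    for journal, n in jc.items():
        totals[journal] += n
        authors[journal] += 1

# Intensity: average articles per researcher across the whole pool.
intensity = {j: totals[j] / POOL_SIZE for j in totals}

# Weight: average articles per researcher among those who published in the journal.
weight = {j: totals[j] / authors[j] for j in totals}

print(mode, intensity, weight, sep="\n")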


Discussion

Both practitioner and academic segments of the IS community can benefit from paying attention, on a continuing basis, to the findings of IS researchers. Just as the pace of technological advances in computing and communications is rapid, so too is the ongoing emergence of new knowledge about the nature, usage, and impacts of information systems. Within this growing body of knowledge are potentials that can variously propel, stimulate, and provoke IS thought and practice. But there is a problem in actualizing this potential. Searchers and researchers of today’s IS landscape encounter a large and complex panorama of research forums. Because there is insufficient time for an individual to regularly monitor dozens, much less hundreds, of journals relevant to IS issues, a short list of the most important research journals can help by identifying the first-reads for IS. On the other hand, focusing on just one or two journals is inadequate to give a sense of the full IS research picture,10 leading a searcher not only to form a misimpression of IS research, but also to overlook valuable resources that are perhaps more relevant, innovative, and important with respect to his/her needs and interests.

As explained in this article, traditional approaches to rating IS journals (opinion surveys and journal citations) are methodologically problematic, which casts doubt on the value of their results for both academicians and practitioners. Thus, we employ a non-traditional approach that observes the actual collective publishing behavior of senior IS researchers at an independently identified set of very prominent research universities. The resultant forty-three journals in Table 3 have published the heaviest concentrations of articles authored by these IS researchers. The four measures of publishing breadth, mode, intensity, and weight give distinct, yet complementary, ways of rating these journals – without encountering difficulties inherent in survey-based or reference-based approaches. Together, they map out well-grounded and useful guidance in identifying premier journals for both placing and finding IS research. The strong grounding comes from the fact that the journals reflect the actual IS publishing behavior consistent with achieving/holding tenure at leading research universities. The usefulness comes from flexibility to combine one or more of the four ratings (as desired) to produce a core set of premier journals that can be supplemented as needed for the purpose at hand.

For an IS practitioner, for example, the purpose may be to construct a short, regular reading list. Prime candidates for inclusion would be the upper echelons of Table 3. These journals tend to be fairly broad in the scope of their editorial coverage, and those whose content strikes a responsive chord in the reader would anchor that reader’s list. Many of the other journals in Table 3 are IS specialty journals and reference discipline journals (hence their lower intensity and breadth measures). Still, because they are on the radar screen in Table 3, some of these are also candidates for inclusion in the practitioner’s short list. For instance, if the practitioner has a particular interest in uses of artificial intelligence in information systems, then the IS specialty journal, Expert Systems with Applications, and the reference discipline journal, IEEE Intelligent Systems, are strong contenders. The practitioner may want to consider including additional journals devoted to this interest area, such as the IS specialty journal Intelligent Systems in Accounting, Finance, and Management.

Similarly, if the purpose is to establish a standard against which IS faculty members’ research performance at a specific university will be judged when making promotion, tenure, and merit decisions, then the core of premier journals may need to be supplemented to represent that IS faculty’s special expertise or focus. When there is, for instance, a special focus on information security, the core can be supplemented with journals from reference disciplines or IS specialty journals that are known for their security research. Failure to do so would mean that the IS faculty members would be judged against a standard that ignores their special strength or ambition. Conversely, it could be advisable to pare the core to eliminate any reference discipline or IS specialty journal whose subject matter has no overlap with the research directions of the university’s IS faculty. However, there is no sound rationale for excluding from the core any journal that consistently rates near the top of the four measures. These journals are clearly among the most important locations for placing and viewing IS research. Relatively large proportions of senior IS scholars from leading research universities publish in them and do so with relatively great intensity (thereby contributing more to the stature of these journals than to other journals).

The basic methodology used here could be applied for a different (but still sizable) group of researchers, other than those at TheCenter’s leading universities. A particular IS department may have a prescribed set of benchmarks composed of IS departments at certain other universities. The collective publishing behavior of the tenured IS researchers in these departments could be used to establish a journal rating for self-evaluation based on the prescribed peers. For instance, for an IS department at a non-doctoral-degree-granting university, the prescribed benchmark departments may be those from 20–30 other schools without doctoral degree programs, and the resultant journal rankings could well differ from those in Tables 2 and 3.

There are a few limitations of the new behavior-based rating methodology that should be kept in mind when applying it and interpreting its results. First, there is the selected set of leading research universities. It is difficult to dispute TheCenter’s designation of top research universities. However, it is possible that a somewhat different rating would result if the set were expanded substantially. One difficulty with such expansion is that the set may begin to include universities not quite at the same high research level, adversely affecting the aim of identifying the most important journals for IS research. This study assumes that the 106 faculty members are sufficiently diverse in their individual research programs to collectively represent a broad spectrum of the most heavily researched IS subject matter. Because they are the senior IS researchers at leading research universities, the 106 are assumed to collectively comprise an excellent research cadre.

To avoid possible researcher bias, it is crucial to have a reputable independent entity (e.g., TheCenter) identify the leading research universities, and have these universities identify their leading IS researchers (via the tenure process). Of course, there are many senior IS professors at other universities who are excellent researchers, but it is unclear that unbiased selection of an entirely different set of research-oriented benchmark universities could yield a comparably sized cadre of IS researchers that is obviously “superior” to the one used here. It is also unclear that their collective publishing behavior could be expected to be radically different from what is reported here.

Indeed, an initial pass at this research used a subset of TheCenter’s 31 leading research universities: 20 of them, for a total of 73 senior IS researchers.2 The upper echelons of journal rankings for that initial pass were little different than those reported here. The initial twelve highest-ranked journals in terms of publishing breadth were the same twelve as those reported in Table 2, with the top three positions being identical and the rank of no other journal having changed by more than one position. As for publishing intensity ranks, all but one (Decision Sciences) of the top-10 reported in Table 3 were among the top-10 identified in the initial pass, with no change of more than one position among the five highest intensity journals (and no change in the highest ranked journal). Some modest shuffling of ranks in the lower echelons occurred, but these are so close as to be practically indistinguishable anyway (e.g., an intensity of, say, .19 is not much different than, say, an intensity of .23). Also, seven additional journals appeared near the bottom of the intensity-based ranking. This expansion results from maintaining the cutoff of ten, while adding the publishing records of 35 more researchers.

Second, TheCenter tracks only American universities. To be applicable in Europe or the Asia-Pacific region, the set of leading research universities should be representative of the top research institutions in those geographic regions as well. The same methodology could be adopted for this, but would involve data collection for senior faculty members of an independently determined set of leading research institutions covering those regions.

Third, regardless of the set of research universities used, the ratings will exclude some outstanding journals. For example, we should not expect to see IS reference-discipline journals in which publication of IS articles happens, but is relatively rare (for example, Administrative Science Quarterly in the management discipline, Human Communication Research in the communications discipline). Also, we should not expect the rated journals to include those from a reference discipline that is emergent. A good example is the knowledge management (KM) field, which is fundamental to an understanding of IS, but has only begun to come into its own in the past decade. Although some of the 106 have begun to publish in KM journals, this activity is simply too recent to pass the cutoff used here.

Similarly, the ratings also exclude some outstanding journals from IS specialty niches in cases where specialties are too new, too narrow, or too thinly studied by the 106 senior researchers to surpass the cutoff. Because we cannot expect these researchers (or any other large set of senior IS researchers) to cover all IS sub-fields equally, the rankings do not treat specialty journals equally across the various sub-fields. For instance, no journal devoted to security issues or IS education appears, but this does not mean excellent journals for these special topics are non-existent. The rankings in Table 3 do show that the most prominent IS sub-fields appear to include modeling (INFORMS Journal on Computing, Information Systems, Computers and Operations Research, Computational Economics) and multi-participant/e-commerce systems (e.g., Journal of Organizational Computing and Electronic Commerce, Group Decision and Negotiation, International Journal of Electronic Commerce, Electronic Markets).

It is also possible that some very good, broad-coverage IS journals are excluded because they are too new to have a substantial publication history. In the methodology used here, the importance of a journal is due in part to having built a substantial history as a forum for leading researchers and to its staying-power over time. Nevertheless, the youngest of the 43 rated journals, Information Systems and e-Business Management, had been in publication for less than four years at the time data for this study were collected.

As a fourth limitation, we must note that reported publishing behaviors of the 106 IS researchers do not stop in 2006. Although we can expect them to continue to publish heavily in what they have previously considered top outlets for their work, shifts can occur over time (e.g., comparatively new journals may eventually rise in the ratings). A related dynamic is change in the set of 106 researchers, as tenured faculty membership evolves at the 31 benchmark schools (e.g., retirements, promotions, moves). Moreover, TheCenter’s list of leading research universities is subject to some modest change from year to year.


Conclusion

Pitfalls and shortcomings of opinion surveys and citation analyses are avoided by the new behavior-based approach to rating journals for IS research. By examining the publishing behaviors of full-time, tenured IS faculty members at a sizable set of leading research universities over an extended period, it treats their collective publishing activities as authoritative guidance about the most important journals for IS research. Resultant ratings support a practitioner who is interested in determining a short list of must-read journals for informing his/her IS design, application, and management efforts. Resultant ratings support IS researchers as they ponder where to submit their best manuscripts and where to find concentrations of valuable IS articles. The ratings support administrators in evaluating and establishing IS research standards for promotion, tenure, and merit review comparable to those applied at leading research universities. The ratings support decisions that librarians make about the composition of their IS journal collections. For these applications, it is left to the user of the ratings to determine how many of the core journals to include, how to cluster them into tiers, how many tiers to have, what supplemental journals to add to the core (and rationales for doing so), how to establish relative weights for different tiers, and how to treat publications in the hundreds of IS-related journals not included in Table 3.

The IS research realm is crisscrossed with hundreds of publishing trails. Some are long, wide, heavily-traveled, revealing major vistas of IS research. Some transcend boundaries to link with related fields. Others are side paths of special interest. Some routes stretch to the IS horizons, highlighting pioneering and visionary research. Others run through comparatively developed areas, concentrating on cultivating, expanding, and extracting value from prior findings. The ratings presented here comprise a map that points out premier ways for searching and researching this IS landscape. The behavior-based journal rating methodology is not peculiar to the realm of information systems. For instance, it could be applied in the computer science discipline.


Figures

Figure 1. New and Traditional Approaches to Rating IS Journals.


Tables

Table 1. Examples of Survey Parameters.

Table 2. Journals ranked by publishing breadth and mode.

Table 3. Journals ranked by publishing intensity and weight.

References

    1. Hardgrave, B. and Walstrom, K. Forums for MIS scholars. Commun. of the ACM 40, 11 (Nov. 1997), 119–124.

    2. Holsapple, C. W. A publication power approach for identifying premier IS journals. J. of the American Society for Information Science and Technology 59, 2 (2008), 166–185.

    3. Holsapple, C. W., Johnson, L., Manakyan, H., and Tanner, J. A citation analysis of business computing research. Information and Management 25, 5 (1993), 231–244.

    4. Holsapple, C.W., Johnson, L., Manakyan, H., and Tanner, J. Business computing research journals: A normalized citation analysis. J. of Management Information Systems 11, 1 (1994), 131–140.

    5. Katerattanakul, P., Han, B., and Hong, S. Objective quality ranking of computing journals. Commun. of the ACM 46, 10 (Oct. 2003), 111–114.

    6. Lowry, P., Romans, D., and Curtis, A. Global journal prestige and supporting disciplines: A scientometric study of information systems journals. J. of the Association for Information Systems 5, 2 (2004), 29–75.

    7. Mylonopoulos, N. and Theoharakis, V. On-Site: Global perceptions of IS journals. Commun. of the ACM 44, 9 (Sept. 2001), 29–33.

    8. Peffers, K. and Tang, Y. Identifying and evaluating the universe of outlets for information systems research: Ranking the journals. J. of Information Technology Theory and Application 5, 1 (2003), 63–84.

    9. Rainer, K. and Miller, M. Examining differences across journal rankings. Commun. of the ACM 48, 2 (Feb. 2005), 91–94.

    10. Saunders, C. Editor's comments. MIS Quarterly 30, 1 (2006).

    11. Trieschmann, J. S., Dennis, A. R., Northcraft, G. B., and Niemi, A. W. Serving multiple constituencies in the business school: MBA program versus research performance. Academy of Management Journal 43, 6 (2000), 1130–1141.

    12. Walstrom, K., Hardgrave, B., and Wilson, R. Forums for management information systems scholars. Commun. of the ACM 38, 3 (Mar. 1995), 93–102.

    13. Whitman, M., Hendrickson, A., and Townsend, A. Academic rewards for teaching, research and service: Data and discourse. Information Systems Research 10, 2 (1999), 99–109.

    DOI: http://doi.acm.org/10.1145/1467247.1467276
