University of Akron researchers recently completed a study comparing human graders with software designed to score student essays. The study concluded that the human graders and the software achieved virtually identical levels of accuracy, with the software proving more reliable in some cases. The technology could help teachers grade more essays and assign more writing, improving student writing overall, says Akron researcher Mark Shermis.
The study examined more than 16,000 essays from six state departments of education, with each set of essays varying in length, type, and grading protocols. The study challenged nine companies to develop software that could approximate human-graded scores for the essays. "This technology works well for about 95 percent of all the writing that's out there, and it will provide unlimited feedback to how you can improve what you have generated, 24 hours a day, seven days a week," Shermis says.
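The article does not describe how the contest entries worked internally, but automated essay scoring systems of this kind are commonly framed as supervised regression: surface features of an essay (length, vocabulary richness, and so on) are fit against human-assigned scores. The sketch below is a hypothetical, simplified illustration of that idea, not the study's actual software; the single word-count feature and the toy training data are assumptions for demonstration only.

```python
# Hypothetical sketch of automated essay scoring as supervised regression.
# Real systems use many linguistic features; here one feature (word count)
# is fit to human scores with simple least-squares linear regression.

def word_count(essay):
    """A single toy feature: number of words in the essay."""
    return len(essay.split())

def fit_linear(xs, ys):
    """Least-squares fit of y = a + b*x (closed form for one feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy training set: (essay, human-assigned score) pairs -- invented data.
training = [
    ("short answer with few words here now", 1.0),
    ("a somewhat longer response that develops its point across "
     "several more words than the first", 2.0),
    ("a fully developed response that elaborates its argument at length "
     "with supporting detail and transitions spanning many more words "
     "than either of the previous two sample answers do overall", 3.0),
]

xs = [word_count(essay) for essay, _ in training]
ys = [score for _, score in training]
a, b = fit_linear(xs, ys)

def predict_score(essay):
    """Score a new essay with the fitted model."""
    return a + b * word_count(essay)
```

A model like this can be scored against human graders with an agreement metric (the contest used measures such as quadratic weighted kappa); production systems add many richer features and stronger learners, but the train-on-human-scores structure is the same.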
The study stems from the Automated Student Assessment Prize contest, which the Hewlett Foundation sponsors to evaluate the current state of automated testing and to encourage further developments in the field. The contest offers $100,000 to anyone who develops new automated essay scoring techniques.
From University of Akron
Abstracts Copyright © 2012 Information Inc., Bethesda, Maryland, USA