Opinion

Will Massive Open Online Courses Change How We Teach?

Sharing recent experiences with an online course.
  1. Introduction
  2. The Fall 2011 Stanford AI-Class.com
  3. On the Course
  4. My Role at UMass Lowell
  5. What Does it Mean?
  6. Conclusion
  7. References
  8. Author

In the last decade, the Creative Commons philosophy of freely sharing information and the pervasiveness of the Internet have created many new opportunities for teaching and learning. MIT OpenCourseWare spearheaded the sharing of high-quality, university-level courses. While these materials were not originally designed for individuals engaged in self-study, approximately half of OCW’s traffic is from these users.6 Recently the use of learning management systems (LMSs), such as the proprietary Blackboard or open-source Moodle software, has become ubiquitous.

In typical use, LMSs support the structure of conventional courses in an online setting. Lectures and reading material are assigned, homework is scheduled, and discussions are facilitated at regular intervals. As in conventional coursework, classes are usually closed communities, with students registering (and paying) for credit-bearing coursework.

One of the first initiatives to bring together open course philosophy and LMSs was David Cormier’s "Massive Open Online Course," or MOOC. In his vision, "Although it may share in some of the conventions of an ordinary course, such as a predefined timeline and weekly topics for consideration, a MOOC generally carries no fees, no prerequisites other than Internet access and interest, no predefined expectations for participation, and no formal accreditation."9

This idea—albeit in a more conventional course structure—exploded into the public consciousness with the massive open artificial intelligence (AI) course developed and conducted by Stanford faculty Sebastian Thrun and Peter Norvig last fall. Announced in the summer of 2011, the course received wide publicity, and attracted about 160,000 registered students by its launch in October 2011. Approximately 23,000 students completed the 10-week course.8 I was one of the 23,000—along with a cohort of 16 students, both graduate and undergraduate, from my home institution (the computer science department at the University of Massachusetts Lowell).

Since the fall 2011 AI course, there has been much activity in this space. Thrun has set up a for-profit company, Udacity, to extend his initial work; Stanford and others are running courses using Coursera; MIT created MITx and is partnering with Harvard on edX; and there are other initiatives.4 The remainder of this column describes my experiences taking the fall 2011 course alongside my students and facilitating their learning, followed by some reflections. It seems likely this new breed of MOOCs will have an impact on education at the university level, particularly for technical majors such as computer science.

The Fall 2011 Stanford AI-Class.com

The Stanford course consisted of weekly lectures: two or three 45-minute topics, each broken into 15 or 20 short videos. Most of the individual videos had embedded questions (multiple choice or fill-in-the-value). At the end of each mini-lesson, the video image transformed into a Web form where students filled in their answers. Because students were already logged in, the class server graded their submissions immediately. After submitting, students were shown an explanation video.

The lectures themselves were inspired by Khan Academy’s casual, teacher-sitting-by-your-side approach. Occasionally, Thrun and Norvig trained the camera at themselves, but the core content was conducted with the camera aimed at a sheet of paper, with Thrun or Norvig talking and writing in real time. I found this format relaxing and engaging; I liked seeing equations written out with verbal narration in sync.

There were weekly problem sets in the same format as the lecture quizzes, and these homeworks tracked the lecture material closely. The course also included a midterm and a final in the same format. If you worked through and understood the problems embedded in the lectures, the homework assignments were straightforward. The homework, midterm, and final each had hard deadlines, and the server backend tracked students' scores on all three, which together determined each student's "grade": a ranking within the active student cohort.

On the Course

Thrun and Norvig were strong teachers. They thought through excellent ways of explaining the ideas and designed effective in-lecture comprehension checks. They often brought fun props or showed research projects in the video recordings.

Thrun and Norvig were only a week or so ahead of the course delivery, and they paid close attention to students' progress. There was a lot of activity on the Web forums. They recorded several "office hours," in which students submitted questions and voted on their favorites, and then Thrun and Norvig picked questions and answered them on camera. In this way, the course was like a typical class: it was not "canned." Thrun and Norvig's enthusiasm was infectious. Altogether, the real-time nature of the experience made it feel much like a well-taught conventional course.

My Role at UMass Lowell

Students registered for my department's regular AI course, which requires a project. They knew when signing up that I expected them to complete both the Stanford course and a directed project. As mentioned earlier, I had 16 students. We met once weekly for a 75-minute session in a roundtable format, discussing the Stanford material after each week's assignment was already due. Because most of my students had worked through the lectures and homework by the time we met, I did not have to present the course material in a lecture format or explain things to them for the first time. Instead, we used in-class time for conversations about material that people found confusing or disagreed upon, and we had some great discussions over the course of the semester.

A similar approach was developed by Day and Foley in their HCI course at Georgia Tech.2 They recorded Web lectures, and then used classroom time for hands-on learning activities. Daphne Koller, a colleague of Thrun’s at Stanford (and founder of Coursera), has called this "the flipped classroom." She reported higher-than-usual attendance in her Stanford courses taught this way: "We can focus precious classroom time on more interactive problem-solving activities that achieve deeper understanding—and foster creativity."7

What Does it Mean?

The success of the fall 2011 AI course and the bloom of new ones this spring and summer put real pressure on conventional, lecture-and-test university instruction. Thrun is quoted in several reports as noting that attendance at his face-to-face AI course at Stanford in the fall dropped precipitously: of the 200 registered students, only about 30 continued to attend after a few weeks.8 But this really speaks to the failure to make the in-person time deliver anything different from a lecture. In many ways, the carefully crafted online lectures, peppered with probing questions that are autograded for correctness and then explained further, are indeed an improvement over a conventional lecture.

There are many initiatives to improve the quality of face-to-face time in lectures. When used creatively, clickers can be a valuable modification. But more fundamentally, active learning approaches hold much more promise.

Robert Beichner has developed a classroom approach called "SCALE-UP" for active learning in the classroom. His work started in physics education, but years of development and collaboration have broadened it to many fields, including the sciences, engineering, and the humanities.5 In SCALE-UP, faculty engage students in structured activities and problem solving during classroom time. Students work in teams of three, and faculty mingle with them, engaging them in discussions. (The SCALE-UP acronym has had several meanings, including "Student-Centered Active Learning Environment for Undergraduate Programs.")

Eric Mazur, also from the physics education community, has developed a related approach that he calls "peer instruction," in which students work in small groups to answer questions posed in lectures. Like Beichner, Mazur is active in disseminating this method.

However, dissemination and adoption are big challenges. These approaches require substantial new development of the problems and activities with which students are to be engaged in the classroom, and teachers must give up their carefully crafted lecture presentations.

Also, teachers need to be protected from low student evaluation scores. Mazur and others have reported that students give lower evaluations in courses with active learning—even when the evidence shows they have learned more.1,3 Students have grown up with conventional lecture teaching, and just like anyone else, they are resistant to change.

Beyond this, faculty must participate in these active learning approaches as learners, so they understand how to facilitate them as instructors. In my case, in my graduate training I learned how discussion-oriented seminar courses are conducted, so it was natural for me to facilitate the same with my small group of "flipped classroom" AI students.

Conclusion

When Thrun was promoting the fall 2011 online AI course, his Twitter feed included some bold claims: @aiclass: "Advanced students will complete the same homework and exams as Stanford students. So the courses will be equal in rigor."—September 28, 2011

The fall 2011 course for matriculated Stanford students included programming assignments, and the online one did not. This was a clear shortcoming. But the new Udacity courses include programming. Most of my students got a lot out of the fall Stanford course—and our weekly discussion sections made a difference. But the weaker students struggled, and a few strong students were bored. This makes me wonder about the large-scale applicability of the MOOC format. We need to be able to support students who are still learning how to learn, and also challenge our best students. The MOOC concept does not even attempt to address the role of a small, research-oriented project-based course. When we individually mentor each student on his or her own ideas, we are doing something that can never be performed by an autograder.

Part of the excitement around MOOCs is about their potential to change education’s cost equation—put a great course online once, and run it unattended many times. But part of the fun of the fall AI course was that Thrun and Norvig were right there with us, and that we were a large cohort of students there with them.

Thrun also asserted: @aiclass: "Amazing we can probably offer a Master’s degree of Stanford quality for FREE. HOW COOL IS THAT?"—September 23, 2011

As we know, the modern university is a much larger ecosystem than its collection of courses. Among many other things, students derive great value from being in close contact with their peers, participating in leadership opportunities across campus, and being part of our research labs. It may well be that this new breed of MOOC is a decent replacement for an average, large-sized lecture course. But this is a low bar.

In our drive for efficiency, we must aspire to more than this. At the least, we can use MOOCs to create a successful flipped classroom, using our "precious classroom time" for meaningful conversations. As Mazur and Beichner have demonstrated, this can be done even in large lectures by having students work in small groups.

At best, we can really add value, by being teachers. We can individually debug students’ thinking, mentor them in project work, and honestly be enthusiastic when they excel.

References

    1. Crouch, C.H. and Mazur, E. Peer instruction: Ten years of experience and results. Am. J. Phys. 69 (Sept. 2001).

    2. Day, J. and Foley, J. Evaluating a Web lecture intervention in a human-computer interaction course. IEEE Transactions on Education 49, 4 (Nov. 2006).

    3. Fagen, A.P., Crouch, C.H., and Mazur, E. Peer instruction: Results from a range of classrooms. The Physics Teacher 40 (2002).

    4. Fox, A. and Patterson, D. Crossing the software education chasm. Commun. ACM 55, 5 (May 2012), 44–49.

    5. Gaffney, J.D.H. et al. Scaling up education reform. Journal of College Science Teaching 37, 5 (May/June 2008), 48–53.

    6. Giving Knowledge for Free: The Emergence of Open Educational Resources, Organisation for Economic Co-operation and Development, 2007.

    7. Koller, D. Death knell for the lecture: Technology as a passport to personalized education. New York Times (Dec. 5, 2011).

    8. Lewin, T. Instruction for masses knocks down campus walls. New York Times (Mar. 4, 2012).

    9. McAuley, A., Stewart, B., Siemens, G., and Cormier, D. The MOOC Model for Digital Practice. 2010; http://www.elearnspace.org/Articles/MOOC_Final.pdf.
