The recent wave of Massive Open Online Courses (MOOCs) has highlighted the potential for making educational offerings accessible at a global level. The attention MOOCs have received is well deserved, but it belies the fact that various forms of online education have existed for many years. Rather than attempting to catalogue the broad spectrum of online learning resources, we focus on a sampling of initiatives in online education (with an emphasis on our home institution, Stanford University, with which we are most familiar), highlighting some of the opportunities and challenges at hand.
By way of background, the Stanford Center for Professional Development (SCPD) began offering distance-learning courses via television microwave channels in 1969. Students (mostly engineers working in the local Silicon Valley area) had the opportunity to watch course lectures on television at off-campus locations and submit course work via a courier system. By 1996, these course offerings had evolved to using streaming video via the Internet and a variety of means for electronic assignment submission and distribution of course materials. While in some respects these course offerings were similar to the more modern MOOCs, a critical differentiator was cost. Students enrolled in SCPD courses paid tuition akin to on-campus Stanford students, received the same level of service in the evaluation of their work, and received course credit upon passing classes. The costs involved, however, meant the number of remote students enrolled in the Stanford courses was usually dwarfed by the number of on-campus students in the courses.
That equation changed in 2008 with the launch of Stanford Engineering Everywhere (SEE; http://see.Stanford.edu). Through the SEE initiative, Stanford made several of its most popular engineering courses (including six CS courses) freely available online, including full videos of class lectures and all course materials (handouts, assignments, and software). Course videos were released through YouTube, Apple's iTunes University, and on Stanford's own website. While the SEE initiative was not itself novel (MIT's OpenCourseWare project was created years prior to SEE), the response to the open online materials was significant. Among the classes released was an offering of CS106A (Stanford's Java-based CS1 course). That course's (hour-long) lecture videos have been viewed more than two million times on YouTube alone. The availability of such online materials has resulted in universities in the U.S., China, India, and Brazil using these videos as part of teaching their own introductory programming courses. And since the CS106A course covers much of the same material as the AP computer science curriculum, students without access to traditional CS courses at high school have reported being able to study for and pass the AP CS exam as a result of watching the course videos (either alone or through a teacher-guided independent study at their school). Of note is the fact that these online materials generated such a strong response without any of the affordances of MOOCs (which typically offer enrollment, quizzes and assessments, assignment deadlines, statements of accomplishment, and so forth). Importantly, it is these seemingly evolutionary additional features that allowed the recent set of MOOCs to cross a line from being considered yet another free educational resource to being viewed as scalable free courses. This change in perception also brought with it a new set of possibilities and expectations.
The trio of MOOCs released by Stanford faculty in fall 2011 (courses in artificial intelligence, databases, and machine learning) attracted hundreds of thousands of students and spawned two private ventures: Coursera and Udacity. Concurrently, edX evolved from MITx as a non-profit consortium for online education, initially comprising MIT and Harvard, with UC Berkeley and the University of Texas later joining forces. Stanford has also developed two new online learning platforms (Class2Go and Venture Lab) and committed itself to further work in this area by appointing CS professor John Mitchell as the inaugural Vice Provost for Online Education.
MOOCs have the potential to provide education on a global scale. But many challenges remain if MOOCs, either in a standalone or hybrid context, are to become competitive with the "classical" model of in-class education. Here, we discuss some of the opportunities and challenges facing MOOCs based on our experiences.
Perhaps the most widely discussed challenge in online education is that of validating original work and preventing (or at least detecting) plagiarism. It has been reported that plagiarism is a potentially significant problem in online courses.10 In response, Coursera has stated it may attempt to employ plagiarism-detection software. It is too early to tell the efficacy of automated methods for plagiarism detection, but the clear need may motivate further research in this area. Both edX and Udacity have partnered with Pearson VUE, a provider of testing centers, to validate students taking proctored exams.4,7 While the use of testing centers to validate students' identity and original work seems more straightforward in practice than automated methods for plagiarism detection, it also carries with it cost for the student. How such costs are to be weighed with respect to the costs and benefits of enrolling in a traditional course will be an important factor in the future success of MOOCs.
Another important component of MOOCs is whether and how they provide some form of certification to students. While the experience with SEE (which provides no form of certification) leads us to believe that many students will still pursue online education regardless of certification, we recognize that many students will want external validation/certification of their learning. There are many companies (both non-profit and for-profit) that have experience in awarding various forms of certifications, and several colleges offer purely online certificates and even degrees. Hybrid models are also emerging. For example, the University of Washington has offered to give college credit for some of its courses taken through Coursera for students who pay a fee and complete additional assessments.2 Thus, models for the certification of online work certainly exist. The extent to which such certifications are recognized by others, especially employers, will certainly impact how MOOCs are viewed relative to more traditional courses.
Although the initial set of MOOCs focused on relatively straightforward means of evaluation, such as multiple-choice quizzes or short-answer questions, richer evaluation models that measure student engagement with the material more fully soon emerged. In a computing context, such evaluations include assessing students' programs and assessing larger student projects. While mechanisms such as test suites can be used to measure aspects of a program's functionality, such tests are not applicable in all contexts, such as with interactive applications. Care must be taken in developing assessment platforms that allow for a rich design space of assignments while still making (semi-)automated assessment feasible.
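To make the test-suite approach concrete, here is a minimal sketch of a weighted autograder; the grading interface, the `median` assignment, and the case weights are purely illustrative, not any platform's actual API:

```python
# Minimal sketch of a test-suite-based autograder: each test case carries a
# weight, and the score is the weighted fraction of cases the submission passes.

def grade(submission_fn, cases):
    """Run weighted test cases against a submitted function."""
    earned = total = 0.0
    for args, expected, weight in cases:
        total += weight
        try:
            if submission_fn(*args) == expected:
                earned += weight
        except Exception:
            pass  # a crashing submission earns no credit for this case
    return earned / total if total else 0.0

# Example: grading a hypothetical `median` assignment.
cases = [
    (([1, 3, 2],), 2, 1.0),        # basic odd-length list
    (([4, 1, 2, 3],), 2.5, 1.0),   # even length: average of the middle two
    (([7],), 7, 0.5),              # edge case: single element
]

def student_median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

print(grade(student_median, cases))  # 1.0 for a correct solution
```

Even this toy version illustrates the limits noted above: the tests probe only input/output behavior of a pure function, which does not transfer to interactive or design-oriented assignments.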
Peer assessments have also been proposed as a means for providing human assessments at scale. Research in peer assessment has shown the potential for this approach.6 For example, Stanford's online Human-Computer Interaction course, taught by Scott Klemmer, emphasizes student design work. Enabling assessment of these designs at scale requires each student in the course to provide an ordering of several designs with which they are presented. The orderings from all students are then combined to get a more global ranking of designs. Interestingly, some of the designs presented to students to order have been graded by human experts (for example, the teaching assistants for the on-campus course). Since these graded designs are now embedded in the global ordering, a grade can be determined for a student-submitted design by determining how it ranks relative to expert-evaluated designs. While such a system is not without issues, it does provide an interesting model for injecting expert evaluation into a primarily peer-based assessment scheme.
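The calibrated peer-ranking scheme described above can be sketched roughly as follows. The Borda-style point aggregation, the anchor names, and the grades are our own illustrative simplification; Klemmer's course may combine orderings quite differently:

```python
# Sketch of peer-ranking aggregation with expert-graded "anchor" designs.
# Each reviewer returns an ordering (best first) of the designs they saw;
# points are summed into a global score, and a submission's grade is read
# off from the expert anchors it outranks. All names and data are invented.

from collections import defaultdict

def aggregate(orderings):
    """Combine per-reviewer orderings into global Borda-style scores."""
    scores = defaultdict(float)
    for order in orderings:                 # order: design ids, best first
        n = len(order)
        for rank, design in enumerate(order):
            scores[design] += n - 1 - rank  # top design gets n-1 points
    return scores

def grade_from_anchors(design, scores, anchor_grades):
    """Grade a design by the best expert anchor it scores at least as high as."""
    s = scores[design]
    beaten = [g for a, g in anchor_grades.items() if scores[a] <= s]
    return max(beaten) if beaten else min(anchor_grades.values())

# Three reviewers each rank four designs; A1 and A2 are expert-graded anchors.
orderings = [
    ["A2", "d1", "A1", "d2"],
    ["d1", "A2", "d2", "A1"],
    ["A2", "d1", "d2", "A1"],
]
scores = aggregate(orderings)
anchor_grades = {"A1": 70, "A2": 90}   # expert grades for the anchors
print(grade_from_anchors("d1", scores, anchor_grades))  # 70: above A1, below A2
```

The design choice worth noting is that experts grade only the small set of anchors; every student submission inherits a grade from its position relative to those anchors, so expert effort stays constant as enrollment grows.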
The massive scale of MOOCs provides the opportunity to collect unprecedented volumes of data on students' interactions with learning systems. As a result, it becomes possible to use machine learning to gain insight on and potentially personalize human learning. Work in this vein has existed for years under the rubric of intelligent tutoring systems and educational data mining. As one recent example, Piech et al.5 applied machine learning techniques to build probabilistic models of automatically logged intermediate versions of student programs in our CS106A course. Such models, built on initial assignments in the course, were better predictors of students' performance later in the course than the grades on those assignments. Such techniques could be used to identify students who are struggling in an online course and suggest remediation via alternative learning paths through a MOOC.
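As a rough illustration of the idea (not of Piech et al.'s actual method, which builds far richer probabilistic models over program states), one could derive simple features from each student's snapshot log and fit a small classifier predicting later struggle. The features, toy data, and model below are all invented for illustration:

```python
# Illustrative sketch: mine intermediate-submission logs for simple per-student
# features (error rate, session length) and fit a tiny logistic-regression
# model to predict which students are likely to struggle later.

import math

def featurize(snapshots):
    """snapshots: list of (minutes_since_start, had_compile_error) tuples."""
    n = len(snapshots)
    err_rate = sum(e for _, e in snapshots) / n
    span = snapshots[-1][0] - snapshots[0][0]
    return [1.0, err_rate, span / 60.0]   # bias, error rate, hours of work

def predict(w, x):
    return 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, x))))

def train(X, y, lr=0.1, steps=2000):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        for xi, yi in zip(X, y):
            p = predict(w, xi)
            w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]
    return w

# Toy logs: long, error-filled sessions came from students who later
# struggled (label 1); quick, clean sessions did not (label 0).
logs = [
    [(0, 0), (20, 0), (35, 0)],
    [(0, 1), (60, 1), (150, 1), (240, 1)],
    [(0, 0), (30, 1), (50, 0)],
    [(0, 1), (90, 1), (200, 1)],
]
y = [0, 1, 0, 1]
w = train([featurize(s) for s in logs], y)
print(predict(w, featurize([(0, 1), (80, 1), (180, 1)])) > 0.5)  # True
```

A model like this, retrained on logs from early assignments, is the kind of signal that could trigger the remediation paths mentioned above, flagging a struggling student well before a failed exam would.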
MOOCs also have the potential to present information to students using many different pedagogic approaches, allowing each student to select a particular desired approach, or even making such suggestions to the student. A meta-analysis of more than 1,000 online studies8 argues that features such as instructor-directed and collaborative online instruction led to improved learning for students, and that blended learning environments tended to be better for students than purely online ones. Online courses can evolve to incorporate such identified best practices.
As evidenced by Martin,3 some universities are leveraging MOOCs by having their students watch videos from an online course (Stanford's artificial intelligence class in Martin's case) prior to attending class at their own university to discuss the material and engage in additional assessments. Such "flipped classrooms," which existed in various forms before MOOCs, enable the instructor to spend less time lecturing and more time interacting with the students. Indeed, we are likely only scratching the surface in exploring ways in which online videos can augment or potentially improve education. More work is needed to determine what instruction students should do on their own in preparing for class, as well as identifying how best to utilize class time given the fact that students have watched videos and engaged in attendant exercises in preparation.
We need to identify new ways to think about online learning. Tools, such as algorithm visualizations (for example, AlgoViz, http://algoviz.org, or Amit Patel's probability visualizations, http://www.redblobgames.com), programming practice environments (such as Nick Parlante's CodingBat, http://codingbat.com, or Amruth Kumar's Problets, http://problets.org), and editable coding visualizers (such as Philip Guo's Online Python Tutor, http://www.pythontutor.com) all offer promising online environments to aid student learning. We believe such innovations can become especially effective in online education, augmenting video presentations with myriad interactive activities for the learner to perform. Perhaps incorporation of appropriate interactive aids can begin to move closer toward identifying and constructing curricula for making Alan Kay's Dynabook1 a reality.
It was Thomas Edison who believed that the advent of the phonograph would completely revolutionize education, rendering teachers obsolete. In the intervening century, similar predictions have been made about many other technological innovations. We do not believe MOOCs are going to render teachers obsolete, certainly not in the foreseeable future. Online education can augment more traditional instruction, and serve as an effective means to scale education to students when other (in-person) forms of instruction are unavailable. Like Vardi,9 we believe MOOCs are here to stay. However, we are much more positive about online education's transformative potential, if we as a community can find solutions to the challenges at hand. It is really up to us.
5. Piech, C., Sahami, M., Koller, D., Cooper, S. and Blikstein, P. Modeling how students learn to program. In Proceedings of the 43rd ACM Technical Symposium on Computer Science Education (SIGCSE'12). ACM, New York, 2012, 153–160.
7. Udacity blog. Udacity in partnership with Pearson VUE announces testing centers. (June 1, 2012); http://blog.udacity.com/2012/06/udacity-in-partnership-with-pearson-vue.html.
8. U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, D.C., 2010.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2013 ACM, Inc.
Thanks for the article.
It is interesting that the first third of the challenges is actually not about gained knowledge but about external validation. From my point of view, there are so many people coming to MOOCs just to get knowledge, not verified certificates, so knowledge-related issues should come first.
The topic of hybrid education adds nothing new from my point of view. It's no different from reading suggested book chapters before a lecture, just using another (yet somehow richer) medium.
From a knowledge point of view, there are some other significant challenges:
1. No MOOC platform yet provides a complete learning path for a specific domain. You have good, very good, and even state-of-the-art courses, but you cannot assemble a complete education program from them. And no one can advise you in this business. First, you do not have a proper tool to make your own path. Second, there are not enough courses (see next).
2. There is an abundance of entry-level courses but a lack of deep ones. So you can start, but then you have nowhere to go. For example, on Coursera you can take a basic econometrics course, but you cannot take deeper courses; they exist only in universities. Same for machine learning and data analysis: there are several entry courses (like Andrew Ng's), but too few advanced ones (I can remember only Geoffrey Hinton's on Neural Networks). It's a problem.
3. Courses (at Coursera/Udacity) are limited in time, with the longest usually lasting approx. 12 weeks. While that is enough time to present a lot of material, it is too little time to do any meaningful practice or graduate work. At best you'll have a final exam, but that's not enough. It would be better to try the gained knowledge on near-real-life tasks and provide a second assessment, say 3-6 months later, based on the student's project, say on GitHub. Yes, this is about richer evaluation.
4. Online courses (Coursera/Udacity) provide too few opportunities for group work. Venture Lab seems to be better in this respect. Group work adds another dimension to online learning and reduces the gap between online and on-site university education.
5. MOOCs still cannot replace learning bound to physical reality, i.e., courses where you have to use a soldering iron and an oscilloscope on your own, or program a PIC microcontroller. This problem could be addressed using special toolkits, better-thought-out evaluation, and maybe integration with FabLabs or other DIY facilities. But it's still a problem.