The open nature of MOOCs affects the design of their resources, their activities and, of course, the interactions among participants. As far as the design of the course is concerned, the spread of online tools opens up a universe of possibilities …
The most immediate consequence of the open nature of MOOCs is the diversity of their audience, which consists of a mix of novices and experts in the field, from people who study through MOOCs all day long to those who can hardly spend more than an hour a week on the course. Proposing activities or resources with high prerequisites, in terms of skills, equipment or engagement, will naturally exclude a high proportion of the participants. To address this issue, it is possible to design different tracks: some with low prerequisites for participants who have little time and/or expertise in the field, and some with high prerequisites and/or expectations for participants who would like to go further.
Secondly, the high number and the diversity of registrants enable various possibilities as far as collective and collaborative activities are concerned. For instance, team projects have been increasingly emphasized in recent MOOCs, since they may help tackle the dropout issue and can trigger interesting learning outcomes. Some platforms, like NovoEd, have specialized in project-based MOOCs. Team projects are nevertheless challenging, since the members of a team usually do not know each other prior to the course and may have very different backgrounds and time constraints. On the other hand, this approach enables participants to work on their own projects and, to a certain extent, to adapt the course to their personal background, objectives and expectations.
Assessment is one of the main challenges for MOOCs, whether it is used to train, engage or evaluate registrants. The potentially high number of participants prevents instructors from giving personal feedback on each and every artifact. It is therefore necessary to rely on automated, peer or self assessment; these different approaches can of course be combined. Automated assessment covers a large array of techniques, ranging from multiple choice questions to automated essay scoring. Whether they are used for content recall or as an application of the course, quizzes raise the issue of cheating.
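As a minimal illustration of the simplest of these techniques, automated grading of a multiple-choice quiz can be sketched as follows (the question ids, answer key and submission are invented for the example):

```python
def grade_mcq(submission, key):
    """Auto-grade a multiple-choice submission against an answer key.

    submission and key map question ids to the selected choice;
    returns the fraction of questions answered correctly.
    """
    correct = sum(1 for q, choice in submission.items() if key.get(q) == choice)
    return correct / len(key)

# Hypothetical three-question quiz
key = {"q1": "b", "q2": "d", "q3": "a"}
submission = {"q1": "b", "q2": "c", "q3": "a"}
score = grade_mcq(submission, key)  # q1 and q3 correct, q2 wrong
```

Real platforms of course support richer item types (multiple answers, partial credit, numeric tolerance), but the principle of matching submissions against a stored key is the same.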
Even if quiz-based examinations can be proctored and closely monitored, participants may be able to find the questions and associated answers prior to the examination. Indeed, unless examinations are organized at a fixed time – which can be an issue given the diverse time constraints of the audience – participants may share the questions, and possibly the answers, on the course forums, on social networks or even on their own blogs. Designing a large pool of questions in order to create individualized exams for each participant is one way to tackle the issue. As long as quizzes are used for training and not for certification, cheating is less of a problem. Nevertheless, the design of such quizzes implies a good understanding of Item Response Theory.
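The individualized-exam idea can be sketched in a few lines: draw a per-participant subset of questions from the pool, seeding the random generator with the participant's id so that the same participant always receives the same exam. The pool size and ids below are invented for the example:

```python
import random

def individualized_quiz(pool, participant_id, n_questions=10):
    """Draw a stable, participant-specific subset of questions from a pool.

    Seeding the RNG with the participant id makes the draw deterministic,
    so repeated page loads show the participant the same exam.
    """
    rng = random.Random(participant_id)   # per-participant seed
    return rng.sample(pool, n_questions)  # n distinct questions

# Hypothetical pool of 50 question ids
pool = [f"Q{i}" for i in range(50)]
quiz_a = individualized_quiz(pool, "alice")
quiz_b = individualized_quiz(pool, "bob")
```

With a large enough pool, two participants are very unlikely to receive the same exam, which limits the value of leaked questions; IRT-based scale linking is then needed to make scores on different question subsets comparable.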
Automated assessment has its limitations, and a human eye is required to evaluate complex artifacts. Since course instructors cannot assess each and every artifact, peer assessment has become one of the main approaches to address this problem. Given that participants are not trained examiners, there is strong variation in the accuracy of the grading and the quality of the feedback, which depend heavily on the motivation of the graders and on the design of the assessment process. There are different ways to improve the quality and accuracy of the process, either before evaluation, through calibrated peer review [22, 22 bis], or after it, through grade correction. In calibrated peer review, participants grade calibration artifacts and compare their own grades with those of a trained instructor before starting to evaluate the artifacts of their peers. Participants can be prevented from taking part in the peer assessment process as long as their grades diverge too much from the instructor's. Finally, plagiarism may also be a serious issue in MOOCs, since the high number of participants makes it harder to detect and respond to [25, 20], even though automated detection tools can be used for that purpose.
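Both ideas can be sketched concretely. The calibration gate below admits a participant to peer grading only if their grades on the calibration artifacts stay close to the instructor's reference grades; the correction step illustrates one very simple post-hoc adjustment, subtracting each grader's estimated bias before averaging (the tolerance, grades and biases are invented for the example; published models such as Piech et al.'s are considerably more sophisticated):

```python
def calibration_passed(participant_grades, instructor_grades, tolerance=1.0):
    """Gate for calibrated peer review: compare a participant's grades on
    calibration artifacts with the instructor's, using mean absolute deviation.
    """
    deviations = [abs(p, ) if False else abs(p - i)
                  for p, i in zip(participant_grades, instructor_grades)]
    return sum(deviations) / len(deviations) <= tolerance

def corrected_grade(raw_grades, grader_biases):
    """Naive grade correction: subtract each grader's estimated bias
    (their average over- or under-grading), then average the results.
    """
    adjusted = [g - b for g, b in zip(raw_grades, grader_biases)]
    return sum(adjusted) / len(adjusted)

# A participant grades three calibration essays (instructor reference: 7, 5, 9)
ok = calibration_passed([7.5, 4.5, 9.0], [7, 5, 9])   # close to the reference
too_lenient = calibration_passed([10, 9, 10], [7, 5, 9])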
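Both ideas can be sketched concretely. The calibration gate below admits a participant to peer grading only if their grades on the calibration artifacts stay close to the instructor's reference grades; the correction step illustrates one very simple post-hoc adjustment, subtracting each grader's estimated bias before averaging (the tolerance, grades and biases are invented for the example; published models such as Piech et al.'s are considerably more sophisticated):

```python
def calibration_passed(participant_grades, instructor_grades, tolerance=1.0):
    """Gate for calibrated peer review: compare a participant's grades on
    calibration artifacts with the instructor's, using mean absolute deviation.
    """
    deviations = [abs(p - i) for p, i in zip(participant_grades, instructor_grades)]
    return sum(deviations) / len(deviations) <= tolerance

def corrected_grade(raw_grades, grader_biases):
    """Naive grade correction: subtract each grader's estimated bias
    (their average over- or under-grading), then average the results.
    """
    adjusted = [g - b for g, b in zip(raw_grades, grader_biases)]
    return sum(adjusted) / len(adjusted)

# A participant grades three calibration essays (instructor reference: 7, 5, 9)
ok = calibration_passed([7.5, 4.5, 9.0], [7, 5, 9])   # close to the reference
too_lenient = calibration_passed([10, 9, 10], [7, 5, 9])
```

The bias estimates themselves would come from the calibration phase or from overlap between graders; the point is only that even a crude per-grader adjustment can reduce the variance the section describes.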
MOOCs have spread the use of peer evaluation at an unprecedented scale, providing large amounts of data and knowledge on the topic. In addition to scaling up the evaluation process, peer evaluation has a high pedagogical value: taking part in the evaluation process is associated with high learning outcomes. Not all activities are meant to be evaluated, and some may be aimed specifically at driving participants to interact with one another. Forums, blogs and, to some extent, social networks such as Facebook, Twitter or Google+ are the main media of interaction. They can be used to discuss the course content, to get help or feedback from other participants, and to debate particular topics. Only a small proportion of participants usually interacts on MOOC forums. In addition to asynchronous interactions, it is possible to organize live events such as office hours or webinars open to the whole community of participants. Live events come in different formats: some professors use them to answer participants' questions, to trigger live debates or to present some participants of the course. Chatrooms have also been used to foster interaction between participants, but with questionable impacts on learning outcomes.
One should always keep in mind that most participants follow a MOOC in their spare time: they cannot dedicate many hours per week to the course, nor can they sustain their engagement for several months. We therefore recommend designing short MOOCs rather than very long ones. Activity design is a compromise between the pedagogical objectives of the course and the time constraints of the participants. MOOCs allow a wide range of pedagogical objectives, but the very high student-to-instructor ratio prevents instructors from providing personal feedback or support to all registrants. Activities should be designed with these different constraints in mind …
Glader. Online Courses Are Expanding, Along With Questions About Who Owns The Material. Wired Academics, 2014. http://goo.gl/xZPTeb
NovoEd, a MOOC platform specialized in team projects. https://novoed.com/education
Sandeen, C. Assessment's Place in the New MOOC World. Research & Practice in Assessment, 2013. http://goo.gl/Dno9o2
Sadler. The Impact of Peer and Self Assessment on Student Learning, 2006. http://goo.gl/lu2Vi
MOOC Brigade: Can Online Courses Keep Students From Cheating? http://goo.gl/xOy7p6
Fair and Equitable Measurement of Student Learning in MOOCs: An Introduction to Item Response Theory, Scale Linking, and Score Equating. Research & Practice in Assessment, 2013. http://goo.gl/BGcGzU
Kulkarni et al. Peer and Self Assessment in Massive Online Courses. http://goo.gl/QXCVl4
Assessing Writing in MOOCs: A Comparison of CPR and AES. http://goo.gl/vjCquv
7 Things You Should Know About CPR. http://goo.gl/omNwxA
Piech et al. Tuned Models of Peer Assessment in MOOCs.
Why Cheat? Plagiarism in MOOCs. MOOC News and Reviews. http://goo.gl/vrpWb6
Dozens of Plagiarism Incidents Are Reported in Coursera Free Online Courses. The Chronicle of Higher Education. http://goo.gl/N1tS9
Mak. Blogs and Forums as Communicating Tools for MOOCs, 2010. http://goo.gl/g1omu6
Should Your MOOC Forum Use a Reputation System? http://goo.gl/RGx3tO
Chat Rooms in MOOCs: All Talk and No Action? http://goo.gl/h7qZ73
Work produced for the Commonwealth of Learning.