The pedagogy of languages for specific purposes: developing key professional competences through a massive open online course for language teachers

Summary: Although MOOCs dedicated to the teaching and learning of languages - Language MOOCs, known as LMOOCs in the published literature - have gained popularity since 2008, this is not the case for language teacher education courses, which are still rarely delivered in the form of MOOCs. Unsurprisingly, very little is therefore known about the effectiveness of such courses for Continuing Professional Development (CPD) and initial language teacher education. To fill this gap, a study was carried out based on a MOOC addressing the needs of current and prospective teachers of languages for specific purposes, which was designed by the consortium of the Erasmus+-funded CATAPULT project in 2019. The main findings suggest that a combination of xMOOC and cMOOC models seems to be relevant for any language teacher education MOOC and that creative solutions exist to address the issue of sufficient instructor presence in such online courses, however open and massive they may be.

Keywords: Languages for Specific Purposes (LSPs), Teacher Education, Continuing Professional Development (CPD), MOOC, LMOOC


Introduction
Massive Open Online Courses (MOOCs), together with studies into their effectiveness in enabling and fostering peer-to-peer participation, the generation and sharing of knowledge between learners, and the ethos of new literacies, date from the beginning of the millennium (Anderson 2004). Concentrating mainly on the idea of "inclusion, (everyone in), mass participation, distributed expertise, valid and rewardable roles for all who pitch in" (Lankshear & Knobel, 2007, p. 18), a MOOC addressing the needs of current and prospective teachers of languages for specific purposes was designed by the consortium of the Erasmus+-funded CATAPULT project. The present paper looks at the LSP teacher education MOOC from three different perspectives. After situating it within the massive open online courses literature, it reports on the process of course preparation and writing, with an account of the structural design and content of the MOOC. The final part looks at data coming from multifaceted research into the effectiveness of the course as well as participant satisfaction. Drawing on this analysis, we present both the lessons learnt and the revisions made to the course in preparation for each of the three iterations of the MOOC.

The different faces of Massive Open Online Courses - MOOCs
While MOOCs are often considered one of the most important technological developments in Higher Education in the past decade (Deng et al. 2019), these open (i.e. freely accessible by anyone) large-scale web-based courses are also potentially a disruptive innovation (Yuan & Powell 2013). They can be classified into different types. As Anderson (2004) notes, "the greatest affordance of the web for educational use is the profound and multifaceted increase in communication and interaction capability" (p. 42). Where MOOCs enable and foster peer-to-peer participation and the generation and sharing of knowledge between learners, the ethos of new literacies is being spread at a massive scale. This, according to Stewart (2013), is what the earliest MOOCs were about. Called "cMOOCs" (connectivist MOOCs), these courses were experimental, nonlinear, and deeply dialogic and participatory. The present-day MOOCs - the xMOOCs (elitist and formalized) - focus predominantly on the delivery of course content, backgrounding or ignoring participatory learning. This first typology of MOOCs reveals, if not a dichotomy, at the very least an implicit hierarchy between xMOOCs and cMOOCs, with many viewing cMOOCs as the superior type in both form and function (Sokolik 2014). Yet, looking at the strengths and weaknesses of both MOOC types and at the specific nature of language learning and teaching, one of the latest additions to the MOOC typology, Language MOOCs (or LMOOCs), potentially combines the best of both worlds.

Language MOOCs -LMOOCs
There has been an exponential growth of LMOOCs since they first appeared in 2012, a trend that has been boosted by the recent pandemic (Martín-Monje & Borthwick 2021). As LMOOCs aim at making the most of the best practices in language teaching and learning, they can certainly rely on cMOOCs' interaction and community-building functionalities, which perfectly serve the goals of communicative language teaching (CLT) and task-based language teaching (TBLT). At the same time, they can rely on xMOOCs' designated centralized platforms, which offer familiar structures of learning based on syllabi and sequences of activities.

LMOOCs have therefore been defined as "an eclectic mix of practices and tools aiming to engage students in the use of the target language in meaningful and authentic ways" (Sokolik 2014: 20). Still, for Colpaert (2014), the "L" in LMOOC has not been conceptualized enough, and very few MOOC platforms offer the specific tools necessary for language teaching and learning (such as corrective feedback, error analysis and pronunciation training). The characteristics of an ideal LMOOC, as outlined by Sokolik (2014) from her personal experience, include engagement and interaction, student self-organization, instructor presence, immersive materials such as instructional videos that provide authentic examples of the language and culture of study (as opposed to talking-head videos), and a combination of informal peer feedback and self-assessment. More recently, in a systematic review of the literature on LMOOCs, Sallam, Martín-Monje & Li (2020) showed that the three most common characteristics of LMOOCs are (1) communication tools to promote interaction, (2) video materials showcasing linguistic and cultural content and (3) assessment tools relevant to heterogeneous groups of course participants. Since LMOOCs are now recognized as an emergent and expanding research field attracting a great deal of interest from researchers (Martín-Monje & Borthwick 2021), a similar interest can be expected in language teacher education MOOCs.

Language Teacher Education MOOCs -LTEMOOCs
The potential of e-learning environments for teacher education beyond the spatial and temporal constraints of the classroom has been shown (Reeves & Pedulla 2011), as well as the fact that courses in such environments tend to foster the type of interactions necessary for knowledge construction (Lee & Brett 2015). In addition, they often allow teacher-learners to engage in a learning experience that meets their specific needs (Dede et al. 2009), even more so in the case of continuing professional development courses (Yurkofsky, Blum-Smith & Brennan 2019). The challenge is therefore to identify the specific modalities for such courses to be effective.
Unlike the numerous MOOCs dedicated to the teaching and learning of languages which have emerged since 2008 (as pointed out above), language teacher education courses are still very rarely delivered in the form of MOOCs (Ibanez Moreno & Traxler 2016), which Sarré (2021) proposes to call LTEMOOCs (Language Teacher Education MOOCs). It is therefore not surprising that there is a very limited number of published studies on LTEMOOCs. Nonetheless, these studies have managed to show the positive impact of this type of MOOC in initial teacher education (Orsini-Jones, Gafaro & Altamimi 2017) as well as in continuing professional development courses (Kormos & Nijakowska 2017). The picture is still far from complete, however. Various authors (Dede et al. 2009; Moon et al. 2014; Parsons et al. 2019) point to the lack of empirical studies on the impact of online education courses for language teachers and on their acceptance by the teachers receiving the training offered. The question also remains as to which MOOC design model (xMOOC, cMOOC) is best suited for language teacher education courses.
The present contribution aims to fill these gaps in the literature through the study of the first three iterations of the CATAPULT LTEMOOC (Computer Assisted Training and Platforms to Upskill LSP Teachers).

The LSP CCF
Content selection and sequencing within the MOOC, CATAPULT's third output, was largely based on the LSP Common Competence Framework (CCF) that had been devised as the key component of Output 2 and published as a research report (Turula & Gajewska 2019). The MOOC developers relied on the five areas of competence proposed. These areas (Figure 1) comprise general teaching; course/material design; analysis; collaboration and intercultural mediation; and evaluation.
In this way, the LSP Teaching MOOC focused on upskilling general language teachers who want to specialise in LSP pedagogies, as well as LSP teachers interested in updating and expanding their pedagogical repertoire and in integrating the use of technology into their practices.
The design team also took general MOOC principles into account (Drake, O'Hara & Seeman's (2015) case study; Yousef et al.'s (2015) list of development criteria). By combining these, it was hoped to avoid attrition, which is consistently identified as a major problem in the MOOC literature (Liyanagunawardena et al. 2014).

MOOC Platform selection
At the same time, the project team researched MOOC platforms in order to select the one that would best suit this particular course. The criteria used for selecting the MOOC platform were:
• the general profile of these platforms (the hosting organization, the types of courses hosted and the languages that each platform supported);
• how these platforms supported teaching and learning;
• technical aspects;
• accessibility;
• usability/design.
The MOOC platforms that made it onto our shortlist were France Université Numérique (FUN), The Course Networking (CN), Open Learning and Eliademy. Eventually, the CN was selected as it met all the above criteria. In addition, the CN social platform integrates elements of a VLE, offering social networking and gamification in the form of Anar Seeds, earned based on the type of participation, and badges that automatically appear in the course participant's portfolio.

The blueprint
In order to facilitate planning and to ensure consistency, a blueprint document was created. Through this the team was also able to monitor how the principles from the competence framework (Table 1), together with general principles of MOOC design referred to above, were being implemented. Drake, O'Hara and Seeman (2015) state that a MOOC must be meaningful, engaging, measurable, accessible, and scalable. Yousef, Chatti, Schroeder & Wosnitza (2015) describe 44 design criteria, bundled into 8 clusters: blended learning, flexibility, high-quality content, instructional design and learning methodologies, lifelong learning, network learning, openness and student-centered learning.
Taking all this into account, the MOOC blueprint was organised into four main categories:
a. Pedagogical elements, including the content, learning objectives and outcomes, type and sequence of activities, and assignments;
b. Technical elements, including the learning environment, the badge system and certificates;
c. Organisational elements, including the pace and timing of the modules, deadlines, and different levels of participation;
d. Blueprint production timeline, which included drafting and revisions based on feedback and discussion with the project team.

MOOC content, structure, and level of participation
Six modules were created: LSP concepts; corpus linguistics for LSP teaching; effective communication in LSP teaching; student engagement and participation as part of LSP teaching; collaboration and integration related to LSP teaching; and ePortfolios. Each module presented ICT tools relevant to its content. In addition, the MOOC contained two more modules: a module titled Before You Start, providing information on course organisation, course validation and platform exploration, and a module titled ICT (standalone), collecting in one place the ICT tools from the main modules.
The study modules (Modules 1-5) follow the structure illustrated below. The participants could engage in the MOOC at three different levels (Table 2): Browser level, leading to neither badges nor certification; Tester level, leading to badges if scoring 50%+ in the module quizzes; and Creator level, leading to course certification and badges if all quizzes, assignments and the course portfolio were completed.

Module 6, the portfolio module, followed a different structure and was shorter. It outlined the content in the same way as the study modules. It then offered ideas and materials about how to use portfolios in LSP teaching. Finally, it invited participants to produce their own portfolio by compiling the reflections and Creator level activities. This was intended to serve as a means of concluding the course for those seeking certification.
The assessment of the activities was automated for the quizzes. The Creator level activities were graded according to the assessment rubric of each activity. Feedback was also provided by the instructors and the teaching assistants in the third iteration of the MOOC, i.e., Season 3.

MOOC revisions
The MOOC was implemented three times, in runs referred to as Seasons: in spring 2020, in autumn 2020 and in spring 2021. Each Season ran for 8 weeks, with the exception of Season 3, in which a spring break week was introduced to help the participants catch up with the MOOC workload. The following table summarises the revisions implemented after the first two Seasons, based on participant feedback and the MOOC developers' ideas for improvement.

The study
As mentioned previously, each of the three seasons was subject to evaluation for the purpose of course improvement. These evaluations were in turn subjected to a multifaceted study, the results of which are presented and discussed in this section.

The aim and questions
The main objective of the study was to assess how the participants reacted to the LSP Teaching MOOC. It was important to the course developers in particular, and to the sustainability of the project in general, to know whether the materials included, as well as the presentation and interaction modes, were the answer to the need for quality teacher education in the area of LSP. To ascertain this, the following research questions (RQs) were established:

The research sample
The sample studied consists of 54 respondents, both LSP Teaching MOOC participants (Season 1: 13 persons; Season 2: 22 persons; Season 3: 15 persons) and teaching assistants (Season 3: 4 persons).
The MOOC participants were current or prospective LSP teachers. The majority in each season self-assessed their overall expertise in teaching languages for specific purposes as "experienced with no specific training in LSP teaching" (Season 1: 40%; Season 2: 37%; Season 3: 54%). Based on a similar self-gauging, their weekly involvement in the MOOC was 3 hours for Seasons 1 and 2 and almost 4 hours for Season 3. As for their level of involvement, the participants chose between three different roles; their choices are presented in Figure 3.
The teaching assistants (TAs) were four persons who had successfully completed Season 2 and been awarded a certificate of achievement, being among the most active course participants. Based on the survey, their reasons for volunteering to become a TA were multifaceted, ranging from professional motives (gaining expertise in instructional design) to personal ones (staying involved; fun). The four TAs participated in three focus-group interviews during which they, respectively, (i) were briefed on their responsibilities, deadlines, the functions of the platform, and the provision of feedback; (ii) were encouraged to discuss matters pertaining to their duties; and (iii) provided feedback and reflected on further development.

The research instruments
The data in the study were gathered in two different ways: through surveys and by means of discourse analysis. The satisfaction and attitudes of the participants were solicited in a survey filled in by those enrolled in each course upon its completion. The survey was completed by 13 participants in Season 1, 22 in Season 2 and 15 in Season 3 of the LSP Teaching MOOC. The survey consisted of a number of questions referring to the respondents' objectives and expertise in LSP; their expectations towards the course and how well these were met; their assessment of the interest and utility of individual course modules (1-7) as well as of various types of materials (videos, articles, quizzes, polls, forums, etc.); their weekly time investment and their attitude towards this workload; and their general view of the course plus suggestions for its improvement. Some of the questions were open-ended, and the answers were categorised and annotated before the analysis. Other questions required rating on a scale, in which case averages and SD scores were calculated for the sake of data presentation and interpretation. In the case of the third type of questions - semi-closed, multiple choice - the number of answers for each option was calculated.
Another survey was addressed to the teaching assistants. It consisted of 11 questions which (i) checked the TAs' reasons for volunteering; (ii) examined their perceptions of the experience by means of statements about challenges, expectations and suggestions; and (iii) sought to find out how, if at all, they benefited professionally from their involvement in the facilitation of the LSP Teaching MOOC in Season 3.
Additionally, data came from discourse analysis (DA). The text that was subjected to DA came from two different sources: (i) instructor evaluation of the work submitted; and (ii) the focus-group interviews with the teaching assistants (TAs) who shared the course facilitation load in Season 3. For the instructor feedback, the discourse was analysed on the basis of the criteria for good constructive assessment (whether or not it was specific, personalised and directed the participant in a practical and productive way; for specifics, see Section 4.4). Additionally, the text samples were analysed for the strengths and weaknesses pointed out by the instructors. These strengths and weaknesses were categorised, annotated and included in the description. Finally, the average word count was given for the feedback in each module. For the TA discourse, all samples were transcribed and annotated, types and tokens of utterances were identified and counted, and the data were then subjected to quantitative and qualitative analysis. The data also include the results of the survey on their feedback experience that the TAs completed.

The data
One of the most important factors was how the course participants evaluated individual course modules and the variety of activities through which the content was communicated and recycled as well as the general user-friendliness of the platform.
Starting from how interesting and useful the course participants found the individual modules of the LSP Teaching MOOC, Tables 4 and 5 show the results for all three seasons. In each case the participants were asked to rate their perceptions on a scale from 1 (not interesting / useful at all) to 4 (very interesting / useful), plus 0 (I didn't do it).
As can be seen in Tables 4 and 5, for Season 1, Modules 3 (devoted to successful communication) and 4 (focusing on student engagement and participation) were rated the highest, with Module 3 having the lowest SD score. This shows that the respondents were in agreement as to the interest and usefulness of these modules. The low averages and high SD scores for Module 6 (portfolio) are most probably the result of a high percentage of 0 answers, showing that the final modules were not covered by a number of course participants. Modules 1 (introductory), 2 (LSP and corpora) and 5 (collaboration) enjoyed similar scores - between 2 (not really interesting / useful) and 3 (rather interesting / useful). Given the relatively low SD scores, it can be inferred that the respondents were in accord in their perceptions of the relatively moderate popularity and utility of these modules.
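The effect described above can be illustrated with a quick numeric sketch (the ratings below are invented for illustration, not the actual survey data): including "0 = I didn't do it" answers simultaneously depresses the average and inflates the SD of a module's ratings.

```python
import statistics

# Hypothetical ratings for one module on the 1-4 scale,
# where 0 means "I didn't do it" (invented data, for illustration only).
ratings = [4, 3, 3, 4, 0, 0, 0, 3, 4, 0]

mean_all = statistics.mean(ratings)        # zeros included
sd_all = statistics.pstdev(ratings)

completed = [r for r in ratings if r > 0]  # zeros excluded
mean_done = statistics.mean(completed)
sd_done = statistics.pstdev(completed)

print(f"with zeros:    mean={mean_all:.2f}, SD={sd_all:.2f}")  # mean=2.10, SD=1.76
print(f"without zeros: mean={mean_done:.2f}, SD={sd_done:.2f}")  # mean=3.50, SD=0.50
```

Among those who actually completed the module, the ratings are high and homogeneous; the non-completers' zeros are what drag the average down and spread the distribution out, which is exactly the interpretation offered for Module 6.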
Both tables show very similar results for Seasons 2 and 3. The respondents found Modules 2 and 3 interesting and useful, and the SD scores show, again, that they were in agreement about this. The perceptions of Module 6 are similar, and the relatively high SD scores can again be ascribed to a considerable number of participants who failed to complete the module. What has to be noted, however, are the much higher ratings for Modules 1, 2 and 6, most probably due to the fact that these were revised after the first iteration of the course based on participants' feedback - though for Module 1 the improvement in Season 2 was in the area of interest rather than utility. This can be attributed to the fact that this module, which aims at clarifying concepts, is the most theoretical one and, as a result, the least directly applicable in the classroom.
When it comes to the evaluation of the variety of activities and ways in which the content is presented, the opinions of MOOC participants in all three seasons are presented in Table 6.
As can be seen in Table 6 above, there is general agreement throughout all three seasons about the utility of the videos included in the course, with a preference for videos specifically shot for the MOOC (videos, first column) over pre-existing YouTube videos (YT videos, second column). Other noteworthy trends include:
(i) instructor feedback was valued quite highly in Seasons 1 and 3 and slightly lower in Season 2;
(ii) Season 2 is also when instructor posts were rated much lower than peer posts; these two trends can probably be explained by the fact that instructor feedback and posts were relatively less numerous in Season 2 than in Season 1 (in relation to the number of participants enrolled on the course), and that the recruitment of teaching assistants in Season 3 (considered as instructors by participants) then made it possible to offer more instructor feedback and posts to course participants;
(iii) Season 3 participants seem the most satisfied, as they find all types of materials useful or very useful;
(iv) the perceived usefulness of the articles and the journal grows in Seasons 2 and 3, which can probably be attributed to the fact that the number of articles to read and the way they were presented were revised (fewer articles were compulsory readings, more articles were offered as "going further" resources, and each article was introduced in a short paragraph pointing out why it should be interesting to course participants, as explained in Section 3.6 above);
(v) the comparatively low SD scores show that the respondents are in agreement as to their ratings.
What is interesting in the context of the evaluation of the modules and activity / presentation modes are the participants' objectives upon enrolment as well as their suggestions for course improvement.
Table 7 shows individual categories of answers in the "objectives" question together with the number of responses in each of them as well as "suggestions" categories (open answers, annotated) and relevant calculations.
Table 7. Participants' objectives and suggestions

Based on the numbers in Table 7 above, several facts can be noted. First of all, in all three seasons there is a considerable prevalence of the take-away objectives (to learn theory and practical tips) over the interaction objectives, in proportions of ca. 3:1. As for the other objectives, scarce as they are, they - consistently, for all three seasons - represent answers such as "wanted to see how to design a course" or "was interested in new trends in teacher education". When it comes to the suggestions for course improvement, those pertaining to the learning experience (the quality of presentation in videos - too academic, in need of clarification; missing synopses of articles; unwanted activities such as forum discussions, etc.) prevail over other categories, with the exception of Season 3, where user experience (of the platform itself - how easy it is to find materials and activities, the behaviour of the quizzes and their accessibility, etc.) comes to the fore. Throughout all three seasons there are also complaints about the workload, which is consistently mentioned as the top reason for attrition in MOOCs (Liyanagunawardena et al. 2014).

Table 6. Participants' evaluation of the individual activities and materials of the LSP Teaching MOOC
The correlation between the objectives and the suggestions was not calculated statistically. However, the small number of participants enabled a direct analysis of patterns in this area. This analysis shows that take-away objectives usually go together with complaints about unwanted forum discussions or about the quality of the videos and articles (too academic, in need of clarification / synopses), and the interaction objectives with the inability to follow the forum discussions and participate in them when a participant enrols late.
Finally, when it comes to the general user-friendliness of the platform, the ratings are 2.46 (SD 0.78) for Season 1, 2.95 (SD 0.72) for Season 2 and 3.27 (SD 0.59) for Season 3, showing that the participants were moderately happy (Seasons 1 and 2) to very happy (Season 3) with the learning management system chosen for the LSP Teaching MOOC.
As for the instructor feedback, the analysis (cf. Tables 8-10) was based on an evaluation of the quality of the feedback, the word count and a focus on recurring themes. The quality of feedback was examined and given points (8 max.) for its sincerity (clichéic: yes=0; no=1); constructiveness (constructive: yes=1; no=0); whether it referred specifically to the submission content (no=0; yes, once=1; yes, several times=2; yes, on multiple occasions=3); and whether it reached out in terms of suggesting additional sources and encouraging more effort (no=0; yes, once=1; yes, several times=2; yes, on multiple occasions=3). Then, for each submission, the total number of points for the feedback offered was calculated, as well as an average score in each case. As can be seen in Tables 8-10, the overall and average scores for the quality of the feedback offered by the instructors, as well as the word count, differ considerably across the course modules. The differences are not always as significant as in Season 1 (e.g. M1: 160 and M5: 166 as opposed to M2: 11), but they are present throughout all three releases of the MOOC, with the Module 1 instructor(s) offering the longest and highest-quality comments (177/5; 103/4): non-clichéic, constructive, with frequent specific reference to various aspects of the submission. Conversely, the feedback in Modules 2 and 3 of Season 1 as well as in M2 (Season 3) is frequently short and reduced to platitudes ("Good job!"; "Well done!"; "Excellent work!"). An interesting fact can be noted in Table 10, presenting the results for Season 3, in which the feedback was offered jointly by the instructors and teaching assistants (TAs). With the exception of Modules 2 and 6, in which the TAs did not comment on the submissions, in every other case the word count for TA comments is much higher (Table 10, in square brackets alongside the instructors' word count).
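As a sketch, the 8-point scoring scheme described above can be expressed as a small function. The function and parameter names are our own illustration (the actual coding was done manually by the researchers), and the mapping of raw occurrence counts onto the "once / several times / multiple occasions" bands is an assumption:

```python
def score_feedback(sincere, constructive, content_refs, outreach_refs):
    """Score one piece of instructor feedback on the 8-point scale:
    sincerity (non-clichéic) and constructiveness are worth 1 point each;
    specific references to the submission content and outreach (extra
    sources, encouragement of more effort) are each scored 0-3 by frequency."""
    def freq_points(n):
        # Assumed banding: 0 occurrences -> 0; once -> 1; twice -> 2; 3+ -> 3.
        return min(n, 3)
    return (int(sincere) + int(constructive)
            + freq_points(content_refs) + freq_points(outreach_refs))

# A clichéic one-liner ("Good job!") scores 0;
# rich, specific, outreaching feedback scores the maximum 8.
print(score_feedback(False, False, 0, 0))  # 0
print(score_feedback(True, True, 4, 3))    # 8
```

Summing these per-submission scores and averaging them per module then yields the overall and average quality figures reported in Tables 8-10.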
Table 9. Instructor feedback in Season 2

Finally, as regards the strengths and weaknesses of the submissions pointed out in the feedback, there seems to be consistency throughout the three seasons. The instructors praise the skill of translating the theory learned into classroom practice, good lesson planning skills and the ability to engage learners in collaborative activities, frequently through telecollaboration. The weaknesses emphasised boil down to weak survey writing and lesson planning skills (the latter often in the area of specifying aims), as well as proposing activities which - being co-operative rather than collaborative - do not foster teamwork. Additionally, the instructors often criticise work for its lack of depth and complexity.
When it comes to the input provided by the teaching assistants in both the survey and their interviews, a number of observations can be made.
First of all, all four TAs are generally satisfied or very satisfied with both the experience and the preparation for it. They also appreciate the personal and professional gains, among which they list teaching ideas and the possibility to exchange them in a community of practice (3), a better understanding of in-course interaction (1), higher sensitivity to individual differences (1), and fun (3). Their suggestions for course improvement mirror those of the MOOC participants: to improve the UX as regards the functions of the platform; to ease the workload; to improve the interaction between the instructors and the participants. One of the teaching assistants describes their experience as both a course participant and a TA: "As a student, my assignments didn't receive much in terms of feedback (usually just a grade with one or two words like 'good work!'). Then, in the TA induction meetings, we were encouraged to mostly be positive in our feedback to students. It would've been helpful to have some models or examples to follow for giving actual critiques, as I often limited myself to positive feedback."
When it comes to the analysis of the discourse co-produced by the teaching assistants, its quantity and quality depend on the particular meeting. Three TAs took part in the first one, in which they were briefed on their responsibilities, deadlines, the functions of the platform, and the provision of feedback. Their contributions (1173 words out of the total 6454) are comments or responses in one of six categories of issues: technical (platform UX; TI), course management (MI) and teaching presence (TPI), as well as issues related to the TA division of labour in terms of the choice of the module in which to assess (MC; frequently with motives for the choice) and TA further training (TAT). Besides, in a number of comments the TAs participating in the meeting refer to their experience as LSP Teaching MOOC participants (PE). The numbers in each category (Table 11) stand for the utterance count.
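The type/token counting applied to the coded utterances can be sketched as follows. The category labels are taken from the coding scheme above, but the sequence of utterances is invented for illustration:

```python
from collections import Counter

# Hypothetical sequence of coded TA utterances (invented order and counts;
# labels as in the coding scheme: MI, TI, TPI, PE, ...).
coded_utterances = ["MI", "TI", "MI", "PE", "TPI", "MI", "TI"]

tokens = len(coded_utterances)       # every occurrence counts as a token
types = len(set(coded_utterances))   # distinct categories only
per_category = Counter(coded_utterances)

print(f"tokens={tokens}, types={types}")  # tokens=7, types=4
print(per_category.most_common())
```

The per-category counts produced this way correspond to the utterance counts reported in Tables 11-13.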
As can be seen in Table 11 above, the largest number of contributions - most of which are by TAs 1 and 2 - pertain to management issues (MI - 17 utterances), mainly how and when the feedback on assignments is to be provided (the TAs' responsibility). However, as the qualitative analysis of the discourse shows, this is done in combination with references to participant experience (PE - 7 utterances) or to teaching presence issues (TPI). In other words, the how-and-when of assessment is considered vis-à-vis the TAs' own perceptions of the quality and timing of instructor feedback (constructive; immediate) and attitudes to it, as well as what, in the TAs' opinion, is pedagogically beneficial (cf. the discourse sample below).
[Y]ou need people to be motivated and low attrition rates. And one of the best ways to motivate people is to show presence. And of course, our posts will show presence on their reflections, but people want to see that their work is recognised. … And I'm sharing that with all honesty as a student. (TA1)

Table 11. TA discourse analysis, Meeting 1 (word count and utterances)

In meeting 2 (Table 12), the TAs produced 1598 words out of the total count of 4945, a better ratio than in meeting 1. Most of the categories remain the same in terms of labels, with slight changes as regards the specificity of utterances: management issues (MI - 18 utterances) pertain less to course organisation and more to the assessment process; there is a new management category - progress (P - 4 utterances) - which results from the agenda of the meeting (reporting on the assessment process); teaching presence issues (TPI) are now an aggregate of questions connected with both teaching and learning (student attitudes and motivation); personal experience (PE - 7 utterances) covers both what the TAs encountered in the LSP Teaching MOOC as its participants and experience in a more general sense; and alongside the TAs' technical issues (TI - 11 utterances), a new category - student technical problems (STI - 7 utterances) - was introduced, discussed in relation to management issues as a way of explaining delays. The most popular categories are technical issues - both the students' and the TAs' - and management problems. As before, TA1 is the most active, followed by TA2.

When it comes to meeting 3 (Table 13), the word count is, for the first time, in the TAs' favour (3243 out of the total of 5542 words). Very much in line with its aim - reflecting on the experience - there is a new category of utterances: the personal experience of the teaching assistants (TAPE - 22 utterances).
The main takeaways in this category are the experience of a community of practice and noting ideas and tools the TAs had missed when participating in the course. Other comments are a mixture of reflections on good teaching and mentoring. They include remarks on effective teaching presence (TPI, 20 utterances), including the degree and quality of intervention in the forum interactions between course participants, as well as on different aspects of course management and assessment (MI, 33 utterances). These are occasionally related to personal experience (PE, 7 utterances). The latter pertains to teaching presence in the instructor/TA interactions as well as to how the TAs could have been better prepared for their task (guidelines/standardisation). Sometimes, as in the case of TA4, the thoughts shared concern reasons for not rising to the challenge of sharing the assessment burden with the instructors. Apart from personal reasons, TA4, in unison with their fellow assessors, ascribes this to the lack of platform functions (TI, 6 utterances) dedicated to notifying teachers of submissions that are ready for grading.

Discussion
In this section the three research questions are addressed based on the data presented in the previous section.

RQ1. What is the satisfaction with the LSP Teaching MOOC as expressed by its participants?
The satisfaction with the course was broken down into two components: "how interesting" and "how useful". Taking average values of 3.0 and higher (i.e. between 3 = quite interesting/useful and 4 = very interesting/useful) as the benchmark, the numbers in Tables 1-3 show that both categories improve across the seasons. While in Season 1 only modules 3 and 4 score above 3 points on average, it is the first four modules that are seen as quite-to-very interesting in Seasons 2 and 3. This can probably be attributed to the fact that, as indicated in Section 3, module content was systematically revised after each iteration of the course based on participants' feedback. When it comes to usefulness, it is again two modules (3 and 4) for Season 1, as opposed to three modules for Seasons 2 and 3 (3, 4 and 5; and 1, 3 and 4, respectively). As previously noted, the perceived usefulness of each module is probably closely linked to how directly applicable its content is to a classroom context. Two main factors therefore seem to influence the participants' perception of usefulness: the proportion of theory (e.g. Module 1 is the most theoretical module and is consequently not considered very useful) and the complexity of content (Module 2, on corpus linguistics, is feared by many participants because of its conceptual and technical complexity and is therefore not always considered useful). The other modules seem to enjoy at least some popularity, with the exception of Module 6 (portfolios) in Season 1. All in all, based on the numbers alone, it can be noted that completing the LSP Teaching MOOC was generally a satisfactory experience, with local variations across individual modules and seasons. This should prompt the course developers to reflect on all course modules and try to establish what made the popular modules interesting and useful. This may be done by revisiting the principles of MOOC design by Drake et al. (2015) as well as Yousef et al. (2015).
It would mean considering, once again, how meaningful, engaging, measurable, accessible, and scalable the activities were, as well as whether the low-rated modules and the course overall offered enough opportunities for blended learning and flexibility, contained high-quality content, were based on sound instructional design and learning methodologies, and whether they encouraged lifelong learning, network learning, openness and student-centred learning.

RQ2. What are the main objectives of LSP Teaching MOOC participants and how well are they met?
In addition to the numbers considered under RQ1, the LSP Teaching MOOC needs to be examined in terms of how well it coincided with the participants' objectives and what suggestions they made to improve the course. It may also be useful to consider the findings in relation to the instructor and TA perceptions.
Looking at Table 7, we can see that in all three seasons most of the respondents are in favour of the xMOOC rather than the cMOOC model (cf. Anderson 2004; Stewart 2013). In other words, the takeaways (theoretical background, practical ideas, a certificate) seemed more important at first than interactions with fellow participants and instructors. This poses two types of challenges for the course developers. The first, and potentially the easier to address, is how to make the LSP Teaching MOOC flexible enough (cf. Yousef et al. 2015) to support both paths, the xMOOC and the cMOOC, considering that the latter, even if less popular, is still in demand. This intuition is endorsed by what we see in Table 6: the popular activities (scoring above 3) are lectures and articles as well as various forms of interaction with peers and instructors, including feedback.
The other challenge, and a more demanding one, is having the courage to differentiate between what the participants want (more of an xMOOC experience) and what they may need (more of a cMOOC experience; cf. instructor feedback on the participants' confusion between co-operation and collaboration, Tables 8-10), and designing the course accordingly, following sound instructional design and teaching methodologies (cf. Yousef et al. 2015, again). Considering the popular demand for the xMOOC over the cMOOC model, this may mean that the course developers need to introduce an additional module devoted to orientation. This would provide the rationale for having collaborative activities alongside gaining the knowledge and skills that a participant may find personally interesting and useful. That being said, the dichotomy between the two MOOC types is slightly mitigated by the participants' appreciation of the posts and feedback by fellow course participants and by instructors, graded between 3 and 4 in Seasons 1 and 3 of the course (Table 6). This tends to show that course participants' initial objectives for joining the course (an xMOOC experience, cf. Table 7) might have been revised along the way, as they seem to value posts and feedback more than other types of course materials/activities (quizzes, surveys, cf. Table 6).
Another suggestion from the participants that could make the course more meaningful and engaging as well as accessible (Drake et al. 2015) is increased teaching presence. This can be considered as mediation (i) between participants and content (a better xMOOC) and (ii) between participants. These suggestions, confirmed by the input offered by the teaching assistants (TAs), can be summarised as follows (cf. Table 7 and the transcripts from the TA meetings): (i) better quality of the feedback offered (constructive rather than clichéd; more profound; referring to specifics rather than generalities); (ii) presenting the participants with content that has been pre-processed for them, or introduced in an inviting way (cognitive accessibility); and (iii) offering instruction on how to navigate the learning environment (technical accessibility, potentially a part of the additional orientation module mentioned above).
This also tends to show that, as with LMOOCs, the ideal design model for LTEMOOCs goes beyond the xMOOC/cMOOC dichotomy and could instead be a combination of the two (cf. Deng et al. 2019; Sokolik 2014).

RQ3. What are the LSP Teaching MOOC teachers' attitudes and what conclusions pertaining to LSP Teaching MOOC improvement can be drawn from their input?
Some of the recommendations based on the input offered by the MOOC teachers have already been mentioned for the course developers to take into account. They include: differentiating between wants and needs (something the MOOC participants themselves have problems with, cf. Tables 8-10) and following sound instructional design and methodologies; increasing accessibility (cf. Drake et al. 2015) in terms of both the cognitive effort and the technologies used; and working on the teaching presence, especially in the area of instructor feedback, to make the experience meaningful by showing the participants that "their work is recognised" (to cite one of the TAs).
One more important observation that arises from the data is that the course developers should enhance course design by incorporating preparatory activities for the instructors, with special regard to assessment guidelines and standards. This is noted on the basis of both the TAs' reflections and the considerable differences in the quality of feedback offered in individual modules (cf. Tables 8-10). Rubrics or analytical scales, which can pave the way for such modelling, are also, as observed by one of the TAs, a great help to the participants.

Conclusions
Overall, participants' satisfaction was high and grew steadily between S1 and S3, both in terms of the interest and the usefulness of course content. One interesting finding is that perceived module usefulness seems to be influenced by (1) its direct applicability to the classroom and (2) content complexity, as shown by the slight variations between modules: there was a clearly stated preference for modules that are directly applicable in the classroom, which is in line with the participants' stated objectives for joining the course (practical tips scored the highest), possibly because they were all in-service language teachers (no students in initial teacher education courses). From a course design perspective, it is also worth noting that course participants' initial objectives for joining the course (to be provided with content on the MOOC, cf. the xMOOC's instructionist model) might have been revised along the way, as at the end of the day they seem to value posts and feedback (cf. the cMOOC's connectivist model) more than other types of course materials/activities. This points to the necessary flexibility of any LTEMOOC, as well as to the fact that MOOC designers should cater for both participants' wants and their needs, as these do not always align. In this respect, the ideal LTEMOOC should therefore combine both models, in the same way as LMOOCs tend to. Finally, instructor presence seems to be an important feature of any effective LTEMOOC, as it needs to be felt by course participants.
As this is potentially a problem in MOOCs, especially those with very high numbers of participants, LTEMOOC designers have to be creative. One option is to rely on past course participants whose outstanding contribution has been noted and who have been awarded a certificate of achievement, inviting them to support the instructors in providing feedback and managing the online learning community through posts and comments. This is not only a way to address the issue of sufficient instructor presence; it is also a means for these teaching assistants to develop additional skills, which can be acknowledged through the provision of an additional certificate. These conclusions lead us to consider that the LTEMOOC offered by the CATAPULT consortium is indeed a form of teaching innovation which, we hope, paves the way for more MOOC-based language teacher education courses.