Investigating Common Attributes of Effective Online SLA Programmes
Abstract
This study will attempt to identify common attributes of online language learning programmes with the aim of formulating a grading scheme that is predictive of student outcomes. An examination of several programmes representing all pedagogical paradigms will be conducted. The examination will a) compare and correlate the outcomes of each programme with its approach; b) for programmes in the same paradigm but with differing outcomes, attempt to identify the strengths or weaknesses that may contribute to the differences noted; and c) attempt to analyse to what extent an instructor’s pedagogical bias might affect the effectiveness of a particular programme. These observations are expected to reveal certain common characteristics of successful programmes that will advance the effectiveness of such programmes in general and move toward a formal model of instructional design that today’s programmes lack.
According to Jackson (2001), developers of online programmes face tremendous difficulty in fashioning effective programmes, since advice from the available literature is often misguided or ill-informed about the way participants learn (p. 61). Additionally, there is no recognised professional accreditation for instructional material development (Whitlock, 2001, p. 191). There is, therefore, a pressing need for a development model that takes theories of language learning into account and that can offer a plain-language guide for practitioners to follow (Whitlock, 2001, p. 190).
Despite the obvious need for such a model, or at least for a method to evaluate a programme’s effectiveness, there has been little active research into developing either. Academic and corporate entities continue to invest heavily in online programmes without any means of predicting the effectiveness of such programmes beyond trial and error (Stephenson, 2001, p. 35). Comparisons between different online programmes should utilise a classification scheme or framework. Such a framework will ensure that the strongest comparisons are made between similar programmes and that appropriate contrasts of relative strengths and weaknesses can be drawn between programmes using differing approaches.
Coomey and Stephenson (2001) have provided a framework that will be useful in classifying various programmes by the pedagogical paradigms they use. This framework does not, however, make any predictions or offer any way to grade an online programme in terms of its eventual outcome. This study will attempt to identify the direction research needs to take to produce a set of criteria that would allow such predictions.
Identifying the attributes that result in the most effective programmes, together with a set of predictive criteria, will enable both corporate and academic entities to evaluate the programmes in which they invest time and money. With an effective set of criteria, researchers can create a formal model of instructional design for SLA and other online programmes, and practitioners will be able to maximise the learning experience for participants in online programmes.
The existing literature on the effectiveness of online programmes focuses mostly on how novel technological methods affect the learning experience, or on the pedagogical method utilised by an instructor. Each study has tended to focus on a single narrow aspect of the technology utilised in an online course; these aspects include email, chat, forums and bulletin boards, and variations of these services such as MOOs.
Blake (2000), for example, examines whether synchronous chat applications used in an online course provide the same opportunities for negotiation of meaning that face-to-face conversations do in L2 Spanish instruction (p. 120). This study showed that synchronous chat does indeed provide opportunities for negotiation of meaning and vocabulary building, but notes that such a tool may not facilitate any improvement in a learner’s syntactical competence; he says:
“This present study yielded only a handful of grammatical negotiations and many of them did not constitute negotiations of meaning in the classical sense, but rather direct questions about linguistic forms. Few interaction studies have demonstrated that incidental negotiations within a task-based approach might stimulate a comprehensive development of the learners’ morphological and syntactic problems on a larger scale…” (Blake, 2000, p. 132)
Additionally, Blake recommends that future research conduct a comparative study between synchronous and asynchronous learner-learner interactions in the virtual classroom (p. 132).
Kötter’s (2003) study of tandem communication is similar to Blake’s in many respects. It focuses on synchronous communication taking place in a MOO in a two-way L2 German/L2 English exchange: one set of participants were Americans learning German, and the other were Germans learning English (p. 145). Like Blake, Kötter found that negotiation of meaning did indeed occur in such an environment, despite the high rate of output demanded by synchronous communication.
Gonzalez-Bueno (1998), meanwhile, turns her attention to email, a form of asynchronous communication essential to some online courses. The primary use of email in the classes she observed was to replace traditional in-class paper-and-pencil dialogue journals. She says the purpose of such journals is to: …establish a written “dialogue” with the instructor about a topic of their choice, providing a very specific audience/reader and a purpose for communication which, according to the cognitive-process model of writing developed by Flower and Hayes (1981), are necessary components of the writing process. The instructor’s responses act as models of accurate language, so grammatical corrective feedback is provided automatically. (Gonzalez-Bueno, 1998, pp. 56-57)
One instructor noted that students writing electronically tended to write longer, richer passages, utilising more language functions and adopting a more conversational tone than peers using paper and pencil (Gonzalez-Bueno, 1998, p. 58). She also notes that, by its nature, email encourages an improvement in a learner’s syntactical competence (in contrast to synchronous chat, as noted by Blake and Kötter); she says:
Not only were these less desirable features [less grammatical accuracy, continuity of discourse, and less coherence of composition] not observed in the e-mail messages analyzed in this study, but the very nature of e-mail itself prevented them from occurring. Although there were some cases of less accurate productions that could have been the result of the urgency of the communicative flow, like in the case of Kern’s subjects, or simply of communicating in a foreign language, most of the students in this study, as was noted above, took their time to consult references and edit their messages before sending them out, resulting in greater grammatical accuracy and coherence of ideas. (Gonzalez-Bueno, 1998, p. 60)
Lamy (1999) focuses her efforts on the effectiveness of asynchronous conferencing. She shows that an asynchronous medium supports written interactions ranging from very formal, planned, epistle-like passages to the conversational (p. 59). She says that reflective writing may allow learners to notice the formal features of the target language better than they would in other types of exchange (p. 60). The difficulty of this form of interaction, though, is to provide an environment in which self-sustaining threads involving multiple students in fully contingent conversation can develop; threads in this environment tend to be student-student or teacher-student monologues, or answer-the-teacher dialogues (p. 60). Reflective writing (asynchronous, journal-like communication) provides the learner with a framework that promotes learning, since it allows the learner to reflect on his or her own ability, is socially contingent, provides for negotiation of meaning, focuses on form (or provides time for consideration of form), and is sustained over time (old posts can be accessed and compared to ascertain one’s own progress) (Lamy, 1999, pp. 58-60).
Hase and Ellis (2001) take the effectiveness of email and chat for granted in their discussion of online learning in general. They feel that the difficulties in online learning are more systemic: rather than any fault or feature of online programmes themselves, the problem is a combination of learner technophobia, a dependence upon the learner’s ability to communicate well in writing, the ability of instructors to manage online discussions, and the design of the learning materials themselves (pp. 29-30). Further, they note that these problems are not necessarily new, and are common to all educational institutions. They identify the two principal problems with online learning in general as:
The first of these is the dominance of teacher-centred approaches that needs to be challenged if the best of what technology offers is to be realized. The second of these is the requirement for alignment of the needs of all the stakeholders [learner, teacher, and administrator] in the design and delivery of courses. Progress in both these areas fall short of the potential for learner managed learning that on-line technology offers. (Hase & Ellis, 2001, p. 34)
None of these studies or reviews approaches online programmes holistically; each looks at a specific feature, aspect, or attribute of what makes up an online programme. Each has identified the relative strengths and weaknesses inherent in its chosen approach, but none has examined how, or whether, the approaches can compensate for one another when present together in a single programme. By selecting a number of programmes, classifying them according to Coomey’s paradigm map, and comparing and contrasting them with each other and with this study’s ‘effectiveness’ composite, it should be possible to note whether the approaches complement or detract from one another when present in a single programme, as compared to when they appear in similar programmes utilising only a single approach. Multiple communication channels are fairly common in everyday online life, including in online programmes: one interactive environment may include email, interactive one-on-one chat, group or tandem chat, and bulletin boards or forums. Together these channels make up an online programme, and should provide effects analogous to those produced by expanding a student’s options of discourse in a traditional L2 classroom (Belz & Kinginger, 2003). Since the strengths and weaknesses of each component are known, it is necessary to evaluate the programmes they constitute with some objective grading method. The present study will endeavour to develop such a method.
The proposed study will require a number of different techniques, both qualitative and quantitative. Several methods of acquiring the data are necessary, since we are concerned both with what an effective online programme is and with how to determine whether a given programme is effective. Later, all of the data will be utilised in an attempt to synthesise a coherent model for the instructional design of online programmes.
Aims and objectives of the proposed study:
1. To determine the composition of an effective online programme in terms of
a. The features and functions of the online programme’s interface
b. Its influence on student L2 acquisition and competence (vocabulary, grammar, use of the L2 outside the classroom).
c. The degree of influence that an instructor’s approach or pedagogical bias has on outcome/effectiveness.
d. Student marks and retention (how many students remained enrolled for the entire term?)
e. The dominant pedagogical paradigm utilised in the online programme.
2. To identify common attributes, features, or functions in online programmes deemed ‘effective’ by:
a. Identifying common strengths;
b. Identifying common weaknesses.
3. To attempt to construct a model which
a. Can grade any online programme given its known attributes and performance.
b. Can predict the effectiveness of a programme given only its attributes and functions.
Stages of research
The proposed study will be undertaken in several stages over three years. Each stage will provide setup or preliminary data needed for the following stages.
Stage 1. Identification of Online Programmes for Study
It is necessary to select a number of online L2 programmes from several institutions. Using a number of courses across institutional boundaries should reveal any influence that institutionally imposed pedagogical bias may have on the outcomes of the programmes.
Additionally, an interview and ethnographic observation of the instructors should be performed so that each instructor’s pedagogical bias can be documented. This is important, since Jackson states that the value a student places on a learning experience is directly influenced by the conceptions of learning possessed by the tutor (p. 60).
Each course selected should be carefully examined in its structure and operation and classified according to Coomey’s (2001) map of online course paradigms (see Fig. 1). Classifying each course ensures that a better set of observations can be made among similar classes, as well as between opposing pedagogical methods. Generally, classroom and online courses can be mapped into one of four quadrants based on the balance between the principal approaches to pedagogy. For convenience, Coomey labelled each quadrant as if on a compass: North West, South West, North East, and South East. Many traditional classrooms would fall into the NW quadrant, while online programmes tend to have SE tendencies or features.
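As a rough illustration of this classification step, the sketch below maps a course onto a compass quadrant from two numeric scores. The numeric encoding is entirely hypothetical: the study would first have to operationalise the two axes (here assumed to be teacher versus learner control of task specification and of process), and the function names are invented for illustration only.

```python
def classify_course(task_control: float, process_control: float) -> str:
    """Map a course onto a compass-style quadrant grid.

    Both inputs are hypothetical scores in [-1.0, 1.0]:
    negative = teacher-specified/controlled, positive = learner-managed.
    """
    ns = "N" if task_control < 0 else "S"     # North: teacher-specified tasks
    ew = "W" if process_control < 0 else "E"  # West: teacher-controlled process
    return ns + ew

# A traditional lecture course: teacher sets tasks and controls process.
print(classify_course(-0.8, -0.7))  # NW
# A learner-managed online course: open tasks, learner-controlled process.
print(classify_course(0.6, 0.5))    # SE
```

The example agrees with the observation above that traditional classrooms land in NW while learner-managed online programmes tend toward SE.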
Stage 2. Define “Effectiveness” For Purposes of Study
The next step is to define what is meant by the word “effectiveness”, since this is the quantity to be measured. Effectiveness can be measured in a number of ways, or viewed as having several components. For the moment this study will consider effectiveness as a composite of:
• Popularity – How large is the initial enrolment? Initial enrolment may be indicative of student interest in new technology, and may reflect word-of-mouth recommendation among the student body, indicating that its students regard their experience as having been rewarding. Of course, by itself this component does not tell us much, as there are any number of reasons why enrolment may be inflated besides a programme’s success: the class or professor may have a reputation for not being a strict marker, or the online programme may be new, with students enrolling for the course’s novelty as much as for any other reason. Student interviews can be conducted in Stage 3 to further define this variable from the student’s point of view.
• Interest – Of the initial enrolment, how many students complete the course? A high drop rate may indicate that the approach of the particular course being examined is not suitable for a number of students relative to other similar classes and traditional classes covering the same course work. Again there are a number of reasons why a student may choose to stay in a class despite lack of understanding and/or poor marks received in the course. The Student Interview should examine the motivations of those completing the course.
• Marks – What percentage of students completing the course pass it? This is the criterion most students and parents use to gauge “success” in the attempt to complete a course. However, by itself it is not entirely meaningful: for example, there may have been a curve applied to the course marks, or other grade manipulation for good or ill reasons. Students aware of such grade adjustments (such as the dropping of the lowest mark) used by the instructor may stay enrolled past course withdrawal dates. The Instructor Interview should be constructed to probe this statistic and correct for a curve if possible.
• Competence – What degree of competence in the L2 did the student achieve over the course of instruction? Is he or she able to use the L2 outside classroom situations? Do students with similar marks have similarly improved competencies? Are student competencies improved asymmetrically, i.e. is one area markedly improved while another is much less developed (e.g. vocabulary becomes stronger, but grammar is negligibly improved)? Competence could be measured by using a native speaker to rate student performance in structurally similar interviews before (if possible), during, and after the completion of the course of instruction.
Together, all of these components will need to be analysed to generate a composite score that we shall call “effectiveness”. Arriving at this number will require data from Instructor Interviews, Student Interviews, mark statistics, enrolment statistics, and monitoring of student progress throughout the coursework.
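One minimal form such a composite could take is a weighted sum of the four components, each first normalised to [0, 1]. The sketch below is purely illustrative: the weights are invented placeholders, and Stage 2 of the study would have to derive both the weights and the normalisation procedure empirically.

```python
# Illustrative weights only; the study would derive these empirically.
WEIGHTS = {"popularity": 0.15, "interest": 0.25, "marks": 0.25, "competence": 0.35}

def effectiveness(components: dict) -> float:
    """Weighted composite of component scores, each normalised to [0, 1]."""
    for name, value in components.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be normalised to [0, 1]")
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)

# A hypothetical course's normalised component scores.
course = {"popularity": 0.70, "interest": 0.80, "marks": 0.65, "competence": 0.55}
print(round(effectiveness(course), 3))  # → 0.66
```

A linear composite like this is only a starting point; the study anticipates refining the mathematical model as data are collected, and nothing here rules out non-linear combinations.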
Stage 3. Review the Online Programmes
In this step, student and tutor interviews will be conducted, and other data will be collected for the duration of the online class. It may be instructive to observe the coursework of two or more cohorts through the same class and instructor, to increase the available sample size and offset incidental defects in data collection.
Stage 4. Analysis of Live Data
In this step the effectiveness of the various programmes will be determined, and an attempt will be made to correlate method or approach with degree of effectiveness. The specific details of this analysis will be settled at that time.
Stage 5. Synthesis of Grading Criteria or Instructional Design Model.
Once the online classes have been measured for effectiveness, any factors observed to correlate positively with outcome will be used to formulate a preliminary set of grading criteria. The criteria derived should predict the effectiveness of any member of the sample of programmes examined by the study, without requiring extensive use of the collected data.
Stage 6. Testing of Grading Criteria
Finally, a course survey similar to that performed at the beginning of this study (Stage 1) should be conducted, but this time the grading criteria will be applied to predict course effectiveness. Stage 2 and Stage 3 should then be conducted on these classes, and their effectiveness manually calculated and compared with the predicted values. If successful, future studies could build on these criteria to formulate a more formal model of instructional design for online courses, in particular those involving SLA.
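The comparison between predicted and empirically measured effectiveness could be as simple as computing prediction error and rank agreement. The sketch below uses entirely hypothetical numbers for five held-out courses and implements mean absolute error and Spearman rank correlation directly (rather than relying on any particular statistics package) as two candidate agreement measures.

```python
# Hypothetical predicted vs. empirically measured effectiveness scores
# for five courses outside the original sample.
predicted = [0.72, 0.55, 0.80, 0.40, 0.63]
measured  = [0.68, 0.60, 0.75, 0.45, 0.70]

def mean_absolute_error(xs, ys):
    """Average absolute gap between prediction and measurement."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def spearman_rho(xs, ys):
    """Spearman rank correlation (no ties assumed in this toy data)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(round(mean_absolute_error(predicted, measured), 3))  # → 0.052
print(spearman_rho(predicted, measured))                   # → 0.9
```

A low error and a high rank correlation would suggest the criteria generalise; a large discrepancy would feed back into Stage 6's search for the reasons behind it.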
In this study an attempt will be made to investigate, and if possible answer, the following questions:
1. Is it possible to formulate a set of criteria to objectively rate and predict the effectiveness of online programmes; in particular, those programmes focused on SLA?
2. To what extent is the language learning experience controlled by the learner in online programmes? If the learner’s control of the experience is maximised, is the development of competencies maximised? How much time must be traded for quality? This should be determined through analysis of courses in the NE and SE quadrants of Coomey’s classification of pedagogical paradigms.
3. How are instructors for online classes selected? What qualities or qualifications help the online learning process to be more successful? Are preconceptions about pedagogical methods important to the effective outcome of online learners?
4. In learner-learner interaction in asynchronous forums, does the same frequency of negotiation of meaning occur as in synchronous [chat] interactions? In a mixed environment using both kinds of interaction, does a synergy develop, with each method’s strengths compensating for weaknesses in the other?
Projected Completion Schedule
Fall 2005 Stage 1
1. Identify the programmes to be involved in the study.
2. Develop an Instructor Interview and/or ethnographic observation methods to determine the instructor’s pedagogical bias.
3. Classify selected courses according to Coomey’s map of pedagogical paradigms.
Spring 2006 Stage 2
1. Conduct Instructor Interviews or Ethnographic observations.
2. Determine a preliminary mathematical model to express “effectiveness” based on initial survey and course selection. This will be used as a working definition and is expected to be refined as more data are collected. This requires the researcher to:
a. Develop Student Interview
i. Questions measuring “Popularity”
ii. Questions measuring “Interest”
iii. Questions measuring “Competence”
iv. Questions measuring student performance in terms of “Marks” received.
b. Develop Questions for Instructor Interviews to measure any bias or nonlinearity existing in student “Marks”
c. Conduct Student and Instructor Interviews for courses selected for study.
d. Determine the preliminary mathematical model for composite “Effectiveness” subject to refinement during the study.
Fall 2006 to Spring 2007 Stage 3
1. Conduct online course ethnographic studies to gain insight into the effect of various online communications and to determine which pedagogical paradigm is predominant in each programme.
a. Note especially effective or ineffective pedagogical methods
b. Note especially effective or ineffective features of the online classroom used by each programme. Such features should become apparent after analysing the data on similar programmes, if any. The effect of unique features in otherwise identical programmes should be revealed after performing effectiveness analysis as changes in the composite score.
2. Conduct Instructor Interviews
a. Determine pedagogical bias
b. Determine any nonlinearity or bias in “Marks” awarded to students
c. Obtain the instructor’s opinion on the strong and weak points of his or her online classroom approach.
d. Are there perceived strengths and weaknesses in the online virtual classroom (features, methods, usability, etc.) inherent in the technology being used?
3. Conduct Student Interviews
a. Determine the “popularity” of the course
b. Examine the students’ performance in terms of marks received.
c. Take a baseline competency assessment at the beginning of study and an exit assessment
d. Determine the level of interest
Fall 2007 Stages 4 and 5
Analyse the live data and, if necessary, refine the mathematical composite of “Effectiveness” further in light of observations made.
Synthesise Grading Criteria from the data collected.
Spring 2008 Stage 6
1. Select courses not in original study; use the grading criteria developed to predict course effectiveness.
2. Perform research as in the first portion of the study (Stage 2 and Stage 3) on these courses.
Fall 2008 Stage 6
1. Perform Stage 4 analysis on the second group of courses.
2. Using the same methods of analysis used in Stage 4 and Stage 5, determine these courses’ “effectiveness”.
3. Analyse and compare the predicted value with the empirical value
4. Determine reasons for any discrepancy and the accuracy of the developed grading methodology.
5. Identify any areas requiring further inquiry.
The largest limitations of this study will be the relatively small sample of courses that can be observed at once, as well as the sample sizes involving students and faculty members. Acquiring student records may also be difficult owing to privacy considerations. Additionally, it is possible that more variables are involved in a programme’s effectiveness than this study takes into account. In that event the study may be able to identify those additional factors and suggest suitable follow-on research to examine their impact upon a programme’s effectiveness.
By necessity this research will need to contact and collect data from numerous sources. Some suggested venues for conducting this research include traditional universities that run online L2 programmes, non-traditional universities such as the UK’s Open University, and corporate entities conducting similar programmes. Part of the research period will be used to locate such programmes and classify them for the study. Once the programmes have been located, instructors and students will need to be contacted. Data will be collected through whatever means are convenient: in-person interviews, online discussion, email, fax, or letter.
References
Belz, J. A. and Kinginger, C. (2003) Discourse Options and the Development of Pragmatic Competence by Classroom Learners of German: The Case of Address Forms, Language Learning, 53(4), pp. 591-648. http://www.houstonlibrary.org. Accessed 30 May 2005.
Blake, R. (2000) Computer Mediated Communication: A Window On L2 Spanish Interlanguage, Language Learning & Technology, 4 (1), pp.120-136. http://llt.msu.edu. Accessed 30 May 2005.
Chapelle, C. A. (2001) Computer applications in second language acquisition: foundations for teaching, testing and research. Cambridge: Cambridge University Press.
Coomey, M. and Stephenson, J. (2001) Online Learning: it is all about dialogue, involvement, support and control, according to the research. In J. Stephenson (ed.) Teaching and Learning Online: Pedagogies for new technologies. (pp. 37-50). London: Kogan Page.
Gonzalez-Bueno, M. (1998) The Effects Of Electronic Mail On Spanish L2 Discourse, Language Learning & Technology, 1(2), pp. 55-70. http://llt.msu.edu/vol1num2/article3/default.html. Accessed 29 May 2005.
Hase, S. and Ellis, A. (2001) Problems with Online Learning Are Systemic, not Technical. In J. Stephenson (ed.) Teaching and Learning Online: Pedagogies for new technologies. (pp. 27-34). London: Kogan Page.
Ingram, A. L., Hathorn, L. G. and Evans, A. (2000) Beyond chat on the Internet, Computers & Education, 35(1), pp. 21-35.
Jackson, B. and Anagnostopoulou, K. (2001) Making the Right Connections: Improving Quality in Online Learning. In J. Stephenson (ed.) Teaching and Learning Online: Pedagogies for new technologies. (pp. 54-63). London: Kogan Page.
Kötter, M. (2003) Negotiation Of Meaning And Codeswitching In Online Tandems, Language Learning & Technology, 7(2), pp. 145-172. http://llt.msu.edu/vol7num2/kotter/default.html. Accessed 30 May 2005.
Lamy, M. (1999) Reflective Conversation in the Virtual Language Classroom, Language Learning & Technology, 2(2), pp. 43-61. http://llt.msu.edu/vol2num2/article2/. Accessed 30 May 2005.
Stephenson, J. (ed.) (2001) Teaching and Learning Online: Pedagogies for new technologies. London: Kogan Page.
Swan, K. (2002) Building learning communities in online courses: the importance of interaction, Education, Communication & Information, 2(1), pp. 23-49.
Vonderwell, S. (2003) An examination of asynchronous communication experiences and perspectives of students in an online course: a case study, The Internet and Higher Education, 6(1), pp. 77-90.
Warschauer, M. and Kern, R. (2000) Networked language teaching: concepts and practice. Cambridge: Cambridge University Press.
Whitlock, Q. (2001) Course Design for Online Learning: What’s Gone Wrong? In J. Stephenson (ed.) Teaching and Learning Online: Pedagogies for new technologies. (pp. 182-191). London: Kogan Page.