
Wednesday, February 17, 2010

Assessment and Learning Styles

Recently I was reading A Model for Comprehensive Assessment in the College Classroom (Mueller, Waters, Smeaton, & Pinciotti, 2009), which describes five steps: (1) determine the purpose of the assessment (diagnostic, formative, or summative); (2) match assessments to outcomes and level of thinking; (3) determine the class demographics; (4) choose the assessment method; and (5) determine the scoring tool.

Step (3) includes a variety of elements based on students' learning styles. One of the most popular classroom applications of learning styles is the Dunn and Dunn Learning Style Preference Survey (Dunn, Dunn, & Price, 1991), which provides information to students and professors about individual and class learning-style needs. Also coming to mind are David Kolb's model (Converger, Diverger, Assimilator, and Accommodator), Honey and Mumford's model (Activist, Reflector, Theorist, and Pragmatist), and the Myers-Briggs Type Indicator. I remember taking my first MBTI test 5 years ago (which cost me 15 bucks) and being rated "Extraverted Feeling with Sensing" (ESFJ) - sociable, cooperative, tactful, practical, realistic, down-to-earth, decisive, thorough, organized, orderly, and consistent. Liked it! :)

Am I an accommodator/pragmatist?

Reference:
Mueller, Waters, Smeaton, & Pinciotti (2009). A model for comprehensive assessment in the college classroom. In Sound Instruction: Ready to Use Classroom Practice.

Thursday, May 29, 2008

Choosing & using educational software: A teachers' guide

Squires, D., & McDougall, A. (1994). Choosing & using educational software: A teachers' guide. London: Falmer.

Chapter 6. Frameworks for studying educational software

1. Categorization (Classification by Application Type)

Two types
  • Content-free (generic): described in terms of the tasks it can perform (e.g., word processors, spreadsheets)
  • Subject-specific: used in the teaching and learning of specific topics (e.g., science simulations, foreign language practice programs, arithmetic drill programs)
Problems
  • Criteria are implicit, with no clear rationale
  • Sensitive to change (an increasing range of software requires constant revision & updating)
  • Some integrated software doesn't fall neatly into any one classification
2. Role (Classification by Educational Role)

Three modes
  • Tutor: a surrogate teacher (e.g., drill & practice exercises, adaptive tutorial programs)
  • Tool: useful capability programmed into the computer (e.g., statistical analysis, word processing, graphics packages, data logging, info handling)
  • Tutee: learners "teach" the computer through expressing their ideas and solutions to problems (e.g., Logo)
Problems
  • Founded on the premise that the scope and nature of the software environment defines educational possibilities
  • Focuses on the software rather than the teacher and learner
  • Ignores important issues of teaching and learning
3. Rationale (Classification by Educational Rationale)

Four paradigms
  • Instructional: mastery of content (sequencing, presentation, feedback reinforcement)
  • Revelatory: learning by discovery & developing an intuitive feel for the field of study. The student is the prime focus (e.g., simulation)
  • Conjectural: the articulation and manipulation of ideas and hypothesis testing. Emphasis: development of understanding through the active construction of knowledge (e.g., Logo)
  • Emancipatory: exploits the capacity of the computer to process large amounts of data and perform many operations very quickly, to save students from spending time on laborious tasks that are necessary but incidental to their learning
Problems
  • Tendency to regard software as belonging exclusively to one paradigm
  • No consideration of the learning process
Chapters 7-10. A Perspectives Interactions Paradigm for Studying Educational Software

The focus should shift from attributes of the software itself (e.g., What does this package do? How does this program run?) to the use of software to enhance teaching and learning (e.g., What kinds of learning experiences might be set up or assisted by this package? What approaches to teaching fit this package?).

Three major "actors": the student(s), the teacher, and the designer
  • Teacher-student link: direct 2-way physical and social interactions initiated or sponsored by the software; students more actively engaged in thinking and learning; teacher roles - Resource provider, manager, coach, researcher, facilitator
  • Designer-student link: how students relate to and use software (cognitive development and human-computer interaction); learning theories: behaviorism (stimulus-response mechanism, e.g., Skinner, 1938) vs. constructivism (learning as a process of accommodation and assimilation in which learners modify their internal cognitive structures through experience, e.g., Piaget); three aspects of software design: learner control, complexity, challenge
    • What are the levels of learner control, task complexity, and challenge offered by the package?
    • How effective is the design in affording learners the intended level of control?
    • How are learners helped to cope with the complexity of the software?
    • What methods and approaches are used to provide learners with a challenge?
  • Designer-teacher link: curriculum and associated pedagogies (curriculum development and approaches to teaching); relationship of the software to the curriculum (implicit, explicit, absent)
    • Identify implicit curriculum aims
    • Match explicit and implicit curriculum aims to perceived specific curriculum requirements
    • Realize the possibilities of 'subverting' explicit and implicit curriculum aims to specific curriculum requirements
    • Realize the educational possibilities of the use of software which initially has no explicit or implicit curriculum aims (e.g., The Geometric Supposer shifts from seeking answers to encouraging inquiry and investigation)
Chapter 11. Choosing and Using Educational Software

  • Teacher/student
    • Selection
      • Implied role(s) of the teacher in the classroom
      • Expectations of the nature of classroom interactions
      • Customization: pedagogy
    • Evaluation
      • Actual role(s) of the teacher in the classroom
      • Observed nature of classroom interactions
      • Customization: pedagogy
  • Designer/student
    • Selection
      • Implicit/explicit/absent theories of learning
      • User (student) access features
    • Evaluation
      • Appropriateness and effectiveness of theories of learning
      • Ease and extent of user (student) access
  • Designer/teacher
    • Selection
      • Implicit/explicit/absent curriculum aims: content and process
      • Customization: content
    • Evaluation
      • Customization: content

Sunday, October 14, 2007

Complexities in the evaluation of distance education and virtual schooling

Vrasidas, C., Zembylas, M., & Chamberlain, R. (2003). Complexities in the evaluation of distance education and virtual schooling. Educational Media International, 40(3/4), 201-208.

This paper discusses the complexities and issues involved in the evaluation of distance education and virtual schooling. In order to provide an anchor to the issues involved in evaluating online projects, the authors first present the evaluation design of a virtual high school project. Then the emphasis of the paper is on the goals of the evaluation, stakeholder analysis, evaluator role, data collection, and data analysis. Finally, the authors discuss the need for evaluation of distance education and the ethical responsibility of the evaluators involved.

Major project idea?

Still struggling with ideas for the EDTEC770 major project (for which I would like to do an evaluation/comparative study of international distance education programs)...

Well, here is where I am:

1. Last year, my colleagues and I did a pilot study on learner perceptions of interaction in the Global Media Network, examining learner-learner, learner-instructor, learner-content, and learner-media interaction. We conducted an online survey that drew 16 respondents: 7 from a BSU-located GMN classroom and 9 from their Taiwanese counterparts. It was a fairly small sample size, but not too bad for a pilot study. The survey consisted of 6 parts: 1. Demographics, 2. Learning Style, 3. Interaction Frequency, 4. Interaction Depth, 5. Satisfaction, and 6. Open-Ended Questions. A Likert scale was applied to all questions except those in Parts 1 and 6. The reliability statistics showed our 21-item instrument was reliable (Cronbach's alpha = .879) and could be distributed to a larger sample in a future study. Based on the findings, we suggested that the GMN participants preferred interaction with classmates and media, and that they might want to see some improvement in their interaction with the instructors and course content. In addition, findings of the group comparison (American vs. Taiwanese cross-tabulation) provided some clues for further study on cross-cultural differences and international collaborative learning in the distance learning environment.
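
(As a quick aside, here is a minimal sketch in Python of how that reliability figure can be computed. The scores below are randomly generated stand-ins; only the shape, 16 respondents by 21 items, matches our survey.)

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents x n_items) matrix of Likert scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items (21 in our instrument)
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Stand-in data with the study's shape: 16 respondents x 21 five-point Likert items.
# Random data will not reproduce the .879 obtained from the real responses.
rng = np.random.default_rng(seed=42)
scores = rng.integers(1, 6, size=(16, 21))
print(f"alpha = {cronbach_alpha(scores):.3f}")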


2. For my EDTEC699 class, I planned to develop another evaluation study proposal on the GMN. This time, however, the focus is on faculty and staff perceptions rather than the learners'. A possible title for this study might be: Instructional Needs and Technology Support in International Distance Education: A Qualitative Study of Faculty and Technology Staff Perceptions. Professional development and technology support are essential for faculty to successfully develop distance education programs. Over the past several years, as the number of courses offered through the GMN system has increased, so has the discussion among university faculty regarding technology support in this environment. Among the issues discussed, what interests me most are the GMN faculty's perceptions of their demand for training opportunities versus the actual technology assistance they receive, and how such perceptions compare to those of the technology staff involved. In an effort to identify possible discrepancies between the needs and the actual support, which might have a substantial impact on distance course development, I would like to investigate the above-mentioned issues through qualitative methods (e.g., interviews, observations, etc.).

3. For the EDTEC770 class, I originally planned to do a cross-cultural study on the GMN. However, I found it extremely difficult, as cross-cultural issues add to the depth and magnitude of the complexities in evaluation. I could still use the methodology from my first study and run some stats to test the differences, but I don't know whether that is a good idea. I really need some help.

P.S.
Lee, C., Clausen, J., & Ma, W. (2007). Learner perceptions of interaction in the Global Media Network. In C. Crawford et al. (Eds.), Proceedings of Society for Information Technology and Teacher Education International Conference 2007 (pp. 1800-1806).

Abstract:
Interaction has critical impacts on the effectiveness of distance learning. Understanding interaction through learners' perceptions can assist learner-oriented learning as well as enhance instructional design. This paper focuses on the online interaction perceived by the learners in the Global Media Network (GMN) project, an international synchronous distance learning environment initiated in 2005 at Ball State University, Indiana. The primary data collection method is an online survey focusing on four types of online interaction. The findings reveal the learning preferences and perspectives of the GMN participants toward different types of online interaction, which could be used for further improvement of GMN course development. Because of the small sample size in this study, further studies should include a much larger number of participants.

Simulations and e-learning: An Epic whitepaper

Clark, D. (2006). Simulations and e-learning: An Epic whitepaper. Retrieved October 1, 2007, from http://www.epic.co.uk/content/resources/white_papers/sims.htm

This Epic white paper argues that although simulations provide experiential learning that is quick, cheap, and safe, they are still rare in e-learning. This is due not only to the limitations of current web-based technology but, most importantly, to our limited expectations and imagination. The paper details seven key types of simulation, explores the design implications of producing simulations, and gives metrics for their evaluation, detailing numerous case studies along the way. The author argues that if e-learning is to mature and motivate, it must embrace simulations as a potent and flexible tool for experiential learning. I agree with the author that the chief limitation to more widespread use of simulations is not technological or cost-related but a limitation of imagination, because “the field of e-learning simulations is much wider and more diverse than many people think.”

Monday, September 17, 2007

Journal Review #4: focus groups and program evaluation

Stitt, B. G., Leone, M., & Jennings-Clawson, H. (1998). Focus groups and evaluation of criminal justice programs. Journal of Criminal Justice Education, 9, 71-80. Retrieved September 30, 2006, from http://ejournals.ebsco.com/direct.asp?ArticleID=UL22137L0684025X

In recent years, higher education had come under attack from stakeholders demanding documentation of accountability, and assessment producing empirical evidence of accountability was becoming increasingly important. In light of this demand, the authors came up with an innovative idea for use in evaluating criminal justice programs.

After summarizing some concerns related to the measurement of accountability and the program review process, the authors introduced and discussed the use of focus groups as an assessment tool, presenting the advantages and disadvantages of focus groups as applied to the evaluation of criminal justice programs. The authors conducted a self-study involving focus groups while the department was undergoing an external program review. A number of the focus groups’ suggestions for program change were supported by the findings of the criminal justice educators who visited the campus and evaluated the program; thus, the reviewers’ suggestions were corroborated by students’ observations and opinions. This validation proved extremely valuable, especially in increasing resources for the criminal justice program.

Based on the findings of the study, the authors concluded that focus groups could be a valuable, viable, cost-effective tool in program evaluation, and they strongly suggested that academic departments use the focus group method in their evaluation processes.

Journal Review #3: Process/Outcome Evaluation Model

Kovalik, C. L., & Dalton, D. W. (1999). The process/outcome evaluation model: A conceptual framework for assessment. Journal of Educational Technology Systems, 27, 183-194.

The adoption of alternative pedagogical philosophies in the classroom had led to an increased use of technology to expand and enhance authentic, contextual learning environments. Correspondingly, these new approaches had also led to growing dissatisfaction with existing methodologies for evaluating knowledge. Based on the premise that evaluation strategies should reflect the full range of learning experiences, the authors proposed the Process/Outcome Evaluation Model (POEM) to guide the development of more holistic evaluations of both the learning process and the resultant outcomes of that process.

The POEM framework comprised an evaluation matrix containing four categories of measurement: hard-outcome, hard-process, soft-outcome, and soft-process. These four components employed multiple evaluation techniques and strategies, resulting in a composite assessment of the totality of a learning experience that examined both the learning process and the learning outcome. The POEM expanded and integrated existing evaluation models by providing tools that could help decode, interpret, and assess not only what was learned, but also how the learning occurred.
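
(To make the two-by-two structure concrete, here is a minimal sketch of the matrix as a simple Python lookup table. The example measures in each cell are my own illustrative guesses, not taken from Kovalik and Dalton's paper.)

# A sketch of the POEM matrix; the example measures are illustrative assumptions only.
POEM_MATRIX = {
    ("hard", "outcome"): "objective measures of what was learned (e.g., test scores)",
    ("hard", "process"): "objective traces of how learning occurred (e.g., activity logs)",
    ("soft", "outcome"): "subjective judgments of what was learned (e.g., portfolio reviews)",
    ("soft", "process"): "subjective accounts of how learning occurred (e.g., interviews)",
}

for (rigor, focus), example in POEM_MATRIX.items():
    print(f"{rigor}-{focus}: {example}")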

The authors suggested that the POEM should be viewed as a continuum: reliability and predictive validity increased as evaluation strategies moved from “soft” to “hard” categories. The POEM stressed equilibrium between objective/quantifiable and subjective/qualitative evaluation approaches. The value of the model was its depiction of a holistic framework for evaluation.

Journal Review #2: Evaluating public web info

Judd, V. C., Farrow, L. I., & Tims, B. J. (2006). Evaluating public web site information: A process and an instrument. Reference Services Review, 34, 12-32. Retrieved September 22, 2006, from http://www.emeraldinsight.com/0090-7324.htm

In an effort to find an evaluation instrument for undergraduate students to use in evaluating public web sites, the authors analyzed a variety of instruments discovered through an intensive literature review, developed an appropriate instrument, and applied it in workshops with students.

Although a number of diverse evaluation instruments from the literature and from web-based sources were examined, none was deemed suitable for students to use. Based on the literature review and analysis, the authors asserted that a web evaluation instrument should (1) focus exclusively on the information aspect of a web site, (2) have some basis in theory or be based on an accepted model, (3) be parsimonious, (4) be quantitative, with both absolute and relative measures, and (5) indicate whether the information should be accepted or rejected. Following these criteria, the authors created their own instrument with the goal of focusing on the process rather than the outcome.

The instrument, which included five ten-point scaled questions, was tested through three trials, and a group of students was taught how to use it through two workshops. Based on their assessment of the learning environment, the authors concluded that the new instrument met the students’ needs and suggested that the focus of an instrument should be on evaluation as a process.
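
(Out of curiosity, here is a minimal sketch in Python of how a five-item, ten-point instrument could yield both absolute and relative scores plus an accept/reject decision, per criteria (4) and (5) above. The item labels and the cutoff are hypothetical; the paper's actual questions and threshold are not reproduced here.)

# Hypothetical item labels and cutoff -- illustrative only, not the authors' instrument.
ITEMS = ("accuracy", "authority", "objectivity", "currency", "coverage")
CUTOFF = 0.6  # assumed relative-score threshold for accepting the information

def score_site(ratings):
    # ratings: dict mapping each item to a 1-10 rating
    absolute = sum(ratings[item] for item in ITEMS)  # absolute score, out of 50
    relative = absolute / (10 * len(ITEMS))          # relative score, in [0, 1]
    return absolute, relative, relative >= CUTOFF

absolute, relative, accept = score_site(
    {"accuracy": 8, "authority": 7, "objectivity": 6, "currency": 9, "coverage": 5}
)
print(absolute, relative, "accept" if accept else "reject")  # 35 0.7 accept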

Journal Review #1: Web Usability

Cook, R. S., Rule, S., & Mariger, H. (2003). Parents’ evaluation of the usability of a web site on recommended practices. Topics in Early Childhood Special Education, 23, 19-27. Retrieved October 8, 2006, from http://ejournals.ebsco.com/direct.asp?ArticleID=4CC788309A0DD582C28A

Based on the need for an accessible, practical, and parent-friendly curriculum on the Internet, the authors designed the Strategies for Preschool Intervention in Everyday Settings for Parents (SPIES) web site, which was intended to provide practical information about recommended practices, such as activity-based or embedded instruction, to families whose young children had disabilities or were at developmental risk. To determine whether the site was an effective tool for disseminating information about recommended practices to families, the authors launched a parents' evaluation of the site.

After three research questions concerning the site’s accessibility, practicality, and appropriateness were posed, twenty-one parents from ten states were recruited to conduct an online evaluation of the SPIES site, judging its content and ease of use. The data were analyzed using quantitative and qualitative methods.

Results indicated that the Internet could be an effective medium for disseminating information about recommended practices to families. The parent group found the Web site to be helpful, useful, and responsive to their needs and time constraints. Although they said that textual information was easy to access, some parents did note that they experienced technological problems in downloading video.

From this evaluation, the authors also suggested that a web site providing procedural guidance could complement the provision of direct early intervention services, with content presented in multiple modes and for different learners.