Tuesday, February 3, 2009

Situated Evaluation

Bruce, B. C., & Rubin, A. (1993). Electronic Quills: A situated evaluation of using computers for writing in classrooms. Hillsdale, NJ: Lawrence Erlbaum.

1. Problems with the standard evaluation paradigms (p. 190)

- does not support showing why changes occur, how changes are different across settings, or how they relate to changes in the innovation (p. 197)

- The standard evaluation paradigm focuses on the innovation per se: on its properties, in the case of formative evaluation, or on its effects, in the case of summative evaluation. One consequence of technocentrism is that the process of change is conceptualized as a function of the innovation alone, or else it is effectively ignored.

  • most evaluations do not identify the reasons for the observed phenomena (do not say how the innovation can be improved, nor what aspect of it produced the measured effects)
  • Not being able to account for why changes occur means that it is questionable to generalize to other settings in which the innovation might be used
  • The development process often continues after the evaluation, so that most evaluations are effectively of innovations that no longer exist (without knowing more about the situation and process of use one cannot say whether initial results are still valid for the changed innovation)

2. Idealization v. Realization (p. 198)

  • Idealization: what the developers of an innovation intend (they see the innovation set in an idealized context and used in an idealized way - vision of the changed social system)
  • Realization: what happens when the innovation is realized in a particular social setting (the social system may or may not change at all, and if it does change, it may not do so in accord with the developers' goals)

3. Situated evaluation (p. 177)

  • SE analyzes the varieties of use of the innovation across contexts and emphasizes the unique characteristics of each situation in which the innovation is used
  • Focus: innovation-in-use
  • Purposes: to understand the different ways in which the innovation is realized; to examine the various realizations of an innovation in different settings (p. 204)
    • Explain why the innovation was used the way it was
    • Predict the results of using the innovation
    • Identify dimensions of similarity and difference among settings
    • Improve the use of the innovation
    • Improve the technology
    • Identify variables for later evaluation
  • Procedure: SE cannot be proceduralized. It's a process of discovering relationships (p. 205)
    • The idealization of innovation
      • each innovation emerges from a theory (learning & teaching), articulated to varying degrees in documents about the innovation
      • inclusion of new technologies (tools & artifacts, prescriptions for use, support system)
    • The settings in which the innovation appears
      • cultural context (national, SES, linguistic diversity)
      • institutional context (the school & the classroom)
      • pedagogical context (academic goals of schooling, teacher's instructional role, students' roles in promoting their own learning, the nature of academic tasks, the social environment as the context for individual learning)
    • The realization of innovation
      • Understanding the reasons for change
      • Difference across settings
      • Change in the innovation
      • (research method: field notes, documents, interviews, videotapes, observation - essential to doing a situated evaluation, p. 210)

4. Comparison w/ traditional frameworks for evaluation (p. 213)

  • Formative
    • Focus: Innovation
    • Audience: Developer
    • Purpose: Improve the innovation
    • Variability of setting: Minimized to highlight technology
    • Measurement tools: Observation/interview/survey
    • Time of assessment: During development
    • Results: List of changes to the technology
  • Summative
    • Focus: Effects of the innovation
    • Audience: User
    • Purpose: Decide whether to adopt innovation
    • Variability of setting: Controlled by balanced design or random sampling
    • Measurement tools: Experiment
    • Time of assessment: After initial development
    • Results: Table of measures contrasting groups
  • Situated
    • Focus: Social practices
    • Audience: User (but also developer)
    • Purpose: Learn how the innovation is used
    • Variability of setting: Needed for contrastive analysis
    • Measurement tools: Observation/interview
    • Time of assessment: During and after development
    • Results: Ethnography
