EDER 673 Instructional Design
© Eugene Kowch, Assistant Professor, Faculty of Education

Sub Topic 2: Formative and Summative Assessments

(Adapted from Dick & Carey, 1996: The Systematic Design of Instruction, 4th ed.)




Formative Assessments


Education Context:
Subject matter experts are key: they can identify performance and task inconsistencies, and can provide guidance on how to detect and assess learner difficulties and deliver supportive, scaffolded feedback. One-to-one learner assessment gives the designer ideas about what is working and what needs improvement in the instructional design. There are three sequential stages to a rigorous (really rigorous) Dick and Carey formative evaluation:

1. One-to-one evaluation (designer and learner - it's about the learner)
2. Small (focus) group evaluation (about the instruction)
3. Field trial (instructor and group)

The one-to-one evaluation stage must consider the learner's time, attitude, and skill. Feedback must be constructive and mutual. During this stage, the designer works individually with three or more learners who are representative of the target population. Formative evaluation at this stage proceeds against three criteria:

1. Clarity: Is the message, or what is being presented, clear to the individual target learners?
2. Impact: What is the impact of the instruction on individual learners' attitudes and their achievement of the objectives and goals?
3. Feasibility: How feasible is the instruction given the available resources (time/context)?

There are feasibility issues in the one-to-one evaluation stage. Learner capability, the instructional medium, and the instructional environment can all affect this work. Ask yourself the following feasibility questions before you commit to one-to-one formative evaluation (a back-of-envelope cost sketch follows the list):

1. How will the maturity, independence, and motivation of the learner influence the general amount of time required to complete the instruction?
2. Can learners such as this one operate or easily learn to operate any specialized equipment required?
3. Is the learner comfortable in this environment?
4. Is the cost of delivering this instruction reasonable given the time requirements?
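
To make feasibility question 4 concrete, a quick back-of-envelope calculation is often enough. Here is a minimal sketch in Python; all of the figures (number of learners, session length, hourly rate) are invented for illustration and are not part of the Dick and Carey model.

```python
# Back-of-envelope cost check for one-to-one formative evaluation.
# Every figure below is an assumption for illustration only.
learners = 3                  # Dick and Carey suggest at least three learners
session_minutes = 90          # assumed time per session, including debrief
designer_rate_per_hour = 60   # assumed fully loaded cost, in dollars

total_hours = learners * session_minutes / 60
total_cost = total_hours * designer_rate_per_hour
print(f"{total_hours:.1f} designer-hours, roughly ${total_cost:.0f} for the stage")
```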

To begin, the designer picks at least one learner from the target population who is slightly above average in ability, one who is average, and one who is below average, and works with each individually. By engaging in this process, the designer will find many improvements to material and design quality. The following steps outline the one-to-one evaluation stage of an instructional design (a small record-keeping sketch follows the list). Consider which learning theory and paradigm is represented by Dick and Carey's methodology ;-)

  • Select the learners with care (above).
  • Encourage learners to be relaxed and to talk about the materials, pace, technology, and content.
  • A positive rapport with the learner is essential.
  • Have the learner take tests from within the material.
  • This should be an open, interactive process. Ask yourself: what parts of the design work here, and what parts do not? You are essentially deconstructing your instructional design with these learners, seeing what works and what could be improved. This takes brass monkeys and patience. If you caught that, congratulations - you are reading deeply ;-)
  • Provide assessments and questionnaires after learners work through each step or item in the assessment. This information is very valuable in the design revision stages.
  • Evaluate the instrument: Test directions should also be formatively evaluated - this is the best time to see if the directions are clear enough.
  • Evaluate the usability of the instrument. Check:
    1. the observability of each of the elements to be judged
    2. the clarity of the manner in which they are paraphrased
    3. the efficiency of the sequencing order
  • Check the learning time. Is it what you expected? Is the sequence right? 
  • Check the outcomes of the one-to-one trials - is the instruction:
    1. proven to contain appropriate vocabulary, language complexity, examples, and illustrations for the learner?
    2. yielding reasonable learner attitudes and achievement, or revised with the objective of improving learner attitudes in subsequent performance trials?
    3. appearing feasible for the learners, resources, and setting?
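
The record keeping implied by these steps can be sketched quite simply. Below is a minimal illustration in Python that logs one-to-one observations against the three criteria (clarity, impact, feasibility) and checks learning time against expectations; the field names and numbers are my own invention, not prescribed by Dick and Carey.

```python
from dataclasses import dataclass, field

# Hypothetical record for one learner's one-to-one session.
@dataclass
class OneToOneRecord:
    ability: str                                           # "above average", "average", "below average"
    clarity_notes: list = field(default_factory=list)      # unclear wording, confusing test directions
    impact_notes: list = field(default_factory=list)       # attitude and achievement observations
    feasibility_notes: list = field(default_factory=list)  # time, equipment, setting issues
    minutes_to_complete: int = 0

# One record per learner: one above average, one average, one below average.
records = [
    OneToOneRecord("above average", minutes_to_complete=40),
    OneToOneRecord("average", minutes_to_complete=55),
    OneToOneRecord("below average", minutes_to_complete=80),
]
records[1].clarity_notes.append("Test directions for unit 2 were ambiguous")

# Check learning time against what the design budgeted.
expected_minutes = 60
for r in records:
    if r.minutes_to_complete > expected_minutes:
        print(f"{r.ability}: over budget by {r.minutes_to_complete - expected_minutes} min")
```
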
Small Group ("Focus Group") Evaluation (stage)

In small group evaluation, the whole group is given (similar) feedback on (grouped) learner outcome problems or successes. The focus is no longer the learner, as it was in the last stage; the focus here is the instruction.

There are two primary purposes for the small group evaluation:
  1. The first is to determine the effectiveness of changes made following the one-to-one evaluation, and to identify any remaining problems that learners may have.
  2. The second is to determine if learners can use the instruction without the instructor.
At this stage you are evaluating more than the process of learning from your instructional design - you are looking for the effectiveness of the instruction. First, gather information about the feasibility of the instruction:
  1. check the time required for learners to complete both the instruction and the required performance measures
  2. check the costs and viability of delivering the instruction in the intended format and environment, and
  3. check the attitudes of those implementing or managing the instruction.
The following steps are recommended for Small Group Evaluation of the design:

  • Select the learners - now that the materials have been revised from the one-to-one evaluation, select 8 to 20 people who represent the target learner population.
  • The evaluator explains that the materials are in the formative stage of development and that feedback is needed on how to improve them.
  • The instructor or evaluator administers the material in the manner in which it is intended to be used in its final form - with as little instructor intervention as possible.
  • Intervene only when technical failure occurs.
  • Note each learner's difficulties for the revision stage.
  • Administer an attitude questionnaire (a tallying sketch follows the list):

    1. Was the instruction interesting?
    2. Did you understand what you were supposed to learn?
    3. Were the materials directly related to the objectives?
    4. Were sufficient practice exercises included?
    5. Were the practice exercises relevant?
    6. Did the tests really measure your performance on the objectives? (This double-checks whether they understood the objectives, following up on the one-to-one stage.)
    7. Did you receive sufficient feedback on your test results?
    8. Did you feel confident when answering questions on the tests?
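
To turn questionnaire answers into revision decisions, the responses can be tallied per question so that weak items stand out. The sketch below assumes a simple yes/no coding of the questions above and an arbitrary 70% threshold; both assumptions are mine, not part of the model.

```python
# Hypothetical yes/no answers from eight small-group learners (True = "yes").
# The data are invented for illustration.
responses = {
    "Q1 Was the instruction interesting?":              [True, True, False, True, True, True, False, True],
    "Q4 Were sufficient practice exercises included?":  [True, False, False, True, False, True, False, False],
    "Q6 Did the tests measure the objectives?":         [True, True, True, True, False, True, True, True],
}

THRESHOLD = 0.70  # assumed cutoff: flag items where fewer than 70% said "yes"
for question, answers in responses.items():
    share_yes = sum(answers) / len(answers)
    verdict = "REVISE" if share_yes < THRESHOLD else "ok"
    print(f"{question} {share_yes:.0%} yes -> {verdict}")
```
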
The result of the small group evaluation is refined instruction that should be effective for most target learners in the instructional setting.


Field Trial Evaluation


Essentially, the instructor attempts a full-blown instructional event here, with the twice-revised course design and materials, in a realistic setting. This is like a simulation, where typical learners learn with their instructor in a typical setting. The desired levels of learning achievement should be realized, and tested here in the context of effective instruction.
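
Because the field trial is where the desired achievement levels are actually tested, it helps to state the criterion up front and check the trial data against it. A minimal sketch, assuming an invented 80% per-learner mastery cutoff and an invented target that 80% of learners reach it:

```python
# Invented field-trial scores (percent correct on the performance measures).
scores = [92, 85, 78, 88, 95, 70, 83, 90, 81, 76]

MASTERY_CUTOFF = 80   # assumed score a learner needs to count as having mastered
TARGET_RATE = 0.80    # assumed share of learners who should reach mastery

mastery_rate = sum(s >= MASTERY_CUTOFF for s in scores) / len(scores)
print(f"Mastery rate: {mastery_rate:.0%}")
if mastery_rate >= TARGET_RATE:
    print("Desired achievement level realized")
else:
    print("Below target - revise before moving to summative decisions")
```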



Summative Assessments



Formative evaluation is the process of collecting data to improve the effectiveness of instruction (and your design). By contrast, summative evaluation is the process of collecting data to make decisions about the continued use of instruction (and your design).

The summative evaluation has 2 main phases:

1. Expert Judgment: decide whether an instruction candidate (human, machine, or a combination) has the potential to meet the organization's defined instructional needs. Five activities are undertaken to decide if the candidate instruction is promising (a checklist sketch follows this outline):

  1. Evaluate the congruence between the organization's instructional needs and candidate instruction
  2. Evaluate the completeness and accuracy of the candidate instruction
  3. Evaluate the instructional strategy contained in the candidate instruction
  4. Evaluate the utility of the instruction
  5. Determine the current user's satisfaction with the instruction
2. Field Trial: document the effectiveness of promising instruction with target group members in the instructional setting. The field trial has two stages:

  1. Outcomes analysis: determine the impact of instruction on the learner's skills, on the job, and on the organization
  2. Management analysis: assess instructor and supervisor attitudes related to learner performance, implementation feasibility, organizational culture match, and costs.
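
One way to operationalize the expert judgment phase is as a simple pass/fail checklist over the five activities, with the rule that a candidate only advances to the field trial if every check passes. The sketch below mirrors the activity list above, but the boolean verdicts and the all-or-nothing rule are my assumptions for illustration.

```python
# Hypothetical expert-judgment verdicts for one instruction candidate.
expert_judgment = {
    "congruence with organizational needs": True,
    "completeness and accuracy of content": True,
    "sound instructional strategy": False,
    "utility of the instruction": True,
    "current-user satisfaction": True,
}

if all(expert_judgment.values()):
    print("Candidate is promising - proceed to the field trial")
else:
    failed = [name for name, ok in expert_judgment.items() if not ok]
    print("Hold for revision; failed checks:", ", ".join(failed))
```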
A summary of the summative evaluation phases and the specific decisions each supports:

Expert Judgment Phase

Overall decision: Do the materials have the potential for meeting this organization's needs?

Specific decisions:
  1. Congruence analysis: Are the needs and goals of the organization congruent with those in the instruction?
  2. Content analysis: Are the materials complete, accurate, and current?
  3. Design analysis: Are the principles of learning, instruction, and motivation clearly evident in the materials?
  4. Feasibility analysis: Are the materials convenient, cost-effective, and satisfactory for current users?

Field Trial Phase

Overall decision: Are the materials effective with target learners in the prescribed setting?

Specific decisions:
  1. Outcomes analysis:
     Impact on the learner: Are the achievement and motivation levels of the learners satisfactory following instruction?
     Impact on the job: Are learners able to transfer the information, skills, and attitudes from the instructional setting to the job setting or to subsequent units of related instruction?
     Impact on the organization: Are learners' changed behaviors (performance, attitudes) making positive differences in the achievement of the organization's mission and goals?
  2. Management analysis:
     Are instructor and manager attitudes satisfactory?
     Are recommended implementation procedures feasible?
     Are costs related to time, personnel, equipment, and resources reasonable?



Comparing Formative and Summative Evaluation:

Purpose
  Formative: Locate weaknesses in instruction in order to revise it.
  Summative: Document strengths and weaknesses in instruction in order to decide whether to maintain or adopt it.

Phases or Stages
  Formative: One-to-one, small group, field trial.
  Summative: Expert judgment, field trial.

Instructional Development History
  Formative: Systematically designed in-house and tailored to the needs of the organization.
  Summative: Produced in-house or elsewhere, not necessarily following a systems approach.

Materials
  Formative: One set of materials.
  Summative: One set of materials or several competing sets.

Position of Evaluator
  Formative: Member of the design and development team.
  Summative: Typically an external evaluator.

Outcomes
  Formative: A prescription for revising instruction.
  Summative: A report documenting the design, procedures, results, recommendations, and rationale.






