EDER 673 Instructional Design
© Eugene Kowch,
Assistant Professor, Faculty of Education
Sub Topic 2: Formative and Summative Assessments
(Adapted from Dick & Carey, 1996: The Systematic Design of Instruction)
Subject matter experts are key: they can identify performance and task
inconsistencies, and can provide guidance on how to detect and assess problems
and give supportive or scaffolded feedback to the learner. One-to-one learner
assessment can be done to give the designer ideas about what is working and
what needs improvement in the instructional design. There are three sequential
stages to a rigorous (reeeealy rigorous) Dick and Carey formative evaluation:
1. 1 to 1 evaluation (designer and learner - it's about the learner)
2. Small (focus) group evaluation (about the instruction)
3. Field Trial (instructor and group)
The one-to-one evaluation stage must consider the learner's
time, attitude and skill. Feedback must be constructive and mutual. During
this stage, the designer works individually with three or more learners who
are representative of the target population. There are three criteria by which
formative evaluation occurs:
1. Clarity: Is the message, or what is being presented,
clear to the individual target learners?
2. Impact: What is the impact of the instruction on individual learner's
attitudes and achievement of the objectives and goals?
3. Feasibility: How feasible is the instruction given the available resources?
There are feasibility issues in the one-to-one evaluation stage. Learner
capability, the instructional medium, and the instructional environment can
impact this work. Ask yourself the following questions before you consider
one-to-one formative evaluation (feasibility):
1. How will the maturity, independence, and motivation
of the learner influence the general amount of time required to complete
the instruction?
2. Can learners such as this one operate or easily learn to operate any
specialized equipment required?
3. Is the learner comfortable in this environment?
4. Is the cost of delivering this instruction reasonable given the time required?
To begin, the designer picks at least one learner from the target population
who is slightly above average in ability, one who is average, and one who
is below average, to work with individually. By engaging in this process, the
designer will find many improvements to material and design quality. The
following steps outline the one-to-one evaluation stage of an instructional
design. Consider which learning theory and paradigm is represented by Dick
and Carey's methodology ;-)
- Select the learners with care (above).
- Encourage learners to be relaxed, and to talk about the materials,
pace, technology and content.
- A positive rapport with the learner is essential.
- Have the learner take tests from within the material.
- This should be an open, interactive process. Ask yourself:
what parts of the design work here, and what parts do not? You are essentially
deconstructing your instructional design with these learners, seeing what
works and what could be improved. This takes brass monkeys and patience. If
you caught that, congratulations: you are reading deeply ;-)
- Provide assessments and questionnaires after learners go
through each step or item in the assessment. This information is very valuable
in the design revision stages.
- Evaluate the instrument: test directions should also
be formatively evaluated; this is the best time to see if the directions
are clear enough.
- Evaluate the usability of the instrument. Check the learning time: is it
what you expected? Is the sequence right?
- Check the outcomes of the one-to-one trials, including:
- the observability of each of the elements to be judged
- the clarity of the manner in which they are paraphrased
- the efficiency of the sequencing order
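The learner-selection step above (one slightly-above-average, one average, and one below-average learner) can be sketched as a simple ranked pick. This is an illustrative toy in Python, not part of Dick and Carey's text; the roster names, pretest scores, and the use of a single score as an ability proxy are all hypothetical, and real selection would draw on richer learner data.

```python
# Toy sketch: pick one below-average, one average, and one above-average
# learner (by pretest score) for one-to-one formative evaluation.
# The roster and scores below are hypothetical illustrations.

def pick_one_to_one_learners(roster):
    """roster: list of (name, pretest_score) tuples from the target population.
    Returns the lowest, median, and highest scorer, in that order."""
    ranked = sorted(roster, key=lambda pair: pair[1])
    below = ranked[0]                   # lowest scorer: below average
    average = ranked[len(ranked) // 2]  # median scorer: roughly average
    above = ranked[-1]                  # highest scorer: above average
    return [below, average, above]

roster = [("Ana", 42), ("Ben", 71), ("Caro", 55), ("Dee", 88), ("Eli", 63)]
print(pick_one_to_one_learners(roster))
# → [('Ana', 42), ('Eli', 63), ('Dee', 88)]
```

The point of the sketch is only that all three ability bands are represented, as the stage requires; any stratified pick would do.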
Small Group ("Focus Group") Evaluation (stage)
Ask: is the instruction
- proven to contain appropriate vocabulary, language complexity,
examples and illustrations for the learner?
- yielding reasonable learner attitudes and achievement, or should it be
revised with the objective of improving learner attitudes in subsequent trials?
- appearing feasible for the learners, resources and setting?
Small group evaluation can occur where the whole group is given (similar)
feedback on (grouped) learner outcome problems (or successes). The focus is
now not the learner, as it was in the last stage; the focus here is the
instruction. At this stage you are evaluating more than the process of
learning from your instructional design: you are looking for the
effectiveness of the instruction.
There are two primary purposes for the small group evaluation:
- The first is to determine the effectiveness of changes made
following the one-to-one evaluation, and to identify any remaining problems
that learners may have.
- The second is to determine whether learners can use the instruction
without the instructor.
First, gather information about the feasibility of the instruction:
- check the time required for learners to complete both the instruction
and the required performance measures,
- check the costs and viability of delivering the instruction
in the intended format and environment, and
- check the attitudes of those implementing or managing the instruction.
The following steps are recommended for Small Group Evaluation of the design:
- Select the learners: now that the materials have been revised
from the one-to-one evaluation, select 8 to 20 people who represent the
target population.
- The evaluator explains that materials are in the formative stage
of development and that feedback is needed on how to improve the materials.
- The instructor or evaluator administers the material in the manner
in which it is intended to be used in its final form, with as little instructor
intervention as possible.
- Intervene only when technical failure occurs.
- Note each learner's difficulties for the revision stage.
- Administer an attitude questionnaire:
- Was the instruction interesting?
- Did you understand what you were supposed to learn?
- Were the materials directly related to the objectives?
- Were sufficient practice exercises included?
- Were the practice exercises relevant?
- Did the tests really measure your performance on the objectives?
(Did learners understand the objectives? This double-checks the one-to-one stage.)
- Did you receive sufficient feedback on your test results?
- Did you feel confident when answering questions on the tests?
The result of the small group evaluation is refined instruction that should
be effective for most target learners in the instructional setting.
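One way to move from the attitude questionnaire above to revision decisions is to tally responses per question and flag low-scoring items for the revision stage. A minimal sketch in Python, assuming (hypothetically, this is not specified by Dick and Carey) that each learner rates each question on a 1-5 agreement scale and that items averaging below an arbitrary 3.5 cut-off get flagged:

```python
# Toy tally for small-group attitude questionnaires.
# Assumes each learner rates each question on a 1-5 agreement scale;
# the 3.5 revision threshold is an arbitrary, hypothetical cut-off.

def flag_questions_for_revision(responses, threshold=3.5):
    """responses: dict mapping question text -> list of 1-5 ratings.
    Returns (question, mean rating) pairs whose mean falls below the threshold."""
    flagged = []
    for question, ratings in responses.items():
        mean = sum(ratings) / len(ratings)
        if mean < threshold:
            flagged.append((question, round(mean, 2)))
    return flagged

responses = {
    "Was the instruction interesting?": [5, 4, 4, 5],
    "Were sufficient practice exercises included?": [2, 3, 2, 3],
}
print(flag_questions_for_revision(responses))
# → [('Were sufficient practice exercises included?', 2.5)]
```

A flagged question points at an area of the design to revisit (here, practice exercises); the qualitative notes on each learner's difficulties supply the "why".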
Field Trial Evaluation
Essentially, the instructor attempts a full-blown instructional event here,
with the twice modified course design and materials, in a realistic setting.
This is like a simulation, where typical learners learn with their instructor
in a typical setting. The desired levels of learning achievement should be
realized and tested here in the context of effective instruction.
Formative evaluation is about the process of collecting data to improve
the effectiveness of instruction (and your design). By contrast, summative
evaluation is the process of collecting data to make decisions about the
continued use of instruction (and your design).
The summative evaluation has two main phases:
1. Expert Judgment: deciding whether an instruction candidate (human,
machine or a combination) has the potential to meet the organization's defined
instructional needs. There are five activities undertaken to decide if
the candidate instruction is promising:
- Evaluate the congruence between the organization's instructional
needs and the candidate instruction
- Evaluate the completeness and accuracy of the candidate instruction
- Evaluate the instructional strategy contained in the candidate instruction
- Evaluate the utility of the instruction
- Determine the current users' satisfaction with the instruction
2. Field Trial: documenting the effectiveness of promising instruction
with target group members in the instructional setting. The field trial has
two parts:
- Outcomes analysis: determine the impact of instruction on
the learner's skills, on the job, and on the organization
- Management analysis: assess instructor and supervisor attitudes
related to learner performance, implementation feasibility, organization culture
match, and costs.
A summary of Summative Evaluation Phases and Specific Decisions:
Expert Judgment Phase: do the materials have the potential for meeting this
organization's needs?
- Congruence Analysis: Are the needs and goals of the organization congruent
with those in the instruction?
- Content Analysis: Are the materials complete and accurate?
- Design Analysis: Are the principles of learning, instruction,
and motivation clearly evident in the materials?
- Feasibility Analysis: Are the materials convenient, cost-effective
and satisfactory for current users?
Field Trial Phase: are the materials effective with target learners in the
prescribed setting?
- Impact on the Learner: Are the achievement and motivation
levels of the learners satisfactory following instruction?
- Impact on the Job: Are learners able to transfer the information,
skills and attitudes from the instructional setting to the job setting or
to subsequent units of related instruction?
- Impact on the Organization: Are learners' changed behaviors
(performance, attitudes) making positive differences in the achievement
of the organization's mission and goals?
- Management Analysis:
1. Are instructor and manager attitudes satisfactory?
2. Are recommended implementation procedures feasible?
3. Are costs related to time, personnel, equipment and resources reasonable?
Comparing Formative and Summative Evaluation:
Purpose:
- Formative: locate weaknesses in instruction in order to revise it.
- Summative: document strengths and weaknesses in instruction
in order to decide whether to maintain or adopt it.
Phases or Stages:
- Formative: one-to-one, small group, field trial.
- Summative: expert judgment, field trial.
Instructional Development History:
- Formative: systematically designed in-house and tailored to
the needs of the organization.
- Summative: produced in-house or elsewhere, not necessarily following
a systems approach.
Materials:
- Formative: one set of materials.
- Summative: one set of materials or several competing sets.
Position of Evaluator:
- Formative: member of the design and development team.
- Summative: typically an external evaluator.
Outcome:
- Formative: a prescription for revising instruction.
- Summative: a report documenting the design, procedures, results,
recommendations and rationale.