Suggested Readings

  • Chapter 10, Designing and Conducting Formative Evaluations, from Dick and Carey.
  • Tessmer, M. (1997). Planning and conducting formative evaluations. London: Kogan Page.

Introduction

Now that you've developed your multimedia program, you may think that you're finally finished. Hooray! Done! On to the next project! Unfortunately, many projects end this way and never realize their true potential because they are never run through an evaluation process. Let's take one last look at the Dick and Carey model of instructional design. Notice where we are: developing your instructional materials is not the last step in their ID model. There are still a couple of boxes dealing with different types of evaluation.

Notice the dotted lines pointing back to earlier steps, along with the box that says "Revise Instruction". These indicate that the ID process is not completely linear. Each step can be revisited as feedback is received from the evaluation procedures. Indeed, that's the point of conducting a formative evaluation.

Formative evaluation involves the collection of data and information during the development process that can be used to improve the effectiveness of the instruction. Formative means that the instructional materials are in their formative, or early, stages, and evaluation refers to the process of gathering data to determine the strengths and weaknesses of the instruction. "Thus, formative evaluation is a judgment of the strengths and weaknesses of instruction in its developing stages, for the purposes of revising the instruction to improve its effectiveness and appeal" (Tessmer, 1997, p. 11). Any form of instruction that can still be revised is a potential candidate for formative evaluation. This includes paper-based instruction, computer-based instruction, live lectures, and workshops.

Evaluation is one of the most important steps in the design process, and yet it's usually the step that gets left out. Many instructional projects are never evaluated with experts or actual learners prior to their implementation. The trouble is that the designers and developers are often "too close" to the project and lose their ability to evaluate the effectiveness of what they are working on (they can't see the forest for the trees). For that reason, it's imperative that you bring in people from outside the process to help you determine whether you are truly hitting the mark, or whether some minor (or major) adjustments are in order to bring the instruction up to its full potential.

Formative evaluation procedures can be used throughout the design and development process. You probably have already formatively evaluated your materials in the process of developing them. You might lay out components on the screen, try them out, and then move them around if they are not exactly right. Or, you might write some instructional text, try it out to see if you think it addresses the objective, and then rewrite it to make a better match. At this point, though, it's time to seek outside help. Even trying out instructional materials with a single learner can point out obvious flaws and lead to revisions that can have a major impact on the effectiveness of the instruction. Think of it more as a problem-finding stage of the instructional design process, not as a separate process altogether.

The other type of evaluation is Summative Evaluation. Summative evaluation is conducted when the instructional materials are in their final form, and is used to verify the effectiveness of instructional materials with the target learners. The main purpose is usually to make decisions about the acquisition or continued use of certain instructional materials, or to determine if the instruction is better than some other form of instruction. We will not deal with summative evaluation in this course, but feel free to read Chapter 12 in the Dick and Carey book if you would like more information about the topic.

Overview of Formative Evaluation Procedures

Martin Tessmer, in his book Planning and Conducting Formative Evaluations, details the stages of the formative evaluation process. According to Tessmer, there are four stages of formative evaluation:

  1. Expert Review
  2. One-to-One
  3. Small Group
  4. Field Test

Each stage is carried out to accomplish different things and to progressively improve the instruction. During the evaluation, information is collected from experts and members of the target population. While you may collect performance data during some stages of the process, keep in mind that formative evaluation is not concerned with testing the learners, but with testing the instruction itself.

The tables in the following sections provide a rundown of each stage of formative evaluation.

Expert Review

In this stage, experts review the instruction with or without the evaluator present. These people can be content experts, technical experts, designers, or instructors.

Expert Review
What is the purpose of this evaluation type?
The expert review looks at the intrinsic aspects of the instruction. These include things like content accuracy and technical quality. The instruction is generally not evaluated in terms of learner performance or motivation.
When is this type of evaluation usually conducted?
Expert review is usually the first step in the evaluation process and should be conducted as early in the ID process as possible.
Who usually conducts this type of evaluation?
Instructional designer(s)
Who should participate in the evaluation?
One or more of the following experts "walks" through the material with or without the evaluator present:
  • Instructional design expert
  • Content or subject-matter expert
  • Technology expert
What planning strategies should the evaluator employ prior to conducting the evaluation?
Decide what information is needed from the review and prepare questions in advance. The following types of information are usually collected at this stage:
  • Content information - Completeness and accuracy of content
  • Teaching/Implementation information - Appeal to learners & teachers, ease of use and fit for learning environment
  • Technical information - A/V quality, media appropriateness
  • Design expertise - Quality of instructional strategies (Context, Components, Conditions, Message Display)
  • Testing expertise - Validity and reliability of assessment (tests)
Decide which experts can provide that information.
  • SME - Subject matter expert
  • Teachers and trainers
  • Subject sophisticate - Student who has completed similar instruction.
  • Instructional design expert - Besides yourself.
  • Production expert - Audio, video and graphic specialists
Choose a format.
  • Face-to-face (best kind)
  • Phone
  • Written review
Prepare the questions.
  • Questions should help to identify glaring mistakes, concerns, and general areas for improvement
  • Avoid biased questions
Design the data recording tools.
  • Use a data recording form that has prepared questions.
  • Leave room for spontaneous comments.
What procedure should be followed during the evaluation?
Prepare the expert for the review.
  • Explain the process
  • Introduce the instructional material
Manage the review.
  • Encourage responses
  • Ask elaborative as well as general questions about the instruction as a whole
What data should be collected?
Based on general comments as recorded by both the expert and the designer, the following types of data are usually collected during an expert review:
  • Content quality and clarity
  • Instructional design integrity
  • Technological feasibility
  • General attitudes about motivation and context
How should the data be analyzed?
Organize all information to help make revision decisions:
  • Summarize the expert's comments.
  • Reject comments that would lead to pointless or impossible revisions.
  • Compare responses from different experts, noting agreement and disagreement (see the sketch following this table).
What is the final product?
A "to do" list of all revisions to be made.
What are some of the special problems and concerns facing the evaluator(s)?
  • Unpaid experts can be uncooperative or uninterested.
  • SMEs often have a hard time explaining things.
  • The review doesn't provide performance data or learners' opinions.
  • Consultants can be costly.
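
If you collect written comments from more than one expert, even a short script can help you compare responses and spot agreement. Here is a minimal Python sketch, assuming you have transcribed each expert's comments and keyed them by the part of the instruction they refer to; the experts, locations, and comments shown are purely illustrative.

```python
# Minimal sketch: comparing expert review comments to spot agreement.
# All names and comments below are illustrative assumptions.
from collections import defaultdict

# Each expert's comments, keyed by the part of the instruction reviewed.
reviews = {
    "SME":               {"Lesson 2": "terminology is outdated",
                          "Quiz 1": "item 3 is ambiguous"},
    "ID expert":         {"Lesson 2": "terminology is outdated",
                          "Lesson 4": "no practice activity"},
    "Production expert": {"Lesson 4": "audio clipping in the narration"},
}

# Group comments by location so overlapping concerns stand out.
by_location = defaultdict(list)
for expert, comments in reviews.items():
    for location, comment in comments.items():
        by_location[location].append((expert, comment))

for location, notes in sorted(by_location.items()):
    flag = "multiple reviewers" if len(notes) > 1 else "single reviewer"
    print(f"{location} [{flag}]")
    for expert, comment in notes:
        print(f"  - {expert}: {comment}")
```

Locations flagged by multiple reviewers are strong candidates for the top of your "to do" list; single-reviewer comments still deserve a look, but weigh them against the expert's specialty.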

One-to-One

This is probably the most widely used form of formative evaluation. In this stage, one learner at a time reviews the instructional materials with the evaluator present. The evaluator observes how the learner uses the instruction, notes the learner's comments, and poses questions to the learner during and after the instruction.

One-to-One Evaluation
What is the purpose of this evaluation type?
The purpose of the one-to-one evaluation is to identify the following:
  • Content clarity
  • Clarity of directions
  • Completeness of instruction
  • Difficulty level
  • Quality
  • Typographical/grammatical errors
  • General motivational appeal
When is this type of evaluation usually conducted?
One-to-one evaluations are usually conducted after the expert review but before any other type of formative evaluation.
Who usually conducts this type of evaluation?
Instructional designer
Who should participate in the evaluation?
The evaluator "walks" through the material with a trial learner. If possible, this type of evaluation should be repeated with other trial learners representing different skill levels, genders, ethnicities, motivation levels, etc. within the target population.
What planning strategies should the evaluator employ prior to conducting the evaluation?

The most important planning strategy is simply determining the information that needs to be collected. The information will be either intrinsic information about the instructional material or information about the effects of the instruction. The general criterion for making this determination is how "rough" the instruction is at the point of the evaluation. The rougher it is, the more likely intrinsic information will be the most useful in informing future revisions. Some specific criteria for judging the "roughness" of an instructional unit:

  • The unit has not been reviewed by anyone yet.
  • More one-to-one evaluations or small group and field test evaluations will be conducted.
  • Either the learner population, the instructional content, or the instructional strategies are unfamiliar to the designers.
  • The performance measures are in need of revision.
If the thrust of the one-to-one is intrinsic (to gather information about the accuracy of content or the technical quality of the instruction), the following types of information may be appropriate for collection.

  • Is the instruction clear?
  • Are the directions clear?
  • Is the instruction complete?
  • Is the instruction too difficult or too easy?
  • Are the visual or aural qualities accurate?
  • Are there typographical or grammatical errors?
To assess learning-effects data (how well the instruction helped the learner accomplish the objectives), performance measures can be used in which the learner not only completes the measures but also critiques them.
What procedure should be followed during the evaluation?
  1. Prepare standard interview questions to ask the learner after each instructional activity.
  2. Design standard forms to record and store learners' reactions during instruction (a minimal sketch of one such form appears at the end of this section).
  3. Prior to the instruction, prepare the learner for the evaluation experience by explaining the goals of the instructional material and the learner's role in helping to improve it.
  4. Manage the evaluation using questions and data collection tools developed prior to the evaluation as a guide.
  5. Close the evaluation by administering any data collection instruments that have been developed for the procedure, including a debriefing section in which the learner is interviewed for any additional information that may not have found its way into the data collection instruments.
  6. Review and analyze the data to develop recommendations for improving the effectiveness of the instruction based on the learner's viewpoint.
  7. Revise the instruction if needed.
  8. Repeat the evaluation using different learners. It is recommended that three learners be evaluated to validate corrective actions stated in the recommendations.
What data should be collected?
  • Practice and posttest performance
  • General attitudes (from survey as well as interview questions)
  • Procedural problems
  • Time
  • Accuracy of material
  • Ease of use
How should the data be analyzed?
All data can be evaluated at a glance, with a list of potential revisions documented.
What is the final product?
A "to do" list of revisions.
What are some of the special problems and concerns facing the evaluator(s)?

Distant subjects - Some subjects may not be able to attend the one-to-one session for logistical reasons. It is suggested that these learners still be reached through other means. For example, written one-to-one questions can be inserted into the learning materials at logical breaking points in the instruction.

The silent learner - Some subjects will be reluctant to respond, often because they do not feel comfortable criticizing the work in the presence of its creators. This can be addressed by warming them up with initial conversation, by asking some easy questions up front, or by asking questions that put them in a position of authority. Another method is to deliberately insert some errors early in the instruction in an effort to elicit their responses.
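
For the standard recording forms mentioned in step 2 of the procedure, a paper sheet with prepared questions and room for spontaneous comments works fine, but if you prefer a digital log, the following Python sketch shows one possible structure. The field names and the sample entry are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of a digital form for recording learner reactions
# during a one-to-one evaluation. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Observation:
    activity: str         # which part of the instruction the learner was in
    minutes_spent: float  # time the learner spent on the activity
    learner_comment: str  # what the learner said, verbatim where possible
    evaluator_note: str   # problems observed (confusion, errors, hesitation)

@dataclass
class OneToOneSession:
    learner_id: str
    observations: list = field(default_factory=list)

    def log(self, activity, minutes_spent, learner_comment, evaluator_note):
        self.observations.append(
            Observation(activity, minutes_spent, learner_comment, evaluator_note))

# Example use during a session (made-up data):
session = OneToOneSession("learner-01")
session.log("Directions screen", 2.5,
            "I wasn't sure which button started the lesson.",
            "Hesitated about 30 seconds; start button label unclear.")
for obs in session.observations:
    print(f"{obs.activity} ({obs.minutes_spent} min): {obs.evaluator_note}")
```

Whatever form you use, the point is the same: capture the activity, the time, the learner's words, and your own observations in a consistent format so the data can be reviewed at a glance afterward.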

Small Group

In this stage, the evaluator tries out the instruction with a small group of learners and records their comments. Small group evaluations typically use students as the primary subjects, and focus on performance data to confirm previous revisions and generate new ones.

Small Group Evaluation
What is the purpose of this evaluation type?
The small group evaluation provides a "real world" evaluation setting of learner performance. It confirms successful aspects of instruction and offers suggestions for improvements to the implementation of the instruction and ease of administration.
When is this type of evaluation usually conducted?
Small group evaluation occurs prior to the field trial but may, unfortunately, take the place of the field trial depending upon funding and time constraints.
Who usually conducts this type of evaluation?
Instructional designer(s)
Who should participate in the evaluation?
The instructional material is administered to a group of 5-20 participants representing the target population. If possible, a representative teacher or facilitator from the target population will work closely with a member of the design team to administer the material.
What planning strategies should the evaluator employ prior to conducting the evaluation?
Planning strategies are addressed in the procedure section below. The only additional planning might include determining whether the selected learners possess the necessary prerequisite skills (this might be apparent from pretest performance).
What procedure should be followed during the evaluation?
  1. Prepare evaluation questions (pretest if applicable, posttest, attitude survey, interview questions)
  2. Design additional tools for data collection (observation sheets, computer tracking software, etc.)
  3. Prepare the learning environment
  4. Prepare the instructor (if utilized)
  5. Select and prepare the learner(s)
  6. Manage the evaluation
  7. Ensure that tests and questionnaires are administered appropriately (always administer attitude survey before posttest)
  8. Debrief the learners and instructors at the evaluation's conclusion.
  9. Organize and review the evaluation data to draw conclusions and to make suggestions for revising the material prior to the next phase of evaluation.
What data should be collected?
During this type of evaluation, the following types of data are usually collected:
  • Time
  • Instructional delivery and implementation concerns (instructor's role, media/technology concerns etc.)
  • Performance on individual practice items
  • Performance on individual posttest items (congruency with objectives)
  • Attitudes from survey and interviews
  • Social/collaborative environment concerns
How should the data be analyzed?
  • Perform an item analysis to determine which practice/posttest items were mastered by most of the learners (see the sketch following this table)
  • Summarize attitudinal responses (survey and interviews)
  • Summarize implementation data (time, sequencing, media use, etc.)
  • Use only descriptive procedures; the group is too small for inferential statistics.
What is the final product?
A brief report that includes a congruency analysis table (how many learners mastered each objective, as indicated by practice/posttest performance), implementation summaries, and attitude summaries. From this report, a "to do" list of revisions is generated.
What are some of the special problems and concerns facing the evaluator(s)?
  • You want the instruction to be fairly polished at this stage, but the more polished it is, the less opportunity you may have to make revisions (if you've put a lot of time and money into creating it, for example).
  • You need to make the evaluation very realistic (often difficult to do) if you will not be conducting more small group evaluations or a field test.
  • You may have too few or too many learners - for the former, anticipate attrition and ask for a few more learners; for the latter, select a smaller subset of data from the large group to analyze.
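
To make the item analysis and the congruency table concrete, here is a minimal Python sketch. The learners, items, scores, and the mapping of items to objectives are made-up assumptions, as is the mastery criterion (answering every item for an objective correctly); adjust these to fit your own instruments.

```python
# Minimal sketch of an item analysis and congruency summary for a
# small group evaluation. All data below are illustrative assumptions.
posttest = {  # learner -> {item: 1 if answered correctly, 0 if not}
    "Learner 1": {"item1": 1, "item2": 1, "item3": 0},
    "Learner 2": {"item1": 1, "item2": 0, "item3": 0},
    "Learner 3": {"item1": 1, "item2": 1, "item3": 1},
    "Learner 4": {"item1": 1, "item2": 1, "item3": 0},
    "Learner 5": {"item1": 0, "item2": 1, "item3": 0},
}
items_by_objective = {"Objective 1": ["item1"],
                      "Objective 2": ["item2", "item3"]}

n = len(posttest)

# Item analysis: what percentage of learners answered each item correctly?
for item in ["item1", "item2", "item3"]:
    correct = sum(scores[item] for scores in posttest.values())
    print(f"{item}: {100 * correct / n:.0f}% correct")

# Congruency table: how many learners mastered each objective?
# (Assumed criterion: all of the objective's items answered correctly.)
for objective, items in items_by_objective.items():
    mastered = sum(all(scores[i] for i in items)
                   for scores in posttest.values())
    print(f"{objective}: {mastered} of {n} learners mastered")
```

Note that the output is purely descriptive (percentages and counts), which is all this stage calls for.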

Field Test

In a field test, the instruction is evaluated in the same learning environments in which it will be used when finished. At this stage the instructional materials should be in their most polished state, although they should still be amenable to revisions.

Field Test
What is the purpose of this evaluation type?

A field trial represents the first time the material is used in a real setting. All material is evaluated, paying special attention to changes made based on the small group evaluation. Implementation procedures are also closely examined during the field trial to determine the effectiveness and feasibility of program implementation in a real class setting.

The data collected during the field trial stage are similar, if not identical, to the data collected during a summative evaluation (primarily performance and attitudes). In general, the field test answers the following questions:

  • "Does the instructional program meet the instructional need?"
  • "Do learners accomplish the stated goal?"
  • "Does the instructional program solve the instructional problem?"
Note that these are really all the same question, just stated differently.
When is this type of evaluation usually conducted?
After all other formative evaluations are completed.
Who usually conducts this type of evaluation?
Instructional designer(s) and instructors
Who should participate in the evaluation?
Actual members of the target population (individuals and/or classes), including both learners and instructors. If the material is designed for an entire class, try to use a class that is similar in size and variability to the target population (25-30 is often the norm).
What planning strategies should the evaluator employ prior to conducting the evaluation?
Same as small group
What procedure should be followed during the evaluation?
Same as small group
What data should be collected?
Same as small group
How should the data be analyzed?
Same as small group
What is the final product?
The final product is an evaluation report, emphasizing prescriptions for revision.
What are some of the special problems and concerns facing the evaluator(s)?

Too many sites to observe - You may not be able to go to all of the sites during the course of the evaluation. This will call for the use of a "designated observer," which may cause the data collected to be structurally different.

Too much instruction to evaluate - Due to budget restrictions, you may need to choose 30-50% of the instruction to actually introduce into the field setting.

There is much more to conducting a formative evaluation than we will cover in this course. If you would like more information, we suggest you read Chapter 10 in Dick and Carey, or seek out the Tessmer book.

Multimedia Formative Evaluation

For this lesson, we will be using a mix of expert review and one-to-one evaluation procedures. You will be conducting evaluations of several other students' multimedia programs, and at the same time, other students will be evaluating your program. This means that the people evaluating your program are not from the target group of learners for whom you designed it, and you will not be interacting with them face-to-face. However, this method ensures that you receive feedback that is as objective as possible, while allowing you to provide important feedback for others. In addition, you will gain experience with the formative evaluation process.

We have created an online formative evaluation interface to help manage the process of submitting and evaluating the projects. It works in a similar manner to the student interface you have been using to submit assignments. As students submit projects to be evaluated, everyone will automatically be assigned to groups of no more than four students. Your assigned students will show up in the evaluation interface. You will evaluate the submitted multimedia projects of the other students in your group, and they in turn will evaluate your project. When everyone is finished, you will have several separate evaluations from which to gather feedback that can be used to strengthen your program. You will not be required to actually make the changes for this course, but you may want to in the future. Here's the link to the interface:

Link to Evaluation Interface

As with the student submission interface, log in using your ITMA username and password. Once you are logged in, select the appropriate module. On the next screen you will be presented with the evaluation options. First, select the appropriate assignment number from the drop-down box. In the other drop-down box you have three options:

  1. You can submit your project to be evaluated. This will officially submit your project and allow your evaluators access to it. When you submit your project, enter the URL for your project web page (mmfinal.htm).
  2. You can review the evaluations of your project. When your evaluators finish reviewing your program their scores and comments can be accessed here.
  3. You can evaluate other student projects. When you select that option you will be taken to a screen listing the three students whose projects you must evaluate. Selecting one will bring up the evaluation form.  The peer’s project page will open in a new browser window.  Be sure your browser allows pop-ups.  Peers in your group will not appear until they submit their projects.

The criteria you use to evaluate other students' programs will be the same as the criteria listed in the last lesson (Development). These criteria will appear on the form that you use to evaluate the programs:

  • Relevance of instructional content - Is the instructional content relevant for the instructional goals and context?
  • Relevance of multimedia content - Is the multimedia content relevant for the instructional goals and context?
  • Appropriateness of instructional content - Is the instructional content appropriate for the audience and the subject matter?
  • Appropriateness of multimedia content - Is the multimedia content appropriate for the audience and subject matter?
  • Sufficiency of instructional content - Is the instructional content sufficient to lead to the achievement of the goal?
  • Sufficiency of multimedia content - Is the multimedia content sufficient to support the instruction?
  • Instructional events - Does the program address all of Gagne's events of instruction, except for assessment? This includes gaining attention, informing learners of the objectives, stimulating recall of prior learning, presenting the materials, providing learning guidance, providing practice and feedback, and encouraging transfer.
  • Technical aspects - Do the technical aspects of the program function properly?

You will be asked to rate each criterion on a scale from 1 to 5, depending on how well the program addresses that point. Check one box next to each criterion according to how well you feel the program meets it, with 1 being a low score (does not meet the criterion) and 5 being a high score (meets or exceeds the criterion). In addition, there is a space for you to type comments next to each criterion. To add comments, click your mouse in the appropriate comment box and start typing. These comments are very important, as they will provide important feedback for the developer of the program. Don't worry if your comments exceed one line - the text will simply wrap to the next line, which is fine. At the bottom there is space for you to add a summary comment.

To help guide you in answering these questions, we have created an evaluation chart with some relevant questions for each criterion.

Link to Evaluation Chart

Remember, the goal of formative evaluation is to improve the effectiveness of your instructional materials. It consists of identifying the problems and weaknesses of the instruction and making revisions based on the data collected. With that in mind, you should not "shred" somebody's program if it is lacking in some areas. At the same time, you should be honest and constructive in your criticism. Your feedback will be essential to other students as they draw up a plan for revisions. If you give a low score in a particular area, make sure to use the "Comments" field to explain why. A low score by itself will not give a student enough feedback to make the required changes; your accompanying comments are essential. We anticipate this process will adhere to the highest standards of professional communication, as we are a community of learners in which respect is an integral component. In other words, be fair, be honest, and be respectful in your reviews.

Evaluation Report

Using the feedback you receive from others, you will now prepare a report summarizing the observations made by the evaluators and outlining the revisions you would make to your program based on that feedback. You will not actually have to make the revisions in this course. You are merely drawing up a plan that summarizes what you learned from the evaluation process and what you would do to effect changes to your program.

In the first part of the report, summarize what came out of the evaluations. What did the three evaluators say about your program? What is your response to their comments? Discuss the things that are fine the way they are as well as the things that will need revising. This summary should cover the same areas that were covered in the evaluations: relevance, appropriateness, sufficiency, instructional events, and functionality.
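
Before writing this summary, it can help to tabulate the three evaluators' 1-5 scores and compute a mean for each area; low means point to the places needing the most discussion. Here is a minimal Python sketch; the scores and the flag threshold of 3.5 are illustrative assumptions.

```python
# Minimal sketch: summarizing peer evaluation scores by criterion.
# The scores below are made up; replace them with your actual results.
criteria = ["Relevance", "Appropriateness", "Sufficiency",
            "Instructional Events", "Technical Aspects"]
evaluations = [  # one dict per evaluator: criterion -> score (1-5)
    {"Relevance": 4, "Appropriateness": 5, "Sufficiency": 3,
     "Instructional Events": 4, "Technical Aspects": 2},
    {"Relevance": 5, "Appropriateness": 4, "Sufficiency": 3,
     "Instructional Events": 5, "Technical Aspects": 3},
    {"Relevance": 4, "Appropriateness": 4, "Sufficiency": 2,
     "Instructional Events": 4, "Technical Aspects": 2},
]

for criterion in criteria:
    scores = [e[criterion] for e in evaluations]
    mean = sum(scores) / len(scores)
    flag = "  <- likely revision area" if mean < 3.5 else ""
    print(f"{criterion}: scores {scores}, mean {mean:.1f}{flag}")
```

The numbers alone won't tell you what to revise; pair each low mean with the evaluators' written comments for that criterion when you write the second part of the report.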

In the second part of the report, for the things that need revising, describe how you would go about making those revisions. What would you do to address the deficiencies? If you have content deficiencies, describe how you would fill in those sections (e.g., with what content?). If you have stylistic deficiencies, describe how you would make the necessary changes to make things more attractive or functional.

Submitting Your Evaluation Report

Your evaluation report should be created in Microsoft Word. At the top of the paper, type "Multimedia Formative Evaluation". Below that, include your name, email address, and the date. When you save the file, name it "mmevaluation". Next, create a link to this document from the project web page you created in the last lesson (mmfinal). If you used the template we provided, add a row to the bottom of the table and make this the fifth link.

When you are finished, upload the Word document and the revised web page to your Filebox. When you have finished uploading your files, proceed to the online student interface to officially submit your activities for grading. Once again, submit the URL to your web page, not to the evaluation report. When you are done with that, you are done with the course!

Please Note: It is very important that you complete your evaluations by the listed due date, or sooner if possible. Other students will be depending on the feedback you provide in order to create their final reports. Please refer to the "Course Overview" document for the semester's assignment due dates.

Assignment: Formative Evaluation
Points:
75

Grading Criteria:

  • Evaluations of other students' programs completed in a timely manner. (5)
  • Points given to other students' multimedia programs are consistent with the comments provided and with the overall quality of the program. (10)
  • Thoughtful evaluations given. Good notes provided, criticism is constructive. (10)
  • Evaluation Report includes a summary of strengths and weaknesses of the multimedia program as reported by the three evaluators. (10)
  • Evaluation Report encompasses all five criteria - Relevance, Appropriateness, Sufficiency, Instructional Events, and Functionality. (15)
  • Evaluation Report describes revisions that should be made to the multimedia program. (20)
  • Link to Evaluation Report added to project web page. (5)