Now that you've developed your multimedia program, you may think that you're finally finished. Hooray! Done! On to the next project! Unfortunately, many projects end this way and never realize their true potential because they are never run through an evaluation process. Let's take one last look at the Dick and Carey model of instructional design. Notice where we are: developing your instructional materials is not the last step in their ID model. There are still a couple of boxes dealing with different types of evaluation.
Notice the dotted lines pointing back to earlier steps, along with the box that says "Revise Instruction". These indicate that the ID process is not completely linear: each step can be revisited as feedback is received from the evaluation procedures. Indeed, that's the point of conducting a formative evaluation.
Formative evaluation involves the collection of data and information during the development process that can be used to improve the effectiveness of the instruction. Formative means that the instructional materials are in their formative, or early stages, and evaluation refers to the process of gathering data to determine the strengths and weaknesses of the instruction. "Thus, formative evaluation is a judgment of the strengths and weaknesses of instruction in its developing stages, for the purposes of revising the instruction to improve its effectiveness and appeal" (Tessmer, 1997, p. 11). Any form of instruction that can still be revised is a potential candidate for formative evaluation. This includes paper-based instruction, computer-based instruction, live lectures, and workshops.
Evaluation is one of the most important steps in the design process, and yet it's usually the step that gets left out. Many instructional projects are never evaluated with experts or actual learners prior to their implementation. The trouble is that the designers and developers are often "too close" to the project and lose their ability to evaluate the effectiveness of what they are working on (they can't see the forest for the trees). For that reason, it's imperative that you bring in people from outside of the process to help you determine whether you are truly hitting the mark, or whether some minor (or major) adjustments are in order to bring the instruction up to its full potential.
Formative evaluation procedures can be used throughout the design and development process. You probably have already formatively evaluated your materials in the process of developing them. You might lay out components on the screen, try them out, and then move them around if they are not exactly right. Or, you might write some instructional text, try it out to see if you think it addresses the objective, and then rewrite it to make a better match. At this point, though, it's time to seek outside help. Even trying out instructional materials with a single learner can point out obvious flaws and lead to revisions that can have a major impact on the effectiveness of the instruction. Think of it more as a problem-finding stage of the instructional design process, not as a separate process altogether.
The other type of evaluation is Summative Evaluation. Summative evaluation is conducted when the instructional materials are in their final form, and is used to verify the effectiveness of instructional materials with the target learners. The main purpose is usually to make decisions about the acquisition or continued use of certain instructional materials, or to determine if the instruction is better than some other form of instruction. We will not deal with summative evaluation in this course, but feel free to read Chapter 12 in the Dick and Carey book if you would like more information about the topic.
Martin Tessmer, in his book Planning and Conducting Formative Evaluations, details the stages of the formative evaluation process. According to Tessmer, there are four stages of formative evaluation: expert review, one-to-one evaluation, small group evaluation, and field test.
Each stage is carried out to accomplish different things and to progressively improve the instruction. During the evaluation, information is collected from experts and members of the target population. While you may collect performance data during some stages of the process, keep in mind that formative evaluation is not concerned with testing the learners, but with testing the instruction itself.
The tables in the following sections provide a rundown of each stage of formative evaluation.
In this stage, experts review the instruction with or without the evaluator present. These people can be content experts, technical experts, designers, or instructors.
| Expert Review | |
| --- | --- |
| What is the purpose of this evaluation type? | The expert review looks at the intrinsic aspects of the instruction, such as content accuracy and technical quality. The instruction is generally not evaluated in terms of learner performance or motivation. |
| When is this type of evaluation usually conducted? | Expert review is usually the first step in the evaluation process and should be conducted as early in the ID process as possible. |
| Who usually conducts this type of evaluation? | Instructional designer(s) |
| Who should participate in the evaluation? | One or more experts (content experts, technical experts, designers, or instructors) "walk" through the material with or without the evaluator present. |
| What planning strategies should the evaluator employ prior to conducting the evaluation? | Decide what information is needed from the review and prepare questions in advance. |
| What procedure should be followed during the evaluation? | Prepare the expert for the review. |
| What data should be collected? | General comments, as recorded by both the expert and the designer. |
| How should the data be analyzed? | Organize all information to help make revision decisions. |
| What is the final product? | A "to do" list of all revisions to be made. |
| What are some of the special problems and concerns facing the evaluator(s)? | |
This is probably the most widely used form of formative evaluation. In this stage, one learner at a time reviews the instructional materials with the evaluator present. The evaluator observes how the learner uses the instruction, notes the learner's comments, and poses questions to the learner during and after the instruction.
| One-to-One Evaluation | |
| --- | --- |
| What is the purpose of this evaluation type? | The one-to-one evaluation identifies problems and weaknesses in the instruction as experienced by an individual learner. |
| When is this type of evaluation usually conducted? | One-to-one evaluations are usually conducted after the expert review but before any other type of formative evaluation. |
| Who usually conducts this type of evaluation? | Instructional designer |
| Who should participate in the evaluation? | The evaluator "walks" through the material with a trial learner. If possible, this type of evaluation should be repeated with other trial learners representing different skill levels, genders, ethnicities, motivation levels, etc. within the target population. |
| What planning strategies should the evaluator employ prior to conducting the evaluation? | The most important planning strategy is simply determining the information that needs to be collected. The information will be either intrinsic information about the instructional material or information about the effects of the instruction. The general criterion for making this determination is how "rough" the instruction is at the point of the evaluation: the rougher it is, the more likely intrinsic information will be the most useful in informing future revisions. |
| What procedure should be followed during the evaluation? | |
| What data should be collected? | |
| How should the data be analyzed? | All data can be evaluated at a glance, with a list of potential revisions documented. |
| What is the final product? | "To do" list of revisions. |
| What are some of the special problems and concerns facing the evaluator(s)? | **Distant subjects** - Some subjects may not be able to attend the one-to-one session for logistical reasons. These learners can still be reached through other means; for example, written one-to-one questions can be inserted into the learning materials at logical breaking points in the instruction. **The silent learner** - Some subjects will be reluctant to respond, often because they do not feel comfortable criticizing the work in the presence of its creators. This can be addressed by warming them up with initial conversation, by asking easy questions up front, or by asking questions that put them in a position of authority. Another method is to deliberately insert errors early in the instruction to elicit their responses. |
In this stage, the evaluator tries out the instruction with a small group of learners and records their comments. Small group evaluations typically use students as the primary subjects, and focus on performance data to confirm previous revisions and generate new ones.
| Small Group Evaluation | |
| --- | --- |
| What is the purpose of this evaluation type? | The small group evaluation provides a "real world" evaluation setting for learner performance. It confirms successful aspects of the instruction and offers suggestions for improving its implementation and ease of administration. |
| When is this type of evaluation usually conducted? | Small group evaluation occurs prior to the field trial but may, unfortunately, take the place of the field trial depending upon funding and time constraints. |
| Who usually conducts this type of evaluation? | Instructional designer(s) |
| Who should participate in the evaluation? | The instructional material is administered to a group of 5-20 participants representing the target population. If possible, a representative teacher or facilitator from the target population will work closely with a member of the design team to administer the material. |
| What planning strategies should the evaluator employ prior to conducting the evaluation? | Planning strategies are addressed in the procedure section below. The only additional planning might include determining whether the selected learners possess the necessary prerequisite skills (which might be apparent from pretest performance). |
| What procedure should be followed during the evaluation? | |
| What data should be collected? | |
| How should the data be analyzed? | |
| What is the final product? | A brief report which includes a congruency analysis table (how many learners mastered each objective, as indicated by practice/posttest performance), implementation summaries, and attitude summaries. From this report, a "to do" list of revisions is generated. |
| What are some of the special problems and concerns facing the evaluator(s)? | |
In a field test, the instruction is evaluated in the same learning environments in which it will be used when finished. At this stage the instructional materials should be in their most polished state, although they should still be amenable to revisions.
| Field Test | |
| --- | --- |
| What is the purpose of this evaluation type? | A field trial represents the first time the material is used in a real setting. All material is evaluated, paying special attention to changes made based on the small group evaluation. Implementation procedures are also closely examined during the field trial to determine the effectiveness and feasibility of program implementation in a real class setting. The data collected during the field trial stage are similar, if not identical, to the data collected during a summative evaluation (primarily performance and attitudes). |
| When is this type of evaluation usually conducted? | After all other formative evaluations are completed. |
| Who usually conducts this type of evaluation? | Instructional designer(s) and instructors |
| Who should participate in the evaluation? | Actual members of the target population (individuals and/or classes), including both learners and instructors. If the material is designed for an entire class, try to use a class that is similar in size and variability to the target population (25-30 is often the norm). |
| What planning strategies should the evaluator employ prior to conducting the evaluation? | Same as small group |
| What procedure should be followed during the evaluation? | Same as small group |
| What data should be collected? | Same as small group |
| How should the data be analyzed? | Same as small group |
| What is the final product? | An evaluation report, emphasizing prescriptions for revision. |
| What are some of the special problems and concerns facing the evaluator(s)? | **Too many sites to observe** - You may not be able to visit all of the sites during the course of the evaluation. This calls for the use of a "designated observer," which may cause the data collected to be structurally different. **Too much instruction to evaluate** - Due to budget restrictions, you may need to choose 30-50% of the instruction to actually introduce into the field setting. |
There is much more to conducting a formative evaluation than we will cover in this course. If you would like more information, we suggest you read Chapter 10 in Dick and Carey, or seek out the Tessmer book.
For this lesson, we will be using a mix of expert review and one-to-one evaluation procedures. You will be conducting evaluations of several other students' multimedia programs, and at the same time, other students will be evaluating your program. This means that the people evaluating your program are not from the target group of learners for whom you designed it, and you will not be interacting with them face-to-face. However, this evaluation method will ensure that you receive as much objective feedback as possible, while at the same time allowing you to provide important feedback for others. In addition, you will gain experience with the formative evaluation process.
We have created an online formative evaluation interface to help manage the process of submitting and evaluating the projects. It works in a similar manner to the student interface you have been using to submit assignments. As students submit projects to be evaluated, everyone will automatically be assigned to different groups of not more than 4 students. Your assigned students will show up in the evaluation interface. You will evaluate the submitted multimedia project for the other students in your group, and they in turn will evaluate your project. When everyone is finished you will have several separate evaluations from which to gather feedback that can be used to strengthen your program. You will not be required to actually make the changes for this course, but you may want to in the future. Here's the link to the interface:
As with the student submission interface, log in using your ITMA username and password. Once you are logged in, select the appropriate module. On the next screen you will be presented with the evaluation options. First, select the appropriate assignment number from the drop-down box. The other drop-down box gives you three options:
The criteria you use to evaluate other students' programs will be the same as the criteria listed in the last lesson (Development): relevance, appropriateness, sufficiency, instructional events, and functionality. These criteria will appear on the form that you use to evaluate the programs.
You will be asked to rate each criterion on a scale from 1 to 5, depending on how well the program addresses that point. Check one box next to each criterion according to how well you feel the program meets it, with 1 being a low score (does not meet the criterion) and 5 being a high score (meets or exceeds the criterion). In addition, there is a space for you to type comments next to each criterion. To add comments, click in the appropriate comment box and start typing. These comments are very important, as they will provide valuable feedback for the developer of the program. Don't worry if your comments exceed one line - the text will just wrap to the next line, which is fine. At the bottom there is space for you to add a summary comment.
To help guide you in answering these questions, we have created an evaluation chart with some relevant questions for each criterion.
Remember, the goal of formative evaluation is to improve the effectiveness of your instructional materials. It consists of identifying the problems and weaknesses of the instruction and making revisions based on the data collected. With that in mind, you should not "shred" somebody's program if it is lacking in some areas. At the same time, you should be honest and constructive in your criticism. Your feedback will be essential to other students as they draw up a plan for revisions. If you give a low score in a particular area, make sure to use the "Comments" field to explain why. A low score alone will not give a student enough feedback to make the required changes; your accompanying comments are essential. We expect this process to adhere to the highest standards of professional communication, as we are a community of learners in which respect is an integral component. In other words, be fair, be honest, and be respectful in your review process.
Using the feedback you receive from others, you will now prepare a report summarizing the observations made by the evaluators and outlining the revisions you would make to your program based on that feedback. You will not actually have to make the revisions in this course. You are merely drawing up a plan that summarizes what you learned from the evaluation process and what you would do to effect changes to your program.
In the first part of the report, summarize what came out of the evaluations. What did the three evaluators say about your program? What is your response to their comments? Discuss the things that are fine the way they are as well as the things that will need revising. This summary should cover the same areas that were covered in the evaluations: relevance, appropriateness, sufficiency, instructional events, and functionality.
In the second part of the report, for the things that need revising, describe how you would go about making those revisions. What would you do to solve the deficiencies? If you have content deficiencies, describe how you will fill in those sections (e.g., with what content?). If you have stylistic deficiencies, describe how you will make the necessary changes to make things more attractive or functional.
Your evaluation report should be created in Microsoft Word. At the top of the paper, type "Multimedia Formative Evaluation". Below that, include your name, email address, and the date. When you save the file, name it "mmevaluation". Next, create a link to this document from the project web page you created in the last lesson (mmfinal). If you used the template we provided, add a row to the bottom of the table and make this the fifth link.
When you are finished, upload the Word document and the revised web page to your Filebox. Then proceed to the online student interface to officially submit your activities for grading. Once again, submit the URL of your web page, not the evaluation report. When you are done with that, you are done with the course!
Please Note: It is very important that you complete your evaluations by the listed due date, or sooner, if possible. Other students will be depending on the feedback you provide in order to create their final report. Please refer to the "Course Overview" document for the semester's assignment due dates.
Assignment: Formative Evaluation

Points: 75

Grading Criteria: