Evaluation and Implementation

These two reports show how I have implemented interventions, assessed the quality of instructional products, and optimized those products based on careful interpretation of evaluation data.

Evaluating a Data Set

EDIT 7350e: Evaluation and Analytics in Instructional Design, Dr. Bagdy, Fall 2022

An evaluation of a hypothetical professional development program focused on integrating technology into elementary school teaching. The evaluation is based on a data set produced by three different data collection tools applied to participating teachers. The results were analyzed to determine the efficacy of the program and to recommend changes and further interventions that would benefit teachers.


Data Wrangling

As part of EDIT 7350e, we were presented with a "raw" data set from a hypothetical professional development program and were tasked with using the data to evaluate the efficacy of that program and make recommendations. The first task was to structure all of the data so that we could more easily interpret the information and identify patterns. The data came from a survey of 20 teachers in the program, observations of 16 lessons in the school, and interviews with 10 teachers.

The first step was to identify basic measures of central tendency for the survey and observation data, which were largely numeric because they were collected with Likert-scale items. For the interviews, responses had to be coded and categorized so that basic frequencies could be identified. Once those central measures were in place, more complex comparisons and analyses could be made. The data also included basic demographic information about the participating teachers, such as their years of experience and the grade level they currently taught. I created additional tables to summarize that information overall and for each data collection tool. I also used Excel to generate scatter plots comparing the teachers' demographic information to their survey responses. Lastly, I used Excel's conditional formatting tool to automatically highlight results that were strongly positive or strongly negative, providing visual indicators of trends and patterns.
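Although this analysis was done in Excel, the same wrangling steps can be sketched in Python with pandas. This is a minimal sketch only; the file name and column names (survey_results.csv, Q-prefixed items, years_experience) are hypothetical stand-ins for the actual data set.

```python
import pandas as pd

# Hypothetical survey export: one row per teacher, Likert items scored 1-4,
# plus demographic columns. File and column names are illustrative.
survey = pd.read_csv("survey_results.csv")
likert_items = [c for c in survey.columns if c.startswith("Q")]

# Basic central tendencies for each survey item.
summary = survey[likert_items].agg(["mean", "median", "std"]).T

# Flag strongly negative or strongly positive items, mirroring the
# conditional-formatting highlights used in Excel (cutoffs are illustrative).
summary["flag"] = pd.cut(
    summary["mean"],
    bins=[0.0, 2.0, 3.5, 4.0],
    labels=["strongly negative", "middling", "strongly positive"],
)
print(summary)

# Scatter plot of a demographic variable against one survey item,
# echoing the Excel scatter plots comparing demographics to responses.
survey.plot.scatter(x="years_experience", y="Q1")
```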

Interpreting the Data

I noticed some common trends in the data that would drive the evaluation of the program and, ultimately, the recommendations. The survey included 15 questions split across 5 categories, and the measures of central tendency clearly indicated one category of weakness: time. I was able to compare measures of central tendency not only for the entire survey but also for individual categories and survey items, which highlighted this major weakness. The category's low score (1.85 out of 4.00) indicated that time was still a struggle for teachers. When "wrangling" the data, I plotted the demographic information against the results of each survey item and found two noticeable trends: more experienced teachers reported their technology as being more reliable, and younger teachers felt that integrating technology took less time than expected.
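A category-level rollup like the one described above could be sketched as follows; the mapping of survey items to categories is a hypothetical example, not the actual instrument.

```python
import pandas as pd

# Hypothetical mapping of survey items to their categories
# (only a few shown; the real instrument defines the full mapping).
categories = {
    "Q1": "time", "Q2": "time", "Q3": "time",
    "Q4": "reliability", "Q5": "reliability", "Q6": "reliability",
}

survey = pd.read_csv("survey_results.csv")

# Reshape to long form so each row is one teacher's score on one item,
# then attach the item's category.
long = survey[list(categories)].melt(var_name="item", value_name="score")
long["category"] = long["item"].map(categories)

# Mean score per category alongside the overall survey mean;
# a weak category (such as "time") stands out at the top of the sort.
category_means = long.groupby("category")["score"].mean().sort_values()
print(category_means)
print("Overall mean:", round(long["score"].mean(), 2))
```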

Similar methods were used to analyze the observation results. By comparing each observational item to the overall mean, I could highlight the scores that were significantly low or high. The analysis shows that teachers are succeeding in the effective use and assessment of lesson objectives (scoring 3.94 and 4.00 out of 4.00, respectively), that teachers are assessing technology skills in authentic contexts (4.00 out of 4.00), and that teachers appear prepared for their lessons (3.94 out of 4.00). The major weaknesses surround the use of technology lessons to create persuasive arguments (1.69 out of 4.00) and a group of items covering technology issues during lessons, such as interruptions (scoring 3.06).
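The item-by-item comparison to the mean could be sketched like this; the file name and the 0.5-point threshold are illustrative choices rather than part of the original analysis.

```python
import pandas as pd

# Hypothetical observation ratings: one row per observed lesson,
# one column per rubric item, scored 1-4.
obs = pd.read_csv("observation_results.csv")

item_means = obs.mean(numeric_only=True)
overall_mean = item_means.mean()

# Flag items sitting well above or below the overall mean.
threshold = 0.5
strong_items = item_means[item_means >= overall_mean + threshold]
weak_items = item_means[item_means <= overall_mean - threshold]

print("Overall mean:", round(overall_mean, 2))
print("Notably strong items:", strong_items.to_dict())
print("Notably weak items:", weak_items.to_dict())
```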

Interpreting the interview data required looking at the frequency of different response codes across the interviews to identify which comments and concerns came up most often among the interviewees. A more frequent and consistent response code suggests that it is more universal among the population or more pervasive in the system. The most frequent code (n = 14) was a concern about the amount of time needed to plan or implement technology lessons, echoing the survey, where time concerns also scored poorly. The second most frequent code (n = 13) covered comments that the interviewee's knowledge and experience with technology integration had increased over time.
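The frequency counts behind those n-values amount to a simple tally, sketched below; the code labels are hypothetical shorthand for the actual interview codes.

```python
from collections import Counter

# Hypothetical coded interviews: each interview is reduced to the list
# of response codes assigned to its transcript.
coded_interviews = [
    ["time_concern", "knowledge_growth", "tech_reliability"],
    ["time_concern", "knowledge_growth"],
    ["time_concern", "wants_more_pd"],
    # ... one list per interviewee
]

# Tally how often each code appears across all interviews.
code_counts = Counter(code for interview in coded_interviews for code in interview)

for code, n in code_counts.most_common():
    print(f"{code}: n={n}")
```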

Limitations

When analyzing a data set, it is important to look for limitations and weaknesses in the data before drawing sweeping conclusions from the numbers alone. In this case, some of the data are limited in critical ways. The observations covered only 3 teachers with more than 10 years of experience, half of the 8 teachers observed teach 4th grade, and no 1st grade teachers were observed at all. The observation data are therefore skewed: they heavily represent younger teachers while underrepresenting experienced teachers, and they heavily represent the 4th grade, underrepresent other grades, and include no representation of the 1st grade.

Similarly, the interview data are limited by participant demographics. Of the 10 teachers interviewed, only 3 have 10 or fewer years of experience, and all of the interviewees teach 3rd, 4th, or 5th grade; none teach 1st or 2nd. As a result, the interview results have representation issues of their own and cannot accurately speak for less experienced teachers or for teachers in the younger grades.

Recommendations

The general evaluation of the results suggests that the program is working well and should be continued. Teachers have responded positively to the supports provided by the program and the data gathered suggests that technology integration is improving despite some concerns which still need to be addressed. The mentor position, while not adequately evaluated due to a lack of data collection specific to the position, also seems to be positively viewed.

A few additional interventions are also recommended. Based on the data from all 3 data collection tools, an audit of the reliability of the existing technology seems warranted; it may be necessary to upgrade existing hardware or supporting infrastructure, such as internet bandwidth, as technology-heavy lessons become more emphasized. It is also recommended that the administration clarify the purpose of technology lessons, since the observation instrument treats developing persuasive arguments as a desired outcome that is not being accomplished. The results also showed that teachers want more professional development, which is a major reason the program should continue; assuming the necessary resources are still available, the teachers themselves are requesting it.

Lessons Learned

I really enjoyed the opportunity to work with a rich and varied data set, and found it rewarding to "dig" through the numbers, charts, and codes to find the story that the data could tell. I learned skills and strategies necessary to convert data into more easily manipulated tables, and some personal systems for structuring, reading, and displaying the data in formats that highlight the important results. The ability to work with large data sets is essential for a number of instructional design and human performance technology tasks, and I've already been able to apply these methods and systems to some assessment and evaluation work in the field as a consultant.

Download Evaluation

Employee Onboarding Needs Assessment and Evaluation

EDIT 7150e: Principles of Human Performance Technology and Analysis, Dr. Stefaniak, Spring 2022

A needs assessment and evaluation for a healthcare company concerning the efficacy of a distance-learning onboarding and training model, which included the development and implementation of a new learner feedback system. The needs assessment, evaluation, and subsequent non-instructional intervention are based on data collected by the design team from current employees about their experiences in the training program.


Evaluating Efficacy

My team and I conducted a needs assessment for a healthcare company. Our primary objectives were to evaluate the efficacy of the new distance-training model, to make recommendations concerning potential adjustments to the model, and to create a new process for gathering higher-quality data so that the training model could become self-regulating. We created three data collection tools to gather information from employees who complete the training, from the instructional design team that manages and runs the training, and from leadership employees who direct initiatives for improvement and development. Although the data gathered were extensive, we were able to determine that the training program is functioning as intended and is well received by employees. Leadership employees were concerned that the pacing of the content was an issue, but the evaluation results do not indicate that pacing in the later lessons is a problem.

Optimizing Feedback

When conducting the needs assessment, we noticed a gap between the amount of employee feedback the client was receiving and the amount and quality of data they were seeking. We determined that a new method for seeking feedback from employees after onboarding was needed, so we began designing and developing a new instrument for this purpose. The original feedback system was a survey that required more than 20 minutes to complete, which is far too long. Our new collection method split the survey into 2 parts, one for each week of training. This shortened the time needed to gather information and meant that the employee was not required to give detailed feedback about material covered weeks earlier, reducing both the time and the mental load required to complete the feedback survey. By making these revisions, we were able to optimize the non-instructional intervention: completion times now average less than 5 minutes, and while the response rate still needs improvement, it has grown by more than 30%.
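As a rough illustration of the arithmetic behind that comparison, the sketch below computes relative growth in response rate; the figures are invented for the example and are not the client's actual numbers.

```python
# Invented figures for illustration only.
old_responses, old_invited = 30, 100   # original 20-minute email survey
new_responses, new_invited = 40, 100   # shorter, in-lesson surveys

old_rate = old_responses / old_invited
new_rate = new_responses / new_invited

# Relative growth in the response rate (not percentage-point change).
growth = (new_rate - old_rate) / old_rate
print(f"Response rate: {old_rate:.0%} -> {new_rate:.0%} "
      f"({growth:.0%} relative increase)")
```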

Implementing New Systems

Knowing that the training and onboarding system was effective, we wanted to be certain that the client could gather regular, quality data about the training model so that the client's instructional design teams could make periodic revisions. We also changed the way the employee feedback surveys were delivered in order to improve response rates. The original survey was emailed after 2 weeks of onboarding and training, which meant it arrived after business hours on a Friday. It is no surprise that few employees completed the survey and that those who did provided very limited responses. We created and implemented a new delivery system: rather than sending the survey by email, we asked the design team to integrate it into the lesson itself by building in time each Friday for employees to complete the feedback before wrapping up the day. As mentioned before, we split the survey into two parts so that each week ended with a shorter feedback request. With these shorter surveys now integrated directly into the lesson structure, employees are given time during work hours to complete them before logging off for the day, so fewer surveys get buried in email inboxes, ignored over the weekend, or quickly skimmed through the following week.

Lessons Learned

A needs assessment can start with a few set goals in mind and then very quickly veer in new directions as more is uncovered. I learned the importance of clear goal setting and project definition with the client, because this helps the instructional designer(s) focus on the established goals rather than repeatedly diverting attention to new initiatives. I also learned about the breadth of a needs assessment and an evaluation, and the amount of time needed to create and disseminate data collection tools. We also found that implementing new systems or changes can be frustrating because, even when they are necessary and beneficial to the organization, people are often resistant to change.

Download Report
View Presentation