Welcome

Zoomers Learn

Zoomers Learn (a pseudonym) is a nonprofit organization dedicated to making education accessible worldwide. It strives to create learning experiences that are as fun and addictive as social media. The micro-learnings, called “Zooms,” cover a wide variety of topics and are aimed at young adults in the 16–24 age range. Topics range from soft skills for young professionals to common school subjects.


The learning experiences are crafted by a global network of volunteer Learning Designers (LDs). New volunteers join the program every four months, at which time they undergo a three- to five-week onboarding process that includes self-paced e-learning training and several review rounds of their first Zooms. This evaluation focuses on how well the onboarding program prepares the volunteers to contribute.


During this project I had the privilege of working with three extremely talented peers: Adam Minahan, Allyson Briner, and Grace Yim. I recommend and can vouch for each of these professionals.

Evaluation Methodology


Following Dr. Yonnie Chyung's systematic 10-step evaluation process (2019), this evaluation aimed to provide insight into the effectiveness of the onboarding process and community support system for new LDs. As a formative evaluation, it sought to identify areas for improvement to better support LDs and facilitate their contribution to the organization.


Evaluation Purpose and Type


The primary intent of this evaluation was to provide insight into the effectiveness of the onboarding process and community support system for new Learning Designers. The results would help the Learning Director make informed adjustments to both processes. In that way, the evaluation would serve a primarily formative purpose, focusing on identifying areas where the existing system could be enhanced.


Second, the team would assess the motivation of Learning Designers, both in terms of how motivated they were to create Zooms and how motivated they felt to keep volunteering with the organization. While this approach was goal-based, it also employed a goal-free methodology to uncover potential unknown needs of stakeholders.

Stakeholders


As a team, we first identified the stakeholders to ensure that the evaluation was conducted with their best interests in mind, and we found that multiple stakeholders are involved in the onboarding process at Zoomers Learn. Their roles and impacts are detailed below.

The Upstream Stakeholder of the Zoomers Learn Onboarding Program is the current Learning Director, who oversees the content development and performance of LDs in addition to providing support.


The Direct Downstream Impactees are the volunteers who contribute to the Zooms’ production. These can be broken down into three groups:


1. New LDs who will onboard for future cycles starting in May 2024

The results of this evaluation could strengthen the onboarding training and result in a stronger learning experience that enables LDs to develop more successful Zooms, perform higher-quality peer reviews, and build their careers in learning design.


2. Team Leads

These volunteers help with logistical aspects of the organization as opposed to Zoom authoring. Team Leads answer any questions LDs may have regarding Slack, ClickUp, their Zoom drafts, or scheduling. With a more effective onboarding experience, Team Leads will have fewer confused new LDs to assist. In addition, the Team Leads can use the evaluation findings to focus on where new LDs need the most improvement.


3. Editors

With improved onboarding, new LDs will author better Zooms, which will make the Stage 3 editing process more efficient. In addition, the Editors can use the evaluation findings to identify weaknesses and gaps in the new LDs’ knowledge and skill application. 

The Indirect Impactees include:


Veteran LDs who conduct peer reviews of new LDs. 

They are Indirect Impactees because the evaluation results will affect them only minimally; namely, they will have an easier time conducting peer reviews. Unlike the Editors and Team Leads, they are not directly responsible for the success of the onboarding. Additionally, their contributions via interviews would be less helpful than interviews with the Editors, since Editors are trained to identify weak points in Zooms and review them more deeply, while peer reviews are more superficial.



2+ million learners who access Zooms to educate themselves and develop critical life and career skills. Their learning experiences depend on the success of the onboarding and peer review systems, as they expect a steady stream of high-quality Zooms.

Dimensions and Importance Weighting


Once the evaluation purpose and type were clear and the stakeholders were documented, we analyzed the onboarding program as a whole using a Program Logic Model (PLM) to identify the resources needed, activities performed, outputs produced, outcomes generated, and broader impact expected of the program.


Having completed the analysis, we selected dimensions for evaluation. In dialogue with the client, we established how he intended to use the evaluation findings and what information was of particular interest to him. From there we determined the importance levels attributed to the two selected dimensions; a brief sketch of how such weights can combine dimensional ratings follows the list below.


  1. The Quality of Onboarding Training and Community Support (critically important)
    How well are the training program and the systems and people in place helping and supporting Learning Designers to produce Zooms that meet organizational standards?

  2. Volunteer Motivation (fairly important)
    How motivated are the Learning Designers to continue producing Zooms for the organization and recommend this volunteer opportunity to others? 
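
To make the weighting concrete, here is a minimal Python sketch of one way importance weights can combine dimensional rubric ratings into a single overall rating. The weights and ratings shown are hypothetical placeholders, not values from this evaluation.

```python
# Illustrative sketch only: the weights and ratings below are hypothetical,
# not the actual values used in this evaluation.

# Importance weights for each dimension (assumed: critical = 2, fair = 1)
weights = {
    "Quality of Onboarding Training and Community Support": 2.0,
    "Volunteer Motivation": 1.0,
}

# Hypothetical rubric ratings on a 1-4 scale (1 = poor, 4 = excellent)
ratings = {
    "Quality of Onboarding Training and Community Support": 3.0,
    "Volunteer Motivation": 3.5,
}

# A weighted average rolls the dimensional ratings up into one overall rating
overall = sum(weights[d] * ratings[d] for d in weights) / sum(weights.values())
print(f"Overall weighted rating: {overall:.2f}")  # -> 3.17
```

In practice, the weighting mainly signals priority: a weak result on the critically important dimension should drive recommendations even if the overall average looks acceptable.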
    

Evaluation Process


Once the dimensions were clear, we began the evaluation by distributing a survey to the new LDs (n = 70) to gain their personal perspectives on the onboarding process and to gauge their motivation to continue volunteering with the organization. We received only 6 responses, a response rate of roughly 9%. From there we scheduled semi-structured interviews with two of those new LDs.


For the first dimension, we also interviewed three Editors/Team Leads to gain their perspective on the quality of the LDs’ Zooms, to understand how effective the onboarding training is, and to hear how well they felt the new LDs understood other aspects of the organization (e.g., use of tech tools).


For the analysis, we created and analyzed codebooks and reviewed extant data we had collected on Zoom production, along with exit survey responses. We then also reviewed the sections of the onboarding training material that our interviewees identified as potentially problematic, in order to triangulate with the other data for our second dimension.
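
As a rough illustration of the code-tallying step in this kind of qualitative analysis, the sketch below counts how often each code was applied across interview excerpts. The codes and excerpt IDs are hypothetical, not our actual instrument or data.

```python
from collections import Counter

# Hypothetical coded interview excerpts as (excerpt_id, assigned_code) pairs.
# A real codebook also maps each code to a definition and example quotes;
# this sketch shows only the frequency-tallying step of the analysis.
coded_excerpts = [
    ("int1-07", "accessibility_confusion"),
    ("int1-12", "peer_feedback_quality"),
    ("int2-03", "accessibility_confusion"),
    ("int2-09", "tool_usage"),
    ("int3-04", "peer_feedback_quality"),
]

# Tally how often each code appears, most frequent first
code_counts = Counter(code for _, code in coded_excerpts)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

Frequencies like these are then triangulated against the extant production data and exit survey responses rather than interpreted on their own.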


A detailed overview of the survey and other instruments and rubrics used can be found here.

Project Documentation: Models & Instruments


Below you will find the links to the project documentation related to the models and instruments used during this project, as described in the sections above.

Evaluation Results


Overall, the new LDs felt satisfied with the onboarding training and felt motivated to continue volunteering with the organization, as outlined in the rubrics for each dimension (click the links below to view each rubric):


Rubric Dimension 1: The Quality of Onboarding Training and Community Support (critically important)

Rubric Dimension 2: Volunteer Motivation (fairly important)
 

Since the response rate was low, we could not summatively assess the quality of the onboarding training or volunteer motivation, but we did uncover areas for improvement.

Recommendations


Our recommendations come both from the survey and interview participants and from those we formulated based on a review of the problematic sections of the onboarding training (as identified by our collected data). Our three major recommendations are as follows:

#1 - Redesign the “Designing Zooms with Accessibility in Mind” module of the onboarding training

Emphasize the importance of accessibility in e-learning by integrating testimonials from learners who benefit from accessible materials and by implementing periodic Slack reminders about including alt-text.

#2 - Improve the Peer Feedback process

Showcase realistic examples of both high-quality and weaker peer reviews during onboarding, offer practice opportunities through simulated peer review exercises, pair new LDs with veteran LDs for initial peer reviews, and send occasional Slack reminders with feedback tips.

#3 - Provide opportunities for continued professional development

Foster continued professional development by providing volunteers with internal Zooms that teach instructional design theories and best practices.

Final Report

Limitations


Throughout the evaluation process, we conducted meta-evaluations of our own performance as evaluators. These helped us identify how we could have conducted the evaluation better and determine which limitations were unavoidable. Our two main limitations* were:


  • Due to scheduling challenges, we made an oversight regarding a conflict of interest. Our Project Lead is a Team Lead in the organization, and his conducting the interviews may have influenced participants to answer in particular ways. As a corrective measure, the other members of the group reviewed the recordings and transcripts.
  • The low response rate limited our data collection, with only five interviews conducted and six survey responses received despite sending the survey to 70 LDs. This makes it difficult to draw firm conclusions and highlights the need for careful interpretation of the available data.


* For a more comprehensive list of risks and limitations, see Appendix A in the final report shared in the above section.

Reflections

Throughout the evaluation project, we continually assessed our methodologies, and have learned some lessons that we will take with us into future evaluation initiatives. 


One critical lesson learned was recognizing and addressing conflicts of interest. We were aware of a potential risk from the beginning of the project, since one of our team members works at our client organization. We covered this at a high level in our team charter, but in retrospect we should have spent more time considering the exact ways in which it could manifest. When our team encountered unforeseen scheduling challenges due to time zone constraints and work commitments, we became reliant on this team member to conduct the interviews, and in the process we lost sight of the ethical implications. Reflecting on it afterward, we took corrective action by delegating the review of the recordings, transcripts, and coding to other team members.


Our project’s collaboration and communication framework proved highly effective, as we were guided by a team charter that clearly defined roles, expectations, and commitments. This framework not only facilitated the interactions between team members but also ensured a trust-based relationship with our clients and stakeholders, enabling us to show up professionally and stay on target. As such, we did not experience any interpersonal conflict or obstacles that could not be resolved through constructive criticism and dialogue.


The success of our evaluation relied partially on the willingness of volunteers to participate in interviews and respond to surveys. Since we were reliant on volunteers who are generally busy with full-time jobs and other responsibilities, we were concerned from the outset that response rates could turn out lower than desired. In our planning, we carefully considered how to promote the evaluation to our target participants, involved our client in sending out the requests for participation and reminders, and accommodated possible delays in responses. Despite these proactive mitigation strategies, we experienced lower-than-anticipated response rates, resulting in a modest five interviews and six survey completions. This outcome prevents us from forming definitive conclusions, but it is equally important not to discount the results, as they can help suggest areas for improvement and further study.


While we anticipated limited availability for document reviews by the client, our client’s willingness to contribute, his prompt responses, and our efficient use of scheduled contact meetings ensured this was not a problem.

References


Chyung, S. Y. (2019). 10-step evaluation for training and performance improvement. Sage.


Giacumo, L. A., Villachica, S. W., & Stepich, D. A. (2023). Instructional design for organizational justice: A guide to equitable learning, training, and performance in professional education and workforce settings [Unpublished manuscript]. Prototype.


Waddington, L., & Dell, D. (n.d.). ARCS-MVP model. Pressbooks. https://pressbooks.bccampus.ca/arcanddl/chapter/arcs-mvp-model
