#23 How I took completion rates on our programs to more than 85%
—————————————————————————————————————-
MOOCs have been widely critiqued for their low completion rates.
However, I think MOOCs have done an incredibly good job of scaling access to high-quality learning content around the world. That shouldn’t be confused with scaling access to good mentorship. Creating high-quality online content is a fixed cost and a solid first step.
Building engaging learning experiences around that content is something that requires ongoing investment and a lot of human factors.
Completion rate is the holy grail in online education, and I have been obsessed with it as well. We built highly immersive, mentor-led programs at GreyAtom, and we consistently see high completion rates. We have tried different models - pay upfront, pay later, pay partially upfront - and in each one, more than 85% of learners finish the programs.
Some of the biggest drivers of these improved metrics include making people pay for online programs (fully or partially), having learner selection criteria, building learner engagement plans on Slack, not making the entire content available in one go, and combining the best of synchronous and asynchronous learning.
Here are some best practices that moved the needle on completion rates for our online programs, and I think they will do the same for anyone.
1. Make learners put skin in the game
Learners need to feel accountable.
The most obvious way to do this is to simply charge a fee - we have various pricing models across the affordability spectrum and the outcomes learners seek.
Another effective strategy is making students complete an application that asks them to reflect on how they will apply what they learn and why they are prepared for this experience. Some kind of skin in the game is definitely needed. At different points in our journey, we combined these approaches to make sure that learners had skin in the game.
(Ideas from EdSurge)
2. Build blended learning experiences - sync and async
We designed a learning approach wherein absorption of new concepts takes place online at a learner’s own pace, while mentor time is reserved for hands-on work and peer collaboration. The difference in results is unmistakable.
I have been guided by a few broad principles here -
Always start async - we want learners to almost always do something before a session and come prepared. We call it course pre-work/session pre-work/pre-reads. It's the best way to get all learners to a similar starting point - every time.
Most content interactions (and individual processing) are ideally done asynchronously, while most social interactions (and group processing) happen synchronously.
Offering more time to process with asynchronous elements is especially important when cognitive load is high. We define the complexity of each module and add async individual processing activities - such as extra coding challenges, quizzes, and scenarios - that help learners confirm they have got it.
I will be writing a separate blog on this
3. Set Deadlines
Each of our Sprint projects across our programs has a due date. Instead of making course content constantly available, impose a final deadline and release the learning materials only for certain intervals. For our Hackathons, we do not accept submissions once learners are "Past Due".
Another useful strategy is to unlock further learning content only when learners complete the previous section.
(Check the due dates below with each Sprint)
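The two mechanics above - hard due dates and sequential unlocking - can be sketched together in a few lines. This is a hypothetical illustration, not GreyAtom's actual platform logic; the sprint names and dates are made up.

```python
from datetime import date

# Illustrative sprint list: names and due dates are assumptions.
SPRINTS = [
    {"name": "Python Foundations", "due": date(2020, 2, 15)},
    {"name": "Data Wrangling", "due": date(2020, 3, 1)},
    {"name": "Machine Learning", "due": date(2020, 3, 22)},
]

def accessible_sprints(completed, today):
    """Unlock a sprint only when every earlier sprint is complete,
    and stop accepting work once a sprint is past its due date."""
    unlocked = []
    for i, sprint in enumerate(SPRINTS):
        prerequisites_done = all(s["name"] in completed for s in SPRINTS[:i])
        past_due = today > sprint["due"]
        if prerequisites_done and not past_due:
            unlocked.append(sprint["name"])
    return unlocked

print(accessible_sprints({"Python Foundations"}, date(2020, 2, 20)))
# Only "Data Wrangling" is open: the first sprint is past due,
# and "Machine Learning" is still locked behind it.
```

The point of the sketch is that the two rules compose: a learner is never shown content they haven't earned, and never tempted to submit late.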
4. Check for Learnability & Grit
When you build a sense of selectivity into registration, people generally value subsequent experience more.
When we started GreyAtom back in 2017, we were extremely selective for the initial 3 cohorts about who we took into the programs. We were not checking for technical aptitude; the key skills we were looking for were LEARNABILITY and GRIT.
How would we check for those? Well, in very many ways -
We had a micro-learning path on Python before folks got to the Data Science bit - even if someone had no prior background in coding, we would get them started. We would give them 2 weeks to go through it and come back, and then we would evaluate them again. In some interviews, we would ask them if they knew what thermodynamics was. If they didn't, we would give them a few hours to grasp some basic principles - no equations, no formulas, just what it is and how to apply it. We were constantly checking whether people were able to push through discomfort and pick up something they didn't know before. It turns out this was a key skill that differentiated our successful learners. It is a quality we couldn't change in someone, so we just wanted to be sure that they came in with it.
When we kicked off our Income Share Agreement programs in early Jan 2020, we made sure to check for these skills above everything else.
The line of questioning we stuck to (borrowing also from Angela Duckworth's GRIT scale):
Have you done online learning before?
Were you able to complete the online courses you started? How did you utilize your learning or build on top of it?
What do you do when you are stuck in learning?
How do you deal with setbacks in your life?
What is your process to resolve a programming challenge or any other issue that you face?
What is the most impressive thing you have built so far?
You could argue that these are subjective, but 8 out of 10 times we got the right learner in.
This process wasn’t scalable - but we wanted to do those things anyway, things that wouldn’t scale!
In between, we dropped the selection and opened entry to our larger programs - but then brought in an endorsement framework at the end of the program, to really rally behind the learners who were giving their best and help them get jobs.
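Duckworth's published scales score Likert-style items and reverse-score some of them so that a higher score always means more grit. Here is a minimal sketch of that scoring idea; the items and the reverse-scoring choices below are illustrative assumptions, not the published scale.

```python
# Hypothetical sketch of scoring a short Duckworth-style grit questionnaire.
# Items and which ones are reverse-scored are illustrative; the published
# Grit-S scale defines its own items and scoring key.
ITEMS = [
    ("I finish whatever I begin.", False),
    ("New ideas and projects sometimes distract me from previous ones.", True),   # reverse-scored
    ("Setbacks don't discourage me.", False),
    ("I have difficulty maintaining my focus on long projects.", True),           # reverse-scored
]

def grit_score(responses):
    """Average 1-5 Likert responses, flipping reverse-scored items
    (6 - answer) so a higher score always means more grit."""
    assert len(responses) == len(ITEMS)
    total = 0
    for answer, (_, reverse) in zip(responses, ITEMS):
        total += (6 - answer) if reverse else answer
    return total / len(ITEMS)

print(grit_score([5, 1, 4, 2]))  # → 4.5
```

In interviews we treated the answers qualitatively rather than as a strict score, but the reverse-scoring idea is what keeps a questionnaire like this honest.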
5. Build up peer competition
We gamified a cohort leaderboard on the platform, where every learner sees where they stand in their cohort. We soon realized that if someone had fallen behind for the initial couple of months, they would lose hope of ever being at the top of the leaderboard.
So we built in a monthly view as well, to give everyone a fair chance. It turns out the leaderboard is one of the most viewed pages on the web app.
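The all-time vs. monthly split is simple to express in code. A hypothetical sketch, with a made-up event schema (learner, points, month) rather than our actual data model:

```python
from collections import defaultdict

# Illustrative point-earning events: (learner, points, month). Assumed schema.
events = [
    ("asha", 120, "2020-01"), ("ravi", 200, "2020-01"),
    ("asha", 90, "2020-02"), ("ravi", 40, "2020-02"),
    ("meera", 150, "2020-02"),
]

def leaderboard(events, month=None):
    """Rank learners by total points; restrict to one month if given,
    so a slow start doesn't bury someone forever."""
    totals = defaultdict(int)
    for learner, points, when in events:
        if month is None or when == month:
            totals[learner] += points
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(leaderboard(events))             # all-time: ravi leads
print(leaderboard(events, "2020-02"))  # monthly: meera, joining late, gets a fair shot
```

Same data, two views: the monthly slice is what keeps a learner who joined or stumbled early from writing off the whole leaderboard.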
6. Track the Right Metrics
Completion rates are a lagging metric. Engagement is a leading metric - one that lets us nudge learners in the right direction if they are falling behind. But how do we track engagement?
So what really defines engagement, if we break it down?
Cognitive Engagement
Extent to which individuals are
Paying attention to course content
Processing the information
Performing tasks
Behavioral engagement
Extent to which individuals
Help each other
Participate in discussions
Respond to GreyAtom conversations
More on how we defined engagement in a separate post
📉 Metrics we pay close attention to
On schedule completion of sprint projects
Task completion on platform
Participation in program hackathons
Session attendance
Mock interview performance
Internal employability skill scores
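One way to operationalize "engagement is a leading metric" is to roll the signals above into a single score and flag learners who dip below a threshold, so a mentor can intervene before completion suffers. The metric names, weights, and threshold below are illustrative assumptions, not our internal model:

```python
# Hypothetical weights over the engagement signals listed above; assumed values.
WEIGHTS = {
    "sprint_on_schedule": 0.3,
    "task_completion": 0.25,
    "hackathon_participation": 0.15,
    "session_attendance": 0.2,
    "discussion_activity": 0.1,
}

def engagement_score(metrics):
    """Weighted average of per-signal rates, each in [0, 1]."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

def at_risk(metrics, threshold=0.6):
    """Flag learners whose leading indicators have dipped, so mentors
    can intervene before the lagging metric (completion) is lost."""
    return engagement_score(metrics) < threshold

learner = {"sprint_on_schedule": 0.5, "task_completion": 0.7,
           "session_attendance": 0.9, "hackathon_participation": 0.0,
           "discussion_activity": 0.4}
print(engagement_score(learner), at_risk(learner))  # ~0.545, flagged at risk
```

The exact weights matter less than the habit: watch the leading signals weekly, and treat a dip as a prompt for outreach, not a verdict.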
As educators, we have an obligation to use tools and insights from fields like behavioral science, psychology, and pedagogy to help more students finish what they start. One way or another, we made sure we could get learners fully committed to the cause of their own transitions.
Thanks for reading all the way to the bottom. I know your attention is demanded everywhere, so I’m grateful that you even gave a minute of it to this post. If you ever want to reach out, please feel free to reply here, find me on LinkedIn or message me on Twitter.
Here’s a crazy idea: if you received this in your inbox, can you please forward it to a friend who you think might enjoy my writing? It’s easy. And I would be forever grateful.