Queen Rania Foundation

5 TIPS FOR A BETTER M-LEARNING PROGRAM EVALUATION


In 2017, the Queen Rania Foundation (QRF) ventured into the world of m-learning apps with the Karim & Jana (K&J) app, hoping to improve numeracy skills among Arabic-speaking children aged 3 to 5 across Jordan and the Arab world. The app introduces children to Arabic numbers through multi-level games, and it was designed to be used in different learning environments, both at school and at home.

QRF was not only interested in developing and launching the app, but also keen on assessing and evaluating its usage to understand the extent to which it was helping users become more numerate. Although the first efficacy study of the K&J app showed inconsistent improvements in users' numeracy skills (you can learn more about that here [1]), QRF still learned a lot from the evaluation process.

QRF faced several challenges during the app evaluation process. These challenges presented valuable learning opportunities that can contribute to a more effective and coherent evaluation process for m-learning apps. In summary, there were five key lessons we learned from this experience:

Concept testing research: Before embarking on the actual evaluation, it is important to conduct primary research to validate the concept behind the m-learning modality and its implementation. Qualitative concept testing should gauge potential users' engagement with the app and its content. Users' feedback then allows for a more appealing and engaging app going into the piloting phase, where the impact of the app on users is measured.

Internet and smartphone penetration: Obviously, m-learning apps depend on smartphones or tablets (it's in the name!) to deliver their educational outcomes. Therefore, high smartphone and internet penetration rates among the target audience are extremely important to ensure access to the app. Although obvious, this step is often taken for granted and overlooked. Moreover, it is important to develop a sophisticated and nuanced definition of "connectivity". More often than not, it is not enough for users to have access to a connected device: they also need the right behaviors and capabilities to perform the app's critical tasks.

Learning outcomes categorization: Any m-learning app should aim to deliver specific learning benefits to its users. As such, it is critical for the content of the app to be examined, weighted and evaluated by an expert who can categorize the app's learning benefits and associate them with specific learning outcomes. This task is usually performed by a social science psychometrician and helps identify the type and level of impact the app is expected to deliver.

Measuring the learning impact: To assess the impact of an m-learning app on its target audience and the type of benefit it delivers to them, we recommend conducting an impact evaluation pilot using qualitative and quantitative research. Such an evaluation helps measure the benefits users have gained from the app and whether these match the outcomes it is meant to deliver. An experimental or quasi-experimental research design, such as a randomized controlled trial (RCT), can be highly effective for measuring the impact of using the app. However, to obtain accurate and reliable data, it is strongly suggested to conduct the RCT in closed environments, such as nurseries, schools, villages or specific communities, where participating users' behavior can be monitored properly. Furthermore, to get a clear and coherent measurement of the app's impact, it is suggested to set clear guidelines on the amount of time each participating child needs to spend using the app. This helps ensure consistency in the final data and unbiased results.
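To make the treatment-versus-control comparison concrete, here is a minimal, hypothetical sketch in Python of how post-test numeracy scores from an app-using group and a control group might be compared. This is not QRF's actual analysis; the score scale, group sizes and numbers are placeholder assumptions for illustration only.

```python
# Illustrative sketch only: compares post-test numeracy scores between a
# treatment group (children who used the app) and a control group.
# All data below are hypothetical placeholders, not study results.
import random
from statistics import mean
from scipy import stats

random.seed(42)

# Hypothetical post-test numeracy scores (e.g. on a 0-20 scale) for each group.
treatment_scores = [random.gauss(12.5, 3.0) for _ in range(60)]  # used the app
control_scores = [random.gauss(11.0, 3.0) for _ in range(60)]    # did not use the app

# Estimated effect: simple difference in group means.
effect = mean(treatment_scores) - mean(control_scores)

# Welch's two-sample t-test to gauge whether the difference is statistically meaningful.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)

print(f"Mean difference (treatment - control): {effect:.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In a real pilot, the analysis would also account for baseline scores, usage time and clustering (for example by nursery or school), but the core idea is the same: random assignment makes a simple comparison of group outcomes interpretable as the app's impact.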

Insights on parents: Given that parents are often the gatekeepers and guardians of their children's time, it is extremely important to understand parents' perceptions, attitudes and behavior towards m-learning apps. Going directly to children, or focusing only on them, is almost a guaranteed path to failure. Gleaning insights into parents' attitudes and preferences helps in understanding their motivations and concerns around their children's use of the app, some of which might be culturally informed. These insights can then be integrated into both the app design and the evaluation design.

M-learning is a growing category, and it therefore deserves more effort to explore how it is reshaping the concept of learning, especially in the early years of childhood development. To that end, the Queen Rania Foundation hopes the above tips are helpful in guiding others on the same journey.

We are still learning and iterating! We would love to hear about the experience of your organization in designing and evaluating m-learning interventions.


[1] http://karimandjana.com/