To work out how to evaluate an educational app, my first step, as usual, was to search the scholarly literature. A search for “educational app evaluation” produced 199,000 results. This is a few too many to look at, so I limited the search to just this year (2015) and got a more manageable 11,400. This is still too many to read, so I added “rubric” to the search, which reduced it further to 1,740 results.
Weng and Taber-Doughty (2015, p. 56) prepared a three-page rubric for evaluating iPad apps for students with disabilities. Eight practitioners at schools in the US Midwest used it to evaluate nine commercially available iPad apps designed for such students. The criteria were rated on a three-point scale: Disagree, Neutral, Agree (or not applicable/uncertain).
Criteria
Quality of Feedback/Correction
- Feedback is accurate and clear
- Correction is accurate and clear
- Feedback doesn’t reinforce behavior or distract students
Quality of Design
- Layout is simple and clear
- Layout is consistent
- Easy to navigate
- No distracting features
- Speech is clear and easy to understand
Content
- Various levels of content difficulty are available
- Appropriate for the target developmental level
- Content is appropriate for the target area
- No unnecessary or unrelated information
Usability
- Students can use independently after set up
- Only minimal adult supervision is needed after training
- Constant adult supervision is needed
Ability to be individualized
- Able to individualize levels of difficulty
- Able to individualize content to meet a student’s need
- Able to individualize speed of speech
- Able to adjust size of pictures, fonts, etc.
- Multiple voices available for selection
- Able to choose modalities
The rubric produced mixed results: some criteria were rated consistently between evaluators, while others were not.
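As an aside, it is worth being concrete about what “consistent between evaluators” means. Here is a minimal sketch in Python, using made-up ratings rather than data from the paper, of one simple per-criterion check: the proportion of evaluator pairs who gave the same rating (the paper’s own analysis may well differ):

    from itertools import combinations

    # Made-up ratings from four evaluators for two criteria, on the
    # rubric's three-point scale (not data from Weng and Taber-Doughty).
    ratings = {
        "Feedback is accurate and clear": ["Agree", "Agree", "Agree", "Agree"],
        "No distracting features": ["Agree", "Neutral", "Disagree", "Agree"],
    }

    def pairwise_agreement(scores):
        """Proportion of evaluator pairs giving the same rating."""
        pairs = list(combinations(scores, 2))
        return sum(a == b for a, b in pairs) / len(pairs)

    for criterion, scores in ratings.items():
        print(f"{criterion}: {pairwise_agreement(scores):.0%} agreement")

Here the first criterion gets 100% agreement and the second only 17%, which is the sort of split between criteria the authors report.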
Campbell, Gunter and Braga (2015) used the Relevance, Embedding, Transfer, Adaptation, Immersion & Naturalization (RETAIN) model to evaluate educational games. The RETAIN rubric was developed by Gunter, Kenny and Vick (2008, p. 524). The model has six criteria, each scored at four levels (0 to 3). The criteria are:
- relevance: the content is relevant to the learner’s life,
- embedding: the educational content is integrated with the game content,
- transfer: what is learned is applicable outside the game,
- adaptation: encourages active learning beyond the game scenario,
- immersion: player becomes involved in the game,
- naturalization: players learn to learn.
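Since each RETAIN element is scored on the same 0 to 3 scale, an evaluation of a game reduces to six small numbers. A minimal sketch of tallying one, with hypothetical scores and assuming an unweighted total (the published rubric may weight some elements more heavily than others):

    # Hypothetical RETAIN scores for one game, each element rated 0 to 3.
    # Assumes an unweighted total; Gunter, Kenny and Vick's rubric may
    # weight some elements more heavily than others.
    retain_scores = {
        "relevance": 3,
        "embedding": 2,
        "transfer": 2,
        "adaptation": 1,
        "immersion": 3,
        "naturalization": 0,
    }

    total = sum(retain_scores.values())
    maximum = 3 * len(retain_scores)
    print(f"RETAIN total: {total}/{maximum}")  # prints "RETAIN total: 11/18"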
Having been through the top 20 results for 2015, I widened the search to include 2014, which gave 3,370 results.
Green, Hechter, Tysinger and Chassereau (2014) developed the Mobile App Selection for Science (MASS) rubric for mobile apps for 5th to 12th grade science, evaluated with 24 Canadian teachers. One thing I have learned so far is that your mobile app evaluation rubric needs a snappy acronym, like MASS or RETAIN. ;-)
More seriously, MASS is based on the m-learning framework of Kearney, Schuck, Burden and Aubusson (2012). This framework has three characteristics: Personalisation, Authenticity and Collaboration (each further divided into sub-scales).
The MASS rubric has six criteria, assessed at three levels (Green, Hechter, Tysinger & Chassereau, 2014, p. 70):
- Accuracy of content,
- Relevance of content,
- Sharing findings (student’s work can be exported as a document),
- Feedback to student,
- Scientific inquiry and practices (allows for information gathering through observation),
- Navigation of application (interface design).
Having looked at five papers, it is time to draw some general points. One is that the evaluation of m-learning apps might be divided into two sets of criteria: the app as a software application and the app as an educational experience. There are some general criteria for the evaluation of software, such as the accessibility of the interface for those with a disability.
Apps are a subset of software applications, but curiously none of the authors of these app rubrics appear to have drawn on work on the evaluation of desktop educational applications. Given the size of the market for educational software, and that it has been in existence for decades, there must be an extensive literature on this topic.
References
Campbell, L. O., Gunter, G., & Braga, J. (2015, March). Utilizing the RETAIN model to evaluate mobile learning applications. In Society for Information Technology & Teacher Education International Conference (Vol. 2015, No. 1, pp. 8242-8246). Retrieved from http://www.editlib.org/p/150079/
Green, L. S., Hechter, R. P., Tysinger, P. D., & Chassereau, K. D. (2014). Mobile app selection for 5th through 12th grade science: The development of the MASS rubric. Computers & Education, 75, 65-71.
Gunter, G. A., Kenny, R. F., & Vick, E. H. (2008). Taking educational games seriously: Using the RETAIN model to design endogenous fantasy into standalone educational games. Educational Technology Research and Development, 56(5-6), 511-537.
Kearney, M., Schuck, S., Burden, K., & Aubusson, P. (2012). Viewing mobile learning from a pedagogical perspective. Research in Learning Technology, 20(1), 1-17. doi:10.3402/rlt.v20i0.14406
Weng, P. L., & Taber-Doughty, T. (2015). Developing an app evaluation rubric for practitioners in special education. Journal of Special Education Technology, 30(1).