Wednesday, November 25, 2015

How do you evaluate an educational app?

To work out how to evaluate an educational app, my first step, as usual, was to search the scholarly literature. A search for “educational app evaluation” produced 199,000 results. That is a few too many to look at, so I limited the search to just this year (2015) and got a more manageable 11,400. This is still too many to read, so I added “rubric” to the search, which reduced it further to 1,740 results.

Weng and Taber-Doughty (2015, p. 56) prepared a three-page rubric for evaluating iPad apps for students with disabilities. Eight practitioners at schools in the US Midwest used it to evaluate nine commercially available iPad apps designed for students with disabilities.

The criteria were rated on a three-point scale: Disagree, Neutral, Agree (with Not Applicable and Uncertain as additional options).

Criteria

Quality of Feedback/Correction

  1. Feedback is accurate and clear
  2. Correction is accurate and clear
  3. Feedback doesn’t reinforce behavior or distract students

Quality of Design

  1. Layout is simple and clear
  2. Layout is consistent
  3. Easy to navigate
  4. No distracting features
  5. Speech is clear and easy to understand

Content

  1. Various levels of content difficulty are available
  2. Appropriate for the target developmental level
  3. Content is appropriate for the target area
  4. No unnecessary or unrelated information

Usability

  1. Students can use independently after set up
  2. Only minimal adult supervision is needed after training
  3. Constant adult supervision is needed

Ability to be individualized

  1. Able to individualize levels of difficulty
  2. Able to individualize content to meet a student’s need
  3. Able to individualize speed of speech
  4. Able to adjust size of pictures, fonts, etc.
  5. Multiple voices available for selection
  6. Able to choose modalities

The rubric produced mixed results: some criteria were rated consistently across evaluators, while others were not.
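
To make that concrete, here is a minimal, hypothetical Python sketch of applying a three-point rubric of this kind and tallying how far evaluators agree on each criterion. The criterion names are taken from the rubric above, but the ratings and the simple percent-agreement measure are my own invention, not data or methods from Weng and Taber-Doughty (2015).

```python
# Hypothetical sketch: rating an app on a three-point rubric and checking
# how consistently several evaluators rated each criterion.
# The ratings below are invented for illustration.
from collections import Counter

SCALE = ("Disagree", "Neutral", "Agree")  # the paper also allows N/A and Uncertain

# Ratings of one app by four (imaginary) evaluators, keyed by criterion
ratings = {
    "Feedback is accurate and clear": ["Agree", "Agree", "Agree", "Neutral"],
    "Layout is simple and clear":     ["Agree", "Disagree", "Neutral", "Agree"],
    "Easy to navigate":               ["Agree", "Agree", "Agree", "Agree"],
}

for criterion, scores in ratings.items():
    counts = Counter(scores)
    modal_rating, n = counts.most_common(1)[0]
    agreement = n / len(scores)  # crude percent agreement, not a formal reliability statistic
    print(f"{criterion}: modal rating {modal_rating!r}, agreement {agreement:.0%}")
```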

Campbell, Gunter and Braga (2015) used the Relevance Embedding Transfer Adaptation Immersion and Naturalization (RETAIN) model to evaluate educational games. The RETAIN rubric was developed by Gunter, Kenny and Vick (2008, p. 524). The model has six criteria, each assessed at four levels (0 to 3); a rough scoring sketch follows the list. The criteria are:
  1. relevance: to the learner’s life,
  2. embedding: the educational content is integrated with the game content,
  3. transfer: what is learned is applicable outside the game,
  4. adaptation: encourages active learning beyond the game scenario,
  5. immersion: player becomes involved in the game,
  6. naturalization: players learn to learn.
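
A crude way to picture the RETAIN rubric is as six criteria, each rated from 0 to 3 and combined into an overall score. The sketch below is my own illustration using a simple unweighted sum; it is not code from, and may not match the exact scoring of, Gunter, Kenny and Vick (2008).

```python
# Hypothetical sketch of a RETAIN-style score: six criteria, each rated
# 0-3, combined here with a plain unweighted sum (maximum 18).
# The example ratings are invented for illustration.
RETAIN_CRITERIA = (
    "relevance", "embedding", "transfer",
    "adaptation", "immersion", "naturalization",
)

def retain_total(ratings: dict) -> int:
    """Sum the 0-3 ratings for the six RETAIN criteria."""
    for name in RETAIN_CRITERIA:
        if not 0 <= ratings[name] <= 3:
            raise ValueError(f"{name} must be rated between 0 and 3")
    return sum(ratings[name] for name in RETAIN_CRITERIA)

example = {"relevance": 2, "embedding": 3, "transfer": 1,
           "adaptation": 2, "immersion": 3, "naturalization": 1}
print(retain_total(example), "out of", 3 * len(RETAIN_CRITERIA))  # prints: 12 out of 18
```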

Having been through the top 20 results for 2015, I widened the search to include 2014, which gave 3,370 results.

Green, Hechter, Tysinger and Chassereau (2014) developed the Mobile App Selection for Science (MASS) rubric for mobile apps for 5th to 12th grade science, evaluated with 24 Canadian teachers. One thing I have learned so far is that your mobile app evaluation rubric needs a snappy acronym, like MASS or RETAIN. ;-)

More seriously, MASS is based on the m-learning framework of Kearney, Schuck, Burden and Aubusson (2012). This framework has three characteristics: Personalisation, Authenticity and Collaboration (each further divided into sub-scales).
The MASS rubric has six criteria, each assessed at three levels (Green, Hechter, Tysinger & Chassereau, 2014, p. 70):
  1. Accuracy of Content,
  2. Relevance of Content,
  3. Sharing Findings (Student’s work can be exported as a document),
  4. Feedback to student,
  5. Scientific Inquiry and Practices: Allows for information gathering through observation,
  6. Navigation of application (interface design).

Having looked at five papers, it is time to draw some general points. One is that the evaluation of m-learning apps might be divided into two sets of criteria: the app as a software application and the app as an educational experience. There are some general criteria for evaluating any software, such as the accessibility of the interface for people with a disability.

Apps are a subset of software applications, but curiously none of the authors of these app rubrics appear to have drawn on earlier work on evaluating desktop educational applications. Given the size of the market for educational software, and that it has existed for decades, there must be an extensive literature on the topic.

References

Campbell, L. O., Gunter, G., & Braga, J. (2015, March). Utilizing the RETAIN Model to Evaluate Mobile Learning Applications. In Society for Information Technology & Teacher Education International Conference (Vol. 2015, No. 1, pp. 8242-8246). Retrieved from http://webcache.googleusercontent.com/search?q=cache:lS7EOK9SQa8J:www.editlib.org/p/150079/proceeding_150079.pdf+&cd=3&hl=en&ct=clnk&gl=au

Green, L. S., Hechter, R. P., Tysinger, P. D., & Chassereau, K. D. (2014). Mobile app selection for 5th through 12th grade science: The development of the MASS rubric. Computers & Education, 75, 65-71.

Gunter, G. A., Kenny, R. F., & Vick, E. H. (2008). Taking educational games seriously: Using the RETAIN model to design endogenous fantasy into standalone educational games. Educational Technology Research and Development, 56(5-6), 511-537.

Kearney, M., Schuck, S., Burden, K., & Aubusson, P. (2012). Viewing mobile learning from a pedagogical perspective. Research in Learning Technology, 20(1), 1-17. doi:10.3402/rlt.v20i0/14406

Weng, P. L., & Taber-Doughty, T. (2015). Developing an app evaluation rubric for practitioners in special education. Journal of Special Education Technology, 30(1).
