Research Article

Effects of Segmentation and Self-Explanation Designs on Cognitive Load in Instructional Videos

Hua Zheng 1, Eulho Jung 2, Tong Li 3, Meehyun Yoon 4*

1 Charles R. Drew University of Medicine and Science, USA
2 School of Education, Johns Hopkins University, USA
3 Arizona State University, USA
4 Ewha Womans University, South Korea
* Corresponding Author
Contemporary Educational Technology, 14(2), April 2022, ep347, https://doi.org/10.30935/cedtech/11522
Published: 09 January 2022
OPEN ACCESS

ABSTRACT

This experimental study examined the effects of segmentation and self-explanation designs on cognitive load in instructional videos. Four types of instructional videos (segmentation, self-explanation, combined, and control) were created and tested with 121 undergraduate students randomly assigned to one of four research groups. Students' self-ratings on the cognitive load survey showed that the segmentation design produced significantly lower germane cognitive load than the two non-segmenting designs (self-explanation and control). The self-explanation design did not produce significantly higher germane load than the control design. However, students' dispositions toward the segmentation and self-explanation designs were generally positive and supported the theoretical justifications reported in the literature. The findings are discussed, along with segmentation dilemmas, limitations, and implications for future research.

CITATION (APA)

Zheng, H., Jung, E., Li, T., & Yoon, M. (2022). Effects of Segmentation and Self-Explanation Designs on Cognitive Load in Instructional Videos. Contemporary Educational Technology, 14(2), ep347. https://doi.org/10.30935/cedtech/11522
