Research Article

Identification of Cognitive Learning Complexity of Assessment Questions Using Multi-class Text Classification

Syaamantak Das 1*, Shyamal Kumar Das Mandal 1, Anupam Basu 2,3
1 Centre for Educational Technology, Indian Institute of Technology Kharagpur, India
2 Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, India
3 National Institute of Technology Durgapur, India
* Corresponding Author
Contemporary Educational Technology, 12(2), October 2020, ep275, https://doi.org/10.30935/cedtech/8341
OPEN ACCESS

ABSTRACT

Cognitive learning complexity identification of assessment questions is an essential task in education, as it helps both the teacher and the learner to discover the thinking process required to answer a given question. Bloom’s Taxonomy cognitive levels are considered a benchmark standard for classifying cognitive thinking (learning complexity) in an educational environment. However, some of the action verbs in Bloom’s Taxonomy overlap across multiple levels of the hierarchy, causing ambiguity about the actual cognition required. This paper describes two methodologies for automatically identifying the cognitive learning complexity of given questions. The first uses labelled Latent Dirichlet Allocation (LDA), a machine learning approach; the second uses the BERT framework for multi-class text classification, a deep learning approach. The experiments were performed on an ensemble of 3000+ educational questions drawn from previously published datasets along with the TREC question corpus and the AI2 Biology How/Why question corpus. Labelled LDA reached 83% accuracy, while the BERT-based approach reached 89%. An analysis of both sets of results is presented, evaluating the significant factors responsible for determining cognitive knowledge.
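To make the second methodology concrete, the minimal sketch below fine-tunes a pre-trained BERT model for six-way classification of question text into Bloom’s Taxonomy cognitive levels, using the Hugging Face Transformers API. It is an illustration under stated assumptions, not the authors’ published pipeline: the bert-base-uncased checkpoint, the six-level label set, the example questions, and all hyperparameters are assumptions made for demonstration.

# Illustrative sketch only -- not the authors' code. Assumes the six levels of
# the revised Bloom's Taxonomy as the label set and the bert-base-uncased
# checkpoint; example questions and hyperparameters are toy values.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LEVELS)
)

# Tokenize a small batch of assessment questions (hypothetical examples).
questions = [
    "Define photosynthesis.",                            # recall-type question
    "Design an experiment to compare two fertilisers.",  # synthesis-type question
]
batch = tokenizer(
    questions, padding=True, truncation=True, max_length=64, return_tensors="pt"
)

# One training step: the model computes a cross-entropy loss over the 6 classes.
labels = torch.tensor([0, 5])  # gold labels as indices into LEVELS
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()  # gradients for one optimizer step (optimizer omitted for brevity)

# Inference: the predicted cognitive level is the argmax over the class logits.
model.eval()
with torch.no_grad():
    logits = model(**batch).logits
print([LEVELS[i] for i in logits.argmax(dim=-1).tolist()])

A full experiment would add an optimizer such as AdamW, train for several epochs, and evaluate on a held-out split of the question corpus.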

CITATION (APA)

Das, S., Das Mandal, S. K., & Basu, A. (2020). Identification of Cognitive Learning Complexity of Assessment Questions Using Multi-class Text Classification. Contemporary Educational Technology, 12(2), ep275. https://doi.org/10.30935/cedtech/8341

REFERENCES

  1. Agrawal, R., Gollapudi, S., Kannan, A., & Kenthapadi, K. (2014). Study navigator: An algorithmically generated aid for learning from electronic textbooks. Journal of Educational Data Mining, 6(1), 53-75.
  2. Andre, T. (1979). Does answering higher-level questions while reading facilitate productive learning? Review of Educational Research, 49(2), 280-318. https://doi.org/10.3102/00346543049002280
  3. Bhatia, P., Celikkaya, B., Khalilia, M., & Senthivel, S. (2019). Comprehend medical: A named entity recognition and relationship extraction web service. arXiv preprint arXiv:1910.07419. https://doi.org/10.1109/ICMLA.2019.00297
  4. Bicalho, P., Pita, M., Pedrosa, G., Lacerda, A., & Pappa, G. L. (2017). A general framework to expand short text for topic modeling. Information Sciences, 393, 66-81. https://doi.org/10.1016/j.ins.2017.02.007
  5. Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993-1022.
  6. Bloom, B. S., et al. (1956). Taxonomy of educational objectives, Vol. 1: Cognitive domain (pp. 20-24). New York: McKay.
  7. Dalton, J., & Smith, D. (1989). Extending children’s special abilities: strategies for primary classrooms. Office of Schools Administration, Ministry of Education, Victoria.
  8. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. https://doi.org/10.18653/v1/N19-1423
  9. Hamilton, R. (1992). Application adjunct post-questions and conceptual problem solving. Contemporary Educational Psychology, 17(1), 89-97. https://doi.org/10.1016/0361-476X(92)90050-9
  10. Hamilton, R. J. (1985). A framework for the evaluation of the effectiveness of adjunct questions and objectives. Review of Educational Research, 55(1), 47-85. https://doi.org/10.3102/00346543055001047
  11. Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. https://doi.org/10.18653/v1/P18-1031
  12. Jain, M., Beniwal, R., Ghosh, A., Grover, T., & Tyagi, U. (2019). Classifying question papers with Bloom’s taxonomy using machine learning techniques. In International conference on advances in computing and data sciences (pp. 399-408). https://doi.org/10.1007/978-981-13-9942-8_38
  13. Jansen, P., Surdeanu, M., & Clark, P. (2014). Discourse complements lexical semantics for non-factoid answer reranking. In Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 1: Long papers) (pp. 977-986). https://doi.org/10.3115/v1/P14-1092
  14. Jones, K. O., Harland, J., Reid, J. M., & Bartlett, R. (2009). Relationship between examination questions and Bloom’s taxonomy. In 2009 39th IEEE frontiers in education conference (pp. 1-6). https://doi.org/10.1109/FIE.2009.5350598
  15. Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212-218. https://doi.org/10.1207/s15430421tip4104_2
  16. Krathwohl, D. R., & Anderson, L. W. (2010). Merlin C. Wittrock and the revision of Bloom’s taxonomy. Educational Psychologist, 45(1), 64-65. https://doi.org/10.1080/00461520903433562
  17. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174. https://doi.org/10.2307/2529310
  18. Lee, Y.-J., Kim, M., Jin, Q., Yoon, H.-G., & Matsubara, K. (2017). Revised Bloom’s taxonomy: The Swiss army knife in curriculum research. In East-Asian primary science curricula (pp. 11-16). Springer. https://doi.org/10.1007/978-981-10-2690-4
  19. Li, X., & Roth, D. (2002). Learning question classifiers. In Proceedings of the 19th international conference on computational linguistics-volume 1 (pp. 1-7). https://doi.org/10.3115/1072228.1072378
  20. Long, G., Chen, L., Zhu, X., & Zhang, C. (2012). TCSST: Transfer classification of short & sparse text using external data. In Proceedings of the 21st ACM international conference on information and knowledge management (pp. 764-772). https://doi.org/10.1145/2396761.2396859
  21. Luo, L., & Wang, Y. (2019). EmotionX-HSU: Adopting pre-trained BERT for emotion classification. arXiv preprint arXiv:1907.09669.
  22. Massey, L. (2011). Autonomous and adaptive identification of topics in unstructured text. In International conference on knowledge-based and intelligent information and engineering systems (pp. 1-10). https://doi.org/10.1007/978-3-642-23863-5_1
  23. Pan, S. J., & Yang, Q. (2009). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359. https://doi.org/10.1109/TKDE.2009.191
  24. Peverly, S. T., & Wood, R. (2001). The effects of adjunct questions and feedback on improving the reading comprehension skills of learning-disabled adolescents. Contemporary Educational Psychology, 26(1), 25-43. https://doi.org/10.1006/ceps.1999.1025
  25. Phan, X.-H., Nguyen, L.-M., & Horiguchi, S. (2008). Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In Proceedings of the 17th international conference on world wide web (pp. 91-100). https://doi.org/10.1145/1367497.1367510
  26. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
  27. Ramage, D., Hall, D., Nallapati, R., & Manning, C. D. (2009). Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 conference on empirical methods in natural language processing (pp. 248-256). https://doi.org/10.3115/1699510.1699543
  28. Redfield, D. L., & Rousseau, E. W. (1981). A meta-analysis of experimental research on teacher questioning behavior. Review of Educational Research, 51(2), 237-245. https://doi.org/10.3102/00346543051002237
  29. Rothkopf, E. Z. (1970). The concept of mathemagenic activities. Review of Educational Research, 40(3), 325-336. https://doi.org/10.3102/00346543040003325
  30. Stanny, C. (2016). Reevaluating Bloom’s taxonomy: What measurable verbs can and cannot say about student learning. Education Sciences, 6(4), 37. https://doi.org/10.3390/educsci6040037
  31. Swart, A. J., & Daneti, M. (2019). Analyzing learning outcomes for electronic fundamentals using Bloom’s taxonomy. In 2019 IEEE global engineering education conference (EDUCON) (pp. 39-44). https://doi.org/10.1109/EDUCON.2019.8725137
  32. Uys, J., Du Preez, N., & Uys, E. (2008). Leveraging unstructured information using topic modelling. In PICMET ’08 - 2008 Portland international conference on management of engineering & technology (pp. 955-961). https://doi.org/10.1109/PICMET.2008.4599703
  33. Wang, P., Xu, B., Xu, J., Tian, G., Liu, C.-L., & Hao, H. (2016). Semantic expansion using word embedding clustering and convolutional neural network for improving short text classification. Neurocomputing, 174, 806-814. https://doi.org/10.1016/j.neucom.2015.09.096
  34. Yahya, A. A., Toukal, Z., & Osman, A. (2012). Bloom’s taxonomy-based classification for item bank questions using support vector machines. In Modern advances in intelligent systems and tools (pp. 135-140). Springer. https://doi.org/10.1007/978-3-642-30732-4_17
  35. Zarei, F., & Nik-Bakht, M. (2019). Automated detection of urban flooding from news. In Proceedings of the 36th international symposium on automation and robotics in construction (pp. 515-521). https://doi.org/10.22260/ISARC2019/0069
  36. Zhang, H., & Zhong, G. (2016). Improving short text classification by learning vector representations of both words and hidden topics. Knowledge-Based Systems, 102, 76-86. https://doi.org/10.1016/j.knosys.2016.03.027