Machine Assessment of Student Debate Postings

Faculty Sponsor

Michael Glass

Discipline(s)

Computer Science

Presentation Type

Poster Presentation

Symposium Date

Fall 4-28-2022

Abstract

The COMPS project aims to provide a computer-assisted tool for problem-solving discourse and collaborative learning of ideas. COMPS studies students working together on class problems via typed chat or discussion-board postings. Among the research goals are computer algorithms that inspect the students' typed dialogue in order to help study and assess the conversations.

This project works with logs of students debating topics in a class on educational technology. Each debate consists of roughly a hundred individual postings from 15 to 20 students in the class. In formal debates, students are assigned to one side. In their postings, they must adopt a particular argumentative role: an initial argument for or against the proposition, a rebuttal, an explanation, or further evidence. Formal debate assignments not only engage students with the class topic but also reinforce critical thinking skills.

This project builds machine-learning classifiers that read the students' posts and identify or measure aspects of debating and thinking skills. The algorithms try to identify the side of the debate the student is on and the argumentative role of the post. Current work includes measuring engagement by predicting how many follow-on responses a debate post attracts. The models are written in Python. Kappa statistics measure agreement between the machine predictions and the collected data. Preliminary classification accuracy on these tasks is significantly better than chance. The classifiers are not yet accurate enough to reliably assess individual students, but they can give the professor or researcher an average assessment of the conversation.
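As a minimal sketch of the kappa agreement measure mentioned above, the following Python fragment computes Cohen's kappa between two label sequences from its standard definition. The label names and data here are hypothetical illustrations, not the project's actual debate corpus or code.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two label sequences, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where the two labelings match.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each labeling's marginal distribution.
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    p_e = sum(count_a[k] * count_b.get(k, 0) for k in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: human side labels vs. machine predictions.
human   = ["pro", "pro", "con", "con", "pro", "con", "con", "pro"]
machine = ["pro", "con", "con", "con", "pro", "pro", "con", "pro"]
print(round(cohens_kappa(human, machine), 3))  # prints 0.5
```

In this toy example the two labelings agree on 6 of 8 posts (0.75 observed agreement), while chance agreement from the 50/50 marginals is 0.5, giving kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5. Kappa near 0 indicates chance-level agreement; values near 1 indicate strong agreement.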
