# Comparison And Evaluation Of Geometry Explanation Tutor


Over the last few decades, access to digital media has become widespread among American students, with roughly 90% having access to computers and the internet, both at home and at school. Yet many teachers report that these systems are underutilized, and these concerns helped drive the development of computer-based tutoring systems, whose potential for actual classroom use has yet to be fully realized. Here we discuss and compare two such Intelligent Tutoring Systems (ITS): the Geometry Explanation Tutor and the AnimalWatch tutor, both of which use interactive learning environments to teach mathematics concepts to middle and high school students.

The AnimalWatch tutor aims to develop the word-problem-solving skills of 9-12 year olds. Word problems are considered more difficult for students because they require multiple skills: understanding the problem, constructing equations to compute the answer, and finally checking whether the answer is plausible and correct in the context of the given problem. AnimalWatch includes approximately 1,100 word problems, each accompanied by a narrative about an endangered species. Each narrative is organized into four sections, or contexts, covering the animal's background, from an introduction with authentic background information to the natural reserve the animal currently lives in, thus integrating mathematics, narrative, and biology. Students type their answers into an answer box and receive immediate feedback, with hints and explanations tailored to their responses. This allows the system to adapt its problems to a student's proficiency level, which is said to dramatically accelerate learning because the student always works on problems at the edge of his or her understanding.

The Geometry Explanation Tutor is built on top of an existing geometry problem-solving tutor, the Geometry Cognitive Tutor. It covers the Angles unit, one of the six units of the original tutor, which deals with the geometric properties of angles in different types of diagram configurations. The tutor requires students to state explanations of each problem-solving step in their own words, supporting self-explanation through a restricted form of dialog. The system provides help in the form of context-sensitive hints and gives feedback to help students arrive at more precise mathematical explanations. The main aim of the tutor is to test the hypothesis that when students explain in their own words, they pay more attention to the crucial features of the problem and hence acquire more transferable knowledge. A number of cognitive studies have shown that self-explanation techniques, such as clarifying vague explanations and prompting students to self-explain, lead to better learning.

## Comparison of the Pedagogical Content of Each Model

In the Geometry Explanation Tutor (GET), the explanation given by the student is classified with respect to a set of categories, illustrated in the rightmost column of the figure. These categories are stored in the knowledge base as a hierarchical structure representing correct and partially-correct ways of stating the geometry rules. The figure shows an example dialog that highlights the major drawbacks of the system. The first part of the dialog is omitted, along with two other steps that contained spelling errors, as at the time of the study the tutor did not have a spell-correction mechanism.
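The classification step can be sketched as follows. The category names, keyword sets, and matching rule here are invented for illustration; the actual tutor used a trained text classifier over a much richer hierarchy of explanation categories.

```python
# Illustrative sketch: classify a free-form explanation against a small
# hierarchy of correct and partially-correct ways of stating a geometry
# rule. Category names and keywords are hypothetical, not the tutor's own.

CATEGORY_HIERARCHY = {
    # Complete statement of the isosceles-triangle rule.
    "isosceles-triangle-rule": {"base angles", "isosceles", "congruent"},
    # Partially correct: the term "base angles" is missing.
    "isosceles-no-base-angles": {"isosceles", "congruent"},
    # Barely started: only names the triangle type.
    "mentions-isosceles-only": {"isosceles"},
}

def classify(explanation: str) -> str:
    """Return the most specific category whose keywords all appear."""
    text = explanation.lower()
    best, best_size = "unclassified", 0
    for name, keywords in CATEGORY_HIERARCHY.items():
        if all(kw in text for kw in keywords) and len(keywords) > best_size:
            best, best_size = name, len(keywords)
    return best
```

A keyword match like this is far weaker than real natural-language classification, but it makes the idea concrete: each attempt is mapped onto the hierarchy node that best describes how complete the stated rule is.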

Example Dialog with the GET

In step 9 the student is very close to the correct explanation, with only the term "base angles" missing. The system's response in step 10 is meant to point this out as a hint, but it fails to do so clearly, leaving the student confused. In step 11 the student adds the word "congruant" to his explanation, making no improvement over his previous attempt in step 9. This may be a sign that the student does not fully understand what he typed, but the system is oblivious to the problem and simply repeats the feedback message. Instead, the system should provide more helpful feedback, which is possible if it has multiple, more specific messages associated with each explanation category. Also, in order to detect situations where students stagnate or regress, as in the above example, the system needs to keep a history list of the categories under which previous explanation attempts were classified.
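The proposed history mechanism might look like the sketch below. The category names and their ranking (how close each is to a complete explanation) are illustrative assumptions, not part of the tutor.

```python
# Sketch of a history list over explanation categories: detect when a
# student stagnates (same category twice in a row) or regresses (moves to
# a less complete category), so the tutor can vary its feedback instead
# of repeating the same message. The ranking below is hypothetical.

RANK = {
    "unclassified": 0,
    "mentions-isosceles-only": 1,
    "isosceles-no-base-angles": 2,
    "isosceles-triangle-rule": 3,   # complete explanation
}

def assess_progress(history: list[str]) -> str:
    """Compare the latest attempt's category with the previous one."""
    if len(history) < 2:
        return "first-attempt"
    prev, curr = history[-2], history[-1]
    if curr == prev:
        return "stagnating"    # repeating the same hint will not help
    if RANK[curr] < RANK[prev]:
        return "regressing"
    return "improving"
```

With such a history, the steps 9-11 exchange would be flagged as "stagnating", and the tutor could switch to a more specific message rather than repeat itself.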

AnimalWatch uses artificial intelligence to model individual student performance. Its most important component is the "student model," which evaluates when a student is ready to move on; that is, the word problems are customized according to the student's level of knowledge and understanding. This estimate is continuously updated based on the student's performance during sessions with the tutor, and the difficulty of the problems is adapted according to performance in each domain. If a student makes errors on a problem involving prerequisite skills, the tutor provides easier problems so that the student can master the domain; success in problem solving also boosts the student's confidence. Another important feature is the hints presented by the system. When students make an error for the first time, they get a simple hint such as "Try again," and as the number of errors increases the tutor provides progressively more extensive hints, ranging from fairly concrete to highly structured and interactive. As the student progresses, the hints become less procedural and more abstract. For example, AnimalWatch might reason that a student who has made several errors in adding unlike fractions needs a hint about how to find the least common multiple, whereas a classmate who has solved such problems successfully in the past might simply need a reminder to check the denominators. Evaluation studies also found that girls preferred more structured and interactive responses, and the tutor's adaptive feedback sparked an interest in mathematics among girls. Evaluation of the AnimalWatch tutor with hundreds of students showed that it provided effective, confidence-enhancing arithmetic instruction (Arroyo et al., 2001; Beal et al., 2000).
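The escalation from a minimal prompt to a structured, step-by-step hint can be sketched as follows. The hint texts and the one-hint-per-error-level scheme are invented for illustration; AnimalWatch's actual hint sequences were authored per skill and included interactive manipulatives.

```python
# Illustrative sketch: pick a progressively more extensive hint as the
# error count on the current problem grows. Hint wording is hypothetical.

HINTS = [
    "Try again.",                                  # 1st error: minimal
    "Check the denominators before adding.",       # 2nd error: concrete
    "Find the least common multiple of the "
    "denominators, then rewrite both fractions.",  # 3rd+: structured
]

def hint_for(error_count: int) -> str:
    """Return the hint for the given error count, capped at the last,
    most extensive hint in the sequence."""
    index = min(error_count - 1, len(HINTS) - 1)
    return HINTS[max(index, 0)]
```

The same table-driven idea extends naturally to the adaptive behaviour described above: a student with a strong history on the skill could be started further down (or higher up) the sequence than a struggling classmate.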

AnimalWatch has the following artificial intelligence features: it can generate appropriate problems, hints, and help 'on the fly', that is, according to a student's proficiency level; it can reason about a student's current knowledge and learning needs using the student model; it provides an interactive learning environment for students; and it demonstrates self-improvement by monitoring and evaluating its own performance based on past sessions with students. Such self-improvement is usually implemented through machine learning and data mining techniques. In the case of AnimalWatch, machine learning was used to predict how long a student would need to solve a problem, and the system adapted its response accordingly, allowing the learning component to generalize to new students.
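As a toy stand-in for that learned component, the sketch below fits a simple linear model from logged (difficulty, time) pairs and uses it to predict solve time for a new problem. The features, the linear form, and the sample data are all assumptions for illustration; AnimalWatch's actual model and training data were considerably richer.

```python
# Illustrative sketch: predict how long a student will need on a problem
# from its difficulty, using ordinary least squares in pure Python.
# The logged data below is invented.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical log: (problem difficulty, seconds the student needed).
history = [(1, 20.0), (2, 35.0), (3, 50.0), (4, 65.0)]
a, b = fit_line([d for d, _ in history], [t for _, t in history])

def predicted_seconds(difficulty: int) -> float:
    """Predicted solve time for a problem of the given difficulty."""
    return a * difficulty + b
```

A prediction like this can then drive the tutor's response, for example by choosing an easier problem, or offering a hint earlier, when the predicted time is far above the student's norm.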

The problems presented by the tutor were based on a topic network that included concepts such as "subtract fractions" or "multiply whole numbers," resolved into subtopics such as "find least common denominator" and "subtract numerators." Student retention and acquisition of subtasks were tracked using the average time required to solve each problem, together with snapshots of student performance.
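A minimal representation of such a topic network is sketched below, using the topic and subtopic names from the text; the dictionary structure itself is an assumption about how the network might be encoded.

```python
# Illustrative sketch: topics resolve into the subtopic skills that a
# problem on that topic exercises. Topic names follow the text; the
# encoding is hypothetical.

TOPIC_NETWORK = {
    "subtract fractions": ["find least common denominator",
                           "subtract numerators"],
    "multiply whole numbers": [],
}

def subtopics_of(topic: str) -> list[str]:
    """Return the subtopic skills exercised by a problem on `topic`."""
    return TOPIC_NETWORK.get(topic, [])
```

Tracking per-subtopic statistics (such as average solve time) against this structure is what lets the student model localize a difficulty to, say, finding the least common denominator rather than to fractions in general.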

A deep formative evaluation of AnimalWatch was conducted by analyzing tens of thousands of interactions made by 350 students. The two evaluation goals were to determine how the students responded to the system and how the system adapted to the student. The features assessed were student learning, the accuracy of the student model, and the appropriateness of the teaching model's problem-selection behaviour. The results showed that the impact on students' mathematics attitudes after using AnimalWatch was highly positive, and students' self-confidence and liking of mathematics also improved significantly (Arroyo et al., 2003c). It was also observed that AnimalWatch tended to select problems that were 'too easy'. One possible reason is that the model could not accurately estimate students' knowledge: it initially assumes the student knows very little, and it takes a while for the system to catch up with the student's actual knowledge level. In addition, the system lacked 'ideal' problems; when it could not find a problem with the level of difficulty it was looking for, it relaxed its constraints until it found a reasonable one. Initializing the student model to the average proficiency level of the class could improve the model's accuracy.
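The constraint-relaxation behaviour analysed above can be sketched as follows. The problem pool, the difficulty scale, and the tolerance-widening loop are illustrative assumptions, but the sketch shows how relaxation can drift toward problems easier than intended.

```python
# Illustrative sketch: look for a problem at the target difficulty and
# widen the tolerance step by step when no 'ideal' problem exists in the
# pool. Pool contents and tolerance scheme are hypothetical.

def select_problem(pool, target_difficulty, max_relax=3):
    """Return the pool problem closest to the target difficulty,
    relaxing the match tolerance until some candidate qualifies."""
    for tolerance in range(max_relax + 1):
        candidates = [p for p in pool
                      if abs(p["difficulty"] - target_difficulty) <= tolerance]
        if candidates:
            return min(candidates,
                       key=lambda p: abs(p["difficulty"] - target_difficulty))
    return None  # nothing within max_relax of the target

pool = [{"id": 1, "difficulty": 2}, {"id": 2, "difficulty": 3}]
```

With this pool, a request for difficulty 5 relaxes down to the difficulty-3 problem: exactly the 'too easy' failure mode the evaluation reported when the pool lacked problems at the student's level.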

The fourth feature of intelligent tutors is mixed-initiative, or the ability for either student or tutor to take control of an interaction. Mixed-initiative is only partially available in current tutors, as most are mentor-driven: they set the agenda, ask questions, and determine the path students will take through the domain. True mixed-initiative supports students in asking novel questions and setting the agenda, and typically requires understanding and generating natural language answers (see Sections 5.5 and 5.6). Some tutors pose problems, and students have limited control over which steps to take.