Computer Science Students Fooled By Artificially Intelligent TA
George Dvorsky
2016-05-20 00:00:00
URL

Now, being “certain” is not the same as being correct. But computer science professor Ashok Goel felt it was a sufficient level of confidence to allow the virtual TA, named Jill Watson, to answer student inquiries on her own. For nearly the entire month of April, Jill was responding directly to the class through an online student forum, but only when she was 97 percent sure her answers were correct.
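The article doesn’t say how that 97 percent gate was implemented internally, but the general idea is simple enough to sketch. The snippet below is a purely illustrative Python toy, not Jill Watson’s actual code: answer_question() is a hypothetical stand-in for whatever produces a candidate answer and a confidence score, and only the 97 percent cutoff itself comes from the article.

```python
# Purely illustrative confidence gate (not Jill Watson's actual code).
# answer_question() is a hypothetical placeholder that returns a candidate
# answer plus a confidence score; only the 97 percent cutoff is from the article.

CONFIDENCE_THRESHOLD = 0.97

def answer_question(question):
    """Hypothetical stand-in: look up a canned answer with a made-up confidence."""
    canned = {
        "when is assignment 1 due?": ("Assignment 1 is due Sunday at 11:59 pm.", 0.99),
    }
    return canned.get(question.strip().lower(), ("I'm not sure.", 0.10))

def handle_question(question):
    answer, confidence = answer_question(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"[auto-posted] {answer}"   # confident enough to answer on her own
    return "[routed to a human TA]"        # below the cutoff, a person steps in

print(handle_question("When is Assignment 1 due?"))
print(handle_question("Can we organize a meet-up for the video lessons?"))
```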

The students weren’t told that they were interacting with a virtual TA until April 26. According to a Georgia Tech release, the student response was “uniformly positive.” One student said her “mind was blown” when the truth came out, while another jokingly asked if Jill could “come out and play.”

The course, titled Knowledge-Based Artificial Intelligence (KBAI), is a core requirement of Georgia Tech’s online Master of Science in Computer Science program. Around 300 students take the course each year, and they post roughly 10,000 messages in the online forums, more than the course’s eight human TAs can handle. To offset this workload, Goel and his graduate students built the virtual teaching assistant on IBM’s Watson platform (yes, the same Watson that defeated the world’s greatest Jeopardy players back in 2011).

To develop the system, Goel and his team collected roughly 40,000 questions that had been asked in the class forums since the course was launched back in 2014. They then fed Jill all these questions and answers.

“One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn’t really go up,” noted Goel. “Students tend to ask the same questions over and over again.”

This scenario is perfect for the Watson platform, a powerful natural language processor that specializes in answering questions with distinct, clear solutions.
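Watson’s internals are proprietary, so the following is only a rough stand-in for the idea the article describes: match a new question against previously answered ones and reuse the stored answer when the match is strong enough, otherwise leave it for a human. The bag-of-words similarity, the toy Q&A pairs, and the 0.6 threshold are all assumptions made for illustration, not what Goel’s team actually built.

```python
# Loose illustration of FAQ-style matching against past forum Q&A pairs.
# The similarity measure and sample data below are assumptions for clarity;
# the real system was built on IBM's Watson platform, not this code.
from collections import Counter
from math import sqrt

past_qa = [  # tiny hypothetical sample of historical forum Q&A pairs
    ("Where can I find the lecture videos?", "They are linked from the course homepage."),
    ("Is the final project done in teams?", "Yes, in teams of two or three."),
]

def bag(text):
    return Counter(text.lower().split())

def cosine(a, b):
    overlap = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0

def best_answer(new_question, threshold=0.6):
    scored = [(cosine(bag(new_question), bag(q)), answer) for q, answer in past_qa]
    score, answer = max(scored)
    return answer if score >= threshold else None  # None: leave it for a human TA

print(best_answer("Where are the lecture videos?"))   # strong match, answer reused
print(best_answer("Can Jill come out and play?"))     # weak match, prints None
```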

When the project got started back in January, Jill wasn’t very good at answering questions and would often give strange or irrelevant answers. These early responses weren’t shared with the KBAI students.

“Initially her answers weren’t good enough because she would get stuck on keywords,” said Lalith Polepeddi, one of the grad students working on the project. “For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons—same keywords—but different context. So we learned from mistakes like this one, and gradually made Jill smarter.”

Eventually, Jill got so good at these tasks that the researchers were ready to post her answers directly to the KBAI student forum, which they started to do in late March.

Goel would like to run the project again next semester, but with a different name for the virtual TA. The goal is to have the virtual TA answer 40 percent of all questions by the end of the year (just to reiterate, no responses below the 97 percent certainty threshold were posted to the student forums).

Naturally, it’s upsetting to hear that bots could replace yet another job. But this might actually turn out to be a good thing, as many students drop out of classes because they don’t receive enough teaching support. Jill was developed with this exact need in mind. Of course, Jill, or her immediate digital descendants, won’t be able to answer more complicated or nuanced questions, particularly those posed in other courses (such as philosophy or the social sciences). So it’ll be a long time (if ever) before we see teaching assistants—or professors for that matter—completely replaced by artificial intelligence. But as this research clearly shows, the writing is most certainly on the wall.

[Georgia Tech]