Harvard’s new computer science teacher is a chatbot
Students enrolled in the university’s flagship CS50 course will be presented with the AI teacher in September

Harvard University plans to use an AI chatbot similar to ChatGPT as an instructor on its flagship coding course.
Students enrolled on the Computer Science 50: Introduction to Computer Science (CS50) programme will be encouraged to use the artificial intelligence tool when classes begin in September.
The AI teacher will likely be based on OpenAI’s GPT-3.5 or GPT-4 models, according to course instructors.
“Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually,” CS50 professor David Malan told The Harvard Crimson.
“Providing support that’s tailored to students’ specific questions has long been a challenge at scale via edX and OpenCourseWare more generally, with so many students online, so these features will benefit students both on campus and off.”
The AI teaching bot will help students find bugs in their code and offer feedback on their work, Professor Malan said.
Its arrival comes amid a huge surge in the popularity of AI tools, with OpenAI’s ChatGPT becoming the fastest-growing app of all time following its launch in November 2022.
The chatbot reached 100 million active users within two months of being unveiled, with users enticed by its ability to perform a range of tasks, from writing poetry and essays to generating computer code.
Other AI tools that have since launched to compete with ChatGPT include Google’s Bard, which features similar capabilities to its rival.
A recent update allows Bard not only to write code but also to execute it itself, which Google claims lets it reason through problems at a far deeper level than other current generative AI systems.
Accuracy and AI “hallucinations” remain significant issues with such technology, with Google warning that Bard “won’t always be right” despite the upgrade.
Professor Malan said students would be warned about the pitfalls of the AI and told that they should “always think critically” when presented with information.
“But the tools will only get better through feedback from students and teachers alike,” he said. “So they, too, will be very much part of the process.”