Just two years ago, students in China were told to avoid using AI for their assignments. At the time, getting around a national block on ChatGPT meant buying a mirror-site version from a secondhand marketplace. Its use was common, but it was at best tolerated and more often frowned upon. Now, professors no longer warn students against using AI. Instead, students are encouraged to use it, as long as they follow best practices.
Like their counterparts in the West, Chinese universities are going through a quiet revolution. The use of generative AI on campus has become nearly universal. But there's a crucial difference: while many educators in the West see AI as a threat to be managed, Chinese classrooms increasingly treat it as a skill to be mastered. Read the full story.
—Caiwei Chen
If you’re interested in reading more about how AI is affecting education, check out:
+ Here’s how ed-tech companies are pitching AI to teachers.
+ AI giants like OpenAI and Anthropic say their technologies can help students learn—not just cheat. But real-world use suggests otherwise. Read the full story.
+ The narrative around cheating students doesn’t tell the whole story. Meet the teachers who think generative AI could actually make learning better. Read the full story.
+ This AI system makes human tutors better at teaching children math. Called Tutor CoPilot, it demonstrates how AI could enhance, rather than replace, educators’ work. Read the full story.
Why it’s so hard to make welfare AI fair
There are plenty of stories about AI causing harm when deployed in sensitive situations, and in many of those cases the systems were developed without much concern for what it meant to be fair or how to implement fairness.
The city of Amsterdam, however, spent a lot of time and money trying to create ethical AI; in fact, it followed every recommendation in the responsible-AI playbook. Yet when the system was deployed in the real world, it still couldn't remove biases. So why did Amsterdam fail? And more important: Can this ever be done right?
Join our editor Amanda Silverman, investigative reporter Eileen Guo, and Gabriel Geiger, an investigative reporter from Lighthouse Reports, for a subscriber-only Roundtables conversation at 1pm ET on Wednesday, July 30, to explore whether algorithms can ever be fair. Register here!