Your Teachers Are Drowning in Paperwork While You Debate AI Ethics: The Case for Computer Use Agents in Education
A third of American teachers considered quitting last year. Not because of the kids. Not because of the pay, though that's terrible too. Because of the paperwork. According to a 2025 Learnosity survey, teachers spend an average of 9.9 hours every single week on grading alone. That's a full extra workday, every week, just marking papers. On top of that, 84% say they don't have enough time during regular work hours to handle grading, lesson planning, and admin tasks. These aren't lazy people. They're professionals being crushed under busywork that a computer could handle, and has been able to handle for years. So why is it still happening? Because the edtech industry keeps selling dashboards and portals instead of actual automation. Because 'AI in education' has meant chatbots and quiz generators, not tools that genuinely take work off a teacher's plate. That's the gap. And it's embarrassing.
The Numbers Are Ugly and Nobody Wants to Say It Out Loud
Let's be blunt about what's happening. Teachers average 57 hours of work per week according to EdWeek research, and less than half of that time is spent actually teaching. The rest goes to grading, lesson planning, emails, data entry, compliance paperwork, and a hundred other tasks that have nothing to do with standing in front of students. Sixty-two percent of teachers reported frequent job-related stress in 2025, nearly double the rate of comparable working adults. Seventy-eight percent cite excessive workload as the reason they'd consider leaving the profession. And one in three did seriously consider leaving last year. This isn't a morale problem. It's an operations problem. You have highly trained, expensive, irreplaceable humans spending enormous chunks of their professional lives on tasks that are fundamentally mechanical. Formatting rubrics. Inputting grades into portals. Generating differentiated worksheets. Sending templated parent emails. Compiling attendance reports. None of that requires a teaching degree. All of it can be automated. The fact that it isn't automated in 2025 is a genuine institutional failure.
South Korea Spent $850 Million on AI in Schools and Got Nothing
Here's the cautionary tale that every edtech CEO should be forced to read before their next funding pitch. South Korea launched a national AI-powered textbook program with enormous fanfare. The government committed hundreds of millions of dollars. Publishers spent $567 million developing the content. The rollout was supposed to personalize learning at scale, reduce teacher workload, and modernize classrooms across the country. It collapsed. The program failed so badly that publishers are now suing the government for damages. The opposition turned it into a political crisis. The total bill to taxpayers: $850 million, for nothing. Why did it fail? Because the people building it focused on the content layer (AI-generated text and adaptive quizzes) while ignoring the operational layer entirely. Teachers still had to manually integrate everything. Administrators still had to babysit the system. The actual workflow didn't change. You can pour a billion dollars into AI content and still leave teachers drowning in the same busywork if you haven't touched the underlying processes. That's the lesson South Korea paid $850 million to learn.
Teachers spend 9.9 hours per week on grading alone. That's 50+ full workdays per year, per teacher, on a single task that AI can handle right now. Every year you wait is another year that doesn't happen.
Duolingo Went 'AI-First' and Torched Its Own Product
While we're doing the autopsy on bad AI-in-education decisions, let's talk about Duolingo. In April 2025, CEO Luis von Ahn announced the company was going 'AI-first' and effectively replaced its human translators and cultural experts with AI-generated content. The backlash was immediate and brutal. Users noticed the quality drop almost instantly. Long-time learners started reporting courses that felt hollow, culturally tone-deaf, and riddled with errors that a human expert would have caught in thirty seconds. The CEO had to walk it back publicly in August, admitting the memo 'did not give enough context.' That's corporate speak for 'we broke our product and got caught.' The lesson here isn't that AI is bad for education. It's that swapping humans out entirely, without understanding which tasks actually benefit from automation, is a disaster. The right move is using AI to eliminate the mechanical work so the humans can do more of the work that actually requires them. Duolingo did the opposite. They automated the craft and kept the busywork. That's backwards.
What 'AI in Education' Actually Needs to Mean
- Grading short-answer and essay responses against a rubric, not just multiple choice, with a computer use agent that opens the actual grading tool, reads submissions, and inputs scores directly
- Generating and formatting differentiated lesson plans inside whatever platform a district already uses, not a new app that requires a separate login
- Pulling student performance data from multiple systems, cross-referencing it, and producing a readable summary for a teacher in under two minutes
- Sending personalized parent communication drafts based on individual student progress, ready for a teacher to review and send in one click
- Filling out compliance and administrative forms automatically, the kind of work that eats hours every week and requires zero pedagogical judgment
- Running parallel tasks simultaneously, so while one process handles grading batch one, another is generating next week's quiz materials, and a third is updating the gradebook
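The parallel-execution idea in that last bullet can be sketched in plain Python. Everything below is a toy illustration under stated assumptions: the task functions (`grade_batch`, `generate_quiz`, `update_gradebook`) are hypothetical stand-ins, not any product's API. The point is only that independent chores like rubric grading, quiz generation, and gradebook updates can run concurrently instead of one after another:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task functions standing in for real agent actions.
def grade_batch(submissions):
    """Score each submission against a toy rubric: points per keyword found."""
    rubric = {"thesis": 2, "evidence": 2, "conclusion": 1}
    return {name: sum(pts for kw, pts in rubric.items() if kw in text.lower())
            for name, text in submissions.items()}

def generate_quiz(topic, n_questions):
    """Stub: produce numbered question placeholders for next week's quiz."""
    return [f"{topic} question {i + 1}" for i in range(n_questions)]

def update_gradebook(gradebook, new_scores):
    """Merge freshly graded scores into an existing gradebook dict."""
    merged = dict(gradebook)
    merged.update(new_scores)
    return merged

submissions = {
    "ana": "My thesis is supported by evidence and a conclusion.",
    "ben": "Here is some evidence.",
}

# Run the independent chores in parallel, as the last bullet describes.
with ThreadPoolExecutor(max_workers=3) as pool:
    scores_future = pool.submit(grade_batch, submissions)
    quiz_future = pool.submit(generate_quiz, "fractions", 5)
    scores = scores_future.result()  # grading finishes before the gradebook update
    gradebook_future = pool.submit(update_gradebook, {"ana": 3}, scores)

print(scores)                     # ana hits all three rubric keywords; ben hits one
print(len(quiz_future.result()))  # 5
```

A real agent would replace these stubs with actions against live software (opening the LMS, reading submissions, typing scores), but the orchestration pattern, independent tasks dispatched to workers and joined only where one depends on another, is the same.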
Why a Real Computer Use Agent Is the Only Tool That Actually Works Here
Most AI tools built for education are still just chat interfaces with a school logo slapped on them. You type a prompt, you get text back, you copy-paste it somewhere. That's not automation. That's a slightly faster version of doing it yourself. A real computer use agent is fundamentally different. It doesn't just generate content. It operates the actual software (the gradebook, the LMS, the spreadsheet, the email client) exactly the way a human would, but without the fatigue, the distraction, or the 9.9 hours of grading every week. That's what Coasty does. It's a computer-using AI that scores 82% on the OSWorld benchmark, the gold standard for measuring whether an AI can actually do real computer tasks rather than just answer questions about them. No other agent is close to that number. Coasty controls real desktops, real browsers, and real terminals. It can run as agent swarms for parallel execution, meaning multiple tasks happen simultaneously instead of sequentially. It's not a chatbot for teachers. It's the thing that actually does the work teachers shouldn't have to do. And there's a free tier, so the barrier to finding out whether it works for your institution is basically zero.
The education system doesn't have an AI problem. It has an implementation problem. Everyone is arguing about whether AI belongs in classrooms while teachers are silently burning out under a mountain of mechanical tasks that should have been automated five years ago. South Korea blew $850 million building AI content while ignoring AI operations. Duolingo automated the wrong things and destroyed user trust. Meanwhile, a third of teachers are thinking about quitting, and the actual solution (a computer use agent that takes the busywork off their plates entirely) exists right now, yet most school administrators have never heard of it. Stop buying dashboards. Stop piloting chatbots. Start automating the actual work. If you're in edtech, school administration, or just someone who thinks teachers deserve better than spending a full workday every week on grading, go look at what a proper computer use agent can do at coasty.ai. The technology isn't the bottleneck anymore. The willingness to use it correctly is.