Comparison

Automation Anywhere Is Losing to AI Agents and Everyone Can See It Except Automation Anywhere

James Liu · 7 min

Gartner just announced that over 40% of agentic AI projects will be canceled by end of 2027. The RPA vendors are printing that stat on t-shirts because they think it means their old-school bots are safe. It doesn't. It means badly designed automation is getting cut, and scripted RPA bots are the most badly designed automation on the planet. Automation Anywhere has been selling the same core promise since 2003: record what a human does, replay it forever. That worked fine when enterprise software never changed and nobody expected results in days instead of quarters. In 2025, it's a liability disguised as a product. Real computer use AI agents, the kind that actually see a screen and reason about what to do next, are making the entire RPA category look like it belongs in a museum next to fax machines and Lotus Notes.

The Dirty Secret RPA Vendors Don't Put in Their Case Studies

Here's what an Automation Anywhere deployment actually looks like in the wild. You hire a consultant, spend three to six months mapping the process, build a bot that works perfectly on the exact screen layout you had in November, and then your ERP vendor pushes an update in January and the whole thing silently fails at 2am on a Tuesday. Nobody notices until a manager realizes 4,000 invoices haven't been processed. That's not a horror story. That's a Tuesday.

The core problem with RPA is that it's brittle by design. It doesn't understand what it's looking at. It memorizes pixel coordinates and element selectors. Change the font size on a form, rename a dropdown option, or switch to a new browser, and the bot is completely lost. Maintenance costs for enterprise RPA deployments routinely run 30 to 50 percent of the original build cost every single year. You're not buying automation. You're buying a part-time job for an RPA developer whose entire role is duct-taping bots back together after routine software updates.
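
To make the brittleness concrete, here's a minimal sketch of the failure mode. The selectors and UI layouts are hypothetical, not any vendor's actual API, but the pattern is exactly what a recorded bot does: replay a memorized selector against whatever screen happens to be there.

```python
# Hypothetical sketch: a scripted bot hard-codes the exact UI it was
# recorded against. Any rename or redesign breaks it, silently.

# The screen the bot was recorded against in November.
november_ui = {"btn_submit_invoice": "Submit", "dd_vendor": "Vendor"}

# The same screen after the January ERP update: one element renamed.
january_ui = {"btn_send_invoice": "Submit", "dd_vendor": "Vendor"}

def scripted_bot(ui):
    """Replays a memorized selector. No understanding of the screen."""
    if "btn_submit_invoice" not in ui:
        return "FAILED: selector not found"  # the 2am silent failure
    return "clicked " + ui["btn_submit_invoice"]

print(scripted_bot(november_ui))  # works on the layout it memorized
print(scripted_bot(january_ui))   # breaks on a routine rename
```

Nothing in the bot knows that `btn_send_invoice` is the same button. That judgment call is precisely what a scripted replay cannot make.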

What Automation Anywhere Actually Costs You (The Number They Don't Advertise)

  • Enterprise licensing starts at $750,000+ per year for meaningful scale, before professional services, which routinely double that number
  • The average RPA bot requires 3-5 months to build and deploy, meaning your team is doing the work manually the entire time you're paying for 'automation'
  • Bot maintenance eats 30-50% of initial build cost annually, turning a one-time project into a permanent overhead line item
  • Gartner found that 30% of RPA projects fail outright, not counting the ones that technically run but produce wrong outputs nobody catches
  • The average enterprise manages 500+ bots, meaning you need a dedicated Center of Excellence just to track what's broken this week
  • A single Automation Anywhere bot developer earns $90,000 to $130,000 per year, and you need several of them to maintain a real deployment
  • Every time your UI changes, your bot breaks. SaaS apps update constantly. Do the math on how often that is.
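
Here's that math as a back-of-envelope model, using only the figures from the list above. The initial build cost is an assumed placeholder for illustration; the license floor, services multiplier, and maintenance rate come straight from the numbers cited.

```python
# Back-of-envelope RPA cost model using the figures above.
license_per_year = 750_000   # enterprise licensing floor, per year
services_multiplier = 2      # professional services "routinely double" it
build_cost = 400_000         # ASSUMED one-time build cost, for illustration
maintenance_rate = 0.40      # midpoint of the 30-50% annual range

# Year one: licensing doubled by services, plus the initial build.
year_one = license_per_year * services_multiplier + build_cost

# Every year after: licensing plus maintenance on the original build.
annual_recurring = license_per_year + int(build_cost * maintenance_rate)

print(f"Year one: ${year_one:,}")            # $1,900,000
print(f"Every year after: ${annual_recurring:,}")  # $910,000
```

Even with a conservative build number, the recurring line item never goes away, because the maintenance is structural, not incidental.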

"RPA is dead. The enterprise has outgrown what scripted bots can do." That's not a startup founder talking. That's the IBM community blog, December 2025. When IBM says your product category is dead, it's dead.

AI Agents Don't Memorize. They Understand. That's the Whole Difference.

A computer use agent doesn't record a script. It looks at the screen, the same way you do, and figures out what needs to happen. Tell it to log into your procurement portal and pull all invoices over $10,000 from the last quarter, and it does it. Move the button. Change the layout. Update the portal. The agent adapts because it's reasoning about the interface, not reciting a memorized sequence of clicks. This is not a small improvement. This is a fundamentally different category of tool. Automation Anywhere knows this, which is why they've been frantically bolting AI features onto their platform and calling it 'agentic automation.' But wrapping a large language model around a brittle bot framework doesn't make it an AI agent. It makes it a brittle bot that can write its own error messages.

The real computer use AI space is being benchmarked on OSWorld, the industry standard test for how well an AI can actually operate a computer. Anthropic's Claude Sonnet 4.5 scored 61.4% on OSWorld. That sounds okay until you realize it's still failing on nearly 4 out of 10 real-world tasks. This is why the benchmark matters and why the gap between tools is enormous.
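
The architectural difference fits in a few lines. This is a generic observe-reason-act loop with stub functions, not Coasty's or anyone's real API, but it shows why a vision-based agent survives the UI rename that kills a scripted bot: the action is re-derived from what's on screen at every step.

```python
# Generic observe-reason-act loop of a computer use agent (stubs only).
# A real agent would screenshot the OS and call a vision-language model;
# here the "screen" is a dict and the "model" is a keyword match.

def observe(screen):
    """Stand-in for taking a screenshot of the current UI."""
    return list(screen.keys())

def reason(goal, elements):
    """Stand-in for a model deciding the next action from what it sees.
    It matches on meaning, not on a memorized selector."""
    for element in elements:
        if "submit" in element.lower() or "send" in element.lower():
            return element
    return None

def act(screen, element):
    return "clicked " + screen[element]

def agent(goal, screen):
    element = reason(goal, observe(screen))
    return act(screen, element) if element else "FAILED"

# The agent handles both layouts, because it looks before it acts.
old_ui = {"btn_submit_invoice": "Submit"}
new_ui = {"btn_send_invoice": "Submit"}  # the rename that breaks scripted RPA
print(agent("submit the invoice", old_ui))  # clicked Submit
print(agent("submit the invoice", new_ui))  # clicked Submit
```

The keyword match is a toy stand-in for real visual reasoning, but the loop structure is the point: nothing is memorized between runs, so there's nothing to go stale.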

Why Automation Anywhere Is Scrambling and What That Tells You

Look at what's happening in the market right now. There are active merger talks between C3.AI and Automation Anywhere as of early 2026. That's not a growth story. That's two companies trying to find safety in numbers while the ground shifts under them. UiPath, Automation Anywhere's biggest rival, is telling its own partners that 70% of enterprises will pivot to consolidated automation platforms by 2030. They're admitting the old model is going away while simultaneously trying to be the new model. It's like Blockbuster launching a streaming service in 2012. The pivot is real but the head start is gone.

Meanwhile, the Reddit threads where actual RPA developers hang out tell you everything. 'Is RPA dead and where should I pivot?' is a top post in r/rpa with hundreds of responses. These are people who built careers on Automation Anywhere and UiPath, and they're asking whether they need to learn something else. They do. AI agents that can genuinely use a computer, navigating real desktops and browsers through visual understanding rather than scripted selectors, are not a future trend. They're here, and they're faster to deploy, cheaper to maintain, and more capable out of the box.

Why Coasty Exists (And Why 82% on OSWorld Actually Matters)

I'm not going to pretend I don't have a dog in this fight. I think Coasty is the best computer use agent available right now, and I can back that up with a number: 82% on OSWorld. That's not a marketing claim. OSWorld is the hardest independent benchmark for computer-using AI, testing real tasks on real software, and 82% is the highest score any agent has posted. For comparison, Claude Sonnet 4.5 hit 61.4%. OpenAI's computer use efforts are in the same neighborhood. Coasty isn't in that neighborhood. It's in a different zip code.

What that score means practically is that Coasty handles the messy, real-world stuff that breaks every other tool. Ambiguous UI, multi-step workflows across different applications, tasks that require actual judgment about what to do when something unexpected appears on screen. It controls real desktops, real browsers, and real terminals. Not API wrappers, not simulated environments, actual computer use the way a human does it. You get a desktop app, cloud VMs if you need them, and agent swarms for parallel execution when you need to run the same task across 50 accounts simultaneously. There's a free tier so you can actually test it before committing. BYOK is supported so you're not locked into someone else's model choices. Compare that to an Automation Anywhere deployment that takes six months to stand up and breaks on the first software update. The choice isn't even close.

Here's my actual take. Automation Anywhere isn't evil. It solved a real problem for a decade and made a lot of enterprises more efficient than they would have been otherwise. But the world moved on and they didn't move fast enough. Scripted bots were always a workaround for the fact that software didn't have good APIs and AI wasn't good enough to understand screens. Both of those things changed. The companies still paying $750,000 a year to maintain a fleet of fragile bots while their competitors deploy AI agents in days are going to feel that gap very soon if they don't already. Stop maintaining. Start automating for real. If you want to see what a genuine computer use agent looks like at 82% on the hardest benchmark in the industry, go to coasty.ai and run it yourself. The free tier exists for exactly this reason.

Want to see this in action?

View Case Studies
Try Coasty Free