Your AI Agent Is Holding Your Passwords. Here's Why That Should Terrify You.
In August 2025, hackers compromised Salesloft's Drift AI chat agent and walked out with OAuth tokens, AWS keys, Snowflake credentials, and raw passwords from dozens of companies, including Cloudflare. The attack window ran for ten straight days before anyone noticed. That's not a security failure. That's a fire alarm going off while everyone sleeps right through it. And here's the thing nobody wants to say out loud: this exact scenario is playing out in slow motion across every team that has handed credentials to an AI agent without thinking hard about what that actually means. We're in a gold rush moment for computer use agents. Everyone's building them, deploying them, and giving them keys to the kingdom. But the conversation about how these agents store, access, and protect credentials? Almost nonexistent. That silence is going to cost us.
The Breach That Should Have Changed Everything (But Probably Didn't)
Let's talk about what actually happened with Salesloft and Drift, because the technical details are genuinely alarming. Threat actor UNC6395 didn't break through a firewall or exploit some zero-day vulnerability. They targeted the OAuth integration between Drift's AI agent and Salesforce. Once inside, they harvested refresh tokens, which don't expire quickly, and used them to move laterally across customer environments. AWS keys. Snowflake credentials. Passwords in plain text. The whole buffet. Cloudflare confirmed they were affected. Google's Threat Intelligence Group tracked the campaign. The attack ran from August 8 to August 18 before tokens were finally revoked on August 20. Ten days. Think about how much data moves through a sales automation platform in ten days. This wasn't some obscure edge case. Drift is one of the most widely deployed AI chat agents in B2B software. And the vulnerability wasn't in some exotic corner of the codebase. It was in the credential handling of the agent itself, specifically in how OAuth tokens were stored and how much access they silently carried. That's the pattern that should scare you.
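To make the mechanism concrete: under standard OAuth2 (RFC 6749), anyone holding a refresh token can mint fresh access tokens from the provider's token endpoint until that refresh token is explicitly revoked. Here's a minimal sketch of that refresh grant; the endpoint URL and client ID are hypothetical placeholders, not the actual Drift/Salesforce integration details, which were never published in this form.

```python
import requests

# Hypothetical values for illustration only.
TOKEN_ENDPOINT = "https://login.example.com/oauth2/token"
CLIENT_ID = "drift-integration"

def mint_access_token(stolen_refresh_token: str) -> str:
    """Standard OAuth2 refresh grant: a refresh token in hand yields
    a brand-new, fully valid access token. No password, no MFA, no
    alert. This keeps working until the refresh token is revoked.
    (Some providers also require a client_secret in this request.)"""
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "refresh_token",
        "refresh_token": stolen_refresh_token,
        "client_id": CLIENT_ID,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# An attacker can call this in a loop for ten days straight, and each
# resulting access token is indistinguishable from legitimate traffic.
```

That loop is essentially the whole attack. The fix isn't smarter traffic inspection; it's tokens that expire fast and get revoked the moment a session ends.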
The Numbers Are Worse Than You Think
- IBM's 2025 Cost of a Data Breach Report found that 13% of organizations reported breaches of AI models or applications, and 97% of those organizations lacked proper AI access controls. Ninety-seven percent.
- GitGuardian's State of Secrets Sprawl 2025 found a 25% year-over-year increase in leaked credentials on GitHub, with hardcoded passwords appearing 3x more often in private repos than public ones.
- GitHub Copilot adoption rose 27% between 2023 and 2024, and secrets leaks spiked 40% in the same period. Correlation isn't always causation, but come on.
- The 2026 GitGuardian report (yes, already) shows AI-service credential leaks surged 81% in a single year, and nearly 70% of credentials exposed in 2025 were still valid and exploitable.
- Prompt injection, the attack where malicious content in a webpage or document hijacks what an AI agent does next, is now the most commonly reported AI exploit vector, with 89.3% of incidents targeting credential theft according to Vectra AI.
- The average cost of a data breach in 2025 hit a new record high according to IBM, and breaches involving AI systems are climbing as a share of that total.
- A Reddit thread from February 2026 compiling every documented AI agent security incident from 2025 had one recurring theme: agents operating with way too much implicit trust and zero credential hygiene.
97% of organizations that suffered AI application breaches in 2025 lacked proper AI access controls. These weren't startups running on vibes. These were enterprises. And they handed their computer use agents the keys without building the locks first.
Why Computer Use Agents Make This Problem Exponentially Harder
Here's where it gets specific. A basic API-calling bot is one thing. A computer use agent is something else entirely. When an AI agent can see your screen, control your browser, type into forms, and navigate desktop applications, it's not just reading credentials. It's actively using them in real time, against real systems, in ways that look completely legitimate to every security tool watching the traffic. That's what makes computer use security so uniquely tricky. Traditional DLP tools flag weird API calls. They don't flag an agent that's logged into your CRM and is quietly exfiltrating data through the normal UI because it got prompt-injected by a malicious email it was asked to process. The attack surface isn't the code anymore. It's the agent's behavior. And most computer-using AI deployments right now are operating with what security researchers are calling 'implicit trust,' meaning the agent gets credentials once and holds them indefinitely, with no rotation, no scope limitation, and no audit trail of what it actually did with them. Claude Code's default behavior of reading .env files set off a genuine firestorm on Reddit in June 2025 for exactly this reason. Developers were handing their AI assistant database passwords, API keys, and private tokens without realizing those files were being processed and sent upstream. The outrage was justified. The practice is still widespread.
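If you're wiring up an agent's file-access tooling yourself, the cheapest mitigation for the .env problem is a deny-list that refuses to surface secret-bearing files at all, rather than trusting the model not to look. A minimal sketch, assuming a hypothetical `safe_read` wrapper around your agent's file-read tool; the patterns are illustrative, not exhaustive:

```python
import fnmatch
from pathlib import Path

# Files an agent should never read, let alone send upstream.
# Illustrative patterns only; extend for your own stack.
DENYLIST = [".env", ".env.*", "*.pem", "*.key", "id_rsa*",
            "credentials*", "*.tfstate", ".npmrc", ".netrc"]

def safe_read(path: str) -> str:
    """Wrapper for the agent's file-read tool: refuse secret-bearing
    files outright instead of relying on the model's judgment.
    Fail closed, and fail loudly so the denial shows up in logs."""
    name = Path(path).name
    if any(fnmatch.fnmatch(name, pat) for pat in DENYLIST):
        raise PermissionError(f"refusing to read secret file: {path}")
    return Path(path).read_text()
```

Ten lines of filtering won't stop a determined attacker, but it would have prevented the exact failure mode that blew up on Reddit: secrets leaving the machine because nobody told the agent they were off-limits.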
What Good Credential Handling Actually Looks Like
The good news is this isn't unsolvable. The bad news is most teams aren't doing any of it. Proper credential handling for a computer use agent means a few non-negotiable things (there's a sketch of what two of these look like in code after this list):
- Least-privilege access. Your agent should have the minimum credentials required for the specific task it's running, not a master key to every system it might theoretically need someday.
- Just-in-time credential injection. Credentials should be passed to the agent at runtime for a specific session and revoked when that session ends. Not stored. Not cached. Not sitting in a config file somewhere.
- Scope-limited OAuth tokens. If your agent needs to read a Google Sheet, it should have a token that reads Google Sheets and nothing else. Not a token that also happens to have Gmail access and Drive write permissions because someone clicked 'allow all' during setup.
- Audit logging at the action level. Not just 'agent ran at 2pm.' Every click, every form fill, every credential use should be logged and reviewable.
- Prompt injection defenses. Your agent needs to be hardened against instructions embedded in the content it's processing, because that's how attackers are hijacking computer-using AI right now, not through the front door but through a malicious PDF the agent was asked to summarize.
None of this is exotic. All of it is being ignored at scale.
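Here's roughly what just-in-time injection and action-level audit logging look like when combined. This is a sketch, not a reference implementation: `fetch_scoped_credential` and `revoke_credential` are hypothetical stand-ins for whatever your secrets manager (Vault, AWS STS, or your cloud's equivalent) actually exposes, and the scope string is illustrative.

```python
import logging
import time
from contextlib import contextmanager

audit = logging.getLogger("agent.audit")
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(message)s")

# Hypothetical secrets-manager calls; swap in Vault/STS/etc.
def fetch_scoped_credential(task: str, scopes: list[str]) -> dict:
    return {"token": "short-lived-token", "id": f"cred-{time.time_ns()}"}

def revoke_credential(cred_id: str) -> None:
    pass  # tell the secrets manager to kill the token now

@contextmanager
def session_credential(task: str, scopes: list[str]):
    """Just-in-time injection: the credential exists only for the
    lifetime of this session, scoped to this one task, and is revoked
    on exit whether the task succeeded or crashed."""
    cred = fetch_scoped_credential(task, scopes)
    audit.info("credential issued id=%s task=%s scopes=%s",
               cred["id"], task, scopes)
    try:
        yield cred["token"]
    finally:
        revoke_credential(cred["id"])
        audit.info("credential revoked id=%s task=%s", cred["id"], task)

# Usage: the agent never sees a long-lived key, and every issue/revoke
# pair lands in the audit log alongside the actions taken in between.
with session_credential("read-quarterly-sheet",
                        ["spreadsheets.readonly"]) as token:
    audit.info("action=open_sheet credential_in_use")
    # ... agent performs the single scoped task with `token` ...
```

The specific secrets manager doesn't matter. What matters is the shape: credential lifetime equals session lifetime, scope equals task, and the audit trail writes itself as a side effect instead of being bolted on later.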
Why Coasty Exists
I've spent a lot of time with different computer use agents, and the credential handling question is one of the clearest ways to separate the ones built by people who thought hard about production deployments from the ones that are basically demos with a pricing page. Coasty was built from the ground up for real-world computer use, meaning the kind where you're actually giving an agent access to live systems with real data and real consequences. At 82% on OSWorld, it's the highest-performing computer use agent benchmarked anywhere right now, and that benchmark matters because it measures whether an agent can actually complete complex, multi-step tasks on real software, not toy environments. But raw performance means nothing if the agent is a credential sieve. Coasty's architecture supports isolated cloud VMs for agent execution, which means your agent isn't running in a shared environment where one compromised session bleeds into another. Agent swarms for parallel execution are scoped per task, not per user account, which limits the blast radius if something goes wrong. And bring-your-own-key (BYOK) support means you control the keys, not a third-party service holding them on your behalf. I'm not saying Coasty is perfect and every other computer use agent is a disaster. I'm saying the questions you should be asking any computer use agent vendor, including Coasty, are: where are my credentials stored, what happens to them after a session ends, and what's the audit trail? If a vendor can't answer those questions clearly, that's your answer.
The Salesloft Drift breach happened because an AI agent was trusted with credentials it shouldn't have held, in a scope it shouldn't have had, for longer than any session should last. That's not a one-company problem. That's the default configuration for most agentic AI deployments right now. We're building a generation of computer use agents that are genuinely powerful, genuinely capable of doing real work in real systems, and we're handing them credentials like we're still in 2019 passing API keys around in Slack. The 81% surge in AI-service credential leaks in a single year isn't a warning shot. It's the opening act. If you're building with or deploying a computer use agent in 2026, the credential conversation isn't optional anymore. It's the conversation. Start there. Demand better defaults from your tools. And if you want to see what a computer use agent built for production actually looks like, go to coasty.ai and start with the free tier. The benchmark is 82%. The security conversation is one you can actually have with the team. That combination is rarer than it should be.