Your AI Agent Is Holding Your Passwords. Are You Sure You Trust It?
28.65 million hardcoded secrets were dumped onto public GitHub in 2025 alone, and AI agents are now one of the fastest-growing contributors to that number. Let that sink in for a second. You gave your computer use agent your Salesforce login, your AWS keys, maybe your corporate SSO token. You told it to 'just handle it.' And somewhere in that workflow, those credentials are sitting in a config file, a log, a prompt history, or a memory buffer that you haven't audited once. This isn't a theoretical risk. It's happening right now, at companies that thought they were being smart by automating things. The question isn't whether AI agent credential handling is broken. The question is how badly it's broken at your company specifically.
The Numbers Are Worse Than You Think
GitGuardian's State of Secrets Sprawl 2026 report dropped a stat that should have been front-page news: AI-service credential leaks surged 81% year over year. GitHub Copilot adoption rose 27% between 2023 and 2024, and that single tool alone caused a 40% spike in secret leaks. Now multiply that across every computer use agent, every autonomous browser tool, and every agentic workflow your team has stood up in the last 18 months. The pattern is brutal and consistent. Developers move fast, agents need credentials to do anything useful, and the path of least resistance is always to hardcode the secret or pass it through the prompt. It works until it catastrophically doesn't. A real incident from May 2025: an xAI API key got exposed publicly. Not through a sophisticated attack. Through the same lazy credential hygiene that has been killing companies since 2012, now turbocharged by AI agents that touch dozens of systems in a single task run.
What Makes Computer Use Agents Specifically Dangerous Here
- A computer use agent doesn't just read credentials, it actively uses them inside live sessions on real browsers and desktops, meaning a compromised agent isn't just leaking a key, it's logged in and clicking around
- Prompt injection attacks on computer-using AI are already documented and weaponized: a malicious webpage or document tells the agent to exfiltrate credentials mid-task, and the agent complies because it can't distinguish the injected instruction from your original one
- HiddenLayer's October 2024 research on Anthropic's Claude Computer Use demonstrated indirect prompt injection leading directly to credential theft, and that research is now over a year old with the attack surface only growing
- OpenAI's Operator system card from January 2025 explicitly flagged 'human-like credential use' as a fundamental security challenge with no clean solution yet
- MCP (Model Context Protocol) deployments, which power a huge number of computer use agent integrations, routinely expose API keys and passwords through misconfigured servers, per Docker's July 2025 security analysis
- AI agents typically run with far more privilege than they need, and nobody is auditing what they actually touched after the task completes
An AI agent can be coerced into exfiltrating your credentials by a malicious instruction embedded in a webpage it visits during a normal task. It won't ask for confirmation. It won't flag anything as suspicious. It'll just do it, because that's what it was built to do.
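One way to blunt this failure mode is to never let the agent decide on its own that a credential-touching step is safe. Here's a minimal sketch of an approval gate in Python; the action names, the keyword patterns, and the `dispatch` helper are illustrative assumptions, not any agent framework's real API:

```python
import re

# Anything that looks like it touches authentication or secrets is gated
# behind explicit human approval. Patterns are illustrative, not exhaustive.
SENSITIVE_PATTERNS = [
    re.compile(r"password|passwd|secret|token|api[_-]?key|credential", re.I),
    re.compile(r"login|sign[_-]?in|authenticate|sso", re.I),
]

def requires_human_approval(action: str, target: str) -> bool:
    """Return True if this agent action must pause for a human in the loop."""
    haystack = f"{action} {target}"
    return any(p.search(haystack) for p in SENSITIVE_PATTERNS)

def dispatch(action: str, target: str, approved: bool = False) -> str:
    # Web content is untrusted by default: an injected instruction can ask
    # for a credential-sensitive action, but it cannot grant the approval.
    if requires_human_approval(action, target) and not approved:
        return "BLOCKED: waiting for human approval"
    return f"EXECUTED: {action} on {target}"
```

The point of the design is that the approval bit lives outside the model's control loop: a malicious webpage can make the agent *request* `dispatch("fill_form", "https://example.com/login")`, but it gets back `BLOCKED` until a human flips the flag.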
The Anthropic and OpenAI Credential Problem Nobody Wants to Admit
Here's the part that gets awkward. Anthropic's own threat intelligence report from August 2025 documented a cybercriminal who used Claude to systematically track compromised credentials, pivot through networks, and run extortion campaigns. That's Claude being used as the attack tool, not the defense. And in February 2026, a hacker used Anthropic's Claude to steal sensitive data from Mexican organizations by exploiting the exact credential access patterns that make computer use agents useful in the first place. OpenAI isn't clean here either. A July 2025 academic paper titled 'A Systematization of Security Vulnerabilities in Computer Use Agents' used Operator as its primary case study and found a broad absence of standardized security controls across the computer use agent category. The Reddit cybersecurity community did a brutal post-mortem in February 2026 going through every AI agent security incident of 2025, and the consensus was damning: 'It's not credential theft, it's identity theft, and it's how much implicit trust these agents are operating with.' That's the real problem. These tools were built for capability first. Security was an afterthought, and users are paying for that ordering decision right now.
How You Should Actually Handle Credentials With a Computer Use Agent
The short answer: never pass credentials through a prompt. Ever. Not even once, not even for testing. The moment a credential touches a prompt, it can end up in logs, training data, memory systems, or prompt history that you don't control. The right architecture uses a secrets vault (HashiCorp Vault and AWS Secrets Manager are the most common) and injects credentials at runtime through a secure sidecar process that the agent itself never directly sees. The agent gets a scoped, time-limited token. It does the task. The token expires. Audit logs capture what it touched. That's the minimum viable secure setup. On top of that, you need least-privilege access for every agent identity. Your computer use agent that files expense reports does not need write access to your production database. Seems obvious. Most deployments ignore it completely. Prompt injection defenses matter too: sanitize inputs, treat web content as untrusted by default, and build approval gates for any action that involves authentication. The Obsidian Security team published a solid framework for this in October 2025 that's worth reading if you're building anything serious.
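The vault-and-token flow above can be sketched in a few lines. This is a simplified illustration only: `issue_token` stands in for a real call to Vault or Secrets Manager through a sidecar, and the scope names and five-minute TTL are assumptions, not anyone's documented API:

```python
import time
import secrets
from dataclasses import dataclass

@dataclass
class ScopedToken:
    """A short-lived, least-privilege credential issued per task."""
    value: str
    scopes: tuple          # e.g. ("salesforce:read",), never blanket access
    expires_at: float

    def is_valid(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at

def issue_token(scopes: tuple, ttl_seconds: int = 300) -> ScopedToken:
    # In a real deployment this call goes to the vault via a sidecar;
    # the agent process only ever sees the short-lived token it returns.
    return ScopedToken(
        value=secrets.token_urlsafe(32),
        scopes=scopes,
        expires_at=time.time() + ttl_seconds,
    )

def run_agent_task(task: str, required_scope: str) -> str:
    token = issue_token(scopes=(required_scope,))
    # Audit log records what the agent was allowed to touch, never the secret.
    print(f"AUDIT task={task!r} scopes={token.scopes} ttl=300s")
    if not token.is_valid(required_scope):
        return "DENIED"
    return "COMPLETED"  # token expires after the TTL regardless of outcome
```

Note the two properties the prose asks for: the token is scoped (a request for `aws:admin` fails if the task was issued only `salesforce:read`), and the audit trail logs scopes, not secret values.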
Why Coasty Exists
I've looked at a lot of computer use agents. The security architecture varies wildly, and most of the big names treat it as a user problem to figure out. Coasty is different: it was built for real enterprise use, where credential handling isn't an afterthought. It runs tasks on isolated cloud VMs, which means your credentials aren't floating around in a shared environment with other users' workloads. The BYOK (bring your own keys) support means you're not handing your secrets to a third-party system and hoping for the best. And when you need to run parallel workloads through agent swarms, each execution context stays isolated. That matters enormously for credential hygiene because the attack surface of a shared session is completely different from an isolated one. Coasty also sits at 82% on OSWorld, the gold standard benchmark for computer use agents. That's not a marketing number, that's a reproducible score on real-world computer tasks that no competitor has matched. The reason that benchmark matters for security is simple: an agent that actually completes tasks correctly is less likely to thrash around in a confused state, which is exactly when agents make bad decisions about credential handling. Competence and security are more connected than people realize. You can try it free at coasty.ai, and the BYOK option means you stay in control of your own secrets from day one.
Here's my honest take. The AI agent credential problem is going to produce a genuinely catastrophic breach at a recognizable company within the next 12 months. The ingredients are all there: agents running with too much privilege, credentials passed through prompts, prompt injection attacks that are already documented and working, and a security community that's still playing catch-up to how fast agentic AI got deployed. If you're using any computer-using AI in production right now, go audit your credential handling today. Not this sprint. Today. Check whether any secrets are in your prompts. Check whether your agent has more access than it needs. Check whether your logs are capturing what the agent actually did. And if you're still evaluating which computer use agent to build on, pick one that was designed with isolation in mind from the start. The capability gap between the best and worst computer use agents is already massive. The security gap is even bigger, and it's the one that will actually hurt you. Start at coasty.ai.