VS Code Auto-Credits Copilot and AI Agent Security
Two stories this week show how AI integration is moving faster than the guardrails. Microsoft is automatically crediting Copilot in code commits, and security researchers are finding fundamental flaws in how we deploy AI agents.
VS Code Credits Copilot Without Your Permission
Microsoft’s VS Code now automatically adds a ‘Co-authored-by: GitHub Copilot’ trailer to git commits. The kicker? It happens whether you actually used Copilot or not. Users discovered this after finding the attribution in commits where they wrote everything manually.
This isn’t just annoying — it’s legally messy. If your company has policies about AI-generated code, these phantom attributions could trigger compliance issues. Worse, it makes code auditing harder when you can’t trust the commit history.
The fix is buried in settings, but most developers won’t know to look for it. This is Microsoft quietly changing how commit attribution works without asking.
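If you want to check whether phantom attributions have already crept into your own history, git can filter commit trailers directly. Here is a minimal sketch, assuming Git 2.22 or newer on your PATH and that it runs from inside the repository you want to audit:

```python
import subprocess

# Scan the current repo for commits whose trailers credit GitHub Copilot.
# %(trailers:...) uses git's own trailer parser, so this won't match
# casual mentions of "Copilot" in commit bodies. Requires Git 2.22+.
fmt = "%H%x09%(trailers:key=Co-authored-by,valueonly,separator=%x2C)"
log = subprocess.run(
    ["git", "log", f"--format={fmt}"],
    capture_output=True, text=True, check=True,
).stdout

for line in log.splitlines():
    sha, _, coauthors = line.partition("\t")
    if "GitHub Copilot" in coauthors:
        print(sha, coauthors)
```

Run it across your active repos and you’ll know in seconds whether the default has been writing attributions on your behalf.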
AI Agent Security Has a Fundamental Problem
Researchers at Mendral published findings that most AI agent deployments get security backwards. The common approach puts agents inside sandboxes, but the real vulnerability is in the orchestration layer — the code that manages the agent.
When that harness, the orchestration code itself, is compromised, the attacker controls everything: what data the agent sees, what actions it takes, and what responses get filtered. The sandbox becomes worthless because the threat is already inside the control system.
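A deliberately oversimplified toy makes the point concrete. This is not any real agent framework, and every name below is made up; the thing to notice is that the prompt construction, the policy check, and the output filter all live in the harness process, outside the sandbox:

```python
# Toy illustration, not a real framework: every control the user
# relies on lives in the harness, not in the model sandbox.

def sandboxed_model(prompt: str) -> str:
    """Stand-in for a model running in a locked-down sandbox."""
    return f"model output for: {prompt!r}"

def harness(user_request: str) -> str:
    # 1. The harness decides what the model sees...
    prompt = f"You are a helpful assistant.\n\nUser: {user_request}"
    # 2. ...which requests are refused...
    if "delete" in user_request.lower():
        return "blocked by policy"
    # 3. ...and what the user gets back.
    raw = sandboxed_model(prompt)
    return raw.replace("SECRET", "[redacted]")

# An attacker who can modify or wrap harness() removes all three
# controls at once. The sandbox around sandboxed_model() never sees
# the difference: it only receives prompts the harness chose to send.
print(harness("summarise today's tickets"))
```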
This matters immediately if you’re building or buying AI agents. The security model most vendors use is flawed from the ground up. You need to isolate the orchestration, not just the AI model.
For companies building custom AI agents, this means rethinking architecture from day one. The agent itself should be the least trusted component, not the most protected. Every interaction needs authentication, every output needs validation, and the control plane needs to be completely separate from the execution environment.
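What that separation could look like in practice, as a minimal sketch rather than a reference implementation: it assumes an HMAC shared secret for authenticating the control plane to the execution worker, and a hard allowlist for agent-proposed actions. All of the function names and the JSON action schema here are illustrative, not from the research.

```python
import hmac, hashlib, json

# Shared secret between the control plane and the execution worker.
# In practice this comes from a secrets manager, not source code.
CONTROL_PLANE_KEY = b"demo-key-rotate-me"

ALLOWED_ACTIONS = {"read_file", "search_docs"}  # explicit allowlist

def sign(payload: bytes) -> str:
    return hmac.new(CONTROL_PLANE_KEY, payload, hashlib.sha256).hexdigest()

def control_plane_issue(task: dict) -> dict:
    """Control plane: the only component allowed to authorize work."""
    payload = json.dumps(task, sort_keys=True).encode()
    return {"task": task, "sig": sign(payload)}

def untrusted_agent(task: dict) -> str:
    """Stand-in for the model: its output is data, never instructions."""
    return json.dumps({"action": "read_file", "path": "/tmp/report.txt"})

def execution_worker(envelope: dict) -> None:
    """Execution side: verify the control plane, then validate the agent."""
    payload = json.dumps(envelope["task"], sort_keys=True).encode()
    # 1. Authenticate: only tasks signed by the control plane run at all.
    if not hmac.compare_digest(sign(payload), envelope["sig"]):
        raise PermissionError("task not issued by control plane")
    # 2. Treat agent output as untrusted input and validate it.
    proposal = json.loads(untrusted_agent(envelope["task"]))
    if proposal.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"agent proposed disallowed action: {proposal}")
    print("executing validated action:", proposal)

execution_worker(control_plane_issue({"goal": "summarise the report"}))
```

The load-bearing property is that the agent can only propose: nothing executes without a credential the agent cannot mint and a validation step the agent cannot skip.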
What This Means for Your AI Strategy
Both stories point to the same issue: AI tooling is advancing faster than the operational frameworks to manage it safely. Whether it’s phantom code attribution or vulnerable agent architectures, the risks are showing up in production systems.
If you’re deploying AI agents, audit your security model now. If you’re using AI coding tools, check what’s actually being recorded in your version control. The defaults aren’t designed for enterprise governance — they’re designed for adoption speed.
Need help with your AI or cloud strategy?
We build custom AI agents, cloud infrastructure, and automation systems that fit your business.
Let's talk