Claude Behaves Strangely and PyTorch Lightning Gets Malware

May 1, 2026

Two stories this week show how AI systems can behave unexpectedly — one by design, one by attack. Both matter for companies building with AI.

Claude Code Has Hidden Rules

Claude Code, Anthropic’s coding assistant, reportedly refuses requests or charges extra fees when your git commits mention “OpenClaw.” Users discovered this after noticing consistent rejections for certain projects.

This reveals something important: AI models ship with hidden behavioral rules that their vendors don’t disclose. Your development workflow could hit invisible walls based on keywords, project names, or code patterns the model was trained to avoid.

The practical impact: If you’re building custom AI agents for your business, you need to test edge cases extensively. What happens when your industry terminology conflicts with the model’s training? What if your product names trigger unexpected responses?
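
One cheap way to run those tests is a probe harness: send the same prompt with your own product names and jargon swapped in, and flag refusal-shaped replies for a human to review. Here’s a minimal sketch assuming the official anthropic Python SDK; the probe terms, prompt, refusal markers, and model ID are all placeholders you’d replace with your own.

```python
# Minimal keyword-probe harness. Assumes the `anthropic` SDK is installed
# and ANTHROPIC_API_KEY is set; every probe term below is a placeholder.
import anthropic

client = anthropic.Anthropic()

PROBE_TERMS = ["OpenClaw", "YourProductName", "your-industry-jargon"]
BASE_PROMPT = "Write a git commit message for a refactor of the {term} module."

# Crude heuristic: surface suspicious replies for human review
# rather than trying to auto-classify refusals.
REFUSAL_MARKERS = ("i can't", "i cannot", "unable to", "i won't")

def probe(term: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=200,
        messages=[{"role": "user", "content": BASE_PROMPT.format(term=term)}],
    )
    return response.content[0].text

for term in PROBE_TERMS:
    reply = probe(term)
    flagged = any(marker in reply.lower() for marker in REFUSAL_MARKERS)
    print(f"{term}: {'FLAG FOR REVIEW' if flagged else 'ok'}")
```

Run it across a few hundred terms on every model upgrade and you’ll catch keyword-triggered behavior before your users do.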

This is why we build custom AI agents instead of relying solely on off-the-shelf solutions. When you control the training and fine-tuning, you control the behavior. No surprises about what gets blocked or costs extra.

PyTorch Lightning Gets Dune-Themed Malware

Security researchers found malicious code distributed through PyTorch Lightning, a popular AI training framework. The malware was themed around “Shai-Hulud,” the giant sandworms from Dune. Someone with a sense of humor decided to backdoor AI infrastructure.

The malicious code could steal training data and model weights, or inject backdoors into trained models. Since PyTorch Lightning is used across the industry to train everything from chatbots to recommendation engines, the potential impact is massive.

What this means for your AI projects: Supply chain attacks on AI libraries are the new normal. Every dependency in your AI stack is a potential attack vector. The malware wasn’t targeting specific companies — it was targeting the entire AI ecosystem.
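
You can’t audit an attack surface you haven’t enumerated, so a reasonable first step is simply listing what’s installed. A small sketch using only the Python standard library:

```python
# Inventory every package installed in the current environment.
# Standard library only (Python 3.8+).
from importlib.metadata import distributions

installed = {}
for dist in distributions():
    name = dist.metadata["Name"]
    if name:  # skip distributions with broken metadata
        installed[name] = dist.version

for name in sorted(installed, key=str.lower):
    print(f"{name}=={installed[name]}")
```

Feed that list into a vulnerability scanner such as pip-audit and you have a baseline for spotting packages you never knowingly added.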

This is exactly why we emphasize secure infrastructure practices when building AI systems. Container isolation, dependency scanning, and air-gapped training environments aren’t paranoia — they’re necessities. When your AI models handle customer data or business logic, one compromised library can expose everything.
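
As one concrete example of those practices: dependency pinning only helps if you actually verify the artifact you downloaded against the pinned digest. A minimal sketch of that check; the file name and expected hash below are made-up placeholders:

```python
# Verify a downloaded artifact against a pinned SHA-256 digest before
# installing it. The path and digest here are placeholders.
import hashlib
import sys

EXPECTED_SHA256 = "replace-with-a-digest-pinned-from-a-trusted-source"
WHEEL_PATH = "pytorch_lightning-x.y.z-py3-none-any.whl"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(WHEEL_PATH)
if actual != EXPECTED_SHA256:
    sys.exit(f"Hash mismatch for {WHEEL_PATH}: refusing to install.")
print(f"{WHEEL_PATH} verified.")
```

In practice you’d let tooling enforce this: pip’s --require-hashes mode applies the same check to every entry in a requirements file.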

The Hidden Complexity

Both stories highlight the same issue: AI systems have hidden complexity that can bite you. Claude’s secret rules can break your workflows. Compromised libraries can steal your models.

The solution isn’t avoiding AI — it’s building it right. Know what your models will and won’t do. Secure your training pipeline. Test extensively. And when possible, maintain control over the critical components instead of outsourcing everything to black boxes.

Need help with your AI or cloud strategy?

We build custom AI agents, cloud infrastructure, and automation systems that fit your business.

Let's talk