Learn extra at:
AI brokers embedded in CI/CD pipelines could be tricked into executing high-privilege instructions hidden in crafted GitHub points or pull request texts.
Researchers at Aikido Safety have traced the issue again to workflows that pair GitHub Actions or GitLab CI/CD with AI instruments corresponding to Gemini CLI, Claude Code Actions, OpenAI Codex Actions or GitHub AI Inference. They discovered that unsupervised user-supplied strings corresponding to difficulty our bodies, pull request descriptions, or commit messages, could possibly be fed straight into prompts for AI brokers in an assault they’re calling PromptPwnd.
Relying on what the workflow lets the AI do, this will result in unintended edits to repository content material, disclosure of secrets and techniques, or different high-impact actions.

