2025-08-27 – Potter Auditorium, Kenneth C. Rowe Management Building
Discover how AI-driven misconfigurations expose secrets in containerized supply chains, leading to security breaches and compliance risks. This session explores real-world incidents where leaked credentials and misconfigured AI-powered deployments nearly caused major disruptions. Learn how Generative AI is transforming threat detection, secret management, and automated remediation to prevent unauthorized access. Gain actionable insights on securing CI/CD pipelines, mitigating AI-driven risks, and ensuring compliance with evolving regulations like the Cyber Resilience Act (CRA) and NIST SSDF.
The integration of AI into supply chain security has introduced new attack vectors—especially when secrets, credentials, and sensitive configurations are unintentionally exposed within containerized environments. A single leaked API key in an AI-driven deployment can cascade into a full-blown security incident, allowing attackers to tamper with containerized applications, manipulate software supply chains, and exploit vulnerabilities before detection.
In this session, I’ll take you through a real-world incident where a seemingly minor AI-generated misconfiguration in a containerized pipeline led to the accidental exposure of hardcoded secrets. This misstep opened the door for unauthorized access to critical infrastructure, placing the organization at risk of both breach and regulatory non-compliance under frameworks like the Cyber Resilience Act (CRA) and NIST SSDF.
We’ll dissect what went wrong, how traditional security tools failed to detect the risk, and the rapid response that prevented a major breach. More importantly, we’ll explore how Generative AI is now being leveraged to proactively identify, obfuscate, and manage secrets within containerized environments—before attackers do.
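To make the detection side concrete, here is a minimal, illustrative sketch of the kind of pattern-plus-entropy secret scanning the session discusses. This is not the speaker's actual tooling; the patterns and the entropy threshold are assumptions chosen for illustration.

```python
import math
import re

# Hypothetical patterns for common credential formats (illustrative only,
# not an exhaustive or production-grade rule set).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; high values suggest random tokens."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def scan_line(line: str, entropy_threshold: float = 4.0) -> list[str]:
    """Return the names of the checks that flag this line."""
    findings = [name for name, pat in SECRET_PATTERNS.items() if pat.search(line)]
    # Also flag long, high-entropy tokens that look like random secrets.
    for token in re.findall(r"[A-Za-z0-9+/=_\-]{24,}", line):
        if shannon_entropy(token) > entropy_threshold:
            findings.append("high_entropy_token")
            break
    return findings
```

Tools in this space typically layer an ML or LLM classifier on top of such heuristics to cut false positives; the sketch shows only the baseline signal those models refine.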
Key takeaways include:
- How AI-driven misconfigurations lead to exposed secrets and container security risks.
- Lessons from a real-world AI security incident where a simple oversight nearly caused a major compliance failure.
- How Generative AI is transforming secret scanning, anomaly detection, and automated remediation.
- Best practices for integrating AI-powered secret management within CI/CD pipelines and supply chains.
- A battle-tested approach to securing AI-driven deployments while staying compliant with emerging regulations.
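As a small taste of the CI/CD-integration theme in these takeaways, the sketch below shows one way a pipeline gate might enforce that sensitive values in a Dockerfile are injected at deploy time rather than hardcoded. The policy, key list, and function names are hypothetical examples, not the approach presented in the session.

```python
import re

# Hypothetical policy: ENV values whose names look sensitive must be
# runtime references (e.g. "${VAR}" substituted from a secrets manager),
# never literal values baked into the image.
SENSITIVE_KEYS = re.compile(r"(?i)(password|secret|token|api[_-]?key)")
ENV_LINE = re.compile(r"^\s*ENV\s+(\w+)[=\s]+(.+)$")

def audit_dockerfile(text: str) -> list[str]:
    """Return violations: sensitive ENV keys assigned literal values."""
    violations = []
    for lineno, line in enumerate(text.splitlines(), 1):
        m = ENV_LINE.match(line)
        if not m:
            continue
        key, value = m.group(1), m.group(2).strip().strip("\"'")
        if SENSITIVE_KEYS.search(key) and not value.startswith("${"):
            violations.append(f"line {lineno}: {key} is hardcoded")
    return violations
```

A CI step would run this over changed Dockerfiles and fail the build when the returned list is non-empty, forcing secrets back into the pipeline's secret store.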
This session goes beyond theory—it’s an action-oriented roadmap for securing AI-powered supply chains, preventing container security breaches, and ensuring secrets remain protected in an era where AI can both introduce and mitigate risks. Whether you're building AI-enabled applications or defending containerized environments, you'll leave with tangible strategies to harden security and prevent AI-driven threats before they escalate.
Aamiruddin Syed is a Senior Security Architect with over a decade of experience in the industry. He specializes in DevSecOps, Shift-Left Security, cloud security, and internal penetration testing. He has extensive expertise in automating security within CI/CD pipelines, developing security tooling, and building security into infrastructure as code. He has worked on securing cloud platforms by applying security best practices to infrastructure provisioning and configuration. Leveraging his penetration testing skills, he routinely conducts targeted internal assessments of critical applications and systems to proactively identify risks. He excels at bridging the gap between security and engineering teams to enable building security directly into products.
Aamiruddin Syed holds dual Master's degrees in Cybersecurity from Northeastern University and Jadavpur University. A recognized security advocate, he frequently speaks at and chairs sessions at industry conferences, including DEF CON and Black Hat. He is an active contributor to open-source security tools that aim to make security seamless for developers. When he is not researching the latest application security techniques, he enjoys traveling and photography.