AI Con USA 2025 - Security

Customize your AI Con USA 2025 experience with sessions covering security for software developers and testers.

Tuesday, June 10

Tariq King
Test IO
TA

Prompt Engineering for Software Practitioners

Tuesday, June 10, 2025 - 8:30am to 12:00pm

With the sudden rise of ChatGPT and large language models (LLMs), practitioners are using these tools for all aspects of engineering. This includes leveraging LLMs for creating software artifacts such as requirements documents, source code, and tests; reviewing them for issues and making corrective suggestions; and analyzing or summarizing results or outcomes. However, if LLMs are not fed good prompts describing the task that the AI is supposed to perform, their responses can be inaccurate and unreliable. Join Tariq King as he teaches you how to craft high-quality AI prompts and...
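As a flavor of the kind of structuring such a tutorial typically covers, here is a minimal sketch of one common prompt-engineering pattern: assembling a prompt from explicit role, context, task, and constraint sections rather than a one-line ask. The section names and helper function are illustrative, not taken from the session itself.

```python
def build_prompt(role, context, task, constraints):
    """Assemble a structured prompt from labeled sections."""
    sections = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
        # One bullet per constraint so the model can check each explicitly.
        *[f"- {c}" for c in constraints],
    ]
    return "\n".join(sections)

prompt = build_prompt(
    role="You are a senior software tester.",
    context="The function under test validates email addresses.",
    task="Write boundary-value test cases for the validator.",
    constraints=["Return tests as a numbered list", "Cover empty input"],
)
print(prompt)
```

The point of the pattern is that each labeled section gives the model an unambiguous description of who it is, what it knows, what to do, and what form the answer must take.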

Thursday, June 12

Peter Wang
Anaconda
T9

Securing the Foundations of AI: Addressing the Past to Safeguard the Future

Thursday, June 12, 2025 - 1:25pm to 2:10pm

AI’s future hinges on an ecosystem built on decades of technical debt, fragmented tools, and opaque processes, creating vulnerabilities that threaten the reliability and security of modern applications. In this talk, Peter will examine how the legacy of open-source numerical computing and software supply chains is influencing AI’s trajectory. Drawing from over a decade of leadership in the Python and scientific computing communities, Peter will share strategies for tackling these challenges: improving transparency in data and dependencies, building curated software stacks, addressing...

Gal Elbaz
Oligo Security
T11

Shadow Vulnerabilities in AI/ML Data Stacks - What You Don’t Know CAN Hurt You

Thursday, June 12, 2025 - 2:40pm to 3:25pm

The adoption of open-source AI software introduces a new family of vulnerabilities to organizations. Some components in AI, such as model serving, include Remote Code Execution (RCE) by design, for example when loading pre-trained models from external sources. Traditional SCA and SAST approaches are not built for the AI ecosystem, leaving a huge and insecure attack surface. The irony is that in the AI ecosystem, security issues such as remote code execution are actually a feature and not a bug, often specified explicitly in the docs, which most developers don’t read. AI models are often downloaded from...
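To make the "RCE by design" point concrete: many Python model formats are pickle-based, and unpickling invokes whatever callable the file's payload names. The sketch below demonstrates the mechanism in-process with a benign recorder function standing in for attacker code; the `MaliciousPayload` class and `record` helper are illustrative, not part of any specific tool.

```python
import pickle

executed = []

def record(msg):
    """Stand-in for attacker code; a real payload might call os.system."""
    executed.append(msg)
    return msg

class MaliciousPayload:
    def __reduce__(self):
        # __reduce__ tells pickle: "to rebuild this object, call record(...)".
        # Deserialization therefore runs a callable of the author's choosing.
        return (record, ("arbitrary code ran during load",))

blob = pickle.dumps(MaliciousPayload())  # attacker-controlled "model" bytes
pickle.loads(blob)                       # victim "loads the model" -> record() runs

print(executed)
```

This is why loading a pre-trained model from an untrusted source can be equivalent to executing untrusted code, and why scanners that only match known-vulnerable package versions miss it entirely.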