Secure Coding for AI and Large Language Model (LLM) Applications

Hands-on training that teaches teams to securely develop and deploy AI and LLM applications. Learn how to prevent prompt injection, data leakage, and other critical risks from the OWASP Top 10 for LLMs.

Training Delivery & Duration

  • Live Online / On-Site / Private Team Training
  • 2 Days

Do you have 5 or more attendees?

Contact Us about Team Training >>

About this course

Course Overview

AI-driven applications, especially those powered by Large Language Models (LLMs), are rapidly transforming products, developer workflows, and customer experiences. But these systems introduce unique security risks that traditional AppSec practices don’t address.

This 2-day, hands-on course teaches developers, AppSec engineers, and architects how to design and build secure AI/LLM applications. Participants learn to defend against prompt injection, insecure output handling, model poisoning, data leakage, and other risks from the updated OWASP Top 10 for LLM Applications 2025.

Through labs and real-world case studies, attendees gain practical skills for deploying safe, trustworthy, and compliant AI capabilities at scale.
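
To give a flavour of the defensive patterns practised in the labs, here is a minimal sketch of one common prompt-injection mitigation: keeping system instructions structurally separate from untrusted user input, and screening that input before it reaches the model. The call_llm function is a hypothetical stand-in for whichever model client you use; it is the pattern, not the client, that matters.

    import re

    # Hypothetical stand-in for a real model client; wire up your own.
    def call_llm(messages: list[dict]) -> str:
        raise NotImplementedError("connect your model client here")

    # Phrases that often signal an injection attempt. A production system
    # would use a maintained guardrail or classifier, not a static list.
    SUSPICIOUS_PATTERNS = [
        r"ignore (all|any|previous) instructions",
        r"reveal (the )?system prompt",
    ]

    def screen_user_input(text: str) -> str:
        for pattern in SUSPICIOUS_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                raise ValueError("possible prompt injection detected")
        return text

    def answer_question(user_text: str) -> str:
        # System instructions and user content travel as separate messages,
        # never concatenated into one string the user can rewrite.
        messages = [
            {"role": "system",
             "content": "You are a support assistant. Answer only "
                        "questions about our product."},
            {"role": "user", "content": screen_user_input(user_text)},
        ]
        return call_llm(messages)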

Why Take this Course?

This course helps organisations confidently integrate AI technologies by addressing the emerging security challenges associated with LLM-powered systems.

You will learn to:

  • Protect AI and LLM applications from real-world attacks (prompt injection, data leaks, model theft).
  • Demonstrate compliance readiness across AI governance standards (ISO 42001, NIST AI RMF).
  • Build defensible and auditable AI architectures aligned with security best practices.
  • Reduce business, legal, and operational risk caused by AI system failures.
  • Equip engineering teams with security-by-design practices for AI-enabled products.

For compliance managers and buyers, this training provides assurance that your organisation is developing secure and responsible AI.

Learning Objectives

Participants will be able to:

  • Identify and mitigate the unique risks of AI/LLM-powered applications

  • Implement secure coding practices for LLM inputs, outputs, and agent-based behaviours (see the sketch after this list)

  • Apply OWASP Top 10 for LLM Applications 2025 controls effectively

  • Design AI systems with safe autonomy, secure plugin architectures, and least-privilege access

  • Detect high-risk behaviours, hallucinations, and security regressions in AI systems

  • Evaluate AI components for compliance and governance implications
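
As a minimal illustration of the inputs-and-outputs objective above (not the course’s reference solution), the key habit is treating model output as untrusted data: escape it before it reaches a browser, and parse and validate it before it drives application logic.

    import html
    import json

    def render_llm_reply(raw_reply: str) -> str:
        # Escape model text before embedding it in HTML, so a reply
        # containing "<script>..." renders as text instead of executing.
        return f"<p>{html.escape(raw_reply)}</p>"

    def parse_structured_reply(raw_reply: str) -> dict:
        # When the model is asked for JSON, parse and validate the shape;
        # never splice raw model output into queries or shell commands.
        data = json.loads(raw_reply)
        if not isinstance(data, dict) or "answer" not in data:
            raise ValueError("unexpected model output shape")
        return data

    print(render_llm_reply("<img src=x onerror=alert(1)>"))
    # -> <p>&lt;img src=x onerror=alert(1)&gt;</p>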

Who Should Attend this Course?

This course is designed for anyone building, integrating, or securing applications that use large language models (LLMs):

  • Software Developers and Engineers

  • AI/ML Engineers and Data Scientists

  • Application Security and Cloud Security Professionals

  • Technical Architects and Engineering Managers

  • AI Governance, Risk, and Compliance (GRC) Leads

  • Product Owners working on AI-enabled features

To fully benefit from this course, participants should have:

  • A basic understanding of software development and web technologies.
  • Familiarity with Python and JavaScript; you don’t need to be an expert, but you should be comfortable reading and modifying simple code snippets.
  • A general grasp of application security concepts (e.g. input validation, injection attacks, authentication).
  • An interest in AI and LLM systems; no prior experience with machine learning is required.

Course Outline

Part I: Foundations of AI and LLM Security

Part II: Threat Modeling and Architecture

  • Threat Modeling for LLM Systems
  • RAG Security: Retrieval, Embeddings, and Index Integrity
  • Agent and Tool Security
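
To hint at what the Agent and Tool Security module covers, here is a hypothetical, standard-library-only sketch of the least-privilege idea: the agent runtime, not the model, decides which tools are callable, and every argument is validated before any action is taken.

    from typing import Callable

    # Registry of tools the agent may call; anything not listed here is
    # unreachable no matter what the model asks for.
    ALLOWED_TOOLS: dict[str, Callable[..., str]] = {}

    def tool(name: str):
        def register(fn: Callable[..., str]) -> Callable[..., str]:
            ALLOWED_TOOLS[name] = fn
            return fn
        return register

    @tool("lookup_order")
    def lookup_order(order_id: str) -> str:
        if not order_id.isdigit():  # validate before acting
            raise ValueError("order_id must be numeric")
        return f"status of order {order_id}: shipped"

    def dispatch(tool_name: str, **kwargs) -> str:
        fn = ALLOWED_TOOLS.get(tool_name)
        if fn is None:
            raise PermissionError(f"tool {tool_name!r} is not permitted")
        return fn(**kwargs)

    print(dispatch("lookup_order", order_id="1234"))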

Part III: The OWASP Top 10 for LLM Applications 2025

  • LLM01:2025 Prompt Injection
  • LLM02:2025 Sensitive Information Disclosure
  • LLM03:2025 Supply Chain
  • LLM04:2025 Data and Model Poisoning
  • LLM05:2025 Improper Output Handling
  • LLM06:2025 Excessive Agency
  • LLM07:2025 System Prompt Leakage
  • LLM08:2025 Vector and Embedding Weaknesses
  • LLM09:2025 Misinformation
  • LLM10:2025 Unbounded Consumption
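
As one concrete example from this list, LLM10 Unbounded Consumption is commonly mitigated with per-user budgets. A minimal sketch of the idea, with illustrative rather than recommended limits:

    import time
    from collections import defaultdict

    MAX_REQUESTS_PER_MINUTE = 10   # illustrative limit
    MAX_INPUT_CHARS = 4_000        # caps token (and cost) exposure per call

    _request_log: dict[str, list[float]] = defaultdict(list)

    def check_budget(user_id: str, prompt: str) -> None:
        if len(prompt) > MAX_INPUT_CHARS:
            raise ValueError("prompt exceeds input budget")
        # Simple sliding-window rate limit per user.
        now = time.monotonic()
        recent = [t for t in _request_log[user_id] if now - t < 60]
        if len(recent) >= MAX_REQUESTS_PER_MINUTE:
            raise RuntimeError("rate limit exceeded; try again later")
        recent.append(now)
        _request_log[user_id] = recent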

Part IV: Secure AI/LLM Design and Governance

  • Secure AI/LLM Design Patterns and Best Practices
  • Governance, Risk and Regulatory Alignment

Format

This instructor-led workshop is available for both onsite and online delivery. It combines focused technical instruction with practical, hands-on labs in a secure AI/LLM Lab environment. Participants engage in guided exercises, real attack simulations, and collaborative problem-solving, ensuring the skills learned can be applied immediately to real-world AI and LLM application development.

What is included?

• Live instructor-led sessions (online or in-person)

• 365 days’ access to slides and course materials via Cycubix Academy

• Dedicated hands-on labs on secure coding for AI and LLM applications

• Certificate of Completion

• Option to customise content for organisational objectives

Levels

  • SECCDAI-01 Secure Coding for AI & LLM Applications Core Course
    • Focuses on the OWASP Top 10 for LLM Applications and practical secure coding for AI/LLM systems.

Team Training with Cycubix

Instructors

The minds behind the course

Fabio Cerullo

Senior Official ISC2 Authorised Instructor for CISSP, CCSP, CSSLP and SSCP

Fabio Cerullo, CISSP, CCSP, CSSLP, SSCP, is the Managing Director of Cycubix Ltd., where he leads cybersecurity consulting, compliance programs and professional training services for organisations across a wide range of industries. His work spans secure engineering, cloud security and guidance on major regulatory and certification requirements including ISO 27001, SOC2, FedRAMP, NIS2, PCI and GDPR.

He also serves as an ISC2 Senior Authorised Instructor, delivering advanced courses that help security and engineering teams build practical skills in cloud security, software security and information risk management. His cloud expertise is reinforced by his AWS Certified Solutions Architect and AWS Security Specialty certifications and hands-on experience advising organisations on secure architecture and cloud-native security practices.

He is an active contributor to the OWASP Foundation, regularly providing training, speaking at industry events and supporting community initiatives focused on modern application security. He volunteers as Google Summer of Code administrator, mentoring new students into the cybersecurity field and guiding them through their first contributions to open source security projects.

Originally from Argentina and now based in Ireland, he holds a master’s degree in computer engineering. His interests include emerging technologies, with a particular focus on AI risks and secure AI engineering. Outside of his professional work he enjoys spending time with his family, running outdoors, and actively supporting initiatives that aim to make high-quality cybersecurity education accessible to a broader audience.
