|

Date : TBC

Training Format & Duration
  • Live Online / On-Site / Private Team Training
  • 1 Day
Do you have 5 or more attendees?

Secure Coding for Large Language Model Applications

As LLMs become embedded in everything from chatbots to internal dev tools and customer-facing platforms, they introduce a new class of security risks—prompt injection, data leakage, unintended behavior, and more. This full-day workshop teaches engineers and AppSec professionals how to design, build, and test LLM applications with security in mind.

Heading

 |

Date : TBC

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.

Do you have 5 or more attendees?
Contact Us about Team Training >>
bkg-decorativelines-big-white

About this course

About this course

Why this course?

As large language models (LLMs) become embedded in everything from customer service to code generation, security professionals, developers, and architects must shift their mindset. Traditional security models aren’t enough—LLMs introduce new, often misunderstood risks like prompt injection, model theft, and excessive autonomy.

This course will equip you to:

Understand the unique vulnerabilities of LLM-based systems

Apply real-world mitigation techniques aligned with OWASP’s Top 10 for LLMs

Design and deploy secure, reliable, and trustworthy LLM applications

Objectives

This full-day workshop teaches engineers and AppSec professionals how to design, build, and test LLM applications with security in mind. Participants will gain a practical understanding of secure coding principles tailored for LLM-driven architectures, drawing from real-world case studies, OWASP guidance, and hands-on lab scenarios.

It is recommended that participants on the Secure Coding for Large Language Model Applications have completed the Web Application Security Training course. Please see Related Training at the end of this page.

Overview

Module 1: Introduction

Overview of the OWASP Top 10 for LLMs

Threat landscape for LLM applications

Why traditional security paradigms fall short

Mapping to existing risk frameworks (e.g., NIST, ISO, OWASP AppSec)

Module 2: LLM01 – Prompt Injection

Definition and impact

Direct vs Indirect prompt injection

Techniques: Role confusion, prompt leaking, instruction override

Real-world examples (e.g., attacks via RAG, plugins, and tools)

Lab: Prompt Injection Leaks Sensitive Content

Mitigations: Input validation, content filtering, retrieval sanitization

Module 3: LLM02 – Insecure Output Handling

Output injection (HTML, code, SQL, etc.)

Over-reliance on hallucinated or unverified content

Risks to downstream consumers (e.g., agents, APIs, UIs)

Lab: Insecure HTML Handling Leads to XSS

Mitigations: Content post-processing best practices

Module 4: LLM03 – Training Data Poisoning

Risks in dataset curation and ingestion pipelines

Threats from 3rd-party or open-source data

Intentional vs unintentional poisoning

Mitigations: Detection and prevention strategies

Module 5: LLM04 – Model Denial of Service

Token flooding, infinite loops, adversarial prompts

Cost/resource exhaustion attacks

Lab: Missing Rate Limiting on Resource Intensive Model

Mitigations: Rate limiting, token quotas, and safe prompt patterns

Module 6: LLM05 – Supply Chain Vulnerabilities

Risks in third-party models, plugins, libraries, and datasets

Trust boundaries and integrity of the ML pipeline

Lab: Exploitable Agent Due to Outdated Component

Mitigations: SBOMs, code/data provenance, dependency auditing

Module 7: LLM06 – Sensitive Information Disclosure

Memorization of secrets (e.g., API keys, PII)

Prompt leaking and output probing

Lab: Client Data Leaks in Model

Mitigations: Redaction, access control, and differential privacy techniques

Module 8: LLM07 – Insecure Plugin Design

Input validation and sanitization failures

Over-permissive capabilities and scopes

Lab: Insecure Plugin Design Leads to Command Execution

Mitigations: Secure plugin architecture and zero-trust principles

Module 9: LLM08 – Excessive Agency

Risk of autonomous decision-making and execution

Case studies: Code execution, financial transactions, data deletion

Mitigation: Human-in-the-loop, capability scoping

Module 10: LLM09 – Overreliance

Psychological trust in LLMs (automation bias)

Failures in oversight or review

Mitigation: Policy design and human oversight mechanisms

Module 11: LLM10 – Model Theft

Reverse engineering, output inference, and model extraction

Threats to IP, compliance, and model confidentiality

Mitigation: Rate limiting, watermarking, access control

Format: The course combines theory and hands-on practical exercises.

Duration: 1 day (8 hours)

Attendee Profile

This course is designed for anyone building, integrating, or securing applications that use large language models (LLMs)

To fully benefit from this course, students should have:

A basic understanding of software development and web technologies

Familiarity with Python and JavaScript—you don’t need to be an expert, but you should be comfortable reading and modifying simple code snippets

A general grasp of application security concepts (e.g., input validation, injection attacks, authentication)

Interest in AI and LLM systems—no prior experience with machine learning is required

Attendee Testimonials

Benefits

Top 3 takeaways

1. How to Recognize and Mitigate the Unique Security Risks of LLMs - Students will gain a clear understanding of the OWASP Top 10 for LLMs, including threats like prompt injection, insecure output handling, and model theft. They’ll learn how these risks differ from traditional application security issues and how to defend against them.

2. How to Design and Deploy LLM Applications with Secure Defaults - Students will be equipped with practical techniques for securing LLMs throughout the lifecycle—covering input/output validation, plugin security, data provenance, and safe autonomy boundaries—enabling them to implement LLMs with confidence in real-world systems.

3. Why Critical Oversight and Responsible Use Are Essential in LLM-Driven Systems - Students will understand the human and operational risks of overreliance on LLM outputs and excessive model agency. They’ll learn how to integrate human-in-the-loop controls, policy safeguards, and monitoring to maintain accountability and trust.

What is included?

Recognition

Exam & Certifications

Materials

  • Printed materials
  • Virtual image containing all tools used
  • Certificate of Participation (CPE Points)
Proudly Certified for Excellence in Cybersecurity Training

ISC2 Official Training Partner

ISC2 Official Training Partner

As an ISC2 Official Training Partner, we provide access to world-class certifications such as CISSP®, CCSP®, and SSCP®, delivering training that aligns with the latest industry standards.

By partnering with ISC2, we ensure our clients stay ahead in the ever-evolving cybersecurity landscape, confidently equipped to protect critical data and infrastructure.

Cycubix-ISC2-Official-Training-Partner-logo

Team Training with Cycubix

Team Training with Cycubix

Instructors

The minds behind the course

The minds behind the course

Picture of Fabio Cerullo Cybersecurity Instructor

Fabio Cerullo

Senior Official ISC2 Authorised Instructor for CISSP, CCSP, CSSLP and SSCP

Fabio Cerullo is the Managing Director of Cycubix. He has extensive experience in understanding and addressing the challenges of cybersecurity from over two decades working in and with organisations across a diverse range of industries – from financial services to government departments, technology and manufacturing.

Fabio Cerullo is a Senior Authorised Instructor for ISC2,the global leader in information security education and certification. Fabio has delivered training to thousands of IT and security professionals world wide in cyber, cloud, and application security. As a member of ISC2 and OWASP organisations, Fabio helps individuals and organisations strengthen their application security posture and build fruitful relationships with governments, industry and educational institutions.

Fabio is a regular speaker and delivers training at events organised by leading Cybersecurity associations including OWASP and ISC2. He holds a Msc in Computer Engineering from UCA and the SSCP, CISSP, CSSLP & CCSP certifications from ISC2.

Show (Instructors)

The minds behind the course

The minds behind the course

Picture of Fabio Cerullo Cybersecurity Instructor

Fabio Cerullo

Fabio Cerullo is the Managing Director of Cycubix. He has extensive experience in understanding and addressing the challenges of cybersecurity from over two decades working in and with organisations across a diverse range of industries – from financial services to government departments, technology and manufacturing.

Fabio Cerullo is a Senior Authorised Instructor for ISC2,the global leader in information security education and certification. Fabio has delivered training to thousands of IT and security professionals world wide in cyber, cloud, and application security. As a member of ISC2 and OWASP organisations, Fabio helps individuals and organisations strengthen their application security posture and build fruitful relationships with governments, industry and educational institutions.

Fabio is a regular speaker and delivers training at events organised by leading Cybersecurity associations including OWASP and ISC2. He holds a Msc in Computer Engineering from UCA and the SSCP, CISSP, CSSLP & CCSP certifications from ISC2.