Hands-On Training Framework for Prompt Injection Exploits in Large Language Models †

Research output: Contribution to journalArticlepeer-review

Abstract

With the increasing deployment of large language models (LLMs) in diverse applications, security vulnerability attacks pose significant risks, such as prompt injection. Despite growing awareness, structured, hands-on educational platforms for systematically studying these threats are lacking. In this study, we present an interactive training framework designed to teach, assess, and mitigate prompt injection attacks through a structured, challenge-based approach. The platform provides progressively complex scenarios that allow users to exploit and analyze LLM vulnerabilities using both rule-based adversarial testing and Open Worldwide Application Security Project-inspired methodologies, specifically focusing on the LLM01:2025 prompt injection risk. By integrating attack simulations and guided defensive mechanisms, this platform equips security professionals, artificial intelligence researchers, and educators to understand, detect, and prevent adversarial prompt manipulations. The platform highlights the effectiveness of experiential learning in AI security, emphasizing the need for robust defenses against evolving LLM threats.

Original languageEnglish
Article number25
JournalEngineering Proceedings
Volume108
Issue number1
DOIs
Publication statusPublished - 2025

All Science Journal Classification (ASJC) codes

  • Biomedical Engineering
  • Mechanical Engineering
  • Industrial and Manufacturing Engineering
  • Electrical and Electronic Engineering

Cite this