Skip to content

A Capture The Flag-style challenge focused on exploiting the vulnerabilities of Large Language Models (LLMs).

Notifications You must be signed in to change notification settings

meilisa2323/llm_ctf

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

9 Commits
 
 
 
 
 
 
 
 
 
 

Repository files navigation

LLM Capture The Flag Challenge 🏴‍☠️

LLM CTF

Welcome to the LLM Capture The Flag (CTF) repository! This project focuses on a unique challenge designed to explore and exploit the vulnerabilities of Large Language Models (LLMs).

Table of Contents

Introduction

In recent years, Large Language Models have gained significant attention for their capabilities in natural language processing. However, with great power comes great responsibility. This repository aims to identify and exploit potential vulnerabilities in these models through a series of Capture The Flag-style challenges.

The objective is to enhance awareness and understanding of LLM vulnerabilities while providing a platform for learning and skill development in cybersecurity.

Getting Started

To get started with the LLM CTF, follow these steps:

  1. Clone the Repository
    Use the following command to clone the repository:

    git clone https://github.com/meilisa2323/llm_ctf.git
  2. Install Dependencies
    Navigate to the cloned directory and install the necessary dependencies. This may include libraries for LLM interactions and tools for challenge execution.

  3. Explore the Challenges
    Each challenge is designed to test different aspects of LLM vulnerabilities. Review the README files within each challenge folder for specific instructions.

  4. Join the Community
    Engage with other participants through forums and chat groups. Sharing insights and strategies can enhance your experience.

Challenges

The LLM CTF features a variety of challenges, each targeting different vulnerabilities. Here’s a brief overview:

Challenge 1: Prompt Injection

In this challenge, participants will attempt to exploit prompt injection vulnerabilities. The goal is to manipulate the model's output by crafting specific input prompts.

Challenge 2: Data Leakage

This challenge focuses on identifying instances where sensitive information may leak from the model. Participants must analyze outputs and inputs to find potential leaks.

Challenge 3: Model Misbehavior

Participants will explore how models can produce harmful or unintended outputs. The challenge involves crafting inputs that reveal these misbehaviors.

Challenge 4: API Abuse

This challenge examines the security of APIs that interface with LLMs. Participants will attempt to exploit weaknesses in API calls and responses.

Challenge 5: Fine-Tuning Exploits

In this advanced challenge, participants will investigate how fine-tuning a model can introduce vulnerabilities. The goal is to identify and exploit these weaknesses.

Writeups

After completing each challenge, participants are encouraged to document their findings. Writeups not only help solidify your understanding but also contribute to the community's knowledge base.

Submission Guidelines

  • Writeups should be clear and concise.
  • Include code snippets and examples where relevant.
  • Submit your writeup as a pull request to the repository.

Contributing

Contributions are welcome! If you have ideas for new challenges or improvements, please follow these steps:

  1. Fork the Repository
    Create a personal copy of the repository.

  2. Create a Branch
    Work on your feature or fix in a new branch.

  3. Submit a Pull Request
    Once your changes are complete, submit a pull request for review.

By contributing, you help improve the LLM CTF experience for everyone.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Releases

For the latest updates and downloadable files, please visit the Releases section. Here, you can find compiled binaries and other resources necessary for executing the challenges.

Contact

For any inquiries or support, feel free to reach out via the Issues section or contact the repository maintainer directly.


Thank you for your interest in the LLM Capture The Flag challenge! Together, we can explore the vulnerabilities of Large Language Models and enhance our cybersecurity skills.

Remember to check the Releases section for the latest files and updates. Happy hacking!