LLM App is a production framework for building and serving AI applications and LLM-enabled real-time data pipelines.
-
Updated
Dec 27, 2023 - Python
LLM App is a production framework for building and serving AI applications and LLM-enabled real-time data pipelines.
The Security Toolkit for LLM Interactions
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Papers and resources related to the security and privacy of LLMs 🤖
Prompt injection attacks and defenses in LLM-integrated applications
The world's first open source LLM Applications Firewall.
Risks and targets for assessing LLMs & LLM vulnerabilities
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Framework for Attacking the Confidentiality of Large Language Models (LLMs)
LLM security and privacy
MER is a software that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering development of transparency and safety in AI. It also supports manipulation victims by detecting manipulative patterns in human communication.
Vulnerable LLM Application
This repo focus on how to deal with prompt injection problem faced by LLMs
CLI tool that uses the Lakera API to perform security checks in LLM inputs
The Security Toolkit for LLM Interactions (TS version)
Universal and Transferable Attacks on Aligned Language Models
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."