Finding and documentating vulnerabilities in Generative Models based on prompt-engineering
| Name | Description | proof |
|---|---|---|
| Prompt In the Middle (PITM)? | Injecting prompt to access other's output | [Proof] |
| Nested Prompt Attack (Need a better name :D) | While Providing nested prompts, the model ignores the initial instructions | [Proof] |