NewGraphRAG now in early beta

Glossary

Prompt injection

Also known as: Prompt injection attack, Indirect prompt injection

Definition

In a prompt injection attack, the adversary either places manipulative instructions directly in the user prompt (direct) or hides them inside documents, websites, or tool outputs that the model reads later (indirect). The goal is to get the model to bypass safety rules, disclose system prompts, or trigger unintended tool calls. Mitigations combine input sanitization, strict tool policies, role separation, and output filters.

How Swiss Knowledge Hub uses this term

Swiss Knowledge Hub addresses prompt injection risks through separated system and context roles, an explicit per-tenant tool configuration, and audit logging of tool executions. Concrete hardening depends on the workspace configuration and the chosen MCP servers.

Related terms

Sources

  1. OWASP Top 10 for LLM Applications — LLM01:2025 Prompt Injectionhttps://genai.owasp.org/llm-top-10/

Last updated: April 22, 2026