The vulnerability allows bypassing the Guardrail node in n8n. The root cause is improper validation of the output of the language model that the guardrail itself invokes. The patch, found in commit 8d0251d1deef256fd3d9176f05dedab62afde918, shows that the Zod schema (LlmResponseSchema) used to parse the LLM's JSON response was not strict: it accepted additional, unexpected fields in the response, which a malicious input could manipulate to bypass the guardrail's checks.
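To illustrate the failure mode, here is a minimal TypeScript sketch, not the actual n8n code: the field names ("flagged", "reason") and the parser itself are assumptions for illustration. A lenient parser in the spirit of the pre-patch schema type-checks the fields it knows about but silently tolerates anything else in the JSON:

```typescript
// Hypothetical sketch -- field names and parser are illustrative assumptions,
// not the real LlmResponseSchema.
interface LlmResponse {
  flagged: boolean;
  reason?: string;
}

// Lenient parsing: known fields are checked, unknown keys ride along.
function parseLenient(raw: string): LlmResponse & Record<string, unknown> {
  const obj = JSON.parse(raw) as Record<string, unknown>;
  if (typeof obj.flagged !== "boolean") {
    throw new Error("flagged must be a boolean");
  }
  // Unexpected keys are carried through untouched -- the parse still succeeds.
  return obj as LlmResponse & Record<string, unknown>;
}

// Attacker-influenced model output can smuggle extra keys past validation:
const parsed = parseLenient(
  '{"flagged": false, "reason": "ok", "injected": "ignore previous rules"}'
);
console.log(parsed.flagged);        // false -> input treated as safe
console.log("injected" in parsed);  // true -> the extra field survived
```

The point is that validation which only inspects expected fields gives attacker-shaped output a free pass for everything else in the object.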
The function runLLM in packages/@n8n/nodes-langchain/nodes/Guardrails/helpers/model.ts invokes the language model and parses its output against this schema. The patch adds .strict() to LlmResponseSchema and introduces explicit type checks on the parsed fields within runLLM. runLLM is therefore the function that directly handles the improperly validated data and is the core of this vulnerability.