The vulnerability lies in LangChain's handling of untrusted prompt template strings, which could lead to template injection and information disclosure. The analysis of the security patches reveals three distinct vulnerable functions corresponding to the three supported template formats (f-string, mustache, and jinja2).
F-string Templates: The get_template_variables function in langchain_core.prompts.string failed to validate variable names extracted from f-string templates. It allowed names with attribute access (.) and indexing ([]) syntax. This meant that a malicious template string like "{user.__class__}" would be accepted and later evaluated, exposing internal object details. The fix was to add validation in this function to reject such syntax.
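The underlying behavior is visible with Python's built-in str.format, which f-string templates rely on: replacement-field names may contain attribute access, and format() evaluates it. A minimal stdlib-only demonstration (not LangChain code):

```python
# str.format evaluates attribute access inside replacement fields,
# which is what made "{user.__class__}"-style template strings dangerous.
template = "{msg.__class__.__name__}"
print(template.format(msg="hello"))  # → str
```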
Mustache Templates: The _get_key function in langchain_core.utils.mustache insecurely used getattr() as a fallback to resolve template keys. This allowed an attacker to craft a template like "{{user.__class__}}" to traverse and expose attributes of any Python object supplied as a template variable. The patch hardens this by restricting key resolution to only dict, list, and tuple types, preventing traversal into arbitrary objects.
Jinja2 Templates: The jinja2_formatter function in langchain_core.prompts.string used Jinja2's SandboxedEnvironment. While this environment blocks access to dangerous dunder attributes, it still permits access to other, potentially sensitive, attributes and methods. The patch addresses this by implementing a much stricter _RestrictedSandboxedEnvironment that denies all attribute and method access, providing a defense-in-depth hardening measure.
In all three cases, exploitation requires the application to accept the template string itself from an untrusted source. The identified functions are the precise locations in the code where this insecure processing occurs, making them the key indicators of this vulnerability during runtime profiling.
GHSA-6qv9-48xg-fc7f: LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates
Impact:
Traverse attribute chains to reach sensitive internals such as __globals__
Potentially escalate to more severe attacks depending on the objects passed to templates
Attack Vectors
1. F-string Template Injection
Before Fix:
```python
from langchain_core.prompts import ChatPromptTemplate

malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__.__name__}")],
    template_format="f-string",
)
# Note that this requires passing a placeholder variable for "msg.__class__.__name__".
result = malicious_template.invoke(
    {"msg": "foo", "msg.__class__.__name__": "safe_placeholder"}
)
# Previously returned:
# >>> result.messages[0].content
# 'str'
```
2. Mustache Template Injection
Before Fix:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{question.__class__.__name__}}")],
    template_format="mustache",
)
result = malicious_template.invoke({"question": msg})
# Previously returned: "HumanMessage" (getattr() exposed internals)
```
3. Jinja2 Template Injection
Before Fix:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{question.parse_raw}}")],
    template_format="jinja2",
)
result = malicious_template.invoke({"question": msg})
# Could access non-dunder attributes/methods on objects
```
Root Cause
F-string templates: The implementation used Python's string.Formatter().parse() to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax:
```python
from string import Formatter

template = "{msg.__class__} and {x}"
print([var_name for (_, var_name, _, _) in Formatter().parse(template)])
# Returns: ['msg.__class__', 'x']
```
The extracted names were not validated to ensure they were simple identifiers. As a result, template strings containing attribute traversal and indexing expressions (e.g., {obj.__class__.__name__} or {obj.method.__globals__[os]}) were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with (), they do support [] indexing, which could allow traversal through dictionaries like __globals__ to reach sensitive objects.
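The missing validation can be sketched in a few lines of stdlib Python. This is a simplified approximation of the patched check (the real function is get_template_variables in langchain_core.prompts.string; the exact error message here is an assumption):

```python
from string import Formatter


def get_template_variables(template: str) -> list[str]:
    """Extract f-string variable names, rejecting attribute/index syntax.

    Simplified sketch: a field name like "msg.__class__" or "obj[0]" is not
    a valid Python identifier, so str.isidentifier() rejects it.
    """
    variables = []
    for _, field_name, _, _ in Formatter().parse(template):
        if field_name is None:
            continue  # literal text with no replacement field
        if not field_name.isidentifier():
            raise ValueError(
                f"Invalid variable name {field_name!r}: only simple "
                "identifiers are allowed in f-string templates"
            )
        variables.append(field_name)
    return variables
```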
Mustache templates: By design, the implementation used getattr() as a fallback to support accessing attributes on objects (e.g., {{user.name}} on a User object). As defensive hardening, we have restricted key resolution to simpler primitives that subclass dict, list, and tuple, since untrusted templates could exploit attribute access to reach internals like __class__ on arbitrary objects.
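The hardened lookup behavior can be illustrated with a small standalone sketch (the real logic lives in _get_key in langchain_core.utils.mustache; the helper name and empty-string fallback here are assumptions for illustration):

```python
def resolve_key(scope, key):
    """Resolve a dotted mustache key, traversing only dicts, lists, and tuples.

    Sketch of the hardened resolution: there is no getattr() fallback, so a
    key like "question.__class__" on an arbitrary object resolves to nothing.
    """
    current = scope
    for part in key.split("."):
        if isinstance(current, dict):
            current = current.get(part, "")
        elif isinstance(current, (list, tuple)) and part.isdigit():
            current = current[int(part)]
        else:
            # Attribute access on arbitrary Python objects is blocked.
            return ""
    return current
```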
Jinja2 templates: Jinja2's default SandboxedEnvironment blocks dunder attributes (e.g., __class__) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure, we've restricted the environment to block all attribute and method access on objects passed to templates.
High-Risk Scenarios
You ARE affected if your application:
Dynamically constructs prompt templates based on user-provided patterns
Allows users to customize or create prompt templates
Example vulnerable code:
```python
# User controls the template string itself
user_template_string = request.json.get("template")  # DANGEROUS

prompt = ChatPromptTemplate.from_messages(
    [("human", user_template_string)],
    template_format="mustache",
)
result = prompt.invoke({"data": sensitive_object})
```
Low/No Risk Scenarios
You are NOT affected if:
Template strings are hardcoded in your application code
Template strings come only from trusted, controlled sources
Users can only provide values for template variables, not the template structure itself
Example safe code:
```python
# Template is hardcoded - users only control variables
prompt = ChatPromptTemplate.from_messages(
    [("human", "User question: {question}")],  # SAFE
    template_format="f-string",
)
# User input only fills the 'question' variable
result = prompt.invoke({"question": user_input})
```
The Fix
F-string Templates
F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:
Added validation to enforce that variable names must be valid Python identifiers
Rejects syntax like {obj.attr}, {obj[0]}, or {obj.__class__}
Only allows simple variable names: {variable_name}
```python
# After fix - these are rejected at template creation time
ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__}")],  # ValueError: Invalid variable name
    template_format="f-string",
)
```
Mustache Templates (Defensive Hardening)
As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:
Replaced getattr() fallback with strict type checking
Only allows traversal into dict, list, and tuple types
Blocks attribute access on arbitrary Python objects
Jinja2 Templates (Defensive Hardening)
As defensive hardening, we've significantly restricted Jinja2 template capabilities:
Introduced _RestrictedSandboxedEnvironment that blocks ALL attribute/method access
Only allows simple variable lookups from the context dictionary
Raises SecurityError on any attribute access attempt
```python
# After hardening - all attribute access is blocked
prompt = ChatPromptTemplate.from_messages(
    [("human", "{{msg.content}}")],
    template_format="jinja2",
)
# Raises SecurityError: Access to attributes is not allowed
```
Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.
While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.
Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g., HumanMessage, AIMessage, ToolMessage) without templates. Direct message construction avoids template-related security concerns entirely.
Remediation
Immediate Actions
Audit your code for any locations where template strings come from untrusted sources
Update to the patched version of langchain-core
Review template usage to ensure separation between template structure and user data
Best Practices
Consider if you need templates at all - Many applications can work directly with message objects (HumanMessage, AIMessage, etc.) without templates
Reserve Jinja2 for trusted sources - Only use Jinja2 templates when you fully control the template content