| Package Name | Ecosystem | Vulnerable Versions | First Patched Version |
|---|---|---|---|
| @langchain/core | npm | >= 1.0.0, < 1.1.8 | 1.1.8 |
| @langchain/core | npm | < 0.3.80 | 0.3.80 |
| langchain | npm | >= 1.0.0, < 1.2.3 | 1.2.3 |
| langchain | npm | < 0.3.37 | 0.3.37 |
The vulnerability is a classic serialization injection issue stemming from two core functions: Serializable.toJSON for serialization and load for deserialization. The root cause was a failure to escape untrusted user input during serialization, combined with an insecure default during deserialization.
Serializable.toJSON: This function did not escape user-provided data that contained a special "lc" key. LangChain uses this key internally to identify its own serialized objects. By not escaping this key in user data (e.g., in additional_kwargs from an LLM response), the serialization output would contain malicious, user-controlled objects that masqueraded as legitimate LangChain constructs.
load: This function would deserialize the data and, upon encountering a serialized "secret" object, would attempt to resolve it. Critically, its default behavior was to fall back to reading from environment variables (process.env) if the secret was not found in the explicitly provided secretsMap. This allowed an attacker-controlled payload from toJSON to instruct load to read and return the value of any environment variable on the server.
The patch addresses both issues. It introduces an escaping mechanism in Serializable.toJSON to ensure user data is never misinterpreted as a LangChain object. It also changes the default behavior of load to be secure by default, requiring developers to explicitly opt in to loading secrets from the environment (secretsFromEnv: true). Together, these fixes close the secret extraction vector.
Patched locations:
- `Serializable.toJSON`: `libs/langchain-core/src/load/serializable.ts`
- `load`: `libs/langchain-core/src/load/index.ts`