top of page

Subscribe to our newsletter

LangGrinch (CVE-2025-68664): Critical langchain-core Vulnerability Enables Secret Exfiltration and Code Execution via Serialization Injection

  • Rescana
  • 1 day ago
  • 5 min read
Image for post about Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection

Executive Summary

A critical vulnerability, tracked as CVE-2025-68664 and colloquially named LangGrinch, has been identified in the langchain-core Python package, a foundational library for constructing Large Language Model (LLM)-powered applications. This flaw enables attackers to exploit unsafe serialization and deserialization logic, resulting in the exfiltration of sensitive secrets, prompt injection, and, in certain configurations, arbitrary code execution. The vulnerability is rated as critical (CVSS 9.3) due to its trivial exploitability, the widespread adoption of LangChain in AI agentic workflows, and the potential for severe impact on confidentiality and integrity. Although no advanced persistent threat (APT) group exploitation has been confirmed as of this writing, the vulnerability is being actively discussed in the security community, and proof-of-concept (PoC) code is publicly available. Organizations leveraging LangChain for LLM orchestration are strongly advised to take immediate action to mitigate this risk.

Technical Information

The LangGrinch vulnerability (CVE-2025-68664) is rooted in the improper handling of serialization and deserialization within the langchain-core package. Specifically, the dumps() and dumpd() functions fail to adequately escape dictionaries containing the reserved "lc" key, which is used internally to denote serialized LangChain objects. When user-controlled data includes this key, the deserialization process erroneously interprets it as a trusted LangChain object, rather than as benign user data.

This flaw is particularly dangerous in environments where LLM outputs are serialized and deserialized as part of orchestration loops, agentic workflows, or streaming operations. Attackers can inject malicious "lc" keys into fields such as metadata, additional_kwargs, or response_metadata via prompt injection or other input vectors. Upon deserialization, these payloads are instantiated as trusted objects, enabling a range of attack outcomes.

If the application is configured with secrets_from_env=True (which was the default in previous versions), the attacker can exfiltrate secrets from environment variables. Furthermore, the vulnerability allows for the instantiation of arbitrary classes within trusted namespaces, such as langchain_core, langchain, and langchain_community. In scenarios where Jinja2 templates are involved, this can escalate to arbitrary code execution.

A representative attack vector, as demonstrated in public PoCs, involves crafting a prompt or input that causes the LLM to output a structure containing the "lc" key. This output, when serialized and subsequently deserialized by LangChain, triggers the vulnerability. For example, an attacker could inject a structure that requests the value of an environment variable (such as an API key), which is then leaked upon deserialization.

The vulnerability affects all versions of langchain-core before 0.3.81 and all versions from 1.0.0 up to, but not including, 1.2.5. The issue is remediated in versions 0.3.81 and 1.2.5. A related vulnerability, CVE-2025-68665, impacts the JavaScript/TypeScript ecosystem, specifically the @langchain/core and langchain npm packages.

The technical impact of this vulnerability is multifaceted. It enables attackers to bypass data validation and type safety mechanisms, leading to the unauthorized extraction of secrets, manipulation of application logic, and, in certain configurations, remote code execution. The attack surface is broad, encompassing any application that serializes and deserializes untrusted LLM output using vulnerable versions of LangChain.

Exploitation in the Wild

The primary attack vector for CVE-2025-68664 is prompt injection via LLM response fields, such as additional_kwargs and response_metadata. Any application that serializes and deserializes untrusted LLM output is at risk. While there have been no confirmed reports of APT or mass exploitation as of this report, the vulnerability is trivial to exploit, and PoC code is readily available in the security community. Security researchers have published detailed analyses and exploit demonstrations, highlighting the ease with which secrets can be exfiltrated and the potential for further escalation.

The exploitability of this vulnerability is high. It is network-based, requires no special conditions, and does not necessitate elevated privileges or user interaction. The impact on confidentiality is severe, as secrets such as API keys and credentials can be exfiltrated. The impact on integrity is also significant, particularly if arbitrary class instantiation or code execution is achieved. Availability is generally not affected.

Indicators of compromise include the presence of serialized objects with unexpected "lc" keys in application logs or data stores, unusual access to environment variables, and LLM outputs containing "lc"-structured data in fields that are later deserialized.

APT Groups using this vulnerability

As of this report, there is no public evidence attributing exploitation of CVE-2025-68664 to specific APT groups, sectors, or countries. However, the vulnerability is highly attractive to both opportunistic and targeted attackers, particularly those interested in AI supply chain compromise, LLM-powered applications, and environments where sensitive secrets are managed via environment variables. The technical characteristics of the vulnerability make it suitable for exploitation by a wide range of threat actors, including those with a focus on advanced AI and automation.

Affected Product Versions

The affected products are langchain-core (Python) versions before 0.3.81 and all versions from 1.0.0 up to, but not including, 1.2.5. The vulnerability is fixed in versions 0.3.81 and 1.2.5. In the JavaScript/TypeScript ecosystem, the affected products are @langchain/core versions before 1.1.8 and langchain npm package versions before 1.2.3. Organizations using any of these versions are at immediate risk and should prioritize remediation.

Workaround and Mitigation

The primary mitigation is to upgrade langchain-core to version 1.2.5 or later, or to 0.3.81+ for older branches. For JavaScript/TypeScript environments, upgrade @langchain/core to 1.1.8+ and langchain to 1.2.3+.

Organizations should review their application logic to ensure that LLM outputs are not blindly serialized and deserialized. The use of secrets_from_env should be disabled unless absolutely necessary; this parameter is set to False by default in patched versions. The new allowed_objects allowlist in the load() and loads() functions should be leveraged to restrict deserialization to safe classes only. Additionally, Jinja2 templates in serialized data should be blocked or sanitized, as they are now by default in patched versions.

It is also recommended to monitor for indicators of compromise, such as serialized objects with unexpected "lc" keys, unusual environment variable access, and LLM outputs containing "lc"-structured data. Regularly audit application logs and data stores for signs of exploitation.

References

The following resources provide additional technical details and guidance:

GitHub Advisory: GHSA-c67j-w6g6-q2cm

Rescana is here for you

At Rescana, we are committed to helping organizations navigate the evolving threat landscape. Our Third-Party Risk Management (TPRM) platform empowers you to continuously monitor, assess, and mitigate risks across your digital supply chain, including those introduced by open-source dependencies and AI frameworks. We encourage all customers to review their exposure to LangChain and related technologies, implement the recommended mitigations, and leverage our platform for enhanced visibility and control. If you have any questions or require further assistance, our team is ready to help at ops@rescana.com.

bottom of page