vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
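The core flaw can be illustrated with a minimal sketch. The function names and structure below are illustrative, not vLLM's actual internals: a loader that hardcodes `trust_remote_code=True` silently discards the user's `--trust-remote-code=False` opt-out, while the patched pattern propagates the user's setting.

```python
# Illustrative sketch of the vulnerability pattern (hypothetical names,
# not vLLM's real code): loading a sub-component with a hardcoded flag
# versus propagating the user's configured value.

def load_subcomponent_vulnerable(repo_id: str, user_trust_remote_code: bool) -> dict:
    """Buggy pattern: the user's setting is accepted but never used."""
    # Hardcoding True means repository-supplied code would run even when
    # the user explicitly disabled remote code trust -> RCE risk.
    return {"repo": repo_id, "trust_remote_code": True}

def load_subcomponent_patched(repo_id: str, user_trust_remote_code: bool) -> dict:
    """Patched pattern: the flag is propagated from the user's config."""
    return {"repo": repo_id, "trust_remote_code": user_trust_remote_code}

# Even with --trust-remote-code=False, the vulnerable loader still opts in:
vuln = load_subcomponent_vulnerable("some/model", user_trust_remote_code=False)
patched = load_subcomponent_patched("some/model", user_trust_remote_code=False)
```

This is why the bug maps to CWE-693 (Protection Mechanism Failure): the protection exists and the user enables it, but a code path bypasses it.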


Fixes

Solution

No solution given by the vendor.


Workaround

No workaround given by the vendor.

History

Fri, 27 Mar 2026 04:00:00 +0000

Values Added:

Description: vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.

Title: vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out

Weaknesses: CWE-693

Metrics: cvssV3_1 (score 8.8, vector CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H)



MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-03-26T23:56:53.579Z

Reserved: 2026-02-24T15:19:29.717Z

Link: CVE-2026-27893

Vulnrichment

No data.

NVD

Status: Received

Published: 2026-03-27T00:16:22.333

Modified: 2026-03-27T00:16:22.333

Link: CVE-2026-27893

Redhat

No data.

OpenCVE Enrichment

No data.

Weaknesses