The vulnerability is a classic command injection issue in the ms-swift web UI. The root cause is that shell command strings are built by directly concatenating user-provided input, with no sanitization or escaping. The advisory specifically highlights the LLMTrain.train and LLMTrain.train_local methods, where user-controlled parameters such as --output_dir are appended to a command string.
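A minimal, self-contained sketch of this vulnerable pattern (not the actual ms-swift code; `echo` stands in for the real training command, and the payload value is hypothetical):

```python
import subprocess

# Attacker-controlled value for a parameter like --output_dir.
payload = "out_dir; echo INJECTED"

# Vulnerable pattern: user input concatenated into a single shell string.
result = subprocess.run(
    f"echo training --output_dir {payload}",
    shell=True, capture_output=True, text=True,
)
print(result.stdout)
# The shell splits on ';' and executes the attacker's command too.
```

Because the string is handed to a shell, the `;` terminates the intended command and the trailing `echo INJECTED` runs as a second command.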
Analysis of the patch confirms this. Across multiple UI components (llm_train, llm_eval, llm_export, llm_infer, etc.), the code was changed to stop building raw command strings. Instead of concatenating parameters, the fix assembles a list of command arguments. This list is then passed to subprocess.Popen or subprocess.run, which, when not invoked with shell=True, treats each list item as a distinct argument and never interprets shell metacharacters, thus preventing command injection.
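A sketch of the patched pattern under the same hypothetical payload (again using `echo` as a stand-in for the real command):

```python
import subprocess

# Same attacker-controlled value as before.
payload = "out_dir; echo INJECTED"

# Patched pattern: arguments as a list, executed without a shell.
result = subprocess.run(
    ["echo", "training", "--output_dir", payload],
    capture_output=True, text=True,
)
print(result.stdout)
# The payload reaches the program verbatim as one argument; ';' is not special.
```

Here the `;` and everything after it are just bytes inside a single argv entry, so no second command can be smuggled in.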
The vulnerable functions are primarily the methods that execute the commands (e.g., train_local, eval_model), which used insecure mechanisms such as os.system or subprocess.Popen with shell=True. The methods that construct these commands (e.g., train, eval) are also implicated, since they are where the malicious payload enters the command string. The vulnerability is widespread across the UI, affecting the training, evaluation, export, and inference functionalities.
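For completeness, when a shell string genuinely cannot be avoided (not the approach this patch takes, but a standard fallback), the Python standard library's shlex.quote() neutralizes metacharacters by wrapping the value in safe quoting. A sketch with the same hypothetical payload:

```python
import shlex
import subprocess

payload = "out_dir; echo INJECTED"

# shlex.quote() wraps the value so the shell treats it as one literal word.
cmd = f"echo training --output_dir {shlex.quote(payload)}"
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(result.stdout)
# The quoted payload survives as a single argument; no second command runs.
```

The list-of-arguments approach remains preferable, since it removes the shell from the execution path entirely rather than relying on correct quoting at every call site.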