The vulnerability exists because the lmdeploy library uses torch.load() to deserialize model files from potentially untrusted sources. torch.load() internally relies on Python's pickle module, which is insecure when fed untrusted data because deserialization can execute arbitrary code. The flaw is present in multiple locations where model weights or statistics are loaded from .bin or .pth files: an attacker could craft a malicious model file whose embedded payload executes arbitrary commands on the victim's machine the moment lmdeploy loads it.

The fix, applied in commit eb04b4281c5784a5cff5ea639c8f96b33b3ae5ee, adds the weights_only=True parameter to every torch.load() call. With this parameter, torch.load() restricts deserialization to tensor data and a small allowlist of safe types, refusing to unpickle arbitrary objects and thus preventing execution of any malicious code embedded in the file.
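To illustrate the underlying mechanism, here is a minimal sketch using Python's pickle module directly (the same machinery torch.load() invokes without weights_only=True). The class name MaliciousPayload is illustrative; a real exploit would substitute a destructive call such as os.system for the harmless eval used here.

```python
import pickle

class MaliciousPayload:
    """Object whose pickled form runs attacker-chosen code on load."""

    def __reduce__(self):
        # pickle records this (callable, args) pair and calls it during
        # unpickling. Here it harmlessly evaluates "2 + 2"; an attacker
        # would instead return something like (os.system, ("...",)).
        return (eval, ("2 + 2",))

# The "model file" an attacker would distribute:
blob = pickle.dumps(MaliciousPayload())

# Merely deserializing the blob executes the embedded callable.
result = pickle.loads(blob)
print(result)  # 4 -- proof that code ran during deserialization
```

Under torch.load(..., weights_only=True), the restricted unpickler rejects such a payload (it only reconstructs tensors and allowlisted types) instead of executing it, which is why the patch neutralizes this class of attack.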