The vulnerability exists because multiple block execution endpoints in AutoGPT failed to check whether a block was disabled before executing it. The advisory and the patch commit 75ecc4de92281d72f256993989763b66a7acd8a5 confirm this. The patch adds a simple guard, if obj.disabled:, to three separate execution paths: the main web API (/api/blocks/{block_id}/execute), the external API (/external-api/v1/blocks/{id}/execute), and the chat tool's block runner (RunBlockTool._execute).
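The shape of the fix can be sketched as follows. This is a minimal illustration, not AutoGPT's actual code: the Block class, block registry, and error type here are hypothetical stand-ins; only the if obj.disabled: guard mirrors the patch.

```python
from dataclasses import dataclass


@dataclass
class Block:
    """Hypothetical stand-in for an AutoGPT block definition."""
    id: str
    disabled: bool = False


class BlockDisabledError(Exception):
    """Raised when execution of a disabled block is attempted."""


# Toy registry: one disabled (security-sensitive) block, one normal block.
BLOCKS = {
    "block-installation": Block(id="block-installation", disabled=True),
    "echo": Block(id="echo"),
}


def execute_block(block_id: str) -> str:
    obj = BLOCKS[block_id]
    # The patched guard: reject disabled blocks before any execution
    # path (web API, external API, or chat tool) can run them.
    if obj.disabled:
        raise BlockDisabledError(f"Block {block_id} is disabled")
    return f"executed {block_id}"
```

Before the patch, the equivalent of execute_block ran the block unconditionally, so a disabled block was reachable by anyone who could hit the endpoint.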
The root cause of the RCE is the BlockInstallationBlock, which is disabled by default for security reasons. Its run method is designed to write and execute arbitrary Python code, a feature intended only for development. Because the execution endpoints never consulted the disabled flag, an attacker could invoke this dangerous block directly with a malicious payload. The vulnerable functions are therefore the three endpoints that fail to perform the check, while BlockInstallationBlock.run is what would appear in a runtime profile, since it is the final step that executes the malicious code.
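To see why reaching this block is full code execution, consider a stripped-down sketch of a block whose run method exec()s caller-supplied source. This is illustrative only; the class name echoes BlockInstallationBlock but the signature and behavior here are assumptions, not the project's real implementation.

```python
class BlockInstallationSketch:
    """Toy model of a block that executes attacker-supplied Python."""

    def run(self, code: str) -> dict:
        namespace: dict = {}
        # The dangerous step: arbitrary Python runs in-process, with the
        # full privileges of the AutoGPT server.
        exec(code, namespace)
        return {"defined": [k for k in namespace if not k.startswith("__")]}


# Any payload the attacker sends is executed verbatim; defining a variable
# stands in here for what could equally be os.system(...) or a reverse shell.
result = BlockInstallationSketch().run("marker = 42")
```

Once the disabled-flag check was bypassed, the payload passed to run executed with the server's privileges, which is what elevates this from a missing validation bug to remote code execution.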