
Vulnerability Report: Code Injection Risk Due to Unsafe Usage of eval() #4873

Open
ybdesire opened this issue Jan 1, 2025 · 2 comments

ybdesire commented Jan 1, 2025

What happened?

Description

In the code snippet below, there is a potential Remote Code Execution (RCE) vulnerability stemming from the unsafe use of the eval() function. The code checks whether the target_url variable starts with the string "func". If it does, it extracts the substring after "func:", replaces the placeholder "__last_url__" with the value of page.url, and then passes the resulting string to eval() for execution.

            if target_url.startswith("func"):
                func = target_url.split("func:")[1]
                func = func.replace("__last_url__", page.url)
                target_url = eval(func)

If an attacker creates a config as below:

target_url= "func:__import__('os').system('rm -f /path/to/sensitive/file')"

then the user's sensitive file will be deleted when the evaluator processes this config.
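
For illustration, here is a minimal standalone sketch that reproduces the same pattern with a harmless payload. It is not the actual evaluator; FakePage and the /tmp/poc.txt path are made up for the demo.

    # Minimal standalone sketch of the vulnerable pattern (harmless payload).
    class FakePage:
        """Stand-in for the browser page object used by the evaluator."""
        url = "https://example.com/last"

    page = FakePage()

    # Attacker-controlled value, e.g. read from a task config file.
    target_url = "func:__import__('os').system('echo INJECTED > /tmp/poc.txt')"

    if target_url.startswith("func"):
        func = target_url.split("func:")[1]
        func = func.replace("__last_url__", page.url)
        # eval() executes the attacker's expression; os.system runs a shell command.
        target_url = eval(func)

    print(target_url)  # 0 (the shell exit status); /tmp/poc.txt now exists

Running this script on a Unix-like system creates /tmp/poc.txt, confirming that the string taken from the config was executed as code.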

The code is from the latest main branch:
https://github.com/microsoft/autogen/blob/main/python/packages/agbench/benchmarks/WebArena/Templates/Common/evaluation_harness/evaluators.py#L276

This issue falls under CWE-94 (Code Injection):
https://cwe.mitre.org/data/definitions/94.html

Security Impact:
This vulnerability allows attackers to bypass normal security mechanisms and execute arbitrary code with the privileges of the user running the vulnerable application. This could lead to severe consequences, including data theft, service disruption, or the installation of malicious software.

What did you expect to happen?

To mitigate this vulnerability, avoid using eval() with untrusted inputs. Instead, consider implementing a safer alternative, such as a whitelist of allowed functions or a more secure parsing and execution mechanism. Additionally, perform thorough input validation and sanitization to prevent malicious inputs from being processed.
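
As one hedged example of the whitelist approach (the function names and registry below are hypothetical, not part of WebArena or AutoGen), the config value could name a registered transform instead of carrying arbitrary Python:

    from urllib.parse import urlparse

    def last_url(page_url: str) -> str:
        """Return the last visited URL unchanged."""
        return page_url

    def origin_of_last_url(page_url: str) -> str:
        """Return only the scheme://host part of the last visited URL."""
        parsed = urlparse(page_url)
        return f"{parsed.scheme}://{parsed.netloc}"

    # Only functions registered here can ever be invoked from a config file.
    ALLOWED_FUNCS = {
        "last_url": last_url,
        "origin_of_last_url": origin_of_last_url,
    }

    def resolve_target_url(target_url: str, page_url: str) -> str:
        if target_url.startswith("func:"):
            name = target_url.split("func:", 1)[1].strip()
            if name not in ALLOWED_FUNCS:
                raise ValueError(f"Unknown target_url function: {name!r}")
            return ALLOWED_FUNCS[name](page_url)
        return target_url

With this pattern, a config such as target_url= "func:last_url" resolves safely, while an injected expression like the one above is rejected instead of being executed.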

How can we reproduce it (as minimally and precisely as possible)?

Create a config as below:

target_url= "func:__import__('os').system('rm -f /path/to/sensitive/file')"

When the evaluator processes this config, the user's sensitive file will be deleted.

AutoGen version

latest main branch today

Which package was this bug in

Core

Model used

No response

Python version

No response

Operating system

No response

Any additional info you think would be helpful for fixing this bug

To mitigate this vulnerability, avoid using eval() with untrusted inputs. Instead, consider implementing a safer alternative, such as a whitelist of allowed functions or a more secure parsing and execution mechanism. Additionally, perform thorough input validation and sanitization to prevent malicious inputs from being processed.

jackgerrits (Member) commented

The mentioned code is from the WebArena benchmark evaluation script. We vendor a copy of it with some modifications, but the referenced code is unchanged from the original. See the upstream here: https://github.com/web-arena-x/webarena/blob/main/evaluation_harness/evaluators.py

This is not part of any package we distribute.

I would recommend you open this issue in their repo. We would be more than happy to update our copy when they fix this.


ybdesire (Author) commented Jan 2, 2025

@jackgerrits Got that. Thanks for the update.
