
Feat/Sandbox #12

Merged
merged 17 commits into main
Oct 29, 2022
Conversation

@TheExGenesis (Collaborator) commented Oct 8, 2022

For now, we have a sandboxed server that we can hit from outside the sandbox with code, and it will return a serialized walker object (see sandbox/server/index.py). The server will:

  1. receive POST requests containing Python encodings for Walkers as strings,
  2. generate the Walker in the sandboxed server,
  3. return the serialized walker to the caller.

Evaluation can in principle be done outside of the sandbox, since the arbitrary code is only used for generating walkers, which are then safe.
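For illustration, the three steps above could be sketched roughly like this (a sketch only: make_walker, to_dict(), and the JSON serialization are assumptions for the example, not the actual walker API in sandbox/server/, and the HTTP layer is omitted):

```python
import json

def generate_walker(code_string, func_name):
    """Run a Python encoding of a walker and return it serialized.

    `func_name` and the `to_dict()` method are illustrative assumptions;
    the real walker API in sandbox/server/ may differ.
    """
    exec_globals = {}
    exec(code_string, exec_globals)      # step 2: generate the walker in the sandbox
    walker = exec_globals[func_name]()
    return json.dumps(walker.to_dict())  # step 3: serialize for the caller

# A stand-in walker description, as it might arrive in a POST body (step 1):
code = '''
class Walker:
    def to_dict(self):
        return {"joints": 3, "muscles": 2}

def make_walker():
    return Walker()
'''
print(generate_walker(code, "make_walker"))  # → {"joints": 3, "muscles": 2}
```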

Open questions:

  • the example walker descriptions in Python do generate walkers, but running walker.validate() on them then returns False. I suspect it's a problem with the validate function rather than with the descriptions, but I did not check.

@TheExGenesis (Collaborator, Author) commented Oct 8, 2022

Just realized I pulled from build-lego, which isn't finished yet; will happily pull again when it's ready for merging.

@honglu2875 (Collaborator) commented Oct 22, 2022

A note on the sandbox for my branch #13:
(my code is kind of quick-and-dirty, so please feel free to do what's best for your branch)
If #13 is merged, then for the toy task ImageOptim to run, I will need an API in the sandbox along the lines of the following:

from flask import request  # `app` is assumed to be the Flask app in sandbox/server/index.py

@app.route("/eval_func", methods=["POST"])
def evaluate_function():
    req_json = request.get_json()
    code_string = req_json["code"]
    func_name = req_json["func_name"]

    return repr(execute_with_result(code_string, func_name, 5.0))

where execute_with_result just needs to:

  • run the program under reliability_guard() in a process,
  • take an extra function-name string (func_name) as a parameter,
  • execute the function,
  • assume the result is a np.ndarray, and return the .tolist() result to me.
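For reference, a self-contained sketch of that wrapper could look like the following. The sandbox guards are left out for brevity, and the child process appends into a manager list because a spawned process cannot simply return a value to its parent (which is presumably why the real unsafe_execute takes a result argument). Everything here is illustrative, including the stand-in target function:

```python
import multiprocessing

def _unsafe_target(check_program, func_name, result):
    # Simplified stand-in for unsafe_execute_with_result: the real one would
    # also apply reliability_guard(), swallow_io(), and time_limit() first.
    try:
        exec_globals = {}
        exec(check_program, exec_globals)
        result.append(exec_globals[func_name]())
    except BaseException as e:
        result.append(str(e))

def execute_with_result(check_program, func_name, timeout):
    """Run the submitted program in a separate process and collect its result."""
    manager = multiprocessing.Manager()
    result = manager.list()  # shared channel back to the parent process
    proc = multiprocessing.Process(
        target=_unsafe_target, args=(check_program, func_name, result)
    )
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.kill()
    return result[0] if result else "timed out"

print(execute_with_result("def f():\n    return [1, 2, 3]", "f", 5.0))
```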

We just need a slightly different unsafe_execute. Here is my own quick implementation, without much consideration of anything else:

def unsafe_execute_with_result(check_program, func_name, result, timeout):
    reliability_guard()  # disable destructive builtins before running untrusted code

    # Run program.
    try:
        exec_globals = {}
        with swallow_io():
            with time_limit(timeout):
                exec(check_program, exec_globals)
                result.append(exec_globals[func_name]().tolist())

    except BaseException as e:
        result.append(str(e))

I don't know what we do with code that errors, so that except branch is really arbitrary.

@herbiebradley (Collaborator) commented:

@TheExGenesis Thanks for the great work! Can you pull from main, make sure your code still works and can interface with @honglu2875's code (see https://github.com/CarperAI/ELM/blob/main/map_elites/environments.py#L213), then I think we're ready to merge!

@TheExGenesis (Collaborator, Author) commented Oct 28, 2022

@honglu2875 can I ask why the name _with_result and why you're appending to result rather than just returning?

Also it seems to me different environments might benefit from different dedicated functions since e.g. your interface assumes an ndarray and uses the sandbox for a full eval, while the sodaracer only requires the sandbox to generate the racers, not for eval. I'll accommodate the ImageOptim of course.

@honglu2875 (Collaborator) commented Oct 28, 2022

> @honglu2875 can I ask why the name _with_result and why you're appending to result rather than just returning?
>
> Also it seems to me different environments might benefit from different dedicated functions since e.g. your interface assumes an ndarray and uses the sandbox for a full eval, while the sodaracer only requires the sandbox to generate the racers, not for eval. I'll accommodate the ImageOptim of course.

Oh yeah, I literally copied and pasted the other code in codex_execute.py. The name _with_result is nothing important (I was thinking that I'm evaluating the function and returning its result). Appending instead of returning is also unnecessary (simply because I copied from the other unsafe_execute function). My code here is really ugly (plus there's a serious concurrency issue if we receive a batch of code and use multiple processes), and I only wanted to give an idea of what it should return. Please forget about it and keep only the idea that it should return the numpy array or list.

Yes, I agree that a different function and interface make total sense. Feel free to also change the name of the endpoint if necessary. The modification on my side is trivial.

I remember @herbiebradley had some code somewhere for classifying different errors into different numbers. Maybe we can incorporate this into the sandbox and have it return a JSON message like {'status': ..., 'data': ...} for both interfaces?
Oh no, never mind!! You already implemented it for the sodaracer! I got confused between the two unsafe_execute functions in util.py and codex_execute.py (I was modifying the latter instead of the former).
My ImageOptim hasn't taken error codes into account, but it would be nice if we also received them from the sandbox, in case there is a need to improve this image toy task in the future.
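One way that envelope could look (the status codes and the ExecStatus/make_response names here are made up for illustration; the real classification is whatever the existing error-code mapping defines):

```python
import json
from enum import Enum

class ExecStatus(Enum):
    # Hypothetical codes; substitute the real error classification.
    VALID = 0
    TIMEOUT = 1
    SYNTAX_ERROR = 2
    RUNTIME_ERROR = 3

def make_response(status, data=None):
    """Wrap a sandbox result in the {'status': ..., 'data': ...} envelope."""
    return json.dumps({"status": status.value, "data": data})

print(make_response(ExecStatus.VALID, [1.0, 2.0]))  # → {"status": 0, "data": [1.0, 2.0]}
print(make_response(ExecStatus.TIMEOUT))            # → {"status": 1, "data": null}
```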

@TheExGenesis (Collaborator, Author) commented Oct 28, 2022

Thanks for responding! I'll make the error codes nicer by returning JSON. I don't have a great idea of what to do concurrency-wise for the execs; open to suggestions.

EDIT: I added batch processing; it's not concurrent though.
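A sequential batch loop of that kind might be as simple as the following sketch (the evaluate parameter and toy_evaluate stand in for whatever single-program runner the sandbox actually uses; they are not the real API):

```python
def eval_batch(programs, func_name, evaluate, timeout=5.0):
    # Sequential batch processing: one result per submitted program, in order.
    # Not concurrent; each program waits for the previous one to finish.
    return [evaluate(code, func_name, timeout) for code in programs]

# Toy single-program evaluator standing in for the sandboxed runner:
def toy_evaluate(code, func_name, timeout):
    exec_globals = {}
    exec(code, exec_globals)
    return exec_globals[func_name]()

batch = ["def f():\n    return 1", "def f():\n    return 2"]
print(eval_batch(batch, "f", toy_evaluate))  # → [1, 2]
```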

@TheExGenesis (Collaborator, Author) commented:
@herbiebradley confirmed that the code still works and the interface is compatible with @honglu2875's work.

@herbiebradley (Collaborator) left a review:

Looks good, I added a few comments. Thanks for the work!

Files with review comments:

  • sandbox/server/index.py (resolved)
  • sandbox/scripts/bootstrap.sh (resolved)
  • sandbox/server/walker/walk_creator.py (outdated, resolved)
@TheExGenesis (Collaborator, Author) replied:

> Looks good, I added a few comments. Thanks for the work!

all good, handled the comments :)

@herbiebradley (Collaborator) left a review:

LGTM, I'll merge now!

@herbiebradley herbiebradley merged commit 0fe54c9 into main Oct 29, 2022
@herbiebradley herbiebradley deleted the sandbox branch October 29, 2022 23:22