
Implement reflection #11

Closed
mijhaels opened this issue Mar 30, 2023 · 12 comments
Labels
enhancement New feature or request Stale

Comments

@mijhaels

Context: Implement reflection, a technique that allows generating more coherent and natural texts using pre-trained language models.

Problem or idea: Reflection is based on two articles that propose different methods to incorporate world knowledge and causal reasoning in text generation. The articles are:

  1. ArXiv Article
  2. GitHub Repository

Solution or next step: I would like the Auto-GPT project to include reflection as an option to improve the quality of the generated texts. @Torantulino, what do you think of this idea?
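For concreteness, the reflection technique the linked papers describe boils down to a generate → critique → revise loop. A minimal sketch of that loop, where `call_llm` is a hypothetical stand-in for whatever model call Auto-GPT would actually make:

```python
# Generate -> critique -> revise loop. `call_llm` is a hypothetical
# stand-in for the real model call; plug in the actual backend.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to the actual model call")

def reflect(task: str, rounds: int = 2, llm=call_llm) -> str:
    # First attempt at the task.
    draft = llm(f"Task: {task}\nWrite a first attempt.")
    for _ in range(rounds):
        # Ask the model to critique its own output...
        critique = llm(
            f"Task: {task}\nAttempt:\n{draft}\n"
            "List concrete problems with this attempt."
        )
        # ...then revise the attempt using that critique.
        draft = llm(
            f"Task: {task}\nAttempt:\n{draft}\n"
            f"Critique:\n{critique}\n"
            "Rewrite the attempt, fixing every problem in the critique."
        )
    return draft
```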

@Torantulino
Member

That is VERY interesting indeed. I've been playing about with getting Auto-GPT to improve itself and write its own code; this might just be the ticket!

Executing unreviewed AI-generated code is a security risk, though, so we'd have to think of the safest way to do this.

Submit a pull request!

@mijhaels
Author

mijhaels commented Apr 1, 2023

I code, but I only know web programming, and I'm a junior too... sorry, haha

@mijhaels mijhaels closed this as completed Apr 1, 2023
@mijhaels mijhaels reopened this Apr 1, 2023
@Torantulino
Member

No worries at all, and never let being a junior stop you; the absolute best way to learn is by trying and failing!

@Torantulino Torantulino added the enhancement New feature or request label Apr 1, 2023
@NotoriousPyro

Executing unreviewed AI-generated code is a security risk, though, so we'd have to think of the safest way to do this.

Probably best to have it raise pull requests so that they can be reviewed manually.

@algopapi

algopapi commented Apr 5, 2023

I kinda implemented this. In my implementation, I let the agent reflect every N steps, which can be quite interesting but requires more testing.
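A sketch of what "reflect every N steps" could look like inside an agent loop; all names here are illustrative assumptions, not @algopapi's actual code:

```python
# Agent loop that pauses to reflect every `reflect_every` steps.
# `step_fn` and `reflect_fn` are hypothetical callables: one takes an
# action, the other summarises recent actions into an insight.
def run_agent(step_fn, reflect_fn, steps: int, reflect_every: int = 5):
    history = []
    for i in range(1, steps + 1):
        history.append(step_fn(i))
        if i % reflect_every == 0:
            # Summarise the last N actions and fold the insight back
            # into the history so later steps can condition on it.
            insight = reflect_fn(history[-reflect_every:])
            history.append(("reflection", insight))
    return history
```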

@Bearnardd

Around the weekend I will do a little research on this topic. @algopapi, would you mind sharing your implementation? Have you pushed it to your fork?

@LeonardoLGDS

Also, we'll probably have to create some kind of benchmarks so that the AI knows the direction it needs to go. "Improve itself" can be a bit vague.

@Bearnardd

@LeonardoLGDS I haven't had time to check the paper yet, but I assume there are some benchmarking guidelines provided in the paper.

@aristotaloss

If anyone can get me a GPT-4 key, either directly via your org with prepayment or by putting me in touch with someone at OpenAI, I'm willing to implement it. I think it could be a lot of fun, and to add an edge, I'll do it on a fresh Linux box with firewalled access to only the OpenAI API, and a USB chainsaw.

Jk, not the chainsaw.

@younghuman

younghuman commented Apr 22, 2023

I'm planning to implement this over the weekend. Another great paper on self-reflection is this one: https://arxiv.org/abs/2304.03442

They also have a way of evaluating the agents with some questions.

@Torantulino @Andythem23

@bjm88 bjm88 mentioned this issue Apr 27, 2023
1 task
@Boostrix
Contributor

Boostrix commented May 1, 2023

That is VERY interesting indeed. I've been playing about with getting Auto-GPT to improve itself and write its own code; this might just be the ticket!

I guess a starting point would be accepting actual constraints/restrictions, i.e.:

  • context window size (restrict to 8k, ideally much less, probably 50% of that)
  • restrict changes to a single isolated module (which would mean commands or even better plugins)
  • accept that the underlying architecture would then need to work analogously to message passing, possibly via pipes. Anything else won't scale or would change too many places in the source tree at once; basically, any agent would consist of a list of other sub-agents, all of which fork each other as needed (think makefiles/clustering)
  • alternatively, come up with a module that can deal with patches/diffs for features spanning multiple files/contexts initially
  • to provide sufficient surrounding context, freely use Python docstrings; basically, reject PRs that don't have a ton of surrounding context in the form of comments and a ton of unit tests/test coverage
  • extend git_operations.py to add support for traversing commit logs (patches + log messages)
  • come up with a new plugin to handle github API integration, as per: Auto-GPT Recursive Self Improvement #15 (comment)
  • have unit tests and benchmarks for regression testing purposes
  • improve docker integration for CI
  • use a custom local LLM to build a heatmap (query/idea -> location) mapping a queried idea/change to path/file names and line numbers, so that the agent can determine which set of files/places is likely to be relevant for a given change, and then narrow down via git logs and commit history
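The heatmap in the last bullet could start much simpler than a local LLM: as a first approximation, rank files by how many query tokens appear in their contents. A crude sketch under that assumption (the file names and scoring are purely illustrative):

```python
import re
from collections import Counter

def build_heatmap(query: str, files: dict) -> list:
    """Rank files by how many of their words appear in the query.

    A crude keyword-overlap stand-in for the LLM-backed
    (query/idea -> location) heatmap described above.
    """
    tokens = set(re.findall(r"\w+", query.lower()))
    scores = Counter()
    for path, text in files.items():
        words = re.findall(r"\w+", text.lower())
        scores[path] = sum(1 for w in words if w in tokens)
    # Highest-scoring (most likely relevant) files first.
    return scores.most_common()
```

From here, the agent could take the top few paths and narrow down further via git logs and commit history, as the bullet suggests.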

@github-actions
Contributor

This issue was closed automatically because it has been stale for 10 days with no activity.

@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Sep 17, 2023
@github-project-automation github-project-automation bot moved this from Todo to Done in AutoGPT Roadmap Sep 17, 2023
tennox pushed a commit to tennox/AutoGPT that referenced this issue Sep 28, 2024
fix: update `default` shell, add missing derivations
Projects
Status: Done
Development

No branches or pull requests

9 participants