
Proposal for adding capability of replying to close-ended questions in a concise way #613

Open
bdattoma opened this issue Mar 21, 2024 · 3 comments

Comments

@bdattoma

Describe the proposed contribution to the taxonomy

  • When replying to a close-ended question, the model writes a long answer
  • If the prompt explicitly asks it to respond with only the option label (and maybe its value), the model does not respect that requirement
  • Proposing to add a composite skill that replies with only the option label, as in a close-ended exam question

Input given at the prompt

What does the first law of robotics say? Return the option number only:
    1. A robot can hit a human if they are trying to hurt the robot;
    2. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
    3. A human cannot interfere with a robot's action if this would mean making the robot fail a task

Response from the current model

   The first law of robotics, as stated by Isaac Asimov, is: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This is option 2 in the list provided.

Response that you would expect instead with the contribution

  2 or Option 2
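Until such a skill exists, one stopgap is to post-process the verbose answer and keep only the option label. A minimal sketch in Python, assuming the option number appears somewhere in the response (the extract_option_label helper and its regexes are illustrative, not part of this proposal):

    import re

    def extract_option_label(response: str) -> str | None:
        # Prefer an explicit "option N" mention, e.g. "This is option 2".
        match = re.search(r"\boption\s*(\d+)\b", response, flags=re.IGNORECASE)
        if match is None:
            # Otherwise fall back to the first standalone number in the response.
            match = re.search(r"\b(\d+)\b", response)
        return match.group(1) if match else None

    verbose = (
        'The first law of robotics, as stated by Isaac Asimov, is: "A robot may '
        'not injure a human being or, through inaction, allow a human being to '
        'come to harm." This is option 2 in the list provided.'
    )
    print(extract_option_label(verbose))  # -> "2"

The proposed composite skill would make this kind of post-processing unnecessary.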
@bdattoma
Author

also related to #611

@obuzek
Contributor

obuzek commented Mar 21, 2024

Yup, again, models really aren't great at things like "select a number from the list". The reason is that they're intentionally biased towards giving more output. 2-3 token responses are discouraged because they're usually nonsense.
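For what it's worth, the usual prompting workaround is few-shot examples that demonstrate the terse format before asking the real question; a minimal sketch (the prompt wording is illustrative, not a tested template):

    # A few-shot prompt that shows the terse answer format before the real question.
    # The wording is an illustrative assumption, not a tested template.
    FEW_SHOT_PROMPT = """Answer with the option number only.

    Q: Which planet is closest to the sun? 1. Venus; 2. Mercury; 3. Mars
    A: 2

    Q: What does the first law of robotics say? 1. ...; 2. A robot may not injure a human being...; 3. ...
    A:"""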

Can you talk a little bit about in what situation you think getting an LLM to select from a list would be valuable to an end user?

@bdattoma
Author

bdattoma commented Mar 22, 2024

> Yup, again, models really aren't great at things like "select a number from the list". The reason is that they're intentionally biased towards giving more output. 2-3 token responses are discouraged because they're usually nonsense.
>
> Can you talk a little bit about in what situation you think getting an LLM to select from a list would be valuable to an end user?

Few thoughts:

  1. More than the close-ended format per se, what caught my attention is that the model does not respect the prompt when it specifies how to respond. Check with ChatGPT for a comparison.
  2. One scenario where an end user could leverage this skill, I guess, is automatic verification of exam responses, maybe combined with domain-specific knowledge; see the sketch after this list.
  3. Finally, I recognize the feature per se may not be used that often in daily life; however, I don't think it has more or less value than, say, a reverse-string skill.
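To make the exam scenario in point 2 concrete, here is a minimal sketch, assuming the model already replies with just the option label (the grade_exam helper and the answer-key format are illustrative assumptions):

    def grade_exam(model_answers: dict[str, str], answer_key: dict[str, str]) -> float:
        # Fraction of questions where the model's option label matches the key.
        correct = sum(
            1
            for question, expected in answer_key.items()
            if model_answers.get(question, "").strip().rstrip(".").lower() == expected.lower()
        )
        return correct / len(answer_key)

    # Illustrative data: labels extracted from model responses vs. the official key.
    model_answers = {"q1": "2", "q2": "B", "q3": "4"}
    answer_key = {"q1": "2", "q2": "B", "q3": "3"}
    print(grade_exam(model_answers, answer_key))  # -> 0.666...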

Are there specific areas of interest you could suggest focusing on?

@github-actions github-actions bot added the stale label May 9, 2024
@jjasghar jjasghar removed the stale label Aug 22, 2024