
Support for RAG with Hugging Face LLM Models using Connector. Post and Pre Process Function Support. #1869

Closed
ramda1234786 opened this issue Jan 13, 2024 · 1 comment
Assignees
Labels
enhancement New feature or request

Comments

@ramda1234786

ramda1234786 commented Jan 13, 2024

Is your feature request related to a problem?
Need to do RAG with Hugging Face models. All Hugging Face models return responses in the format below, and a post_process_function is needed to convert this into a JSON object. Like the out-of-the-box support we have for Cohere, Amazon, and OpenAI, we need support for Hugging Face models (LLM models, not embedding models).

{
    "inference_results": [
        {
            "output": [
                {
                    "name": "response",
                    "dataAsMap": {
                        "response": [
                            {
                                "generated_text": "Generated text here...................."
                            }
                        ]
                    }
                }
            ],
            "status_code": 200
        }
    ]
}
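To make the required transformation concrete, here is a minimal Python sketch (not the ml-commons API itself; `to_completion` is a hypothetical helper name) of the reshaping a post_process_function would have to perform on a Hugging Face text-generation response:

```python
# Hypothetical illustration of the reshaping discussed in this issue.
# A Hugging Face text-generation endpoint returns a list of objects
# with a "generated_text" field:
hf_response = [{"generated_text": "Generated text here"}]

def to_completion(response):
    # Extract the first generated_text entry and wrap it in the
    # {"completion": ...} shape that downstream RAG prompts expect.
    return {"completion": response[0]["generated_text"]}

result = to_completion(hf_response)
# result == {"completion": "Generated text here"}
```

The key point is that the output must be a structured object, not a JSON-encoded string.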

What solution would you like?
Like the out-of-the-box support we have for Cohere, Amazon, and OpenAI, we need support for Hugging Face models (LLM models, not embedding models).
We should be able to define a post_process_function and a pre_process_function.

What alternatives have you considered?
Tried writing a post_process_function, but it never works.

Do you have any additional context?
Tried the series of steps below.

"post_process_function": "\n def json = \"{\" +\n \"\\\"completion\\\":\\\"\" + params['response'][0].generated_text + \"\\\" \"}\";\n return json;\n "

Tried it with a Painless script as well:

POST /_scripts/painless/_execute

{
    "script": {
        "source": "\n      def json = \"{\" +\n                 \"\\\"completion\\\":\\\"\" + params['response'][0].generated_text + \"\\\" }\";\n      return json;\n    ",
        "params": {
            "response": [
                {
                    "generated_text": "Your text here"
                }
            ]
        }
    }
}

I got this:

{
    "result": "{\"completion\":\"Your text here\" }"
}

After creating the connector and model, it did not work; I still get this error:
"reason": "Cannot invoke \"Object.getClass()\" because \"callArgs[0]\" is null"

The expected result is this:

{
    "result": {"completion":"Your text here" }
}
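The gap between the actual and expected results above is that the Painless script returns a JSON-encoded *string*, while the connector needs a parsed object. In Python terms:

```python
import json

# What the Painless script above actually returns: a JSON string.
actual = '{"completion":"Your text here" }'

# What the connector needs: the parsed object.
expected = {"completion": "Your text here"}

# Parsing the string yields the expected structure; the script's
# failure mode is that this parsing step never happens.
assert json.loads(actual) == expected
```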

It would be much better if there were an API to test the post_process_function, other than the Painless script test/execution.

@ramda1234786 ramda1234786 added enhancement New feature or request untriaged labels Jan 13, 2024
@ramda1234786 ramda1234786 changed the title [FEATURE] Suppport for RAG with Hugging Face LLM Models using Connector. Post and Pre Process Function Support. Jan 13, 2024
@b4sjoo b4sjoo removed the untriaged label Feb 13, 2024
@b4sjoo b4sjoo moved this to On-deck in ml-commons projects Feb 13, 2024
@ramda1234786
Author

@b4sjoo, this is working now; we can achieve this using a post_process_function.
One working way is here

We can close this request.

@github-project-automation github-project-automation bot moved this from On-deck to Done in ml-commons projects Feb 15, 2024