
Commit

Fix examples path change issue (#613)
Signed-off-by: Lv, Liang1 <liang1.lv@intel.com>
Co-authored-by: kevinintel <hanwen.chang@intel.com>
lvliang-intel and kevinintel authored Nov 2, 2023
1 parent c06b96c commit 3c4e47b
Showing 6 changed files with 11 additions and 11 deletions.

Changed file: README.md (1 addition, 1 deletion)

@@ -114,7 +114,7 @@ Additionally, we are preparing to introduce Baichuan, Mistral, and other models
<td colspan="3" align="center"><a href="intel_extension_for_transformers/neural_chat/docs/notebooks/deploy_chatbot_on_habana_gaudi.ipynb">Chatbot on Gaudi</a></td>
</tr>
<tr>
<td colspan="4" align="center"><a href="intel_extension_for_transformers/neural_chat/examples/talkingbot_pc/build_talkingbot_on_pc.ipynb">Chatbot on Client</a></td>
<td colspan="4" align="center"><a href="intel_extension_for_transformers/neural_chat/examples/deployment/talkingbot/pc/build_talkingbot_on_pc.ipynb">Chatbot on Client</a></td>
<td colspan="4" align="center"><a href="intel_extension_for_transformers/neural_chat/docs/full_notebooks.md">More Notebooks</a></td>
</tr>
<tr>

Next changed file

@@ -75,7 +75,7 @@ RUN if [ -f /torchvision-0.16.0+cpu-cp3${PYTHON_VERSION##*.}-cp3${PYTHON_VERSION

RUN source activate && conda activate neuralchat && pip install oneccl_bind_pt==2.0.0+cpu -f https://developer.intel.com/ipex-whl-stable-cpu && \
cd /intel-extension-for-transformers && pip install -r requirements.txt && pip install -v . && \
- cd ./intel_extension_for_transformers/neural_chat/examples/instruction_tuning && pip install -r requirements.txt && \
+ cd ./intel_extension_for_transformers/neural_chat/examples/finetuning/instruction && pip install -r requirements.txt && \
cd /intel-extension-for-transformers/intel_extension_for_transformers/neural_chat && pip install -r requirements_cpu.txt && \
conda install astunparse ninja pyyaml mkl mkl-include setuptools cmake cffi future six requests dataclasses -y && \
conda install jemalloc gperftools -c conda-forge -y && \
@@ -133,7 +133,7 @@ RUN git clone https://github.com/huggingface/optimum-habana.git && \
pip install -e .

# Install dependency
- RUN cd /intel-extension-for-transformers/intel_extension_for_transformers/neural_chat/examples/instruction_tuning/ && \
+ RUN cd /intel-extension-for-transformers/intel_extension_for_transformers/neural_chat/examples/finetuning/instruction && \
sed -i '/find-links https:\/\/download.pytorch.org\/whl\/torch_stable.html/d' requirements.txt && \
sed -i '/^torch/d;/^intel-extension-for-pytorch/d' requirements.txt && \
pip install -r requirements.txt && \
@@ -184,7 +184,7 @@ RUN conda init bash && \
source ~/.bashrc

RUN source activate && conda activate neuralchat && cd /intel-extension-for-transformers && \
- pip install -r ./intel_extension_for_transformers/neural_chat/examples/instruction_tuning/requirements.txt && \
+ pip install -r ./intel_extension_for_transformers/neural_chat/examples/finetuning/instruction/requirements.txt && \
pip install -r ./intel_extension_for_transformers/neural_chat/requirements.txt

WORKDIR /intel-extension-for-transformers/intel_extension_for_transformers/neural_chat/

Next changed file

@@ -67,7 +67,7 @@ RUN source activate && conda activate chatbot-finetuning && pip install oneccl_b
pip install datasets torch accelerate SentencePiece evaluate nltk rouge_score protobuf==3.20.1 tokenizers einops && \
git clone https://github.com/huggingface/peft.git && cd peft && python setup.py install && \
cd /itrex && pip install -v . && \
- cd ./intel_extension_for_transformers/neural_chat/examples/instruction_tuning && pip install -r requirements.txt
+ cd ./intel_extension_for_transformers/neural_chat/examples/finetuning/instruction && pip install -r requirements.txt

# Enable passwordless ssh for mpirun^M
RUN mkdir /var/run/sshd
@@ -79,7 +79,7 @@ RUN sed -i'' -e's/^#PermitRootLogin prohibit-password$/PermitRootLogin yes/' /et
&& echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
EXPOSE 22

- WORKDIR /itrex/neural_chat/examples/instruction_tuning/
+ WORKDIR /itrex/intel_extension_for_transformers/neural_chat/examples/finetuning/instruction/

CMD ["/usr/sbin/sshd", "-D"]

@@ -109,12 +109,12 @@ ARG ITREX_VER=main
ARG REPO=https://github.com/intel/intel-extension-for-transformers.git

RUN git clone --single-branch --branch=${ITREX_VER} ${REPO} itrex && \
- cd /itrex/intel_extension_for_transformers/neural_chat/examples/instruction_tuning/ && \
+ cd /itrex/intel_extension_for_transformers/neural_chat/examples/finetuning/instruction/ && \
pip install -r requirements.txt

# Build ITREX
RUN cd /itrex && pip install -v . && \
pip install transformers==4.32.0 && \
pip install accelerate==0.22.0

- WORKDIR /itrex/neural_chat/examples/instruction_tuning
+ WORKDIR /itrex/intel_extension_for_transformers/neural_chat/examples/finetuning/instruction/

Next changed file

@@ -67,7 +67,7 @@
"metadata": {},
"outputs": [],
"source": [
"!curl -OL https://raw.githubusercontent.com/intel/intel-extension-for-transformers/main/intel_extension_for_transformers/neural_chat/examples/talkingbot/backend/talkingbot.yaml"
"!curl -OL https://raw.githubusercontent.com/intel/intel-extension-for-transformers/main/intel_extension_for_transformers/neural_chat/examples/deployment/textbot/backend/textbot.yaml"
]
},
{

Next changed file

@@ -56,7 +56,7 @@
"metadata": {},
"outputs": [],
"source": [
"!curl -OL https://raw.githubusercontent.com/intel/intel-extension-for-transformers/main/intel_extension_for_transformers/neural_chat/examples/textbot/backend/textbot.yaml"
"!curl -OL https://raw.githubusercontent.com/intel/intel-extension-for-transformers/main/intel_extension_for_transformers/neural_chat/examples/deployment/textbot/backend/textbot.yaml"
]
},
{

Next changed file

@@ -28,7 +28,7 @@ In addition, our plugin seamlessly integrates the online embedding model, Google
The workflow of this plugin consists of three main operations: document indexing, intent detection, and retrieval. The `Agent_QA` initializes itself using the provided `input_path` to construct a local database. During a conversation, the user's query is first passed to the `IntentDetector` to determine whether the user intends to engage in chitchat or seek answers to specific questions. If the `IntentDetector` determines that the user's query requires an answer, the retriever is activated to search the database using the user's query. The documents retrieved from the database serve as reference context in the input prompt, assisting in generating responses using the Large Language Models (LLMs).
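
To make that flow concrete, the short Python sketch below walks a query through intent detection and retrieval. Every name in it (`detect_intent`, `retrieve`, `generate`) is a hypothetical placeholder for the plugin's internals, not its actual API.

```python
# Purely illustrative sketch of the query flow described above.
# detect_intent, retrieve, and generate are hypothetical placeholders,
# not the plugin's real internals.

def answer_query(query: str, database, intent_detector, llm) -> str:
    intent = intent_detector.detect_intent(query)   # "chitchat" or a question needing an answer
    if intent == "chitchat":
        return llm.generate(query)                  # no retrieval needed
    docs = database.retrieve(query)                 # search the local database built from input_path
    context = "\n".join(docs)
    prompt = f"Answer the question using the context.\nContext:\n{context}\nQuestion: {query}"
    return llm.generate(prompt)                     # retrieved documents serve as reference context
```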

# Usage
- The most convenient way to use this plugin is via our `build_chatbot` API, as introduced in the [example code](https://github.com/intel/intel-extension-for-transformers/tree/main/intel_extension_for_transformers/neural_chat/examples/retrieval). The user can refer to it for a simple test.
+ The most convenient way to use this plugin is via our `build_chatbot` API, as introduced in the [example code](https://github.com/intel/intel-extension-for-transformers/tree/main/intel_extension_for_transformers/neural_chat/examples/plugins/retrieval). The user can refer to it for a simple test.

We support multiple file formats for retrieval, including unstructured formats such as pdf, docx, html, txt, and markdown, as well as structured formats like jsonl and xlsx. Structured files must adhere to predefined structures.
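
As a quick usage illustration of that `build_chatbot` path, the sketch below enables the retrieval plugin and points it at a local document folder. The import path and argument names are assumptions modeled on the linked example, so verify them against the release you are using.

```python
# Minimal sketch: enable the retrieval plugin through build_chatbot.
# Import path and argument names are assumptions based on the linked
# example; check them against your installed release.
from intel_extension_for_transformers.neural_chat import build_chatbot, PipelineConfig, plugins

plugins.retrieval.enable = True                   # turn on the retrieval plugin
plugins.retrieval.args["input_path"] = "./docs"   # folder used to build the local database

config = PipelineConfig(plugins=plugins)
chatbot = build_chatbot(config)

print(chatbot.predict("What formats does the retrieval plugin accept?"))
```

With retrieval enabled this way, `input_path` can point at any of the supported file types listed above.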

