This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Update Dockerfile with extensions support #107

Merged · 11 commits · Mar 20, 2024
51 changes: 51 additions & 0 deletions .github/workflows/publish-docker.yml
@@ -0,0 +1,51 @@
name: Docker Build + Publish

on:
  # For now, just manually trigger
  # push:
  #   branches:
  #     - main
  # pull_request:
  #   branches:
  #     - main
  workflow_dispatch:

jobs:
  build-docker-image:

    runs-on: aws-avx2-192G-4-a10g-96G
    timeout-minutes: 240

    steps:

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Github Packages
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 1
          submodules: recursive

      - name: Get version tag
        id: extract_tag
        run: echo "tag=$(date +%Y%m%d)" >> $GITHUB_OUTPUT

      - name: Current Version Name
        run: echo ${{ steps.extract_tag.outputs.tag }}

      - name: nm-vllm latest
        uses: docker/build-push-action@v5
        with:
          context: .
          target: vllm-openai
          push: true
          tags: ghcr.io/neuralmagic/nm-vllm-openai:${{ steps.extract_tag.outputs.tag }},ghcr.io/neuralmagic/nm-vllm-openai:latest
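Since the push and pull_request triggers are commented out, this workflow only runs when dispatched manually. A minimal sketch of kicking it off with the GitHub CLI and pulling the resulting image afterwards (assuming you have permission to run workflows in the repo and read access to the ghcr.io package):

    gh workflow run publish-docker.yml
    docker pull ghcr.io/neuralmagic/nm-vllm-openai:latest

The date-based tag from the extract_tag step means each dispatch publishes an image tagged with the build date (e.g. 20240320) alongside the moving latest tag.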
4 changes: 4 additions & 0 deletions Dockerfile
@@ -108,6 +108,10 @@ COPY requirements.txt requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

# UPSTREAM SYNC: Install sparsity extras
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install nm-magic-wand

# Install flash attention (from pre-built wheel)
RUN --mount=type=bind,from=flash-attn-builder,src=/usr/src/flash-attention-v2,target=/usr/src/flash-attention-v2 \
    pip install /usr/src/flash-attention-v2/*.whl --no-cache-dir
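One way to sanity-check that the new nm-magic-wand layer actually made it into a published image is to query pip inside a throwaway container; a sketch, assuming the image has been pushed and noting that --entrypoint is needed to bypass the image's default OpenAI server entrypoint:

    docker run --rm --entrypoint pip ghcr.io/neuralmagic/nm-vllm-openai:latest show nm-magic-wand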