
Prompt GitHub AI Models via GitHub Action

Tip

Prompt GitHub AI Models with inference requests, directly from your GitHub Actions workflows.


Usage Examples

Compare available AI models to choose the best one for your use-case.

Summarize GitHub Issues

on:
  issues:
    types: opened

jobs:
  summary:
    runs-on: ubuntu-latest

    permissions:
      issues: write
      models: read

    steps:
      - name: Summarize issue
        id: prompt
        uses: op5dev/prompt-ai@v2
        with:
          user-prompt: |
            Concisely summarize the GitHub issue
            with title '${{ github.event.issue.title }}'
            and body: ${{ github.event.issue.body }}
          max-tokens: 250

      - name: Comment summary
        run: gh issue comment $NUMBER --body "$SUMMARY"
        env:
          GH_TOKEN: ${{ github.token }}
          NUMBER: ${{ github.event.issue.number }}
          SUMMARY: ${{ steps.prompt.outputs.response }}

Troubleshoot Terraform Deployments

on:
  pull_request:
  push:
    branches: main

jobs:
  provision:
    runs-on: ubuntu-latest

    permissions:
      actions: read
      checks: write
      contents: read
      pull-requests: write
      models: read

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Provision Terraform
        id: provision
        uses: op5dev/tf-via-pr@v13
        with:
          working-directory: env/dev
          command: ${{ github.event_name == 'push' && 'apply' || 'plan' }}

      - name: Troubleshoot Terraform
        if: failure()
        uses: op5dev/prompt-ai@v2
        with:
          model: openai/gpt-4.1-mini
          system-prompt: You are a helpful DevOps assistant and expert at debugging Terraform errors.
          user-prompt: "Troubleshoot the following Terraform output: ${{ steps.provision.outputs.result }}"
          max-tokens: 500
          temperature: 0.7
          top-p: 0.9

Inputs

The only required input is `user-prompt`; every other parameter can be tuned per the documentation below.

| Type | Name | Description |
|------|------|-------------|
| Common | `model` | Model ID to use for the inference request.<br>(e.g., `openai/gpt-4.1-mini`) |
| Common | `system-prompt` | Prompt associated with the system role.<br>(e.g., `You are a helpful software engineering assistant`) |
| Common | `user-prompt` | Prompt associated with the user role.<br>(e.g., `List best practices for workflows with GitHub Actions`) |
| Common | `max-tokens` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max-tokens` cannot exceed the model's context length.<br>(e.g., `100`) |
| Common | `temperature` | The sampling temperature, which controls the apparent creativity of generated completions. Higher values make output more random; lower values make results more focused and deterministic.<br>(e.g., range is `[0, 1]`) |
| Common | `top-p` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the tokens comprising the given probability mass.<br>(e.g., range is `[0, 1]`) |
| Additional | `frequency-penalty` | A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text.<br>(e.g., range is `[-2, 2]`) |
| Additional | `modalities` | The modalities that the model is allowed to use for the chat completions response.<br>(e.g., `text` and `audio`) |
| Additional | `org` | Organization to which the request is to be attributed.<br>(e.g., `github.repository_owner`) |
| Additional | `presence-penalty` | A value that influences the probability of generated tokens appearing based on their existing presence in generated text.<br>(e.g., range is `[-2, 2]`) |
| Additional | `seed` | If specified, the system makes a best effort to sample deterministically, such that repeated requests with the same seed and parameters return the same result.<br>(e.g., `123456789`) |
| Additional | `stop` | A collection of textual sequences that end completion generation.<br>(e.g., `["\n\n", "END"]`) |
| Additional | `stream` | Whether chat completions should be streamed for this request.<br>(e.g., `false`) |
| Additional | `stream-include-usage` | Whether to include usage information in the response.<br>(e.g., `false`) |
| Additional | `tool-choice` | If specified, configures which of the provided tools the model can use for the chat completions response.<br>(e.g., `auto`, `required`, or `none`) |
| Payload | `payload` | Body parameters of the inference request in JSON format.<br>(e.g., `{"model"…`) |
| Payload | `payload-file` | Path to a JSON file containing the body parameters of the inference request.<br>(e.g., `./payload.json`) |
| Payload | `show-payload` | Whether to show the body parameters in the workflow log.<br>(e.g., `false`) |
| Payload | `show-response` | Whether to show the response content in the workflow log.<br>(e.g., `true`) |
| GitHub | `github-api-version` | GitHub API version.<br>(e.g., `2022-11-28`) |
| GitHub | `github-token` | GitHub token for authorization.<br>(e.g., `github.token`) |
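For full control over the request body, the Payload inputs can replace the individual parameters. A minimal sketch, assuming the body follows the chat-completions JSON shape used by GitHub Models (the model name and prompts here are illustrative):

```yaml
- name: Prompt via raw payload
  id: prompt
  uses: op5dev/prompt-ai@v2
  with:
    payload: |
      {
        "model": "openai/gpt-4.1-mini",
        "messages": [
          {"role": "system", "content": "You are a helpful software engineering assistant."},
          {"role": "user", "content": "List best practices for workflows with GitHub Actions."}
        ],
        "max_tokens": 200
      }
```

The same body can instead be committed to a file and referenced with `payload-file: ./payload.json`.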

Outputs

Due to GitHub's API limitations, the `response` output is truncated to 262,144 (2^18) characters; the complete, raw response is saved to `response-file`.

| Name | Description |
|------|-------------|
| `response` | Response content from the inference request. |
| `response-file` | File path containing the complete, raw response in JSON format. |
| `payload` | Body parameters of the inference request in JSON format. |
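As a sketch of post-processing the raw response in a follow-up step (assuming the saved JSON follows the chat-completions shape with a `choices` array):

```yaml
- name: Extract message content
  run: jq -r '.choices[0].message.content' "$FILE"
  env:
    FILE: ${{ steps.prompt.outputs.response-file }}
```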

Security

View security policy and reporting instructions.

Tip

Pin your GitHub Action to a commit SHA to harden your CI/CD pipeline security against supply chain attacks.
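For example, replace the version tag with the full commit SHA of the release you have audited (the SHA below is a placeholder, not a real release):

```yaml
- uses: op5dev/prompt-ai@0123456789abcdef0123456789abcdef01234567 # v2
```

A tag can be moved to point at different code, while a full-length commit SHA is immutable.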


Changelog

View all notable changes to this project in Keep a Changelog format, which adheres to Semantic Versioning.

Tip

All forms of contribution are very welcome and deeply appreciated for fostering open-source projects.


License
