Commit

fix: merged from main
Signed-off-by: Alex Jones <alexsimonjones@gmail.com>
AlexsJones committed May 19, 2023
2 parents cf80125 + 948dae5 commit 1d27577
Showing 9 changed files with 143 additions and 68 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/test.yaml
@@ -26,6 +26,6 @@ jobs:
      - name: Unit Test
        run: make test

      - name: Fmt Test
        run: fmtFiles=$(make fmt); if [ "$fmtFiles" != "" ];then exit 1; fi
      # - name: Fmt Test
      #   run: fmtFiles=$(make fmt); if [ "$fmtFiles" != "" ];then exit 1; fi

123 changes: 78 additions & 45 deletions README.md
@@ -16,6 +16,8 @@ It has SRE experience codified into its analyzers and helps to pull out the most

<a href="https://www.producthunt.com/posts/k8sgpt?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-k8sgpt" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=389489&theme=light" alt="K8sGPT - K8sGPT&#0032;gives&#0032;Kubernetes&#0032;Superpowers&#0032;to&#0032;everyone | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>

<img src="images/demo4.gif" width=650px; />

# CLI Installation


@@ -127,8 +129,6 @@ _This mode of operation is ideal for continuous monitoring of your cluster and c
* Run `k8sgpt analyze` to run a scan.
* And use `k8sgpt analyze --explain` to get a more detailed explanation of the issues.

<img src="images/demo4.gif" width=650px; />

## Analyzers

K8sGPT uses analyzers to triage and diagnose issues in your cluster. It has a set of analyzers that are built in, but
@@ -188,8 +188,8 @@ _Anonymize during explain_
k8sgpt analyze --explain --filter=Service --output=json --anonymize
```

### Using filters
<details>
<summary> Using filters </summary>

_List filters_

@@ -221,11 +221,9 @@ k8sgpt filters remove [filter(s)]

</details>


### Additional commands

<details>

<summary> Additional commands </summary>
_List configured backends_

```
@@ -275,47 +273,45 @@ curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
```
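
The same call can be scripted from Go with a plain HTTP GET — a minimal sketch against the endpoint shown above, assuming `k8sgpt serve` is already running:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Query the analyze endpoint exposed by `k8sgpt serve`.
	resp, err := http.Get("http://localhost:8080/analyze?namespace=k8sgpt&explain=false")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the JSON analysis results
}
```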
</details>

## Additional AI providers

### Setting a new default AI provider
## Key Features

<details>
<summary> LocalAI provider </summary>

There may be scenarios where you want K8sGPT configured with several AI providers. In that case you can set one of them as the new default, rather than OpenAI, which is the project default.
To run local models, you can use any OpenAI-compatible API, for instance [LocalAI](https://github.com/go-skynet/LocalAI), which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI include Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and Koala.

_To view available providers_

```
k8sgpt auth list
Default:
> openai
Active:
> openai
> azureopenai
Unused:
> localai
> noopai
```

To run local inference, you first need to download a model; `ggml`-compatible models (for example Vicuna, Alpaca and Koala) can be found on [Hugging Face](https://huggingface.co/models?search=ggml).
### Start the API server

_To set a new default provider_
To start the API server, follow the instructions in [LocalAI](https://github.com/go-skynet/LocalAI#example-use-gpt4all-j-model).

### Run k8sgpt

To run k8sgpt, run `k8sgpt auth new` with the `localai` backend:

```
k8sgpt auth default -p azureopenai
Default provider set to azureopenai
k8sgpt auth new --backend localai --model <model_name> --baseurl http://localhost:8080/v1
```

Now you can analyze with the `localai` backend:

```
k8sgpt analyze --explain --backend localai
```
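
For a sense of what "OpenAI-compatible" means in practice, here is a minimal Go sketch that points the [sashabaranov/go-openai](https://github.com/sashabaranov/go-openai) client at a LocalAI server. The model name and prompt are placeholders; this illustrates the idea rather than k8sgpt's actual wiring:

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Point an OpenAI-compatible client at the local server instead of api.openai.com.
	cfg := openai.DefaultConfig("") // LocalAI does not require an API key by default
	cfg.BaseURL = "http://localhost:8080/v1"
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: "<model_name>", // whichever ggml model the LocalAI server is serving
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Why is my pod in CrashLoopBackOff?"},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```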

</details>

<details>
<summary> AzureOpenAI provider </summary>

### Azure OpenAI
<em>Prerequisites:</em> an Azure OpenAI deployment is required; see Microsoft's official [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource) to create your own.

To authenticate with k8sgpt, you will need your tenant's Azure OpenAI endpoint (`https://<your Azure OpenAI endpoint>`), the API key to access your deployment, the deployment name of your model, and the model name itself.
<details>

### Run k8sgpt

To run k8sgpt, run `k8sgpt auth` with the `azureopenai` backend:
```
k8sgpt auth add --backend azureopenai --baseurl https://<your Azure OpenAI endpoint> --engine <deployment_name> --model <model_name>
@@ -327,42 +323,48 @@ Now you are ready to analyze with the azure openai backend:
k8sgpt analyze --explain --backend azureopenai
```
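
To illustrate why both an engine (deployment name) and a model name are needed: Azure OpenAI routes requests to a named deployment rather than a raw model. A hedged sketch with the same go-openai client — all angle-bracket values are placeholders, and this is not k8sgpt's actual code:

```go
package main

import (
	openai "github.com/sashabaranov/go-openai"
)

func newAzureClient(apiKey, endpoint, deployment string) *openai.Client {
	cfg := openai.DefaultAzureConfig(apiKey, endpoint)
	// Azure serves models through named deployments, so map any requested
	// model name onto the deployment created in the Azure portal.
	cfg.AzureModelMapperFunc = func(model string) string {
		return deployment
	}
	return openai.NewClientWithConfig(cfg)
}

func main() {
	client := newAzureClient("<api_key>", "https://<your Azure OpenAI endpoint>", "<deployment_name>")
	_ = client // use exactly as with any other OpenAI-compatible backend
}
```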

</details>

### Running local models

To run local models, you can use any OpenAI-compatible API, for instance [LocalAI](https://github.com/go-skynet/LocalAI), which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI include Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and Koala.
</details>

<details>
<summary>Setting a new default AI provider</summary>

To run local inference, you first need to download a model; `ggml`-compatible models (for example Vicuna, Alpaca and Koala) can be found on [Hugging Face](https://huggingface.co/models?search=ggml).
There may be scenarios where you want K8sGPT configured with several AI providers. In that case you can set one of them as the new default, rather than OpenAI, which is the project default.

### Start the API server
_To view available providers_

To start the API server, follow the instructions in [LocalAI](https://github.com/go-skynet/LocalAI#example-use-gpt4all-j-model).
```
k8sgpt auth list
Default:
> openai
Active:
> openai
> azureopenai
Unused:
> localai
> noopai
```

### Run k8sgpt

To run k8sgpt, run `k8sgpt auth add` with the `localai` backend:
_To set a new default provider_

```
k8sgpt auth add --backend localai --model <model_name> --baseurl http://localhost:8080/v1
k8sgpt auth default -p azureopenai
Default provider set to azureopenai
```

Now you can analyze with the `localai` backend:

```
k8sgpt analyze --explain --backend localai
```
</details>

</details>

## How does anonymization work?
<details>

With this option, the data is anonymized before being sent to the AI backend. During analysis, `k8sgpt` collects sensitive data such as Kubernetes object names and labels. This data is masked before being sent to the AI backend and replaced by a key that can be used to de-anonymize it when the solution is returned to the user.
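
In other words, the masking is a reversible substitution: each sensitive value is swapped for an opaque key before the request, and swapped back in the response. A minimal Go sketch of the idea (not k8sgpt's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// anonymize replaces each sensitive value with a placeholder key and
// returns the mapping needed to reverse the substitution later.
func anonymize(text string, sensitive []string) (string, map[string]string) {
	mapping := make(map[string]string)
	for i, s := range sensitive {
		key := fmt.Sprintf("masked-%d", i)
		mapping[key] = s
		text = strings.ReplaceAll(text, s, key)
	}
	return text, mapping
}

// deAnonymize restores the original values in the backend's answer.
func deAnonymize(text string, mapping map[string]string) string {
	for key, original := range mapping {
		text = strings.ReplaceAll(text, key, original)
	}
	return text
}

func main() {
	masked, mapping := anonymize(
		"StatefulSet fake-deployment has no ready replicas",
		[]string{"fake-deployment"},
	)
	fmt.Println(masked)                       // StatefulSet masked-0 has no ready replicas
	fmt.Println(deAnonymize(masked, mapping)) // original text restored
}
```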

<details>

<summary> Anonymization </summary>
1. Error reported during analysis:
```bash
Error: HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist.
@@ -387,9 +389,8 @@ The Kubernetes system is trying to scale a StatefulSet named fake-deployment usi

</details>

## Configuration

<details>
<summary> Configuration management</summary>
`k8sgpt` stores config data in the `$XDG_CONFIG_HOME/k8sgpt/k8sgpt.yaml` file. The data is stored in plain text, including your OpenAI key.

Config file locations:
@@ -400,6 +401,38 @@ Config file locations:
| Windows | %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml |
</details>
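
For illustration, resolving the Linux location by hand follows the usual XDG convention — a hypothetical sketch, not the lookup k8sgpt itself performs:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configPath resolves $XDG_CONFIG_HOME/k8sgpt/k8sgpt.yaml, falling back
// to ~/.config when XDG_CONFIG_HOME is unset, per the XDG base directory spec.
func configPath() (string, error) {
	dir := os.Getenv("XDG_CONFIG_HOME")
	if dir == "" {
		home, err := os.UserHomeDir()
		if err != nil {
			return "", err
		}
		dir = filepath.Join(home, ".config")
	}
	return filepath.Join(dir, "k8sgpt", "k8sgpt.yaml"), nil
}

func main() {
	path, err := configPath()
	if err != nil {
		panic(err)
	}
	fmt.Println(path)
}
```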

<details>
<summary> Remote caching </summary>

There may be scenarios where remote caching is preferred.
For these scenarios, K8sGPT supports AWS S3 integration.

_As a prerequisite, `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are required as environment variables._

_Adding a remote cache_
Note: this will create the bucket if it does not exist.
```
k8sgpt cache add --region <aws region> --bucket <name>
```

_Listing cache items_
```
k8sgpt cache list
```

_Removing the remote cache_
Note: this will not delete the bucket.
```
k8sgpt cache remove --bucket <name>
```
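
Under the hood the remote cache is ordinary S3 object storage. A hedged sketch of what storing and fetching a cache entry can look like with `aws-sdk-go` (the dependency this commit pins) — bucket, region and key are placeholders, and the real cache layout may differ:

```go
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, as noted above.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("<aws region>")}))
	client := s3.New(sess)

	// Store an analysis result under a cache key.
	_, err := client.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("<name>"),
		Key:    aws.String("analysis/default-pod"),
		Body:   bytes.NewReader([]byte("cached result")),
	})
	if err != nil {
		panic(err)
	}

	// Read it back.
	out, err := client.GetObject(&s3.GetObjectInput{
		Bucket: aws.String("<name>"),
		Key:    aws.String("analysis/default-pod"),
	})
	if err != nil {
		panic(err)
	}
	defer out.Body.Close()
	data, _ := io.ReadAll(out.Body)
	fmt.Println(string(data))
}
```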
</details>


## Documentation

Our official documentation is available [here](https://docs.k8sgpt.ai).


## Contributing

Please read our [contributing guide](./CONTRIBUTING.md).
5 changes: 4 additions & 1 deletion cmd/auth/remove.go
@@ -65,4 +65,7 @@ var removeCmd = &cobra.Command{
	},
}

func init() {}
func init() {

}

7 changes: 7 additions & 0 deletions cmd/cache/add.go
@@ -24,6 +24,10 @@ import (
"github.com/spf13/viper"
)

var (
	region string
)

// addCmd represents the add command
var addCmd = &cobra.Command{
	Use: "add",
@@ -46,6 +50,7 @@ var addCmd = &cobra.Command{
			os.Exit(1)
		}
		cacheInfo.BucketName = bucketname
		cacheInfo.Region = region

		// Save the cache information
		viper.Set("cache", cacheInfo)
@@ -59,7 +64,9 @@ var addCmd = &cobra.Command{

func init() {
	CacheCmd.AddCommand(addCmd)
	addCmd.Flags().StringVarP(&region, "region", "r", "", "The region to use for the cache")
	addCmd.Flags().StringVarP(&bucketname, "bucket", "b", "", "The name of the bucket to use for the cache")
	addCmd.MarkFlagRequired("bucket")
	addCmd.MarkFlagRequired("region")
}
18 changes: 7 additions & 11 deletions cmd/cache/remove.go
@@ -15,7 +15,6 @@ limitations under the License.
package cache

import (
"fmt"
"os"

"github.com/fatih/color"
@@ -27,8 +26,8 @@ import (
// removeCmd represents the remove command
var removeCmd = &cobra.Command{
	Use:   "remove",
	Short: "Remove a remote cache",
	Long:  `This command allows you to remove a remote cache and use the default filecache.`,
	Short: "Remove the remote cache",
	Long:  `This command allows you to remove the remote cache and use the default filecache.`,
	Run: func(cmd *cobra.Command, args []string) {

		// Remove the remote cache
@@ -43,22 +42,19 @@ var removeCmd = &cobra.Command{
			os.Exit(1)
		}
		// Warn user this will delete the S3 bucket and prompt them to continue
		color.Yellow("Warning: this will delete the S3 bucket %s", cacheInfo.BucketName)
		color.Yellow("Are you sure you want to continue? (y/n)")
		var response string
		_, err = fmt.Scanln(&response)
		color.Yellow("Warning: this will not delete the S3 bucket %s", cacheInfo.BucketName)
		cacheInfo = cache.CacheProvider{}
		viper.Set("cache", cacheInfo)
		err = viper.WriteConfig()
		if err != nil {
			color.Red("Error: %v", err)
			os.Exit(1)
		}
		if response != "y" {
			os.Exit(0)
		}

		color.Green("Successfully removed the remote cache")
	},
}

func init() {
	CacheCmd.AddCommand(removeCmd)
	removeCmd.Flags().StringVarP(&bucketname, "bucket", "b", "", "The name of the bucket to use for the cache")
}
5 changes: 5 additions & 0 deletions go.mod
@@ -31,6 +31,11 @@ require (

require github.com/jmespath/go-jmespath v0.4.0 // indirect

require (
	github.com/aws/aws-sdk-go v1.44.264 // indirect
	github.com/jmespath/go-jmespath v0.4.0 // indirect
)

require (
	github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 // indirect
	github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
3 changes: 2 additions & 1 deletion go.sum
@@ -452,6 +452,8 @@ github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3d
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/aws/aws-sdk-go v1.44.264 h1:5klL62ebn6uv3oJ0ixF7K12hKItj8lV3QqWeQPlkFSs=
github.com/aws/aws-sdk-go v1.44.264/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-sdk-go v1.44.264 h1:5klL62ebn6uv3oJ0ixF7K12hKItj8lV3QqWeQPlkFSs=
github.com/aws/aws-sdk-go v1.44.264/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -781,7 +783,6 @@ github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLf
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jmoiron/sqlx v1.3.5 h1:vFFPA71p1o5gAeqtEAwLU4dnX2napprKtHr7PYIcN3g=
github.com/jmoiron/sqlx v1.3.5/go.mod h1:nRVWtLre0KfCLJvgxzCsLVMogSvQ1zNJtpYr2Ccp0mQ=
3 changes: 2 additions & 1 deletion pkg/cache/cache.go
@@ -24,6 +24,7 @@ func New(noCache bool, remoteCache bool) ICache {
// CacheProvider is the configuration for the cache provider when using a remote cache
type CacheProvider struct {
	BucketName string `mapstructure:"bucketname"`
	Region     string `mapstructure:"region"`
}

func RemoteCacheEnabled() (bool, error) {
@@ -33,7 +34,7 @@
	if err != nil {
		return false, err
	}
	if cache.BucketName != "" {
	if cache.BucketName != "" && cache.Region != "" {
		return true, nil
	}
	return false, nil
