# Set up a Kubernetes Cluster Using k3d in GitHub Codespaces
This is a template that sets up a Kubernetes developer cluster using k3d in a GitHub Codespace or local Dev Container. We use this for inner-loop Kubernetes development. Note that it is not appropriate for production use, but it provides a great developer experience. Feedback calls the approach game-changing - we hope you agree!

For ideas, feature requests, and discussions, please use GitHub Discussions so we can collaborate and follow up.
This Codespace is tested with zsh and oh-my-zsh. It "should" work with bash but hasn't been fully tested. For the hands-on lab, please use zsh to avoid any issues.

You can run the Dev Container locally, and you can also connect to the Codespace with a local version of VS Code.

Please experiment and add any issues to the GitHub Discussion. We LOVE PRs!

The motivation for creating and using Codespaces is highlighted by this GitHub blog post: "It eliminated the fragility and single-track model of local development environments, but it also gave us a powerful new point of leverage for improving GitHub's developer experience."

Cory Wilkerson, Senior Director of Engineering at GitHub, recorded a podcast where he shared GitHub's journey to Codespaces.
## Prerequisites

- You must be a member of the Microsoft OSS and CSE-Labs GitHub organizations
  - Instructions for joining the GitHub orgs are here
  - If you don't see an `Open in Codespaces` option, you are not part of the organization(s)
## Open with Codespaces

- Click the `Code` button on this repo
- Click the `Codespaces` tab
- Click `New Codespace`
- Choose the `4 core` option
- Wait until the Codespace is ready before opening the workspace
  - When prompted, choose `Open Workspace`
## Build the Cluster

- This will create a local Kubernetes cluster using k3d
  - The cluster is running inside your Codespace

```bash
# build the cluster
make all
```
- Output from `make all` should resemble this:

```text
default      jumpbox                                  1/1   Running   0   25s
default      ngsa-memory                              1/1   Running   0   33s
default      webv                                     1/1   Running   0   31s
logging      fluentbit                                1/1   Running   0   31s
monitoring   grafana-64f7dbcf96-cfmtd                 1/1   Running   0   32s
monitoring   prometheus-deployment-67cbf97f84-tjxm7   1/1   Running   0   32s
```

- If you get an error, just run the command again; it will clear once the services are ready
```bash
# check endpoints
make check
```
## Explore the Cluster with k9s

- From the Codespace terminal window, start `k9s`
  - Type `k9s` and press Enter
  - Press `0` to select all namespaces
  - Wait for all pods to be in the `Running` state (look for the `STATUS` column)
  - Use the arrow keys to select `ngsa-memory`, then press the `l` key to view logs from the pod
  - To go back, press the `esc` key
  - Use the arrow keys to select `jumpbox`, then press the `s` key to open a shell in the container
    - Hit the `ngsa-memory` NodePort from within the cluster by executing `http ngsa-memory:8080/version`
    - Verify a 200 status in the response
    - To exit the shell, type `exit`
  - To view other deployed resources, press `shift + :` followed by the resource type (e.g. `secret`, `service`, `deployment`), then press Enter
  - To exit k9s, type `:q` and press Enter
## Validate the Endpoints

- Open `curl.http`

> `curl.http` is used in conjunction with the Visual Studio Code REST Client extension.

When you open `curl.http`, you should see a clickable `Send Request` link above each of the URLs. Clicking on `Send Request` should open a new panel in Visual Studio Code with the response from that request.
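As an illustration of the REST Client format (the URLs below are hypothetical; see `curl.http` for the actual requests), a `.http` file separates requests with `###` markers:

```http
### version endpoint (hypothetical URL; the real ones are in curl.http)
GET http://localhost:30080/version

### health check endpoint (hypothetical)
GET http://localhost:30080/healthz
```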
## Use the Jump Box

A `jumpbox` pod is created so that you can execute commands inside the cluster.

- Use the `kj` alias: `kubectl exec -it jumpbox -- bash -l`
  - Note: `-l` starts a login shell and processes `.profile`
  - Note: `sh -l` will work, but the results will not be displayed in the terminal due to a bug
- Example
  - Run `kj`
  - Your terminal prompt will change
  - From the `jumpbox` terminal, run `http ngsa-memory:8080/version`
  - Run `exit` to get back to the Codespaces terminal
- Use the `kje` alias: `kubectl exec -it jumpbox -- `
  - Example: run http against the ClusterIP with `kje http ngsa-memory:8080/version`
- Since the jumpbox is running in the cluster, we use the service name and port, not the NodePort
  - A jumpbox is great for debugging network issues
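The `kj` and `kje` aliases described above could be defined roughly like this (a sketch; the Codespace already defines them for you):

```shell
# hypothetical definitions matching the aliases described above
alias kj='kubectl exec -it jumpbox -- bash -l'
alias kje='kubectl exec -it jumpbox -- '

# print the expansions without contacting a cluster
alias kj kje
```

The trailing space in `kje` tells the shell to also check the word that follows the alias for further alias expansion, which is why `kje http ngsa-memory:8080/version` works.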
## View the Codespaces Ports

- Codespaces exposes ports to the browser
- We take advantage of this by exposing a `NodePort` on most of our K8s services
- Codespaces ports are set up in the `.devcontainer/devcontainer.json` file
- Exposing the ports:

```json
// forward ports for the app
"forwardPorts": [ 3500, 5000, 9411, 30000, 30080, 30088, 32000 ],
```

- Adding labels to the ports:

```json
// add labels
"portsAttributes": {
  "3500": { "label": "Dapr" },
  "5000": { "label": "weather" },
  "9411": { "label": "Zipkin" },
  "30000": { "label": "Prometheus" },
  "30080": { "label": "ngsa-app" },
  "30088": { "label": "WebV" },
  "32000": { "label": "Grafana" }
},
```
## Explore Prometheus

- Click on the `Ports` tab of the terminal window
- Click on the open in browser icon on the Prometheus port (30000)
  - This will open Prometheus in a new browser tab
- From the Prometheus tab
  - Begin typing `NgsaAppDuration_bucket` in the `Expression` search
  - Click `Execute`
  - This will display the histogram that Grafana uses for the charts
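As a hedged example of what you can do with that histogram, a standard PromQL pattern computes a latency percentile from the buckets (the label names here are assumptions; adjust to the metric's actual labels):

```text
# 90th percentile request duration over the last 5 minutes
histogram_quantile(0.90, sum(rate(NgsaAppDuration_bucket[5m])) by (le))
```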
## Explore Grafana

- Grafana login info
  - Username: `admin`
  - Password: `cse-labs`
- Click on the `Ports` tab of the terminal window
- Click on the open in browser icon on the Grafana port (32000)
  - This will open Grafana in a new browser tab

## Run a Load Test

```bash
# from Codespaces terminal

# run a baseline test (will generate warnings in Grafana)
make test

# run a 60 second load test
make load-test
```

- Switch to the Grafana browser tab
- The test will generate 400 and 404 results
- The requests metric will go from green to yellow to red as load increases
  - It may skip yellow
- As the test completes
  - The metric will go back to green (10 req/sec)
  - The request graph will return to normal
## View Fluent Bit Logs

Fluent Bit is set up to forward logs to stdout for debugging. Fluent Bit can also be configured to forward to different services, including Azure Log Analytics.
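For reference, stdout forwarding like this maps to a Fluent Bit output stanza along these lines (a sketch only; the actual configuration lives in the deployment manifests and may differ):

```text
[OUTPUT]
    # send every record to stdout for debugging
    Name   stdout
    Match  *
```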
- Start `k9s` from the Codespace terminal
- Press `0` to show all `namespaces`
- Select `fluentbit` and press `enter`
- Press `enter` again to see the logs
- Press `s` to toggle AutoScroll
- Press `w` to toggle Wrap
- Review logs that will be sent to Log Analytics when configured
  - See `deploy/loganalytics` for directions
- Switch back to your Codespaces tab

```bash
# from Codespaces terminal

# make and deploy a local version of ngsa-memory to k8s
make app
```
The Makefile is a good place to start exploring.

## Create a Dapr Service

Make sure you are in the root of the repo.

Create a new dotnet webapi project:

```bash
mkdir -p dapr-app
cd dapr-app
dotnet new webapi --no-https
```

Run the app with Dapr:

```bash
dapr run -a myapp -p 5000 -H 3500 -- dotnet run
```

Check the endpoints:

- Open `dapr.http`
- Click on the `send request` link for the `dotnet app` request
- Click on the `send request` link for the `dapr endpoint` request
- Open Zipkin
  - Click on the `Ports` tab
  - Open the `Zipkin` link
  - Click on `Run Query`
  - Explore the traces generated automatically with Dapr

Stop the app by pressing `ctrl+c`.

Clean up:

```bash
cd ..
rm -rf dapr-app
```
## Debug the Weather App with Dapr

Changes to the app have already been made and are detailed below.

- Open `.vscode/launch.json`
  - Added the `.NET Core Launch (web) with Dapr` configuration
- Open `.vscode/tasks.json`
  - Added the `daprd-debug` and `daprd-down` tasks
- Open `weather/weather.csproj`
  - Added the `dapr.aspnetcore` package reference
- Open `weather/Startup.cs`
  - Injected Dapr into the services
    - Line 29: `services.AddControllers().AddDapr()`
  - Added Cloud Events
    - Line 40: `app.UseCloudEvents()`
- Open `weather/Controllers/WeatherForecastController.cs`
  - `PostWeatherForecast` is a new function for sending pub-sub events
    - Added the `Dapr.Topic` attribute
    - Got the `daprClient` via Dependency Injection
    - Published the model to the `State Store`
  - `Get`
    - Added the `daprClient` via Dependency Injection
    - Retrieved the model from the `State Store`
- Set a breakpoint on lines 30 and 38
- Click on one of the VS Code panels to make sure it has focus, then press `F5` to run
  - Alternatively, you can use the hamburger menu, then `Run` and `Start Debugging`
- Open `dapr.http`
- Send a message via Dapr
  - Click on `Send Request` under `post to dapr`
  - Click `continue` when you hit the breakpoint
  - Verify a `200 OK` response
- Get the model from the `State Store`
  - Click on `Send Request` under `dapr endpoint`
  - Click `continue` when you hit the breakpoint
  - Verify the value from the POST request appears
- Change the `temperatureC` value in the POST request and repeat
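For reference, a publish request like the one in `dapr.http` has roughly this shape, using Dapr's pub/sub HTTP API (`POST /v1.0/publish/<pubsub-name>/<topic>`); the component name, topic, and body fields here are assumptions, so check `dapr.http` for the real values:

```http
### publish a weather model via the Dapr sidecar (hypothetical names)
POST http://localhost:3500/v1.0/publish/pubsub/weather
content-type: application/json

{
  "temperatureC": 25,
  "summary": "Warm"
}
```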
## FAQ

- Why don't we use Helm to deploy Kubernetes manifests?
  - The target audience for this repository is app developers, so we chose simplicity for the developer experience.
  - In our daily work, we use Helm for deployments, and it is installed in the Codespace should you want to use it.
- Why k3d instead of Kind?
  - We love Kind! Most of our code will run unchanged in Kind (except the cluster commands)
  - We had to choose one or the other, as we don't have the resources to validate both
  - We chose k3d for these main reasons:
    - Smaller memory footprint
    - Faster startup time
    - Secure by default
      - K3s supports the CIS Kubernetes Benchmark
    - Based on K3s, which is a certified Kubernetes distro
      - Many customers run K3s on the edge as well as in CI-CD pipelines
      - Rancher provides support, including 24x7 (for a fee)
    - K3s has a vibrant community
    - K3s is a CNCF sandbox project
- Team Working Agreement
- Team Engineering Practices
- CSE Engineering Fundamentals Playbook
This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new issue.
For help and questions about using this project, please open a GitHub issue.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services.
Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines.
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.