>>> netstat -aon | findstr :3234
to check processes running on a specific port, or
>>> netstat -aon | findstr /C:"3232" /C:"8080" ...
for multiple ports, or
>>> netstat -aon | findstr LISTEN
for all listening ports
>>> tasklist /FI "PID eq 19812"
to identify the process
>>> taskkill /PID 19812 /F
to kill the process
>>> tasklist | findstr /I "VBox"
to print the processes whose Image Name starts with VBox
>>> netstat -aon | findstr /R /C:"31900" /C:"32076"
to see which ports these process IDs are listening on
>>> ForEach ($processId in (netstat -aon | Select-String ":9000" | ForEach-Object { $_.ToString().Split()[-1] })) { taskkill /PID $processId /F }
to kill processes running on port 9000
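The PowerShell one-liner above pulls the PID out of the last whitespace-separated column of each matching netstat line. A minimal Python sketch of that parsing step (the sample output below is illustrative, not captured from a real machine):

```python
# Parse netstat -aon style output and collect the PIDs listening on a port.
# On Windows you would feed in the real output of `netstat -aon`.
sample = """\
  TCP    0.0.0.0:9000       0.0.0.0:0        LISTENING       19812
  TCP    127.0.0.1:3233     0.0.0.0:0        LISTENING       31900
  TCP    [::]:9000          [::]:0           LISTENING       19812
"""

def pids_on_port(netstat_output: str, port: int) -> set[str]:
    pids = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Local address is the 2nd column; the owning PID is the last column.
        if len(fields) >= 5 and fields[1].endswith(f":{port}"):
            pids.add(fields[-1])
    return pids

print(pids_on_port(sample, 9000))  # PIDs you would pass to taskkill /PID <pid> /F
```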
- ssh-agent
- Start:
>>> eval $(ssh-agent -s)
- Kill:
>>> eval $(ssh-agent -k)
- List keys:
>>> ssh-add -L
- Add key from file:
>>> ssh-add ~/.ssh/id_rsa
- Get public key from private key:
>>> ssh-keygen -y -f /path/to/private_key > extracted_public_key.pub
- ssh passing private key file or password
>>> ssh -p 2222 -i C:/ANASTASIS/HUA/SEMESTER-4/DIT247-Cloud-Services/Project/ubuntu-20.04-vm/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -o LogLevel=DEBUG
>>> ssh -p 2222 -i ~/.vagrant.d/insecure_private_key vagrant@127.0.0.1 -o LogLevel=DEBUG
>>> ssh -p 2222 -i ~/.vagrant.d/insecure_private_keys/vagrant.key.rsa vagrant@127.0.0.1 -o LogLevel=DEBUG
>>> ssh -p 2222 -i ~/.vagrant.d/insecure_private_keys/vagrant.key.ed25519 vagrant@127.0.0.1 -o LogLevel=DEBUG
>>> ssh -p 2222 vagrant@127.0.0.1 -o LogLevel=DEBUG
(as vagrant)
>>> ssh -p 2222 root@127.0.0.1 -o LogLevel=DEBUG
(as root)
- Add
-L 1880:localhost:1880 -L 9991:localhost:9991 -L 8080:localhost:8080 -L 5984:localhost:5984 -L 3233:localhost:3233 -L 3232:localhost:3232 -L 8025:localhost:8025
to ssh with port mapping
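The same `-L` mappings can live in `~/.ssh/config` so a plain `ssh <host>` brings the tunnels up. A sketch; the host alias `vagrant-vm` and the key path are placeholders, not values from this project:

```ssh_config
Host vagrant-vm
    HostName 127.0.0.1
    Port 2222
    User vagrant
    IdentityFile ~/.vagrant.d/insecure_private_key
    LocalForward 1880 localhost:1880
    LocalForward 9991 localhost:9991
    LocalForward 8080 localhost:8080
    LocalForward 5984 localhost:5984
    LocalForward 3233 localhost:3233
    LocalForward 3232 localhost:3232
    LocalForward 8025 localhost:8025
```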
Syncing folders with vagrant doesn't seem to work
- Add a shared folder to the vm from VirtualBox settings
- Folder Path: the path on the host
- Folder Name: the share name, as the vm will see it
- Mount Point: the path on the vm to mount the folder with the above folder name
- Select Auto mount and Make Permanent
- In the vm, add the logged in vagrant user to the vboxsf group (the mounted folder will be owned by user root and group vboxsf)
>>> sudo usermod -aG vboxsf $(whoami)
>>> groups
to verify the logged in user is in the vboxsf group
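If the Auto mount option is not used, the share can also be mounted via /etc/fstab inside the vm. A sketch, assuming a share named shared-dit247, a mount point of /home/vagrant/shared and the default uid/gid 1000 (all three are placeholders, not values from this setup):

```
# /etc/fstab entry (share name, mount point, uid/gid are placeholders)
shared-dit247  /home/vagrant/shared  vboxsf  uid=1000,gid=1000,_netdev  0  0
```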
- Verify wsk client installation from vagrant provisioning
>>> wsk --help
- API host:
- Verify the API host is set
>>> wsk property get
(This should have been done from vagrant provisioning) - if not, set it
>>> wsk property set --apihost http://127.0.0.1:3233
and verify again that the API host is set
- Credentials:
- Verify credentials are set
>>> wsk property get --auth
(This should have been done from vagrant provisioning) - if not, set them
>>> wsk property set --auth `cat ~/openwhisk/ansible/files/auth.guest`
and verify again that the credentials are set
- API host/Credentials/namespace:
- Verify host and connection credentials
>>> wsk list -v
or by
>>> cat ~/.wskprops
- Verify the guest namespace exists by
>>> wsk namespace list
Actions with dependencies will be inside ~/openwhisk_actions. Each subfolder will be the name of the action.
Work in ~/dit247/actions/dependencies/minio, which is shared, and copy to ~/openwhisk_actions afterwards (they need to be separate due to the permissions VirtualBox sharing imposes)
For an action named minio with the python runtime:
- In ~/dit247/actions/dependencies/minio run
>>> docker build -f Dockerfile.python -t img-python-action .
to build the python3.10 image for action creation
- Create ~/openwhisk_actions/minio if it doesn't exist and make sure it is clean:
>>> sudo rm -R ./*
- Copy required content from the working folder:
>>> cp ~/dit247/actions/dependencies/minio/* -R .
- Use the built image to create a virtual environment compatible with the python 3.10 runtime:
>>> docker run --rm -v "$PWD:/app" img-python-action bash -c "virtualenv virtualenv && source virtualenv/bin/activate && pip install -r requirements.txt"
- Zip content:
>>> zip -r minio.zip virtualenv __main__.py
- Create action:
>>> wsk action create minio --kind python:3.10 --main main minio.zip
(3.10, 3.11, 3.12 are available)
- Verify the created action
>>> wsk action list
or
>>> curl -u 23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP http://localhost:3233/api/v1/namespaces/guest/actions
(This can be sent from postman as well)
- Invoke action
>>> wsk action invoke minio --result --blocking --param key1 value1 --param key2 value2 ...
- (Delete action and verify deletion:
>>> wsk action delete minio && wsk action list
)
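The zip above must contain a `__main__.py` that exposes the function named by `--main` (here `main`); it receives the `--param` values as a dict and returns a JSON-serializable dict. A minimal sketch of such an entry point (the parameter name `key1` and the greeting are illustrative, not the project's actual action):

```python
# __main__.py - minimal OpenWhisk Python action entry point.
# `wsk action invoke ... --param key1 value1` arrives here as args["key1"].
def main(args: dict) -> dict:
    # Read an illustrative parameter with a default.
    name = args.get("key1", "world")
    # Whatever is returned becomes the activation's JSON result.
    return {"greeting": f"hello {name}"}

if __name__ == "__main__":
    print(main({"key1": "minio"}))  # local smoke test outside OpenWhisk
```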
Make sure a dit247 topic exists on kafka. If not, create it from the UI
(This might be created automatically when the nodered consumer node that listens on it is up)
Log in to the container: >>> docker exec -it ctr-minio bash
- Set alias for minio service:
>>> mc alias set minio http://127.0.0.1:9000 admin password
- Configure kafka notifications on topic dit247:
>>> mc admin config set minio notify_kafka:1 brokers="ctr-kafka:9992" topic="dit247" tls_skip_verify="off" queue_dir="" queue_limit="0" sasl="off" sasl_password="" sasl_username="" tls_client_auth="0" tls="off" client_tls_cert="" client_tls_key="" version="" --insecure
- Restart minio service:
>>> mc admin service restart minio
- Verify configuration:
>>> mc admin config get minio notify_kafka
- Make sure buckets dit247 and dit247c are created through the nodered flow
- Add event on bucket dit247 for notification configuration 1:
>>> mc event add minio/dit247 arn:minio:sqs::1:kafka --event put
- To Disable event and notification configuration:
>>> mc event remove minio/dit247c arn:minio:sqs::2:kafka --event put
>>> mc admin config set minio notify_kafka:2 enable=off
>>> mc admin service restart minio
Configure webhook notifications on endpoint http://ctr-nodered:1880/compressed-images:
- Make sure bucket dit247c is created through the nodered flow
>>> mc admin config set minio notify_webhook:1 endpoint="http://ctr-nodered:1880/compressed-images" queue_limit="10000" queue_dir="/tmp" queue_retry_interval="1s" enable="on"
- Restart minio service:
>>> mc admin service restart minio
- Verify configuration:
>>> mc admin config get minio notify_webhook
- Add event on bucket dit247c for notification configuration:
>>> mc event add minio/dit247c arn:minio:sqs::1:webhook --event put
- To Disable event and notification configuration:
>>> mc event remove minio/dit247c arn:minio:sqs::1:webhook --event put
>>> mc admin config set minio notify_webhook:1 enable=off
>>> mc admin service restart minio
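The webhook endpoint receives a JSON body for each bucket event. A Python sketch of pulling out the fields a consumer might care about; the payload shape below assumes MinIO's S3-style event structure and is illustrative, not captured from this setup:

```python
import json

# Illustrative MinIO-style webhook payload (shape assumed, not captured live).
payload = json.dumps({
    "EventName": "s3:ObjectCreated:Put",
    "Key": "dit247c/file-1.jpg",
    "Records": [{
        "s3": {
            "bucket": {"name": "dit247c"},
            "object": {"key": "file-1.jpg", "size": 1024},
        }
    }],
})

def summarize_event(body: str) -> dict:
    """Extract the bucket/object info a Node-RED function node might use."""
    event = json.loads(body)
    record = event["Records"][0]["s3"]
    return {
        "event": event["EventName"],
        "bucket": record["bucket"]["name"],
        "object": record["object"]["key"],
        "size": record["object"]["size"],
    }

print(summarize_event(payload))
```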
- In the local machine run
>>> vagrant up
in the folder where the Vagrantfile is
- Connect to the vm with VSCode using one of the hosts in the ~/.ssh/config file
- Forward ports from VSCode
- 1880 (Nodered UI http://localhost:1880 and dashboard http://localhost:1880/ui)
- 9991 (Minio UI http://localhost:9991/browser)
- 8080 (Kafka UI http://localhost:8080/)
- 5984 (CouchDB UI http://localhost:5984/_utils/#login)
- 3233 (Openwhisk API http://localhost:3233, required for postman. Can test from the vm if it is available with >>> curl http://0.0.0.0:3233)
- 3232 (Openwhisk playground http://localhost:3232/playground/ui/index.html. Not required.)
- 8025 (Mailhog UI, if used, http://localhost:8025/)
- If VSCode has problems with ssh, then:
>>> vagrant ssh -- -L 1880:localhost:1880 -L 9991:localhost:9991 -L 8080:localhost:8080 -L 5984:localhost:5984 -L 3233:localhost:3233 -L 3232:localhost:3232 -L 8025:localhost:8025
to ssh with vagrant and map the required ports to the local machine
- Verify port mapping with
>>> netstat -aon | findstr /C:"1880" /C:"9991" /C:"8080" /C:"5984" /C:"3233" /C:"3232" /C:"8025"
from the local machine
- Ports should be released when terminating the ssh session by
>>> logout
or
>>> exit
from inside the vm
- If the terminal is closed without logging out and the session is still open, then
>>> Get-Process | Where-Object { $_.ProcessName -like "*ssh*" }
and
>>> taskkill /PID <Id> /F
(Id column number) from powershell, or
>>> ps aux | grep ssh
and
>>> kill -9 <PID>
(PID column number) from gitbash
- If problems continue: (LIFESAVER)
- Run
>>> vagrant halt
- Open VirtualBox and in the settings of the vm, in the System left pane under the Acceleration tab, make sure Hardware virtualization is unchecked. Click Ok
- Run
>>> vagrant up
again and try to ssh

Inside the vm:
- Run
>>> ps -eF | grep java
to see if the openwhisk launch command is running, and if not run it:
>>> sudo java -Dwhisk.standalone.host.name=0.0.0.0 -Dwhisk.standalone.host.internal=0.0.0.0 -Dwhisk.standalone.host.external=0.0.0.0 -jar ~/openwhisk/bin/openwhisk-standalone.jar --couchdb --kafka --api-gw --kafka-ui
- Check if the Openwhisk API is accessible:
>>> curl http://0.0.0.0:3233
- Run
>>> docker-compose up -d
(or
>>> docker-compose up -d --build
if needed) in ~/dit247
- Check the forwarded ports from the browser at the above urls and make sure the containers are Up with
>>> docker ps -a
Run the inject node of the Bucket list section to list minio buckets (They should already be there from the Prerequisites: Configurations stage)
- Make sure the node Invoke Openwhisk action has the vm IP in the url, which can be found with
>>> ip addr | grep eth0 | head -n 2 | tail -n 1
in the vm
- Make sure the folder ~/dit247/data/nodered/images exists in the vm, with files named file-1.jpg, file-2.jpg, ...
- If not, run
>>> python3 -m python.rename_files
from ~/dit247 to rename them
- Run the Trigger file upload inject node of the Single image upload section and check
- log messages
- Minio bucket and kafka UI from the browser
- Retry pattern:
- Stop the minio container
>>> docker stop ctr-minio
- Enable the Repeatedly trigger file upload inject node and set Repeat to every 5 hours or so, in order to test the retry pattern once
- Run the Repeatedly trigger file upload inject node and check the logs to see the retries
- Start the minio container
>>> docker start ctr-minio
- Reset Repeat to every 3 seconds and enable the Repeatedly trigger file upload inject node
- Run the Repeatedly trigger file upload inject node and check
- log messages
- Minio bucket and kafka UI from the browser
- Stop the minio container
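The `python.rename_files` module lives in the repo; a hypothetical sketch of what such a renaming step could look like, with the file-1.jpg, file-2.jpg naming scheme inferred from the notes above (folder, prefix, and extension are all placeholders):

```python
import os
import tempfile

def rename_to_pattern(folder: str, prefix: str = "file", ext: str = ".jpg") -> list:
    """Rename every file in `folder` to file-1.jpg, file-2.jpg, ... in sorted order."""
    new_names = []
    # Snapshot the listing first so renames don't disturb iteration.
    for i, name in enumerate(sorted(os.listdir(folder)), start=1):
        new_name = f"{prefix}-{i}{ext}"
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))
        new_names.append(new_name)
    return new_names

# Demo on a throwaway directory instead of ~/dit247/data/nodered/images
with tempfile.TemporaryDirectory() as d:
    for original in ("cat.png", "dog.jpeg"):
        open(os.path.join(d, original), "w").close()
    print(rename_to_pattern(d))  # ['file-1.jpg', 'file-2.jpg']
```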
Generally run
>>> vagrant up (--provision)
(.vagrant/machines/virtualbox/private_key is expected to NOT be created)
>>> vagrant reload
(maybe more than once; .vagrant/machines/virtualbox/private_key is expected to BE created, but it is expected to NOT be able to ssh), then
>>> vagrant up (--provision)
and
>>> vagrant ssh
to check if it can ssh (it is expected to BE able to ssh)
>>> vagrant ssh-config
to update the ~/.ssh/config with its output
- Connect with VSCode to the vm
- host config file (see the committed config file)
- Remove from ~/.ssh/known_hosts the [127.0.0.1]:2222 lines defining ssh keys
- vagrant ssh to the vm and do the below, or set it in the vagrantfile provision:
- change in /etc/ssh/sshd_config the setting PasswordAuthentication to yes
- run
>>> sudo systemctl restart sshd (or ssh)
If a private key has to be added to ~/.ssh (for example to work with git/github from inside the vm)
- Must do chmod 700 ~/.ssh
- Must do chmod 600 ~/.ssh/id_ed25519, chmod 600 ~/.ssh/id_rsa
VSCode (LIFESAVER)
If ssh is possible from terminal but not with VSCode, then from VSCode:
- Ctrl Shift P
- Remote-SSH: Kill VS Code Server on Host...
- Select the host defined in config that has the problem
- Try again
If with
>>> vagrant up
and/or
>>> vagrant up --provision
the .vagrant/machines/virtualbox/private_key is not generated, then vagrant ssh connects to the vm and
>>> vagrant ssh-config
will use the ~/.vagrant.d/insecure_private_keys keys.
Running
>>> vagrant reload
may generate the .vagrant/machines/virtualbox/private_key, so
>>> vagrant ssh-config
will use the <Vagrantfile folder path>/.vagrant/machines/default/virtualbox/private_key key.
In any case, to connect with VSCode to the vm, update ~/.ssh/config with the output of
>>> vagrant ssh-config
- Vagrant
- ssh/config files
- MinIO
- Node red
- Couchdb
- Openwhisk
  - Swagger
  - Standalone Server (Github)
  - Docker compose setup (Github)
  - Docker compose file (Github)
  - How to setup OpenWhisk with Docker Compose (Github)
  - Actions (Github)
  - wsk cli (Github)
  - Automating Actions from Event Sources (Apache)
  - Apache OpenWhisk package for communication with Kafka or IBM Message Hub (Github)
  - Apache OpenWhisk Runtimes for Python (Github)
  - Creating and invoking Python actions (Github)
  - Python Packages in OpenWhisk (James Thomas)