Dockerising by default.
To constantly try new environments, you need a flexible base, ideally one that works on as many operating systems as possible. This is where a couple of containerisation players come in.
When we are talking about small scale, up to 200 instances, Docker and Docker Swarm are among the choices.
Docker allows for containerization of applications, which means you can run isolated instances of your services and applications.
In addition, Docker gives you the ability to connect your apps and services together with Docker Compose, which makes Python apps very convenient to build.
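As a taste of that convenience, here is a minimal sketch of a Compose file for a hypothetical Python web app backed by Redis (the service names, ports and image tags are illustrative, not from any particular project):

```yaml
# Hypothetical two-service setup: a Python app built from the local
# Dockerfile, plus a Redis instance it can reach by the hostname "redis".
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8000:8000"   # host:container
    depends_on:
      - redis
  redis:
    image: redis:alpine
```

One `docker compose up` then starts both services on a shared network.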
Below are cases where, in my learning and work, Docker was a real game changer. For that reason I will not be explaining what an image, a container, docker-compose etc. are. It will be more about examples like: "for doing ML, NLP or some other task I took a particular image, adjusted it and made it work".
Someone could ask what about Virtual Machines?
Contrary to how VMs work, with Docker we don’t need to constantly set up clean environments in the hopes of avoiding conflicts.
With Docker, we know that there will be no conflicts. Docker guarantees that application microservices will run in their own environments that are completely separate from the operating system.
Thanks to Docker, there’s no need for each developer in a team to carefully follow 20 pages of operating system-specific instructions.
Instead, one developer can create a stable environment with all the necessary libraries and languages and simply save this setup in Docker Hub or another registry.
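For example, such a shared base image could be sketched like this (the image name, tag and package list are hypothetical, not taken from the article):

```dockerfile
# Hypothetical shared team environment; adjust the packages to your stack.
FROM python:3.10-slim
RUN pip install --no-cache-dir numpy pandas scikit-learn

# Build it once and push it to Docker Hub (or any registry), so everyone
# on the team pulls the identical environment:
#   docker build -t myteam/ml-base:1.0 .
#   docker push myteam/ml-base:1.0
```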
If you want to start a journey on the high seas, it is good to learn from the best, like people who hold the title of Docker Captain.
One of that group is Łukasz Lach (https://lach.dev/ | https://github.com/lukaszlach/).
I learned a couple of tricks which made my life easier with ML, NLP and Python.
netshoot https://github.com/nicolaka/netshoot
The first application which I open on my Ubuntu is ctop, run from netshoot (created by Nicola Kabar), to deal with all kinds of network issues:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name netshoot-ctop nicolaka/netshoot ctop
This will be my prevailing approach: not installing applications in the system; whenever possible, running them from a container instead.
docker run -d \
--name firefox \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
kennethkl/firefox
If there is a need to do that on Windows, the work of Jocelyn Le Sage does the job: https://github.com/jlesage/docker-firefox. Well documented, bulletproof, up to date. There are also other tools there, and I am not the only one who appreciates this repository.
docker run -d \
--name=firefox \
-p 5800:5800 \
-v /docker/appdata/firefox:/config:rw \
--shm-size 2g \
jlesage/firefox
docker run -it \
-p 8080:8080 \
-v "$PWD:/home/coder/project" \
-u "$(id -u):$(id -g)" \
codercom/code-server
For NLP tasks, the Stanford CoreNLP server is one docker run away:
https://github.com/NLPbox/stanford-corenlp-docker
docker run --rm --name stanza_nlp -p 9000:9000 nlpbox/corenlp
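Once the container is up, a quick smoke test can be sent from the host over CoreNLP's HTTP API on the published port; the sample text and the annotator list below are my own choices:

```shell
# CoreNLP accepts the text as POST data and the pipeline options as a
# URL-encoded "properties" JSON parameter.
CORENLP_URL='http://localhost:9000/?properties={"annotators":"tokenize,pos","outputFormat":"json"}'
echo "POST text to: $CORENLP_URL"
# Uncomment to actually query the running server:
# curl --data 'Docker makes NLP setups portable.' "$CORENLP_URL"
```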
and voilà:
The repository below nicely describes how to prepare your own container with Jupyter, Keras, Pandas and TensorFlow, to be able to check your own ideas.
This is fully ready Docker container with:
NumPy
Pandas
Sklearn
Matplotlib
Seaborn
pyyaml
h5py
Jupyter
Tensorflow
Keras
OpenCV 3
ffmpeg
https://github.com/andreivmaksimov/python_data_science
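To try it out, a sketch of the invocation is below; printed rather than executed (drop the leading "echo" to really run it), and the volume mount path and the assumption that the image is published on Docker Hub under the repository name are mine:

```shell
# Map Jupyter's port to the host and mount a local notebooks directory.
echo docker run -it --rm -p 8888:8888 \
  -v "$PWD/notebooks:/notebooks" \
  andreivmaksimov/python_data_science
```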
The logs from the container are a bit misleading:
[I 07:57:30.517 NotebookApp] Jupyter Notebook 6.1.3 is running at:
[I 07:57:30.518 NotebookApp] http://3e95443d41b4:8888/
but Jupyter works; simply use localhost (or 0.0.0.0) and the mapped port instead of the generated link, as the hostname in it is the container ID, which is not resolvable from the host.
As the Dockerfile is given in the repository, whenever there was a need for something extra, as with ffmpeg, I simply included it in the apt install section, i.e.:
RUN apt-get update && apt-get install -y \
libopencv-dev \
python3-pip \
python3-opencv \
ffmpeg && \
rm -rf /var/lib/apt/lists/*
In case of permission problems, a nice small extra parameter can do the job:
--user "$(id -u):$(id -g)" \
The difference is the --user "$(id -u):$(id -g)" flag: it tells the container to run with the current user id and group id, which are obtained dynamically through bash command substitution, by running "id -u" and "id -g" and passing on their values.
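The substitution can be checked on its own, without any container; both commands run on the host before docker even starts, so the container receives plain numeric ids instead of a username it may not know:

```shell
# Command substitution in action: capture the current user's numeric ids.
uid="$(id -u)"   # numeric user id of the current user
gid="$(id -g)"   # numeric id of the current user's primary group
echo "the container would run as ${uid}:${gid}"
```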
We could start with the simplest:
docker stats
A bit more advanced is the use of another Docker container, ctop:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name netshoot-ctop nicolaka/netshoot ctop
Very useful is Portainer, especially since it can run in Docker itself:
https://github.com/portainer/portainer
docker run -d -p 9000:9000 -p 8000:8000 --name portainer \
--restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer
You can also use a nice web-browser-based tool from Google:
cAdvisor
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 \
--detach=true \
--name=cadvisor \
gcr.io/google-containers/cadvisor:latest
To present some traffic and interactions, I started a couple of instances of redis and httpd beforehand, so the charts will be showing something.
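A loop like the one below does the trick; it is written as a dry run that only prints the commands (remove the leading "echo" on each line to actually start the containers), and the container names and image tags are my own picks:

```shell
# Start a few throwaway redis and httpd containers so the monitoring
# charts have real activity to display. --rm cleans them up on stop.
for i in 1 2 3; do
  echo docker run -d --rm --name "demo-redis-$i" redis:alpine
  echo docker run -d --rm --name "demo-httpd-$i" httpd:alpine
done
```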
If you still need more details, there is a package with Prometheus, Grafana etc., everything dockerised, by Brian Christner's team:
https://github.com/vegasbrianc/prometheus
Created by: lencz.sla@gmail.com