Running SQL containers on a custom network
In a previous wiki page we went through creating SQL containers on the default docker bridge network.
However, docker gives us the ability to create our own custom networks, and it provides several drivers for us to use: -
- bridge - the default. Allows containers connected to the same bridge network to communicate.
- host - removes network isolation between the container and the host. The container uses the host's network.
- none - disables the container's network stack.
- macvlan - assigns a MAC address to the container so it shows as a physical device on the network.
- overlay - connects multiple docker daemons together. Used for Docker Swarm.
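We can also pick a driver explicitly with the --driver flag when creating a network (and optionally set a subnet). A quick sketch, the network name and subnet here are just for illustration: -
docker network create --driver bridge --subnet 172.20.0.0/16 example-network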
The one we're going to focus on is the bridge network. User-defined (aka custom) bridge networks have several advantages over the default bridge network, the main one being that containers on a custom bridge network can communicate via container name. The other advantages are listed here.
Let's test that out! Create a custom network: -
docker network create sqlserver
We're not specifying a driver here so the network created will be a bridge network as it's the default. We can confirm that by running: -
docker network ls
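Or, if we just want to see the driver of our new network, we could pull it straight out of docker network inspect with a format template (standard Go templating): -
docker network inspect sqlserver --format '{{.Driver}}'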

And there's our custom network! Ok, let's spin up a couple of containers on that network: -
docker container run -d `
--network sqlserver `
--env ACCEPT_EULA=Y `
--env MSSQL_SA_PASSWORD=Testing1122 `
--name sqlcontainer1 `
ghcr.io/dbafromthecold/customsql2019-tools:cu5
docker container run -d `
--network sqlserver `
--env ACCEPT_EULA=Y `
--env MSSQL_SA_PASSWORD=Testing1122 `
--name sqlcontainer2 `
ghcr.io/dbafromthecold/customsql2019-tools:cu5

We're using a custom image here as it has ping installed, so that we can test communication between the containers.
Confirm that the containers are up and running: -
docker container ls -a
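If there's a lot running on the host, we could also filter that list down to just the containers attached to our custom network: -
docker container ls --filter network=sqlserver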

And inspect the custom network: -
docker network inspect sqlserver

There's our containers! Ok, now we can test pinging each container from the other by name: -
docker exec sqlcontainer1 ping sqlcontainer2 -c 4
docker exec sqlcontainer2 ping sqlcontainer1 -c 4

Excellent! The containers can communicate by name!
This is really handy when building out test environments in which we need multiple SQL instances that can talk to each other!
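For example, we could connect from one instance to the other with sqlcmd, using the container name as the server name. A quick sketch, assuming sqlcmd is available in the image at the usual /opt/mssql-tools/bin location: -
docker exec sqlcontainer1 /opt/mssql-tools/bin/sqlcmd `
-S sqlcontainer2 -U sa -P Testing1122 `
-Q "SELECT @@SERVERNAME"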
I'd recommend always creating a custom network for your SQL containers as it also gives them better isolation from containers running on the default network.
What's really cool as well is that containers can be attached to and detached from a custom network on the fly!
Let's try that out. Spin up a container on the default network: -
docker container run -d `
--env ACCEPT_EULA=Y `
--env MSSQL_SA_PASSWORD=Testing1122 `
--name sqlcontainer3 `
ghcr.io/dbafromthecold/customsql2019-tools:cu5
Confirm the container is running: -
docker container ls -a

And confirm that it is on the default bridge network: -
docker network inspect bridge
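The full inspect output can be quite long, so if we just want the names of the containers attached to the default bridge we could use a format template. A sketch, assuming the standard fields in the inspect output: -
docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'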

Ok, let's attach that container to our custom network: -
docker network connect sqlserver sqlcontainer3
And now inspect the custom network: -
docker network inspect sqlserver

The container is now attached to that network! So the other containers can communicate with it via its name: -
docker exec sqlcontainer1 ping sqlcontainer3 -c 4

And if we want to remove it from the custom network we can run: -
docker network disconnect sqlserver sqlcontainer3
Confirm that it's no longer attached to the custom network: -
docker network inspect sqlserver

Gone! Which means the other containers won't be able to ping it any more: -
docker exec sqlcontainer1 ping sqlcontainer3 -c 4

And that's how we can use custom docker networks to allow containers to communicate with each other by name.
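Finally, if you want to tidy up everything we created here, remove the containers and then the custom network: -
docker container rm -f sqlcontainer1 sqlcontainer2 sqlcontainer3
docker network rm sqlserver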