I have some microservices that accept arguments when they run.
At some point I might need to run them like below:
docker run -d -e 'MODE=a' --name x_service_a x_service
docker run -d -e 'MODE=b' --name x_service_b x_service
docker run -d -e 'X_SOURCE=a' -e 'MODE=foo' --name y_service_afoo y_service
docker run -d -e 'X_SOURCE=b' -e 'MODE=foo' --name y_service_bfoo y_service
docker run -d -e 'X_SOURCE=b' -e 'MODE=bar' --name y_service_bbar y_service
I do this with another service I wrote called 'coordinator', which uses the Docker Engine API to monitor, start, and stop these microservices.
The reason I can't use Docker Compose (as in my example above) is that I can't have two running x_service containers with identical config.
So is it fine to manage them with docker engine API?
Services are generally scaled up and down based on an organization's needs, which translates to starting and stopping containers dynamically.
Often, the same Docker image is started with different configurations; think of a company managing various WordPress websites for different customers.
So, to answer your question whether it is bad practice to start/stop Docker containers dynamically: the answer is NO.
There are multiple ways to manage Docker containers: some people manage them with plain docker commands, some with docker-compose, and some with more advanced management platforms.
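For reference, a minimal sketch of the kind of calls a coordinator makes against the Docker Engine API over the local Unix socket (the API version v1.41 and the names are illustrative):
curl --unix-socket /var/run/docker.sock \
  -H 'Content-Type: application/json' \
  -d '{"Image": "x_service", "Env": ["MODE=a"]}' \
  'http://localhost/v1.41/containers/create?name=x_service_a'
curl --unix-socket /var/run/docker.sock \
  -X POST 'http://localhost/v1.41/containers/x_service_a/start'
The first call creates the container with its environment, the second starts it; stopping and removing work the same way via POST /containers/<name>/stop and DELETE /containers/<name>.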
My setup is based on running two Docker containers, one with an API and the other with a DB.
With this setup, both containers have a port exposed to web services.
But what I want is for the DB container (toolname-db) to be reachable only by the API container (toolname-api). This makes sure the DB is not exposed to web services directly.
How do I have to alter my setup in order to make sure what I want is possible?
Currently I use the following commands:
sudo docker build -t toolname .
sudo docker run -d -p 3333:3333 --name=toolname-db mdillon/postgis
sudo docker run -it -p 4444:4444 --name=toolname-api --network=host -d toolname
A container will only be reachable from outside Docker space if it has published ports. So you need to remove the -p option from your database container.
For the two containers to be able to talk to each other they need to be on the same network. Docker's default here is for compatibility with what's now a very old networking setup, so you need to manually create a network, though it doesn't need any special setting.
Finally, you don't need --net host. That disables all of Docker's networking setup; port mappings with -p are disabled, and you can't communicate with containers that don't themselves have ports published. (I usually see it recommended as a hack to work around hard-coded localhost connection strings.)
That leaves your final setup as:
sudo docker build -t toolname .
sudo docker network create tool
sudo docker run -d --net=tool --name=toolname-db mdillon/postgis
sudo docker run -d --net=tool -p 4444:4444 --name=toolname-api toolname
As @BentCoder suggests in a comment, it's very common to use Docker Compose to run multiple containers together. If you do, it creates a network for you, which can save you a step.
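A minimal docker-compose.yml along these lines (a sketch only; it assumes the same image and ports as above) would be:
version: "3.8"
services:
  toolname-db:
    image: mdillon/postgis
    # no ports: block, so the database is reachable only by other services
  toolname-api:
    build: .
    ports:
      - "4444:4444"
Compose puts both services on a shared default network, so the API can reach the database at the host name toolname-db.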
I'm trying to implement a backup system. This system requires the execution of a docker container at the time of the backup on the specific node. Unfortunately I have not been able to get it to execute on the required node.
This is the command I'm using and executing on the docker swarm manager node. It is creating the container on the swarm manager node and not the one specified in the constraint. What am I missing?
docker run --rm -it --network cluster_02 -v "/var/lib:/srv/toBackup" \
-e BACKUPS_TO_KEEP="5" \
-e S3_BUCKET="backup" \
-e S3_ACCESS_KEY="" \
-e S3_SECRET_KEY="" \
-e constraint:node.hostname==storageBeta \
comp/backup create $BACKUP_NAME
You are using the older classic Swarm syntax to try to run your container, but you are almost certainly running Swarm Mode. If you installed your swarm with docker swarm init and can see nodes with docker node ls on the manager, then this is the case.
Classic Swarm ran as a container that was effectively a reverse proxy to multiple docker engines, intercepting calls to commands like docker run and sending them to the appropriate node. It is generally recommended to avoid this older swarm implementation unless you have a specific use case and take the time to configure mTLS on each of your docker hosts.
Swarm Mode provides an HA manager using Raft for quorum (same as etcd), handles encryption of all management requests, configures overlay networking, and works with a declarative model, giving the target state, rather than imperative commands to run. It's a very different model from classic Swarm. Notably, Swarm Mode only works on services and stacks, and completely ignores docker run and other local commands, e.g.:
docker service create \
--name backup \
--constraint node.hostname==storageBeta \
--network cluster_02 \
-v "/var/lib:/srv/toBackup" \
-e BACKUPS_TO_KEEP="5" \
-e S3_BUCKET="backup" \
-e S3_ACCESS_KEY="" \
-e S3_SECRET_KEY="" \
comp/backup create $BACKUP_NAME
Note that jobs are not well supported in Swarm Mode yet (there is an open issue seeking community feedback on including this functionality). It is currently focused on running persistent services that do not normally exit unless there is an error. If your command is expected to exit, you can include an option like --restart-max-attempts 0 to prevent swarm mode from restarting it. You may also have additional work to do if the network is not swarm scoped.
I'd also recommend converting this to a docker-compose.yml file and deploying with docker stack deploy to better document the service as a config file.
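A sketch of such a stack file (it assumes cluster_02 is a swarm-scoped network created externally, and that BACKUP_NAME is set in the deploying shell):
version: "3.8"
services:
  backup:
    image: comp/backup
    command: create ${BACKUP_NAME}
    environment:
      BACKUPS_TO_KEEP: "5"
      S3_BUCKET: backup
    volumes:
      - /var/lib:/srv/toBackup
    networks:
      - cluster_02
    deploy:
      placement:
        constraints:
          - node.hostname == storageBeta
      restart_policy:
        condition: none   # one-shot task, do not restart on exit
networks:
  cluster_02:
    external: true
Deploy it with docker stack deploy -c docker-compose.yml backup.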
I have made two Docker containers using:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password1234" -p1433:1433 --name sql2019 -d mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu
and I distinguished them by changing the -p and --name values. But when I go over to Azure Data Studio to connect, I can only connect to one of them, because I enter 'localhost' in the Server field and both containers use 'localhost'. How can I differentiate the two in Azure Data Studio? Is there any way to use the --name flag?
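For reference, the second container was started with a different host port and name, e.g. (values illustrative):
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password1234" -p 1434:1433 --name sql2019b -d mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu
so the second instance listens on host port 1434 rather than 1433.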
I would appreciate a clear answer; I am new to server stuff.
I have multiple Docker containers on a single machine. Each container runs a process plus a web server that provides an API for the process.
My question is: how can I access each API from my browser when the default port is 80? To be able to access the web server inside a Docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do the following from my computer's terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either publish each container on a different host port, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
Update:
The easiest way to run a proxy would be to link it to the API containers, e.g. with an Apache config containing
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker as of yet). If this becomes a problem, you might look at approaches like fig or auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker you can use a user-defined network instead of the links shown above to overcome some of the inconveniences of the deprecated link mechanism.
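A sketch of the same setup on a user-defined network (the network name apinet is illustrative):
docker network create apinet
docker run -d --net apinet --name api1 <yourname>/<imagename>
docker run -d --net apinet --name api2 <yourname1>/<imagename1>
docker run -d --net apinet -p 80:80 <my_proxy_container>
On a user-defined network the proxy resolves api1 and api2 by name, and the API containers can be restarted without restarting the proxy.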
Only a single process can be bound to a given host port at a time, so running multiple containers means each must be published on a different host port number. Docker can pick a free port automatically for you with the -P flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
Ref: https://github.com/crosbymichael/skydock
https://github.com/crosbymichael/skydns
First I fired up those two instances.
docker run -d -p 8080:8080 -p 172.17.42.1:53:53/udp --name skydns crosbymichael/skydns -nameserver 8.8.8.8:53 -domain docker
docker run -d -v /var/run/docker.sock:/docker.sock --name skydock crosbymichael/skydock -ttl 30 -environment dev -s /docker.sock -domain docker -name skydns
And this setup is working as expected.
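To check resolution, you can query the skydns instance bound to the bridge IP above directly (the service name here is hypothetical; skydock builds DNS entries from the container name, image, environment, and domain):
dig @172.17.42.1 myapp.dev.docker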
Now I want to spawn another, production environment. This time I only fired up another skydock container, with the environment prod, as follows.
docker run -d -v /var/run/docker.sock:/docker.sock --name skydock-prod crosbymichael/skydock -ttl 30 -environment prod -s /docker.sock -domain docker -name skydns
Querying the API doesn't show the production skydock:
curl $(docker-ip):8080/skydns/services/
And now I am wondering how to set up the production version of skydock.
Do I have to run it on a separate Docker host?
If I fire it up on the same Docker host, under which DNS entry will the new containers be available?
Do I have to pass some flags/variables when I start new containers so that they land in the production environment?
I don't know of a way to make two or more skydock instances listen to the same docker.sock (within a single host machine), and I think it is conceptually wrong anyway: Docker containers know nothing about your logical environments (production, staging, ...).
I have a multi-host setup with skydns and skydock. I run skydns on a separate host. Each of the two other servers runs a single instance of skydock, which registers all Docker container IPs in the centralised SkyDNS, so that all containers are visible by DNS name across the different physical hosts.
All of that works on top of the Flannel network overlay, https://github.com/coreos/flannel (which requires etcd).