Start Docker Containers on logon under Windows

I've just set up a new Windows 10 development machine, and to minimise the hassle of installs I've got various dev dependencies (Oracle, MongoDB, RabbitMQ, HAProxy, etc.) running under Docker using a docker-compose script.
I'd like to automatically start these containers on Windows logon, but as yet I haven't figured out a way to do this. A simple script that executes docker-compose up -d in the correct directory should do it, but if it executes immediately on logon, Docker hasn't yet started up and the script fails. Does anyone know how to programmatically wait until Docker is running?
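A minimal sketch of the kind of polling logon script that should work, assuming a POSIX shell such as Git Bash is available on the machine (the project path, retry limit, and sleep interval are placeholders to adjust):
#!/bin/sh
# Poll the Docker daemon until it answers, then bring up the compose stack.
cd /c/dev/docker-stack || exit 1          # placeholder: your docker-compose directory
tries=0
until docker info > /dev/null 2>&1; do
  tries=$((tries + 1))
  if [ "$tries" -gt 60 ]; then            # give up after roughly five minutes
    echo "Docker daemon did not start in time" >&2
    exit 1
  fi
  sleep 5
done
docker-compose up -d
The same loop translates straightforwardly to PowerShell or a batch file if you'd rather register it as a scheduled task triggered at logon.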

To further elaborate on my comment, I have done a little test with a webserver service, but it should work for any service, as long as you configure it the way you want it to behave.
It's quite easy to set this up using the following commands:
docker swarm init
Then, for example, a webserver:
docker service create --name webserver --publish 80:80 httpd
Or even a database:
docker service create --replicas 1 --name database --publish 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=test" microsoft/mssql-server-linux
These will restart automatically after a reboot and on fatal crashes, because Docker swarm keeps the requested number of replicas (1 by default) alive for you.
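If you want to confirm after a reboot that swarm has brought the services back, the standard listing commands show each service and its replica state:
docker service ls
docker service ps webserver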
Hopefully this can be of some help!

Turns out this is really easy to achieve via docker-compose using restart! I have changed our compose file as follows:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3.6-management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - /var/lib/rabbitmq
    restart: unless-stopped
This extra restart directive means that unless the container has been explicitly stopped, it will start up with Docker on logon/reboot. Tested and working!
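As a side note, if you already have containers that were created without the policy, you can check and change it in place rather than recreating them; a quick sketch, assuming a container named rabbitmq:
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' rabbitmq
docker update --restart unless-stopped rabbitmq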

Related

Advantage of using docker-compose file version 3 over a shellscript?

My initial reason for creating a docker-compose.yml was to take advantage of features such as build: and depends_on: to make a single file that builds all my images and runs them in containers. However, I noticed version 3 deprecates most of these functions, and I'm curious why I would use this over building a shell script.
This is currently my shellscript that runs all my containers (I assume this is what the version 3 docker-compose file would replace if I were to use it):
echo "Creating docker network net1"
docker network create net1
echo "Running api as a container with port 5000 exposed on net1"
docker run --name api_cntr --net net1 -d -p 5000:5000 api_img
echo "Running redis service with port 6379 exposed on net1"
docker run --name message_service --net net1 -p 6379:6379 -d redis
echo "Running celery worker on net1"
docker run --name celery_worker1 --net net1 -d celery_worker_img
echo "Running flower HUD on net1 with port 5555 exposed"
docker run --name flower_hud --net net1 -d -p 5555:5555 flower_hud_img
Does docker swarm rely on using stacks? If so, then I can see a use for docker-compose and stacks, but I couldn't seem to find an answer online. I would use version 3 because it is compatible with swarm, unlike version 2, if what I've read is true. Maybe I am missing the point of docker-compose completely, but as of right now I'm a bit confused as to what it brings to the table.
Readability
Compare your sample shell script to a YAML version of same:
version: '3'
services:
  api_cntr:
    image: api_img
    networks:
      - net1
    ports:
      - "5000:5000"
  message_service:
    image: redis
    networks:
      - net1
    ports:
      - "6379:6379"
  celery_worker1:
    image: celery_worker_img
    networks:
      - net1
  flower_hud:
    image: flower_hud_img
    networks:
      - net1
    ports:
      - "5555:5555"
networks:
  net1:
To my eye at least, it is much easier to determine the overall architecture of the application from reading the YAML than from reading the shell commands.
Cleanup
If you use docker-compose, then running docker-compose down will stop and clean up everything, remove the network, etc. To do that in your shell script, you'd have to separately write a remove section to stop and remove all the containers and the network.
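For comparison, the manual cleanup for the shell-script version would look roughly like this, using the container and network names from the script above:
docker stop api_cntr message_service celery_worker1 flower_hud
docker rm api_cntr message_service celery_worker1 flower_hud
docker network rm net1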
Multiple inheriting YAML files
In some cases, such as for dev & testing, you might want to have a main YAML file and another that overrides certain values for dev/test work.
For instance, I have an application where I have a docker-compose.yml as well as docker-compose.dev.yml. The first contains all of the production settings for my app. But the "dev" version has a more limited set of things. It uses the same service names, but with a few differences.
- Adds a mount of my code directory into the container, overriding the version of the code that was built into the image
- Exposes the postgres port externally (so I can connect to it for debugging purposes); this is not exposed in production
- Uses another mount to fake a user database so I can easily have some test users without wiring things up to my real authentication server just for development
Normally the service only uses docker-compose.yml (in production). But when I am doing development work, I run it like this:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
It will load the normal parameters from docker-compose.yml first, then read docker-compose.dev.yml second, and override only the parameters found in the dev file. The other parameters are all preserved from the production version. But I don't require two completely separate YAML files where I might need to change the same parameters in both.
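As a rough illustration only (the service, image, and path names here are made up rather than taken from my real files), such a docker-compose.dev.yml override might look like:
version: '3'
services:
  app:
    volumes:
      - ./src:/app/src        # mount local code over the copy baked into the image
  postgres:
    ports:
      - "5432:5432"           # expose the database for debugging; not published in production
Compose merges these settings on top of the production file when both -f files are given, as in the command above.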
Ease of maintenance
Everything I described in the last few paragraphs can be done using shell scripts. It's just more work to do it that way, and probably more difficult to maintain, and more prone to mistakes.
You could make it easier by having your shell scripts read a config file and such... but at some point you have to ask if you are just reimplementing your own version of docker-compose, and whether that is worthwhile to you.

How can one Docker container call another Docker container

I have two Docker containers
A Web API
A Console Application that calls Web API
Now, on my local machine the Web API runs on localhost and the console application has no problem calling it. However, I have no idea, once these two things are Dockerized, how I can make the URL of the Dockerized API available to the Dockerized console application.
I don't think I need Docker Compose, because I am passing the URL of the API as an argument, so it's just a matter of making sure that the Dockerized API's URL is accessible by the Dockerized console application.
Any ideas?
The idea is not to pass the url, but the hostname of the other container you want to call.
See Networking in Compose
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This is what replaces the deprecated --link option.
And if your containers are not running on a single Docker server node, Docker Swarm Mode would enable that discoverability across multiple nodes.
This is the best way I have found to connect multiple containers in a local machine / single cluster.
Given: data-provider-service, data-consumer-service
Option 1: Using Network
docker network create data-network
docker run --name=data-provider-service --net=data-network -p 8081:8081 data-provider-image
docker run --name=data-consumer-service --net=data-network -p 8080:8080 data-consumer-image
Make sure to use URIs like: http://data-provider-service:8081/ inside your data-consumer-service.
Option 2: Using Docker Compose
You can define both the services in a docker-compose.yml file and use depends_on property in data-provider-service.
e.g.
data-consumer-service:
  depends_on:
    - data-provider-service
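A fuller sketch of that compose file, reusing the names from Option 1 (the port mappings are illustrative):
version: '3'
services:
  data-provider-service:
    image: data-provider-image
    ports:
      - "8081:8081"
  data-consumer-service:
    image: data-consumer-image
    ports:
      - "8080:8080"
    depends_on:
      - data-provider-service
Compose puts both services on a default network, so the consumer can still reach http://data-provider-service:8081/ by service name.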
You can see more details here on my Medium post: https://saggu.medium.com/how-to-connect-nultiple-docker-conatiners-17f7ca72e67f
You can use the link option with docker run:
Run the API:
docker run -d --name api api_image
Run the client:
docker run --link api busybox ping api
You should see that api can be resolved by docker.
That said, going with docker-compose is still a better option.
The problem can be solved easily using the Compose feature. With Compose, you just create one configuration file (docker-compose.yml) like this:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
To make it run, just call up like this:
docker-compose up
This is the best way to run your whole stack, so check this reference:
https://docs.docker.com/compose/
Success!

Multiple docker images run from docker file

I am trying to run multiple Docker images from a single docker file, each on a different port.
Please advise how to execute multiple docker run commands from a single docker file with different ports.
You want to use docker-compose it sounds like. Here is an example using nginx and redis (It's how I do it anyway)
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  redis:
    image: redis
    ports:
      - "1000:1000"
So as you can see, if I run docker-compose up, Docker will spin up two containers, nginx and redis, each running off of a different port! If you don't want to use docker-compose, you can do it with docker run:
docker run -d --name nginx -p 80:80 nginx
docker run -d --name redis -p 1000:1000 redis
I don't 100% understand your question, but I hope this helps!

How does one close a dependent container with docker-compose?

I have two containers that are spun up using docker-compose:
web:
  image: personal/webserver
  depends_on:
    - database
  entrypoint: /usr/bin/runmytests.sh
database:
  image: personal/database
In this example, runmytests.sh is a script that runs for a few seconds, then returns with either a zero or non-zero exit code.
When I run this setup with docker-compose, web_1 runs the script and exits. database_1 remains open, because the process running the database is still running.
I'd like to trigger a graceful exit on database_1 when web_1's tasks have been completed.
You can pass the --abort-on-container-exit flag to docker-compose up to have the other containers stop when one exits.
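For example, and if you also want the exit status of your test container propagated to the caller, docker-compose up has an --exit-code-from flag that implies --abort-on-container-exit (web here is the service name from your compose file):
docker-compose up --abort-on-container-exit
docker-compose up --abort-on-container-exit --exit-code-from web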
What you're describing is called a Pod in Kubernetes or a Task in AWS. It's a grouping of containers that form a unit. Docker doesn't have that notion currently (Swarm mode has "tasks" which come close but they only support one container per task at this point).
There is a hacky workaround besides scripting it as @BMitch described. You could mount the Docker daemon socket from the host. E.g.:
web:
  image: personal/webserver
  depends_on:
    - database
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  entrypoint: /usr/bin/runmytests.sh
and add the Docker client to your personal/webserver image. That would allow your runmytests.sh script to use the Docker CLI to shut down the database first. Eg: docker kill database.
Edit:
Third option: if you want to stop all containers when one fails, you can use the --abort-on-container-exit option to docker-compose, as @dnephin mentions in another answer.
I don't believe docker-compose supports this use case. However, making a simple shell script would easily resolve this:
#!/bin/sh
# Start the database in the background, run the tests in the foreground,
# then stop and remove the database container once they finish.
docker run -d --name=database personal/database
docker run --rm -it --entrypoint=/usr/bin/runmytests.sh personal/webserver
docker stop database
docker rm database

Development workflow for server and client using Docker Compose?

I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude the changes on some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
Edit: here is a new proposition, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kind of configurations for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine.
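A rough sketch of that extra_hosts variant, where 192.168.1.10 is a placeholder for your host machine's IP and the port is whatever the server publishes on the host:
client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.1.10" # placeholder IP of the Docker host
  ports:
    - "80:80"
  volumes:
    - ./src:/src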
With this solution, each docker-compose.yml file could be committed in the repository of the related app.
First thing to mention: for a development environment you want to utilize volumes from docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this and it's just not clear from your docker-compose.yml definition.
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory of your docker-compose.yml file or the name of the project).
When you got name of the container you want to connect to, you can run this command to run bash exactly in the container that's running your server:
docker exec -it web_server bash.
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
