docker and docker-compose seem to be interacting with the same Dockerfile. What is the difference between the two tools?
The docker cli is used when managing individual containers on a docker engine. It is the client command line to access the docker daemon api.
The docker-compose cli can be used to manage a multi-container application. It also moves many of the options you would enter on the docker run cli into the docker-compose.yml file for easier reuse. It works as a front end "script" on top of the same docker api used by docker, so you can do everything docker-compose does with docker commands and a lot of shell scripting. See this documentation on docker-compose for more details.
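As a rough illustration (the image name, host port, and volume path here are just placeholders), a docker run invocation such as:
docker run -d --name web -p 5000:5000 -v "$PWD":/code myimage
could instead be captured once in a docker-compose.yml:
services:
  web:
    image: myimage
    ports:
      - "5000:5000"
    volumes:
      - .:/code
and then started with docker-compose up -d.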
Update for Swarm Mode
Since this answer was posted, docker has added a second use of docker-compose.yml files. Starting with the version 3 yml format and docker 1.13, you can use the yml with docker-compose and also to define a stack in docker's swarm mode. To do the latter you need to use docker stack deploy -c docker-compose.yml $stack_name instead of docker-compose up, and then manage the stack with docker commands instead of docker-compose commands. The mapping is one-to-one between the two uses:
Compose Project -> Swarm Stack: A group of services for a specific purpose
Compose Service -> Swarm Service: One image and its configuration, possibly scaled up.
Compose Container -> Swarm Task: A single container in a service
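For example, once the stack is deployed from the yml, day-to-day management happens with docker rather than docker-compose (the stack name here is just a placeholder):
docker stack deploy -c docker-compose.yml mystack
docker stack services mystack
docker stack ps mystack
docker stack rm mystack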
For more details on swarm mode, see docker's swarm mode documentation.
docker manages single containers
docker-compose manages multiple container applications
Usage of docker-compose requires 3 steps:
Define the app environment with a Dockerfile
Define the app services in docker-compose.yml
Run docker-compose up to start and run the app
Below is a docker-compose.yml example taken from the docker docs:
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
A Dockerfile is a text document that contains all the commands/instructions a user could call on the command line to assemble an image.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. By default, docker-compose expects the Compose file to be named docker-compose.yml or docker-compose.yaml. If the compose file has a different name, we can specify it with the -f flag.
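For example, assuming the file were named custom-compose.yml:
docker-compose -f custom-compose.yml up -d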
Check here for more details
docker, or more specifically the docker engine, is used when we want to handle only one container, whereas docker-compose is used when we have multiple containers to handle. We need multiple containers when we have more than one service to take care of, for example an application with a client-server model: we need one container for the server and another container for the client. Docker Compose usually requires each container to have its own Dockerfile, plus a yml file that incorporates all the containers.
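A rough sketch of that layout, assuming hypothetical ./server and ./client directories that each contain their own Dockerfile, might be:
services:
  server:
    build: ./server
    ports:
      - "8080:8080"
  client:
    build: ./client
    depends_on:
      - server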
Related
There are a lot of applications which I launch on my workstation using docker-compose up.
Reasons:
They don't have an installer, or I don't want to use it
They require a dedicated storage engine to be present
They require a build process step
They are created by me and I want them to be easily launched on any workstation
etc.
So what I usually end up with is the following file structure:
myAppDir
- docker-compose.yml
- Dockerfile (not always)
- someConfigFile
And my docker-compose.yml is something like this:
(It can contain 2 or 3 services, but I provide the simplest form that I use)
version: '3.7'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    volumes:
      - ./mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306
Then when I need to launch the application I just perform:
docker-compose up # (or with --build)
Recently I tried to add:
deploy:
  resources:
    limits:
      cpus: '0.50'
      memory: 200M
and got a message:
Some services (mysql) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
So I tried:
docker stack deploy mystack --compose-file docker-compose.yml
and got message:
Ignoring unsupported options: restart
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
This seems more complex than docker-compose up.
I saw that I can use --compatibility flag e.g.
docker-compose --compatibility up
But the word compatibility means to me that I should soon switch to a new way of launching my apps locally.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
If you want to specify memory limits and similar constraints for local containers, you need to use a version 2 Compose file. This is called out in the documentation for the deploy: resources: section. docker/compose#4513 has some reasonably clear statements that Compose file version 2 is more targeted at local setups and version 3 more at Swarm installations, and that Docker intends to keep supporting both file versions.
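As a sketch of what that could look like for the mysql service from the question (the 2.4 format version is an assumption; mem_limit and cpus are the v2-style constraint keys):
version: '2.4'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    mem_limit: 200M
    cpus: 0.5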
Docker has put many options and functions specific to their Swarm cluster-installation mode into the core product. Anything that mentions a "stack", for example, is specific to a Swarm setup. One consequence of Swarm and plain-Docker things being combined together is that the deploy: Docker Compose options only have an effect in Swarm mode. The documentation for the deploy: key notes:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
Docker Compose V3 is meant to be used with Docker Swarm deployments, so you need to run Docker in Swarm mode; otherwise, just keep using V2 and its simpler interface for localhost development.
For example, restart is ignored because that responsibility now belongs to Docker Swarm, not to Docker itself.
Using the compatibility flag essentially converts your V3 compose file into a V2 compose file at runtime.
So in short, just use V3 if you want to run Docker in Swarm mode and take advantage of all its new features; it is, so to speak, a kind of Kubernetes in Docker land.
I am playing around with a single container docker image. I would like to store my db password as a secret without using compose (having probs with that and Gradle for now). I thought I could still use secrets even without compose but when I try I get...
$ echo "helloSecret" | docker secret create helloS -
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
Why do I need to use swarm mode just to use secrets? Why can't I use them without a cluster?
You need to run swarm mode for secrets because that's how docker implemented secrets. The value of secrets is that workers never write the secret to disk, the secret is distributed on a need-to-know basis (a worker does not receive the secret until a task that uses it is scheduled there), and managers encrypt the secret on disk. The storage of the secret on the managers uses the raft database.
You can easily deploy a single node swarm cluster with the command docker swarm init. From there, docker-compose up gets changed to docker stack deploy -c docker-compose.yml $stack_name.
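As a rough sketch of how a secret would then be declared in the compose file (the secret name, file, and image are placeholders), a version 3.1+ file might look like:
version: '3.1'
secrets:
  db_password:
    file: ./db_password.txt
services:
  app:
    image: myapp:latest
    secrets:
      - db_password
Inside the running container the secret then shows up as a file at /run/secrets/db_password.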
Secrets and configs in swarm mode provide a replacement for mounting single file volumes into containers for configuration. So without swarm mode on a single node, you can always make the following definition:
version: '2'
services:
  app:
    image: myapp:latest
    volumes:
      - ./secrets:/run/secrets:ro
Or you can separate the secrets from your app slightly by loading those secrets into a named volume. For that, you could do something like:
tar -cC ./secrets . | docker run -i -v secrets:/secrets busybox tar -xC /secrets
And then mount that named volume:
version: '2'
volumes:
  secrets:
    external: true
services:
  app:
    image: myapp:latest
    volumes:
      - secrets:/run/secrets:ro
Check out this answer: https://serverfault.com/a/936262 as provided by user sel-en-ium:
You can use secrets if you use a compose file. (You don't need to run a swarm).
You use a compose file with docker-compose: there is documentation for "secrets" in a docker-compose.yml file.
I switched to docker-compose because I wanted to use secrets. I am happy I did, it seems much more clean. Each service maps to a container. And if you ever want to switch to running a swarm instead, you are basically already there.
Unfortunately the secrets are not loaded into the container's environment, they are mounted to /run/secrets/
What we want to do:
We want to use docker-compose to link one already running container (A) to another container (B) by container name. We use "external_links" as both containers are started from different docker-compose.yml files.
Problem:
Container B fails to start with the error although a container with that name is running.
ERROR: for container_b Cannot start service container_b: Cannot link to a non running container: /PREVIOUSLY_LINKED_ID_container_a_1 AS /container_b_1/container_a_1
output of "docker ps":
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
RUNNING_ID container_a "/docker-entrypoint.s" 15 minutes ago Up 15 minutes 5432/tcp container_a_1
Sample code:
docker-compose.yml of Container B:
container_b:
  external_links:
    - container_a_1
What differs this question from the other "how to fix"-questions:
we can't use "sudo service docker restart" (which works) as this is a production environment
We don't want to fix this every time manually but find the reason so that we can
understand what we are doing wrong
understand how to avoid this
Assumptions:
It seems like two instances of the container_a exist (RUNNING_ID and PREVIOUSLY_LINKED_ID)
This might happen because we
rebuilt the container via docker-compose build and
changed the forwarded external port of the container (80801:8080)
Comment
Do not use docker-compose down as suggested in the comments, this removes volumes!
Docker links are deprecated so unless you need some functionality they provide or are on an extremely old version of docker, I'd recommend switching to docker networks.
Since the containers you want to connect appear to be started in separate compose files, you would create that network externally:
docker network create app_net
Then in your docker-compose.yml files, you connect your containers to that network:
version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_a:
    # ...
    networks:
      - app_net
Then in your container_b, you would connect to container_a as "container_a", not "container_a_1".
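For completeness, a sketch of container_b's compose file under the same assumption (the service's other settings are omitted) would mirror the one above:
version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_b:
    # ...
    networks:
      - app_net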
As an aside, docker-compose down is not documented to remove volumes unless you pass the -v flag. Perhaps you are using anonymous volumes, in which case I'm not sure that docker-compose up would know where to find your data. A named volume is preferred. More than likely, your data was not being stored in a volume, which is dangerous and removes your ability to update your containers:
$ docker-compose down --help
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used
Networks and volumes defined as `external` are never removed.
Usage: down [options]
Options:
--rmi type Remove images. Type must be one of:
'all': Remove all images used by any service.
'local': Remove only images that don't have a custom tag
set by the `image` field.
-v, --volumes Remove named volumes declared in the `volumes` section
of the Compose file and anonymous volumes
attached to containers.
--remove-orphans Remove containers for services not defined in the
Compose file
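If you do want the data to survive both container replacement and a plain docker-compose down (without -v), a minimal sketch with a named volume looks like the following; the postgres image and data path are only illustrative here:
version: '3'
volumes:
  db_data:
services:
  container_a:
    image: postgres
    volumes:
      - db_data:/var/lib/postgresql/data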
I am using a bash script to spin up a virtual network with two docker containers on it. This feels prehistoric. Is there some tool that can spin such an ensemble up and down & show its current status, or does one have to take care of that on their own?
In the case of docker-compose, it is unclear from the docker documentation whether docker-compose is self-contained or tied to swarm, and an authoritative example of a compose definition file, with commands for starting and stopping the ensemble, would be very helpful.
E.g. here is what a bash script would do to define/start an application of two interrelated containers, needless to say this script does not help with managing its lifecycle beyond just starting it up once.
docker network create --driver bridge FooAppNet
docker run --rm --net=FooAppNet --name=component1 -p 9000:9000 component1-image
docker run --rm --net=FooAppNet --name=component2 component2-image
Also in this example, container component1 exposes port 9000 to the host, and its contained application is hardwired in its configuration file to consume the service of component2 by its name (following the common docker networking practice of relying on docker networks' internal DNS).
For the example you've given, the following Docker Compose file would give you what you want:
component1:
  image: component1-image
  net: FooAppNet
  container_name: component1
  ports:
    - "9000:9000"
component2:
  image: component2-image
  net: FooAppNet
  container_name: component2
If you store this in a docker-compose.yml file and then run docker-compose up -d it will create/start/restart your containers and assign them to your FooAppNet network.
The -d flag runs the containers in detached mode and prevents the logging output from being printed to your terminal window when you start the containers. You can still get their logs via docker logs -f ... like with any other container.
You can then use docker-compose down and docker-compose restart etc to control the ensemble's lifecycle. As an aside, using variables can spice up the definition file towards greater flexibility.
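For example (assuming a HOST_PORT variable set in the shell or in an .env file next to the compose file), variable substitution lets you parametrize the host port:
component1:
  image: component1-image
  net: FooAppNet
  container_name: component1
  ports:
    - "${HOST_PORT}:9000"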
See in the comments below about using the network automatically spun up by docker compose.
TL;DR: see the beginning section of https://docs.docker.com/compose/networking/ for the solution. It walks you through the entire necessary configuration. It works nicely, but you need to master the various docker-compose command-line options to be productive with it.
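As a minimal sketch of what that page describes (using the image names from this question), a compose file with no explicit network definition lets component1 reach component2 simply by its service name on the default network that compose creates:
version: '3'
services:
  component1:
    image: component1-image
    ports:
      - "9000:9000"
  component2:
    image: component2-image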
I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article from the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname; docker will take care of resolving that hostname to the IP it assigned to the neo4j container
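Put together, a rough sketch of those steps with legacy links (the container names and alias are placeholders, and the node image is assumed to read its neo4j host from configuration) could be:
docker run --detach --name myneo4j neo4j
docker run --detach --name mynodejs --link myneo4j:db <your nodejs image>
# inside the node.js app configuration, use "db" as the neo4j hostname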
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
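If the application reads its connection string from an environment variable (as with the NEO4J_URL variable used later in this answer), you might pass it at run time; the image name is still a placeholder:
docker run --detach --name mynodejs --net mynet -e NEO4J_URL=http://myneo4j:7474 <your nodejs image>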
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none will disable authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use here the name of the neo4j container, my-neo4j-db). Furthermore, docker compose will instruct the Docker engine to expose the web container's port 3000 on docker host port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is up and running and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at location /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid a name conflict for the neo4j container and a port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.