Using the same containers with multiple projects on a local host - Docker

Problem
I have a few projects on my computer, all under active development.
I started working with Docker a few months ago, and I share some configuration between projects.
I clear my container list each time I switch projects. Why? Because my process usually has to look like this:
Delete all containers: docker rm $(docker ps -a -q)
Delete all images: docker rmi $(docker images -q)
Run docker-compose up -d
The problem shows up in every project where I have defined a composer service (as in the config below). When I switch projects without deleting images/containers, the composer/composer container already exists and won't start.
Of course, I use more services than this, but this is the simplest example.
docker-compose.yml
version: '2'
services:
  php:
    image: php:7.1.3-alpine
    volumes:
      - ./:/app
    working_dir: /app
  composer:
    image: composer/composer
    volumes_from:
      - php
    working_dir: /app
My environment
Mac OSX 10.11.6
Docker for mac: 17.03.1-ce-mac12 (17661)
So far
Searched for texts like: "Docker multiple use of containers" / "Docker multiple projects on local host with the same containers".
Read some blog posts about configuration, but didn't find a hint.
Summary
The topic is wide, and there are too many pages to sift through.
Maybe I missed something in my understanding of the concept or the configuration.
It would be nice to get some explanation and hints on how to manage docker-compose.yml files like this well, and what was wrong with my process.
Thanks.

Your question isn't very clear, but it sounds like you're using docker-compose to bring the containers up but relying on docker rm/docker rmi to take them down. Try doing everything with Docker Compose. Bring services up:
docker-compose up -d
Take services down but let volumes persist:
docker-compose down
Take services down and destroy volumes:
docker-compose down --volumes
https://docs.docker.com/compose/reference/down/
For the compose file you posted, you shouldn't generally need to use docker rm/docker rmi.
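One more thing worth knowing: Compose namespaces containers, networks, and volumes by project name, which defaults to the directory name and can be set explicitly with -p/--project-name. Two projects that use the same images can therefore coexist without any docker rm/docker rmi cleanup. A minimal sketch (the project names myapp-a and myapp-b are illustrative):
# in project A's directory: creates containers like myapp-a_php_1
docker-compose -p myapp-a up -d
# in project B's directory: same images, but separate containers
docker-compose -p myapp-b up -d
# tear down only project A; project B keeps running
docker-compose -p myapp-a down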

Related

How to update configuration files in Docker-compose volumes?

I'm running a docker-compose setup, and when I want to update files in my image I build a new Docker image. The problem is that the file I'm editing is located in a persistent volume. The new Docker image itself contains the change, but since I'm not deleting the docker-compose volumes, the old volume is mounted into the new image's container, so the old file shadows the new one.
Running docker-compose down -v is not an option because I want to keep the other existing files in the volume (logs etc.).
I want to know if it is possible to do this without too many hacks, since I'm looking to automate it.
Example docker-compose.yml
version: '3.3'
services:
  myService:
    image: myImage
    container_name: myContainer
    volumes:
      - data_volume:/var/data
volumes:
  data_volume:
NOTE: My process for making a change is:
docker-compose down
docker build -t myImage:t1 .
docker-compose up -d
You could start a throwaway container, mount the volume, and execute a command that deletes individual files. Something like:
docker run --rm -v data_volume:/var/data myImage rm /var/data/[file to delete]
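Putting it together, a sketch of an automatable update cycle. The file name /var/data/old.conf is hypothetical, and note that Compose prefixes volume names with the project name (check docker volume ls for the real name, e.g. myproject_data_volume):
docker-compose down
docker build -t myImage:t1 .
# one-shot container that removes only the stale file (hypothetical path)
docker run --rm -v myproject_data_volume:/var/data alpine rm /var/data/old.conf
docker-compose up -d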

Redis Docker not linking with other docker containers

I have two Docker images, one for jobservice and one for redis. I tried to link the Redis container into my jobservice container using the --link option.
The error is that Docker is unable to find the image.
When I remove the --link option, everything works fine.
Two docker images
$ docker image ls
gcr.io/sighmo-development/jobservice 1.0.1 f0a1a4458f89 11 seconds ago 874MB
redis latest f7302e4ab3a8 2 weeks ago 98.2MB
Docker ps command
$ docker ps
848cf2992a34 redis "docker-entrypoint.s…" 8 hours ago Up 8 hours 6379/tcp some-redis
docker command to run jobservice
$ docker run -d \
--env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
-v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
--link discovery:discovery \
--link sc_kafka:kafka \
--link scdb:scdb \
--link sc_redis:some-redis \
gcr.io/sighmo-development/jobservice:1.0.1
Expected: the docker run command links to the Redis container. Actual: Docker reports that the image is not found.
You have the container name and alias reversed. The container name should be first, and according to docker ps, your container is named some-redis:
--link some-redis:sc_redis
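Applied to the full command from the question, the invocation becomes (a sketch; the other --link targets are kept as-is and assume containers with those names exist):
docker run -d \
--env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
-v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
--link discovery:discovery \
--link sc_kafka:kafka \
--link scdb:scdb \
--link some-redis:sc_redis \
gcr.io/sighmo-development/jobservice:1.0.1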
It seems you're running separate containers that are not arranged by a Compose file, and I strongly suggest you use one, for several reasons:
you can achieve IaC (Infrastructure as Code) and commit it in a human-readable form
you can reproduce the whole stack with a single command (docker-compose up), and tear it down just as easily (docker-compose down)
you can easily use Docker networks and avoid the link feature, which is deprecated
That said, I'm missing some of the information needed to translate your current deployment into a Compose-based reference (I'm referring to sc_kafka, scdb and sc_redis), so your mileage may vary, but it should work once you add the required services.
First of all, ensure you have docker-compose installed and in your path, then put the content of this file in your working directory (I assume /home/amareswaran_cloud/lookmyjobs-repo):
version: '3.7'
services:
  redis:
    image: redis:latest
  sc_kafka:
    image: <KAFKA_IMAGE>
  scredis:
    image: <REDIS_IMAGE>
  scdb:
    image: <DB_IMAGE>
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    env_file:
      - ./LOOK_MY_JOBS/docker-env/env.list
    volumes:
      - ./../lookmyjobs-volume/jobservice:/home/ssl
With this simple Compose file, all containers can reach each other; just use the service name as the DNS name ({SERVICE_NAME}) and there you go.
An additional improvement would be to set up several networks in order to segregate services, but that's a next step you can take on your own later.
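For example, a minimal sketch of that segregation (the network name backend is illustrative); only services attached to the same network can reach each other:
version: '3.7'
services:
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    networks:
      - backend
  redis:
    image: redis:latest
    networks:
      - backend
networks:
  backend: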

Docker crash test with many containers of the same image

I would like to make a docker crash test on my server, to see how many containers based on the same image my server will support. (Because I've installed jupyterhub and I want to see how many containers can run in good condition.)
So how can I copy an existing container?
No need to copy an existing container, just create new ones of the same image. For your purposes I would recommend using the scale feature of docker-compose.
docker-compose.yml:
web:
  image: <someimage>
db:
  image: <someotherimage>
Then simply specify the number of containers you would like to start:
$ docker-compose up -d
$ docker-compose ps
$ docker-compose scale web=15 db=3
$ docker-compose ps
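Note that on newer Compose versions the standalone scale command is deprecated in favor of the --scale flag on up, so the equivalent would be:
$ docker-compose up -d --scale web=15 --scale db=3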

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (also using a Neo4j Docker container), but I'm a Docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why it is a bad idea in most cases, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article from the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long run.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname, docker will take care of forwarding that connection to the IP it assigned to the neo4j container
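A sketch of that legacy wiring (the container name myneo4j and alias db are illustrative):
docker run --detach --name myneo4j neo4j
docker run --detach --name mynodejs --link myneo4j:db <your nodejs image>
# inside mynodejs, the hostname "db" now resolves to the neo4j container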
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
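If the application reads its database host from an environment variable, as the example app later in this answer does via NEO4J_URL, the hostname can be injected at run time; a sketch:
docker run --detach --name mynodejs --net mynet \
    --env NEO4J_URL=http://myneo4j:7474 <your nodejs image>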
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none disables authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container, setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, Docker Compose will instruct the Docker engine to expose the web container's port 3000 on the docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is up and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at location /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.

how can I create a data-container only using docker-compose.yml?

This question is coming from an issue on the Docker's repository:
https://github.com/docker/compose/issues/942
I can't figure out how to create a data container (no process running) with docker compose.
UPDATE: Things have changed over the last years. Please refer to the answer from Frederik Wendt for a good and up-to-date solution.
My old answer: Exactly how to do it depends a little on what image you are using for your data-only-container. If your image has an entrypoint, you need to overwrite this in your docker-compose.yml. For example this is a solution for the official MySql image from docker hub:
DatabaseData:
  image: mysql:5.6.25
  entrypoint: /bin/bash
DatabaseServer:
  image: mysql:5.6.25
  volumes_from:
    - DatabaseData
  environment:
    MYSQL_ROOT_PASSWORD: blabla
When you do a docker-compose up on this, you will get a container like ..._DatabaseData_1, which shows a status of Exited when you call docker ps -a. Further investigation with docker inspect will show that it has a timestamp of 0. That means the container was never run, as stated by the owner of docker compose here.
Now, as long as you don't do a docker-compose rm -v, your data-only container (..._DatabaseData_1) will not lose its data. So you can do docker-compose stop and docker-compose up as often as you like.
In case you like to use a dedicated data-only image like tianon/true this works the same. Here you don't need to overwrite the entrypoint, because it doesn't exist. It seems like there are some problems with that image and docker compose. I haven't tried it, but this article could be worth reading in case you experience any problems.
In general it seems to be a good idea to use the same image for your data-only container that you are using for the container accessing it. See Data-only container madness for more details.
The other answers to this question are quite out of date, and data volumes have been supported for some time now. Example:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
See
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose for details and options.
A data only container (DOC) is a container that is created only to serve as a volume provider. The container itself has no function other than that other containers can mount its volumes by using the volumes_from directive.
The DOC has to run only once to create the volume. Other containers can reference the volumes in it even if it's stopped.
The OP Question:
The docker-compose.yml starts the DOC every time you do a docker-compose up. The OP asks for an option to only create the container and volume, not run it, using some sort of create_only: true option.
As mentioned in the issue linked from the OP's question, you can:
either create a data container with the same name as the one specified in the docker-compose.yml beforehand, and run docker-compose up --no-recreate (the one specified in docker-compose.yml won't be recreated),
or run a container with a simple command which never returns, like tail -f /dev/null.
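A sketch of that second option in the same v1 Compose format used above (the service name, image, and volume path are illustrative); the container idles forever, keeping its volume available to volumes_from consumers:
DataContainer:
  image: alpine
  command: tail -f /dev/null
  volumes:
    - /var/data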
