Set storage-driver in Docker Compose file - docker

I need to run the DinD Docker image with the overlay2 storage driver, so I'd normally execute (as explained on the dind Docker Hub page):
docker run --privileged -d --name inner-docker docker:dind --storage-driver=overlay2
Is there a way to set the storage-driver option in docker-compose.yml?
e.g.
app-docker:
  container_name: inner-docker
  image: docker:dind
  privileged: true
  storage_driver: overlay2
I could not find any trace of it in the Compose file docs (overlay is only mentioned as a network driver there).
I tried storage_driver, storage-driver and similar variants with no luck.
There is an identically named option discussed here, but it seems to have a totally different scope.

When you run the following:
docker run --privileged -d --name inner-docker docker:dind --storage-driver=overlay2
you are passing --storage-driver=overlay2 as an argument to the docker:dind image (i.e. to the inner dockerd), not as an option to docker run. So use this instead:
app-docker:
  container_name: inner-docker
  image: docker:dind
  privileged: true
  command: --storage-driver=overlay2
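Note that command accepts either a string or a list in Compose; the list form makes it easier to append further dockerd flags later. A minimal sketch of the same service in list form:

```yaml
app-docker:
  container_name: inner-docker
  image: docker:dind
  privileged: true
  # list form of command; additional dockerd flags can be added as further items
  command: ["--storage-driver=overlay2"]
```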

Related

How do I convert docker-compose configuration to dockerfile

I am a bit confused. I was trying to convert a docker-compose file for Elasticsearch and Kibana into a Dockerfile, but the networking and connectivity part is confusing to me. Can anyone help me with the conversion and a bit of explanation?
Thanks a lot!
version: "3.0"
services:
  elasticsearch:
    container_name: es-container
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    environment:
      - xpack.security.enabled=true
      - "discovery.type=single-node"
    networks:
      - es-net
    ports:
      - 9200:9200
  kibana:
    container_name: kb-container
    image: docker.elastic.co/kibana/kibana:6.5.4
    environment:
      - ELASTICSEARCH_HOSTS=http://es-container:9200
    networks:
      - es-net
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
networks:
  es-net:
    driver: bridge
Docker Compose and Dockerfiles are completely different things. The Dockerfile is a configuration file used to create Docker images. The docker-compose.yml file is a configuration file used by Docker Compose to launch Docker containers using Docker images.
To launch the above containers without using Docker Compose you could run:
docker network create es-net
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" -p 9200:9200 --network es-net --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://es-container:9200 -p 5601:5601 --network es-net --name kb-container docker.elastic.co/kibana/kibana:6.5.4
Alternatively, you could run the containers on the host's network stack (rather than the es-net network). Kibana would then be able to talk to Elasticsearch on localhost:
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" --network host --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://localhost:9200 --network host --name kb-container docker.elastic.co/kibana/kibana:6.5.4
(I haven't actually run these so the commands might need some tweaking).
In that docker-compose.yml file, the only thing that could be built into an image at all are the environment variables, and there's not much benefit to hard-coding your deployment configuration like this. In particular you cannot force the eventual container name or manually specify the eventual networking configuration in an image.
If you're looking for a compact self-contained description of what to run that you can redistribute, the docker-compose.yml is it. Don't try to send around images, or focus on trying to have a single container; instead, distribute the docker-compose.yml file as the way to run your application. I'd consider Compose a standard enough tool that anyone who has Docker already has it and knows how to run docker-compose up -d.
# How to run this application on a different system
# (with Docker and Compose preinstalled):
here$ scp docker-compose.yml there:
here$ ssh there
there$ sudo docker-compose up -d

how to bring up a docker-compose container as privileged

I was running my container with the command sudo docker run --privileged container_name. But now I'm using a YAML file and the command docker-compose up to bring it up, and I don't know how to add the --privileged flag when starting the container that way. I already tried adding privileged: true to the yml but it doesn't work in that case.
There is a dedicated parameter for this:
web:
  image: an_image-image:1.0
  container_name: my-container
  privileged: true
  entrypoint: ["/usr/sbin/init"]
  ports:
    - "8280:8280"
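To confirm the flag actually took effect after docker-compose up, one way (a sketch assuming the container name above and a running Docker daemon) is to inspect the container:

```shell
# Prints "true" when the container was started with the privileged flag
docker inspect --format '{{.HostConfig.Privileged}}' my-container
```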
I solved it myself by doing the following:
In the docker-compose.yml file I have these two lines specifying the image and the container's name:
version: "3"
services:
  app:
    image: my_image
    container_name: my-container
So to run it with the --privileged flag I used the command: sudo docker run --privileged my_image (note that docker run takes an image name, not a container name).

How to use docker command in container?

I want to use the docker command in a container on CentOS 7.8.
I already installed Docker on the CentOS host and want to use the docker command inside a Docker container.
So I added volumes in the docker-compose file like below.
services:
  test_container:
    container_name: test
    image: app:${DOCKER_TAG}
    privileged: true
    ports:
      - 80:3000
    environment:
      ENVIRONMENT: develop
    volumes:
      - /var/lib/docker:/var/lib/docker
      - /lib/systemd/system/docker.service:/lib/systemd/system/docker.service
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
      - /etc/sysconfig/docker:/etc/sysconfig/docker
But when I run docker compose and use the docker command in the container, I get this error:
You don't have either docker-client or docker-client-latest installed. Please install either one and retry.
How could I fix this, or how can I use the docker command inside a Docker container?
Thank you for reading my question.
In order to run Docker in a Docker container, you should use "DinD" (Docker in Docker) with privileges. Something like this should work:
docker run --privileged -d docker:dind
Another option is to mount the host's Docker socket; instead of starting "child" containers as DinD does, this starts "sibling" containers:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
    -ti docker
For docker compose:
version: "2"
services:
  docker-in-docker:
    image: docker:dind
    privileged: true
    expose:
      - 2375
      - 2376
  node1:
    image: docker
    links:
      - docker-in-docker
    environment:
      DOCKER_HOST: tcp://docker-in-docker:2375
    command: docker ps -a
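The socket-mounting "sibling" variant can also be written in Compose; a minimal sketch (the service name here is an assumption, not from the original answer):

```yaml
version: "2"
services:
  docker-cli:
    image: docker
    volumes:
      # Reuse the host's Docker daemon instead of running an inner one
      - /var/run/docker.sock:/var/run/docker.sock
    command: docker ps -a
```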

passing docker swarm secret to sibling container

I have a docker swarm which I start with this compose file:
version: "3.1"
services:
  my_service:
    image: my_image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /run/secrets:/run/secrets
    secrets:
      - my-secret
secrets:
  my-secret:
    file: my_secret.txt
Now, within the container running my_service, I start a new sibling container (note I've mounted the docker socket), which I want to have access to my-secret, although it's not part of the swarm.
What's the best way to do this?
Simply mounting the secrets as a volume (docker run -v /run/secrets:/run/secrets sibling_image) doesn't work: the sibling container can see my-secret, but it's empty.
Passing an environment variable works, but it's a little too cumbersome if I have many secrets: docker run -it --env MY_SECRET=$(cat /run/secrets/my-secret) sibling_image
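The env-var approach above can be generalized with a small helper. This is only a hypothetical sketch (the function name is an invention, and it breaks on secret values containing whitespace):

```shell
# Hypothetical helper: turn every file in a secrets directory into
# "--env NAME=value" arguments (file "my-secret" becomes MY_SECRET=<contents>).
secret_env_args() {
  dir="$1"
  args=""
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    # Uppercase the file name and map "-" to "_" to form the variable name
    name=$(basename "$f" | tr 'a-z-' 'A-Z_')
    args="$args --env $name=$(cat "$f")"
  done
  printf '%s' "$args"
}

# Usage sketch: docker run -it $(secret_env_args /run/secrets) sibling_image
```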

I need to run many apache2.0 servers on different docker container and give each one port number

I am quite new to Docker and I need to run 8 Apache 2.0 servers in different Docker containers, giving each container a port number, using Compose.
I found an apache2.0 image and I created a container with this command:
docker create -t -i lamsley/apache2.0
How can I create many web servers and give each one a port number so that I can access them over the internet?
With just Docker you can run:
docker run --name server1 -d -p 8000:80 lamsley/apache2.0
docker run --name server2 -d -p 8001:80 lamsley/apache2.0
...
It's easier with Docker Compose:
version: '2'
services:
  httpd1:
    image: lamsley/apache2.0
    container_name: httpd1
    ports:
      - "8000:80"
  httpd2:
    image: lamsley/apache2.0
    container_name: httpd2
    ports:
      - "8001:80"
...
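To get eight such servers without typing the commands by hand, one option is a small loop. This sketch prints the docker run commands rather than executing them, so you can review them first (or pipe the output to sh to run them):

```shell
# Emit one "docker run" command per server: server1 -> port 8000 ... server8 -> port 8007
for i in $(seq 1 8); do
  port=$((7999 + i))
  echo docker run --name "server$i" -d -p "${port}:80" lamsley/apache2.0
done
```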
But I strongly suggest you learn Docker first because these snippets are simplistic. You need to know about volumes to pass the content to be served, etc. Why use lamsley/apache2.0 when you can use the official httpd image? You can build your own custom image. The possibilities are endless and it is fun.
To learn about Docker Compose:
https://docs.docker.com/compose/
To learn about volumes:
https://docs.docker.com/engine/tutorials/dockervolumes/
