Cannot connect to the Docker daemon from within container - docker

I run a container (cAdvisor) that needs to access the Docker Engine of the host.
When I run it as a service with the command line, everything works fine:
docker service create --name cadvisor \
  --network clusternetwork -p 8080:8080 \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro \
  --mount type=bind,src=/,dst=/rootfs,ro \
  --mount type=bind,src=/sys,dst=/sys,ro \
  --mount type=bind,src=/var/lib/docker,dst=/var/lib/docker,ro \
  gcr.io/google-containers/cadvisor:latest
But when I transpose the above service to a docker-compose file and deploy it using docker stack deploy -c myCadvisor-compose.yml cAdvisor, it doesn't work and I get the following error: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Here is my docker-compose file. Did I forget to transpose something from the above service call?
version: "3.7"
services:
cadvisor:
image: gcr.io/google-containers/cadvisor:latest
ports:
- 8080:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock,ro
- /:/rootfs,ro
- /sys:/sys,ro
- /var/lib/docker:/var/lib/docker,ro
networks:
- clusternetwork
networks:
clusternetwork:
external: true

I have tested your code. The main issue is the ",": you must use a ":" instead. Whether you mount with rw or ro doesn't matter. In my case the working configuration looks like this:
volumes:
  - /:/rootfs:ro
  - /var/run/docker.sock:/var/run/docker.sock:rw
  - /sys:/sys:ro
  - /var/lib/docker/:/var/lib/docker:ro
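For completeness, here is the full corrected compose file (the question's file with the comma separators replaced by colons); it should then deploy cleanly with docker stack deploy:
version: "3.7"
services:
  cadvisor:
    image: gcr.io/google-containers/cadvisor:latest
    ports:
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /:/rootfs:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    networks:
      - clusternetwork
networks:
  clusternetwork:
    external: true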
Cheers Jules


How do I convert docker-compose configuration to dockerfile

I am a bit confused. I was trying to convert a docker-compose file for Elasticsearch and Kibana to a Dockerfile, but the networking and connectivity parts are confusing me. Can anyone help me with the conversion and a bit of explanation?
Thanks a lot!
version: "3.0"
services:
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
environment:
- xpack.security.enabled=true
- "discovery.type=single-node"
networks:
- es-net
ports:
- 9200:9200
kibana:
container_name: kb-container
image: docker.elastic.co/kibana/kibana:6.5.4
environment:
- ELASTICSEARCH_HOSTS=http://es-container:9200
networks:
- es-net
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
es-net:
driver: bridge
Docker Compose and Dockerfiles are completely different things. The Dockerfile is a configuration file used to create Docker images. The docker-compose.yml file is a configuration file used by Docker Compose to launch Docker containers using Docker images.
To launch the above containers without using Docker Compose you could run:
docker network create es-net
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" -p 9200:9200 --network es-net --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://es-container:9200 -p 5601:5601 --network es-net --name kb-container docker.elastic.co/kibana/kibana:6.5.4
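As a quick sanity check once both containers are up (note that with xpack security enabled, Elasticsearch may answer with an authentication error, which still proves the container is reachable):
docker ps --filter name=es-container --filter name=kb-container
curl -i http://localhost:9200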
Alternatively, you could run the containers on the host's network stack (rather than the es-net network). The -p flags are dropped here because Docker ignores port publishing with --network host; Kibana would then be able to talk to Elasticsearch on localhost:
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" --network host --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://localhost:9200 --network host --name kb-container docker.elastic.co/kibana/kibana:6.5.4
(I haven't actually run these so the commands might need some tweaking).
In that docker-compose.yml file, the only thing that could be built into an image at all are the environment variables, and there's not much benefit to hard-coding your deployment configuration like this. In particular you cannot force the eventual container name or manually specify the eventual networking configuration in an image.
If you're looking for a compact self-contained description of what to run that you can redistribute, the docker-compose.yml is it. Don't try to send around images, or focus on trying to have a single container; instead, distribute the docker-compose.yml file as the way to run your application. I'd consider Compose a standard enough tool that anyone who has Docker already has it and knows how to run docker-compose up -d.
# How to run this application on a different system
# (with Docker and Compose preinstalled):
here$ scp docker-compose.yml there:
here$ ssh there
there$ sudo docker-compose up -d

How to use docker command in container?

I want to use the docker command inside a container on CentOS 7.8.
I have already installed Docker on the CentOS host and want to use the docker command inside a Docker container.
So I added volumes to the docker-compose file like below.
services:
  test_container:
    container_name: test
    image: app:${DOCKER_TAG}
    privileged: true
    ports:
      - 80:3000
    environment:
      ENVIRONMENT: develop
    volumes:
      - /var/lib/docker:/var/lib/docker
      - /lib/systemd/system/docker.service:/lib/systemd/system/docker.service
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
      - /etc/sysconfig/docker:/etc/sysconfig/docker
But when I run docker-compose and use the docker command in the container, it shows this:
You don't have either docker-client or docker-client-latest installed. Please install either one and retry.
How could I fix this? or How could I use the docker command in docker container?
Thank you for reading my questions.
In order to run Docker in a Docker container, you should use "DinD" (Docker in Docker) with privileges. Something like this should work:
docker run --privileged -d docker:dind
Another option: instead of starting "child" containers like DinD, the following mounts the host's Docker socket and starts "sibling" containers.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
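Inside that container, the docker CLI talks to the host's daemon through the mounted socket, which you can verify with:
docker ps    # run inside the container; it lists the host's containers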
For docker-compose:
version: "2"
services:
docker-in-docker:
image: docker:dind
privileged: true
expose:
- 2375
- 2376
node1:
image: docker
links:
- docker-in-docker
environment:
DOCKER_HOST: tcp://docker-in-docker:2375
command: docker ps -a
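As for the original error: on CentOS 7, /usr/bin/docker is typically a wrapper script that checks which Docker client package is installed, so bind-mounting it into a container produces the "docker-client" message above. Here is a sketch of the question's compose file using the sibling-container approach instead (this assumes the docker CLI is installed inside the app image itself, e.g. via its package manager, rather than bind-mounted from the host):
services:
  test_container:
    container_name: test
    image: app:${DOCKER_TAG}  # the image must contain the docker CLI
    ports:
      - 80:3000
    environment:
      ENVIRONMENT: develop
    volumes:
      # only the socket is needed; docker commands then run against the host daemon
      - /var/run/docker.sock:/var/run/docker.sock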

Merge two docker run commands together

I have two docker run commands as given below, but I would like to merge these two commands together and execute the result.
1st command - Start orthanc just with web viewer enabled
docker run -p 8042:8042 -e WVB_ENABLED=true osimis/orthanc
2nd command - Start Orthanc with mounted directories
docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc \
  -v $(pwd)/orthanc/orthanc.json:/etc/orthanc/orthanc.json \
  -v $(pwd)/orthanc/orthanc-db:/var/lib/orthanc/db \
  jodogne/orthanc-plugins /etc/orthanc --verbose
As you can see, in both cases Orthanc is being started, but I would like to merge these into one and start Orthanc once, with the web viewer enabled and the directories mounted.
Can you let me know how this can be done?
Use docker-compose; it is specifically targeted at running multiple containers.
docker-compose.yml
version: '3'
services:
  osimis:
    image: osimis/orthanc
    environment:
      WVB_ENABLED: 'true'
    ports:
      - 8043:8042  # host port remapped; 8042 is used by the orthanc service below
  orthanc:
    image: jodogne/orthanc-plugins
    environment:
      WVB_ENABLED: 'true'
    ports:
      - 4242:4242
      - 8042:8042
    volumes:
      - ./orthanc/orthanc.json:/etc/orthanc/orthanc.json
      - ./orthanc/orthanc-db:/var/lib/orthanc/db
    command: /etc/orthanc --verbose
and run docker-compose up to finish the work.
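If you literally want a single container rather than two, a merged docker run might look like the sketch below. This assumes the osimis/orthanc image ships the web viewer (enabled via WVB_ENABLED) and accepts the same configuration path and --verbose flag as jodogne/orthanc-plugins, which is worth verifying against the image documentation:
docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc \
  -e WVB_ENABLED=true \
  -v $(pwd)/orthanc/orthanc.json:/etc/orthanc/orthanc.json \
  -v $(pwd)/orthanc/orthanc-db:/var/lib/orthanc/db \
  osimis/orthanc /etc/orthanc --verbose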

Connecting FTP container works with docker-compose and not with docker run

I need to connect to an FTP server from my my_go_app container.
When I run it via docker-compose, I can connect with:
apk add lftp
lftp -d ftp://julien:test@ftpd-server
and it connects well
but when I try to run my container via docker run, I cannot connect to the FTP server anymore.
Here is the command I use:
docker run --name my_go_app --rm -v volume:/go my_go_app:exp --network=my_go_app_network --env-file ./test.env
Here is the working docker-compose.yml
version: '3'
services:
  my_go_app:
    image: my_go_app:exp
    volumes:
      - ./volume:/go
    networks:
      my_go_app_network:
    env_file:
      - test.env
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "30000-30009:30000-30009"
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "julien"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/www/julien"
    restart: on-failure
    networks:
      my_go_app_network:
networks:
  my_go_app_network:
    external: true
EDIT:
I added the network as external and created it manually with:
docker network create my_go_app_network
Now it appears that my_go_app is part of the default network:
my_go_app git:(tests) ✗ docker inspect my_go_app -f "{{json .NetworkSettings.Networks }}"
{"bridge":{"IPAMConfig":null,"Links":null,"Aliases":null,"NetworkID":"62b2dff15ff00d5cd56c966cc562b8013d06f18750e3986db530fbb4dc4cfba7","EndpointID":"6d0a81a83cdf639ff13635f0a38eeb962075cd729181b7c60fadd43446e13607","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02","DriverOpts":null}}
➜ my_go_app git:(tests) ✗ docker network ls
NETWORK ID NAME DRIVER SCOPE
62b2dff15ff0 bridge bridge local
f33ab34dd91d host host local
ee2d604d6604 none null local
61a661c82262 my_go_app_network bridge local
What am I missing?
Your network my_go_app_network should be declared as "external", otherwise Compose will create a network called "project_name_my_go_app_network". Therefore your Go app was not in the same network as the FTP server.
(I guess you have created my_go_app_network manually so your docker run did not throw any network not found error.)
EDIT
You put the arguments in the wrong order. The image name has to be the last one; otherwise everything after it is treated as the command for the container. Try:
docker run --name my_go_app --rm -v volume:/go --network=my_go_app_network --env-file ./test.env my_go_app:exp
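After rerunning with the corrected argument order, the inspect command from the question should report my_go_app_network instead of the default bridge:
docker inspect my_go_app -f "{{json .NetworkSettings.Networks }}"
# expected: a "my_go_app_network" key rather than "bridge"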

Running Kudu in Docker with master/tserver two-way connection (circular link) issues - docker-compose

How can you run Kudu, which requires two containers (one for the master and one for the tserver), under Docker when the two containers need to reach each other by DNS?
Kudu can be run under Docker using the following commands:
docker run --name kudu-master --hostname kudu-master --detach --publish 8051:8051 --publish 7051:7051 kunickiaj/kudu master
and:
docker run --name kudu-tserver --hostname kudu-tserver --detach --publish 8050:8050 --publish 7050:7050 --link kudu-master --env KUDU_MASTER=kudu-master kunickiaj/kudu tserver
However, the above defines a one-way link, from kudu-tserver to kudu-master, and not vice versa.
For Kudu to function correctly, both kudu-master and kudu-tserver need to be able to connect to each other.
How can the Docker containers be configured, so that the two way link works?
Docker image reference
Similar image reference
The link parameter in docker run is a legacy feature which may be removed (references [1] and [2]).
You can raise multiple Docker containers and connect them to each other using docker-compose.
To get this working, create a folder named kudu and place the following docker-compose.yml file under it:
version: '3'
services:
  kudu-master:
    image: "kunickiaj/kudu"
    hostname: kudu-master
    ports:
      - "8051:8051"
      - "7051:7051"
    command:
      master
    networks:
      kudu_network:
        aliases:
          - kudu-master
  kudu-tserver:
    image: "kunickiaj/kudu"
    hostname: kudu-tserver
    ports:
      - "8050:8050"
      - "7050:7050"
    environment:
      - KUDU_MASTER=kudu-master
    command:
      tserver
    networks:
      kudu_network:
        aliases:
          - kudu-tserver
networks:
  kudu_network:
This file includes 2 services (kudu-master and kudu-tserver) and a network within which both have aliases which are visible to the rest of the network (to each other). [File reference]
Then run docker-compose using the following command line:
docker-compose -f "filePathToYourDockerComposeYmlFile" up -d
or, if you want to recreate the Docker containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" up -d --force-recreate
Other useful commands [reference]:
To stop the containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" stop
To remove the containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" rm -f
