Windows 10 host wrongly uses UNIX socket instead of npipe - docker

Docker-compose should use the npipe protocol on Windows by default, but it doesn't. The following logs prove that (Win 10 Pro 64-bit host):
Log 1: Failed to retrieve information of the docker client and server host: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Log 2: Provider connection error Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
As the logs show, there was an attempt to use unix:///var/run/docker.sock. This is a UNIX socket and not a pipe (named pipe), which is what Windows handles natively.
OK, so docker-compose has a problem with its default configuration. Let's set the pipe explicitly so it is used instead of the UNIX socket (declaring an npipe volume with the long syntax):
# docker-compose.yml
version: '3.2'
services:
  traefik:
    image: traefik
    command: --api --docker
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - type: npipe # here we are
        source: ./pipe
        target: /pipe/docker_engine
But guess what? We get the same UNIX socket error.
I have also tried - ./pipe/docker_engine://./pipe/docker_engine, but it failed once again.
What am I missing here?
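For anyone hitting the same wall, two things may be worth trying; both are assumptions on my part, not a confirmed fix. First, the ./pipe source above is a relative host path rather than the engine's pipe; a named-pipe mount normally references the pipe by its full Windows path on both sides:

```yaml
# sketch, not a confirmed fix: reference the engine's named pipe by its
# full Windows path instead of a relative ./pipe source
services:
  traefik:
    image: traefik
    volumes:
      - type: npipe
        source: \\.\pipe\docker_engine
        target: \\.\pipe\docker_engine
```

Second, the transport Compose itself uses to talk to the daemon is governed by the DOCKER_HOST environment variable; setting it to npipe:////./pipe/docker_engine forces the named-pipe transport regardless of what the compose file mounts.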

Related

How to get around network mapping issue "Bind for 0.0.0.0:8080 failed: port is already allocated"

I'm trying to build a Jenkins Docker container by following this page so I can test locally. The problem is that once I've run docker run -it -p 8080:8080 jenkins/jenkins:lts, it seems I cannot use the same port in my docker-compose.yml:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    user: root
    privileged: true
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - .jenkins/jenkins_configuration:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
The error shown in PowerShell (I'm on Windows 10, if that's relevant) is:
Error response from daemon: driver failed programming external connectivity on endpoint jenkins (xxxx): Bind for 0.0.0.0:8080 failed: port is already allocated
I've made sure it's not caused by another container, image or volume, and I have deleted everything apart from this.
I want to use Jenkins locally, but how can I get around this? I'm not familiar with networking, and what I've googled so far has not worked for me. I would like to be able to use the Jenkins UI at localhost:8080.
If port 8080 is already allocated on your host machine, you can simply map a different host port to port 8080 of the container. Two things can't be bound to the same port on the host machine. To map 8081, for example, change your compose file to the following:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    user: root
    privileged: true
    ports:
      - 8081:8080 # a different host port is mapped here
      - 50000:50000
    volumes:
      - .jenkins/jenkins_configuration:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
Then you just need to access the container started by docker-compose at localhost:8081 rather than localhost:8080.
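The rule behind the error is general: only one listener can bind a given host address and port at a time, whether it is a container's published port or any other process. A stand-in demonstration with plain sockets (no Docker involved, purely to illustrate the principle):

```shell
# second bind on the same port fails with "address already in use" --
# the same rule behind Docker's "port is already allocated" error
python3 - <<'EOF'
import socket

a = socket.socket()
a.bind(("0.0.0.0", 18080))      # first listener: succeeds

b = socket.socket()
try:
    b.bind(("0.0.0.0", 18080))  # second listener on the same port
except OSError:
    print("second bind failed")
finally:
    a.close()
    b.close()
EOF
```

Tools like netstat -ano (Windows) or ss -ltnp (Linux) show which process currently holds a port.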

Connect host http server from Docker container that runs in WSL

I have an HTTP client that runs in Docker. I need it to send requests to a server that runs locally. Docker runs on Ubuntu under WSL on Windows 10. I have the following docker-compose.yml:
version: "3.8"
services:
  test-http:
    image: test-http
    container_name: test-http
    ports:
      - "3010:3000"
    environment:
      - APP_PORT=3000
      - APP_API_URL=http://host.docker.internal:18200
    extra_hosts:
      - host.docker.internal:host-gateway
As you may expect, my HTTP server runs on Ubuntu via WSL on port 18200. When the client sends a request from Docker to the host, I get the following error:
Delete "http://host.docker.internal:18200": dial tcp 172.17.0.1:18200: connect: connection refused
How should I configure compose to make it work?
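One frequent cause of exactly this refusal (an assumption here, not confirmed by the poster): host-gateway resolves to the Docker bridge gateway (172.17.0.1, matching the error above), and a server bound only to 127.0.0.1 is unreachable from that address. Binding the server to 0.0.0.0 fixes it. A stand-in demonstration using python3's built-in HTTP server in place of the real one:

```shell
# stand-in for the real server: bind to all interfaces, not just loopback,
# so the docker bridge gateway (172.17.0.1) can reach it
python3 -m http.server 18200 --bind 0.0.0.0 &
SRV=$!
sleep 1
# now reachable on any interface; checked here via loopback
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:18200/
kill $SRV
```

With the real server bound that way, the compose file above (host.docker.internal plus host-gateway) should need no further changes.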

monitor host machine using metricbeat system module from inside a docker container

I'm using docker-compose to configure the system module of Metricbeat. I have created metricbeat.yml and system.yml and mounted them in my docker-compose file, e.g.:
/opt/prism/config/metricbeat/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
but when I restart the container I get the following error:
Exiting: error initializing publisher: error initializing processors: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.22/containers/json?limit=0: dial unix /var/run/docker.sock: connect: permission denied
Chances are you did not format the docker-compose.yml file correctly in the part where you mount the Docker socket.
It also helps to run the container as the root user.
version: '3.9'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.15.2
    restart: unless-stopped
    environment:
      - "ELASTICSEARCH_HOSTS=http://elasticsearch:9200"
    volumes:
      - ./monitor/filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /home/docker-data/filebeat:/usr/share/filebeat/data
      - /home/docker-log:/usr/share/docker-log
    user: root
This is what I have. Unfortunately, this setup reports the stats of the container, as I've only configured the docker module; if you configure the system module correctly, it should work.
It will certainly solve your permissions problem.
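For the system module specifically, the Elastic docs have Metricbeat read the host's /proc and /sys through a bind-mounted hostfs. A sketch adapted to the style of the compose file above (image tag, config path, and the hostfs flag are taken from the Metricbeat-on-Docker docs; verify them against your stack version):

```yaml
# sketch: system-module mounts per the Metricbeat-on-Docker docs;
# -system.hostfs tells the module to read host metrics from /hostfs
metricbeat:
  image: docker.elastic.co/beats/metricbeat:7.15.2
  user: root
  command: ["-system.hostfs=/hostfs"]
  volumes:
    - ./monitor/metricbeat/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /proc:/hostfs/proc:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
    - /:/hostfs:ro
```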

Use of docker:dind in docker-compose

So, for some reason, I'd like to use docker:dind inside a docker-compose.yml.
I know the "easy" way is to mount the socket directly into the image (like this: /var/run/docker.sock:/var/run/docker.sock), but I want to avoid that for security reasons.
Here is my experimental docker-compose.yml:
version: '3.8'
services:
  dind:
    image: docker:19.03.7-dind
    container_name: dind
    restart: unless-stopped
    privileged: true
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - dind-certs-ca:/certs/ca
      - dind-certs-client:/certs/client
    networks:
      - net
    expose:
      - 2375
      - 5000
volumes:
  dind-certs-ca:
  dind-certs-client:
networks:
  net:
    driver: bridge
Nothing complex here. Then I check whether the service is correctly up:
docker logs dind
No problem here: it is up and running.
However, once I try to use it with, for instance:
docker run --rm -it --network net --link dind:docker docker version
I get the following error:
Cannot connect to the Docker daemon at tcp://docker:2375. Is there a daemon running?
Do you have any idea why the daemon is not responding?
---------------------------------------------------------- EDIT ----------------------------------------------------------
Following hariK's comment (thanks, by the way) I added port 2376 to the exposed ones. I think I'm near to solving my issue. Here is the error that I get:
error during connect: Get http://docker:2375/v1.40/version dial tcp: lookup on docker on [ip]: no such host
So I looked into this error and found that it seems to be a recurrent one with dind versions (there are a lot of issues about it on GitLab, like this one). There is also a post on Stack Overflow about a similar issue with GitLab here.
For the workaround I tried:
Setting the value DOCKER_TLS_CERTDIR: "" hoping to turn off TLS... but it failed.
Downgrading the version to docker:18.05-dind. It actually worked, but I don't think it's a good move to make.
If someone has an idea to keep TLS on and make it work, that would be great :) (I'll keep looking on my own, but if you can give me a nudge with interesting links, that would be cool ^^)
To use Docker with TLS disabled (i.e. TCP port 2375 by default), unset the DOCKER_TLS_CERTDIR variable in your dind service definition in Docker Compose, like so:
dind:
  image: docker:dind
  container_name: dind
  privileged: true
  expose:
    - 2375
  environment:
    - DOCKER_TLS_CERTDIR=
(NB: do not initialize it to any value like '' or "")
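With TLS disabled this way, a sibling client service can reach the daemon over plain TCP on port 2375. A minimal sketch (the client service and image names are assumptions, not from the answer):

```yaml
# sketch: a client container talking to the TLS-less dind daemon
docker-client:
  image: docker:cli
  depends_on:
    - dind
  environment:
    DOCKER_HOST: tcp://dind:2375
```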
So I found a solution: I added to the basic docker-compose file a registry with TLS options.
I first had to generate the certs and then mount them correctly.
If any of you run into a similar issue, I made a GitHub repo with the docker-compose file and the command lines for the certs.
Some time later, I was looking for the same thing.
Here is an example with specific versions for the images, which should still work a few years from now:
version: '3'
services:
  docker:
    image: docker:20.10.17-dind-alpine3.16
    privileged: yes
    volumes:
      - certs:/certs
  docker-client:
    image: docker:20.10.17-cli
    command: sh -c 'while [ 1 ]; do sleep 1; done'
    environment:
      DOCKER_HOST: tcp://docker:2376
      DOCKER_TLS_VERIFY: 1
      DOCKER_CERT_PATH: /certs/client
    volumes:
      - certs:/certs
volumes:
  certs:
The TLS certificates are generated by the "docker" service on startup and shared using a volume.
Use the client as follows:
docker-compose exec docker-client sh
# now within the docker-client container
docker run hello-world

Syslog driver not working with docker compose and elk stack

I want to send logs from one container running my_service to another running the ELK stack, using the syslog driver (so I will need the logstash-input-syslog plugin installed).
I am tweaking this elk image (and tagging it as elk-custom) via the following Dockerfile-elk, using port 514 because this seems to be the default syslog port:
FROM sebp/elk
WORKDIR /opt/logstash/bin
RUN ./logstash-plugin install logstash-input-syslog
EXPOSE 514
Running my services via docker-compose, more or less as follows:
elk-custom:
  # image: elk-custom
  build:
    context: .
    dockerfile: Dockerfile-elk
  ports:
    - 5601:5601
    - 9200:9200
    - 5044:5044
    - 514:514
my_service:
  image: some_image_from_my_local_registry
  depends_on:
    - elk-custom
  logging:
    driver: syslog
    options:
      syslog-address: "tcp://elk-custom:514"
However:
ERROR: for b4cd17dc1142_namespace_my_service_1 Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving
ERROR: for api Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving
ERROR: Encountered errors while bringing up the project.
Any suggestions?
UPDATE: Apparently nothing is listening on port 514, because from within the container the command netstat -a shows nothing on this port... no idea why...
You need to use tcp://127.0.0.1:514 instead of tcp://elk-custom:514. The reason is that this address is resolved by the Docker daemon on the host, not inside the container's network; that is why elk-custom is not reachable.
So this will only work when you map the port (which you have done), the elk service is started first (which you have done), and the IP is reachable from the Docker host, which is why you use tcp://127.0.0.1:514.
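Put together, the my_service definition would then look like this (same services as in the question, with only the syslog-address changed):

```yaml
# sketch: syslog-address is dialed by the Docker daemon on the host,
# so it must be a host-reachable address, not the compose service name
my_service:
  image: some_image_from_my_local_registry
  depends_on:
    - elk-custom
  logging:
    driver: syslog
    options:
      syslog-address: "tcp://127.0.0.1:514"
```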
