I am struggling with a Docker container's network configuration. I am running it on a server that does not have IPv6 enabled for policy reasons, and the Docker image I am using runs nginx with IPv6 enabled in the nginx config inside the image.
I have tried everything to disable IPv6 inside the image and in the docker-compose.yml file, but it just won't work. Whenever I bring the stack up with docker compose up, the log constantly says nginx: [emerg] socket() [::]:5500 failed (97: Address family not supported by protocol).
I tried disabling IPv6 in the docker-compose.yml file:
networks:
  cont:
    driver: bridge
    enable_ipv6: false
But it still gives the same error. I have tried entering the container with docker exec -it container /bin/bash, but the container is constantly restarting, so I cannot get in to change the configuration. If I remove the restart: unless-stopped parameter the container just stops and won't let me in either, so adding tty: true doesn't help because the container keeps stopping and restarting.
How can I disable IPv6 for good, so nginx stops erroring and restarting the whole container, without having to enable IPv6 on my server (which I cannot do)?
Edit: I have also tried adding the following to the compose file:
sysctls:
  - net.ipv6.conf.all.disable_ipv6=1
but I get Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: open /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory: unknown (I'm running RHEL 8.7 and docker 23.0.1, btw).
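What I am considering next, but have not tested, is bind-mounting a patched nginx config over the one shipped in the image so the IPv6 listen directive disappears entirely. The config path below is only a guess, since I cannot inspect this particular image:
services:
  app:
    volumes:
      # nginx-no-ipv6.conf is a copy of the image's config with the
      # "listen [::]:5500;" line removed (path inside the image is a guess)
      - ./nginx-no-ipv6.conf:/etc/nginx/conf.d/default.conf:ro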
If I want to access http://localhost from inside a docker container, I can add this to my docker-compose file:
extra_hosts:
  - "host.docker.internal:host-gateway"
Then I can call http://host.docker.internal from inside the container and hit my local server.
How do I call http://subdomain.localhost from inside my container?
I tried this in my docker-compose file, but I get an error
extra_hosts:
  - "subdomain.host.docker.internal:subdomain.host-gateway"
docker-compose run --service-ports main
Error response from daemon: invalid IP address in add-host: "subdomain.host-gateway"
I'm hoping there is a solution that doesn't involve extra tools (like using nginx to redirect traffic on the host)
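One variation I have not tried yet, just guessing from how extra_hosts works, is mapping the subdomain hostname itself to host-gateway, since the left side can be any hostname:
extra_hosts:
  # guess: resolve the subdomain name to the host's gateway address
  - "subdomain.localhost:host-gateway"
That should at least make subdomain.localhost resolve to the host from inside the container; whether the server on the host then routes by the Host header is a separate question.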
I have an application that is meant to send UDP messages to other devices on a local network. When I run the application as a standalone Docker container using the docker run command, the behavior is as expected and the messages are sent to the correct address and port corresponding to a computer on the local network. Note that it works whether I run it with the bridge or the host network.
However, when attempting to run the application through docker compose, the UDP messages are not sent. To verify that there was no conflict with other containers running in compose, I ran the container on its own in docker-compose and the messages were still not being sent. I also tried running the container in docker-compose while specifying network_mode: host. Wireshark reported that UDP messages were being sent when the application was started with docker run, but none appeared when running with docker-compose. Additionally, I enabled IPv4 forwarding from Docker containers to the outside world on the host machine as described here, with no luck.
Here are the two ways I am running the container:
Docker:
docker run --network host -e OUTPUT=192.168.1.3:14551 container_name
Docker-Compose:
version: "3"
services:
name:
image: name
network_mode: host # have tried with and without this
environment:
- OUTPUT=192.168.1.3:14551
I have also tried exposing the 14551 port in a ports section of the docker-compose file; however, that did not change anything.
What could explain the difference in behavior with docker vs docker-compose? Is it due to an extra layer of networking with docker compose specifically? Is there a workaround to get docker-compose working?
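For anyone trying to reproduce this, the sanity checks I would run (standard Docker commands, not part of my original setup) are:
# show the effective configuration docker-compose will actually apply
docker-compose config
# after starting, confirm the container really ended up in host networking mode
docker inspect --format '{{.HostConfig.NetworkMode}}' <container_name>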
I am running a Debian docker container on a Windows 10 machine which needs to access a particular url on port 9000 (164.16.240.30:9000)
The host machine can access it fine via the browser; however, when I log in to the terminal and run wget 172.17.240.30:9000, I get failed: No route to host.
In an attempt to resolve this I added:
ports:
  - 9000:9000
to the docker-compose.yml file, however that doesn't seem to have made any difference.
In case you can't guess, I'm new to this, so what would you try next?
Entire docker-compose.yml file:
version: '3.4'
services:
  tokengeneratorapi:
    network_mode: host
    image: ${DOCKER_REGISTRY}tokengeneratorapi
    build:
      context: .
      dockerfile: TokenGeneratorApi/Dockerfile
    ports:
      - 5000:80
      - 9000
    environment:
      ASPNETCORE_ENVIRONMENT: local
      SSM_PATH: /ic/env1/tokengeneratorapi/
      AWS_ACCESS_KEY_ID:
      AWS_SECRET_ACCESS_KEY:
Command I'm running:
docker-compose build --build-arg BRANCH=featuretest --build-arg CHANGE_ID=99 --build-arg CHANGE_TARGET=develop --build-arg SONAR_SERVER=164.16.240.30
It seems it's the container itself that has connectivity issues, so your proposed solution is unlikely to work, as that only maps a host port to a container port (and your target URL is not the host itself).
Check out https://docs.docker.com/compose/compose-file/#network_mode and try setting it to host.
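A minimal sketch of what that would look like in the compose file (service name taken from the question):
services:
  tokengeneratorapi:
    # share the host's network stack instead of the docker bridge
    network_mode: host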
Your browser has access to 164.16.240.30:9000 because it goes through a proxy (typical enterprise environment), so it is the proxy that has network connectivity to 164.16.240.30. That doesn't mean your host has the same connectivity; in fact, it looks like it doesn't. That is why a direct wget from the container or from the terminal fails with No route to host.
Everything must go through the proxy. Try to configure the proxy properly. Linux apps usually use the http_proxy and https_proxy environment variables, but an app may have its own option to configure a proxy, or you may eventually have to configure it at the source-code level. It depends on the app/code being used.
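For example, in the compose file something along these lines could pass the proxy to the container (the proxy address is a placeholder; use whatever your enterprise proxy actually is):
services:
  tokengeneratorapi:
    environment:
      # placeholder proxy address - replace with your real proxy
      - http_proxy=http://proxy.example.com:3128
      - https_proxy=http://proxy.example.com:3128
      - no_proxy=localhost,127.0.0.1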
I think the issue is that you use host mode in your docker-compose config file. Also, do you have IPTABLES firewall rules allowing those ports on the Debian machine? How about Windows?
network_mode: host
which actually bypasses the docker bridge completely, so the ports section you specify is not applied. All the ports will be opened on the host system directly. You can check with
netstat -tunlp | grep 5000
and you will see that port 5000 is not open and mapped to port 80 of the container as you would expect. However, ports 80 and 9000 should be open on the Debian network, not bound to any docker bridge, only to the Debian IP.
From here: https://docs.docker.com/network/host/
WARNING: Published ports are discarded when using host network mode
A solution could be to remove the network_mode line; then it will work as expected.
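Roughly, the service section would then look like this (only a sketch, keeping the rest of the question's file as-is):
services:
  tokengeneratorapi:
    image: ${DOCKER_REGISTRY}tokengeneratorapi
    # network_mode: host removed so the port mappings below take effect
    ports:
      - 5000:80
      - 9000:9000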
Your code doesn't give your container access to 164.16.240.30:9000, and note that you should wget 164.16.240.30:9000 from the terminal instead of 172.17.240.30:9000.
I am working on a micro-service architecture where we have many different projects and all of them connect to the same Redis instance. I want to move this architecture to Docker for the development environment. Since all of the projects have separate repositories, I cannot simply use one docker-compose.yml file to connect them all. After doing some research I figured that I can create a shared external network to connect all of the projects, so I started by creating a network:
docker network create common_network
I created a separate project for common services such as mongodb, redis and rabbitmq (the services that are used by all projects). Here is the sample docker-compose file of this project:
version: '3'
services:
  redis:
    image: redis:latest
    container_name: test_project_redis
    ports:
      - "6379:6379"
    networks:
      - common_network
networks:
  common_network:
    external: true
Now when I run docker-compose build and docker-compose up -d it works like a charm and I can connect to Redis from my local machine using 127.0.0.1:6379. But there is a problem when I try to connect to this Redis container from another container.
Here is another sample docker-compose.yml, for a different project which runs Node.js (I am not including the Dockerfile since it is irrelevant to this issue):
version: '3'
services:
  api:
    build: .
    container_name: sample_project_api
    networks:
      - common_network
networks:
  common_network:
    external: true
This one also builds and runs without problems, but the Node.js project gets a CONNREFUSED 127.0.0.1:6379 error, so it obviously cannot connect to the Redis server over 127.0.0.1.
So I opened a shell into the api container (docker exec -i -t sample_project_api /bin/bash) and installed redis-tools to run some tests.
When I run redis-cli ping it returns Could not connect to Redis at 127.0.0.1:6379: Connection refused.
I checked the external network to see if all of the containers are connected to it properly, using docker network inspect common_network. There was no problem: all of the containers were listed under Containers, and from there I noticed that the test_project_redis container had an IP address of 192.168.16.3.
As a final attempt I tried to use the internal IP address of the Redis container:
From the sample_project_api container I ran redis-cli -h 192.168.16.3 ping and it returned PONG, so that worked.
So my problem is that I cannot connect to the Redis server from other containers using the IP address 127.0.0.1 or 0.0.0.0, but I can connect using 192.168.16.3, which changes every time I restart the Docker container. What is the reason behind this?
Containers have namespaced networking. Each container has its own loopback interface and one IP per network you attach it to. Therefore loopback, or 127.0.0.1, in one container is that container itself, not the Redis container. To connect to Redis, use the service name in your commands, which Docker will resolve to the IP of the container running Redis:
redis:6379
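For example, from inside the api container (a quick check, using the redis-tools already installed per the question):
# the service name resolves on the shared network
redis-cli -h redis -p 6379 ping
# the container name from the other project also resolves
redis-cli -h test_project_redis -p 6379 ping
In the Node.js code the connection host would likewise be redis (or test_project_redis) rather than 127.0.0.1.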
I am attempting to create a container that can access the host docker remote API via the docker socket file (host machine - /var/run/docker.sock).
The answer here suggests proxying requests to the socket. How would I go about doing this?
I figured it out. You can simply pass the socket file through the volume argument:
docker run -v /var/run/docker.sock:/container/path/docker.sock
As #zarathustra points out, this may not be the greatest idea however. See: https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container/
If one intends to use Docker from within a container, they should clearly understand the security implications.
Accessing Docker from within the container is simple:
Use the official docker image or install Docker inside the container. Or you may download an archive with the docker client binary, as described here.
Expose the Docker unix socket from the host to the container.
That's why
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
should do the trick.
Alternatively, you may expose the Docker daemon's API into the container and use the Docker REST API directly.
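For example, once the socket is mounted you can exercise the REST API with curl (a minimal sketch; adding an API version prefix such as /v1.41 is optional):
# list running containers through the Docker REST API over the unix socket
curl --unix-socket /var/run/docker.sock http://localhost/containers/json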
UPD: An earlier version of this answer (based on a previous version of jpetazzo's post) advised bind-mounting the docker binary from the host into the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static binaries.
Considerations:
All containers on the host will be accessible to the container, so it can stop them, delete them, and run any command as any user inside the top-level Docker containers.
All created containers are created in the top-level Docker.
Of course, you should understand that if a container has access to the host's Docker daemon, it effectively has privileged access to the entire host system. Depending on the container and system (AppArmor) configuration, it may be more or less dangerous.
Other warnings are covered here: dont-expose-the-docker-socket.
Other approaches, like exposing /var/lib/docker to the container, are likely to cause data corruption. See do-not-use-docker-in-docker-for-ci for more details.
Note for users of the official Jenkins CI container
In this container (and probably in many others) the jenkins process runs as a non-root user. That's why it has no permission to interact with the docker socket. So a quick & dirty solution is running
docker exec -u root ${NAME} /bin/chmod -v a+s $(which docker)
after starting the container. That allows all users in the container to run the docker binary with root permissions. A better approach would be to allow running the docker binary via passwordless sudo, but the official Jenkins CI image seems to lack the sudo subsystem.
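Just to illustrate what that might look like (an assumption, not something the official image ships, and the docker path may differ), the sudoers entry would be along these lines:
# /etc/sudoers.d/jenkins-docker (hypothetical)
# let the jenkins user run docker as root without a password
jenkins ALL=(root) NOPASSWD: /usr/bin/docker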
I stumbled across this page while trying to make docker socket calls work from a container that is running as the nobody user.
In my case I was getting access denied errors when my-service would try to make calls to the docker socket to list available containers.
I ended up using docker-socket-proxy to proxy the docker socket to my-service. This is a different approach to accessing the docker socket within a container, so I thought I would share it.
I made my-service able to receive the docker host it should talk to (docker-socket-proxy in this case) via the DOCKER_HOST environment variable.
Note that docker-socket-proxy will need to run as the root user to be able to proxy the docker socket to my-service.
Example docker-compose.yml:
version: "3.1"
services:
my-service:
image: my-service
environment:
- DOCKER_HOST=tcp://docker-socket-proxy:2375
networks:
- my-service_my-network
docker-socket-proxy:
image: tecnativa/docker-socket-proxy
environment:
- SERVICES=1
- TASKS=1
- NETWORKS=1
- NODES=1
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- my-service_my-network
deploy:
placement:
constraints: [node.role == manager]
networks:
my-network:
driver: overlay
Note that the above compose file is swarm ready (docker stack deploy my-service) but it should work in compose mode as well (docker-compose up -d). The nice thing about this approach is that my-service does not need to run on a swarm manager anymore.