docker-compose failed to add IPv4 address to bridge - docker

I am using WSL (Ubuntu 18.04.5) on Windows 10 with Docker and docker-compose. Docker itself works fine and I am able to launch containers.
But now I want to use docker-compose, and for that I wrote a simple docker-compose file. When I try to run it with the docker-compose -f docker-compose.yml up command, it gives me a networking error.
I am really not good at networking, but what I can understand is that Docker is trying to use a subnet that is already in use.
Following are some of the steps I took while debugging:
route -n shows Docker is attached to the subnet 172.17.0.0 / 1
ip addr gives the following result
and when I try to deploy my stack, it gives me the following error.
My understanding is that docker-compose is trying to use the address 172.16.238.1/24, which conflicts with an existing network.
Could someone help me understand what is going on here with the networking?
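For reference, this is roughly the debugging sequence in one place (a sketch consolidating the commands mentioned above; the exact output will differ per machine):
route -n                      # kernel routing table; shows which subnets are already claimed
ip addr show                  # all interfaces and their addresses
docker network ls             # every network Docker currently manages
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'   # subnet of the default bridge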

Related

Can't connect to localhost of the host machine from inside of my Docker container

The question is basic: how do I connect to the localhost of the host machine from inside of a Docker container?
I tried the answers from this post, using --add-host host.docker.internal:host-gateway or adding --network=host when running my container, but neither method seems to work.
I have a simple hello-world webserver running on my machine, and I can see its contents with curl localhost:8000 from the host, but I can't curl it from inside the container. I tried curl host.docker.internal:8000, curl localhost:8000, and curl 127.0.0.1:8000 from inside the container (depending on the solution I used to make localhost available there), but none of them work and I get a Connection refused error every time.
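For context, the --add-host attempt looks roughly like this (a sketch; the curl image is only an illustration, the port matches the webserver above):
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:8000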
I asked somebody else to try this out for me on their own machine and it worked for them, so I don't think I'm doing anything wrong.
Does anybody have any idea what is wrong with my containers?
Host machine: Ubuntu 20.04.01
Docker version: 20.10.7
Used image: Ubuntu 20.04 (and i386/ubuntu18.04)
Temporary solution
This does not completely solve the problem for production purposes, but to at least get localhost working, adding these lines to docker-compose.yml solved my issue for now (source):
services:
  my-service:
    network_mode: host
I am using Apache NiFi with Java REST endpoints, on the same Ubuntu and Docker versions, so in my case it looks like this:
services:
  nifi:
    network_mode: host
After changing docker-compose.yml, I recommend stopping the containers, removing them (docker-compose rm removes them all; do not use it if you need to keep some containers, use docker container rm container_id instead), and building again with docker-compose up --build.
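A minimal sketch of that sequence, using docker-compose down as a shortcut for stop-plus-remove (service and container names are whatever your compose file defines):
docker-compose down            # stop and remove the compose-managed containers
docker-compose up --build -d   # rebuild the images and recreate the containers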
In this case, I needed to use a different loopback IP to reach my service from the browser (NiFi started on another address, 127.0.1.1, but it works fine as well).
Searching for the problem / deeper into ubuntu-docker networking
First, I will write down some commands that may be useful for tracking down the Ubuntu-Docker networking issue:
ip a - shows routing, network devices, interfaces and tunnels (mainly I can observe state DOWN on docker0)
ifconfig - lists all interfaces
brctl show - Ethernet bridge administration (docker0 has no attached interface / veth pair)
docker network ls - lists Docker networks - names, drivers, scope...
docker network inspect bridge - I can see the docker0 bridge has no attached containers - an empty, unused bridge
(useful link for ubuntu-docker networking explanation)
I guess the problem lies within the veth pair (see the link above): when docker-compose runs, a new bridge (not docker0) is created and connected to a veth pair in my case, and docker0 is not used. My guess is that if docker0 were used, then host.docker.internal:host-gateway would work. Somehow in Ubuntu networking docker0 is not used as the default bridge, and this perhaps should be changed.
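A sketch of how to check that hypothesis (the name of the bridge that compose creates depends on your project directory, so the last command just lists all bridge networks):
brctl show                                                      # which veth interfaces hang off docker0 vs. the compose bridge
docker network inspect bridge --format '{{json .Containers}}'   # containers attached to docker0 (likely empty)
docker network ls --filter driver=bridge                        # the compose-created bridge shows up here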
I don't have much time left to dig further, so I hope someone can use this information to resolve the core of the problem later on.

Docker Swarm - Unable to connect to other containers because IP lookup fails

Say we provision an overlay network using docker swarm and create various containers with the following names:
Alice
Bob
Larry
John
Now if we try to ping any container from another, it fails because the name cannot be resolved to an IP, i.e. alice does not know bob's IP and so on. We have been taking care of this by manually editing /etc/hosts on every container and entering the name/IP key-value pairs in that file, but this becomes very tedious with every restart of our network. There ought to be a better way of handling this.
For example, services created using docker stack do not suffer from this problem. For various reasons we are stuck with creating containers using plain docker create. How can we make containers discover each other on the overlay network without the manual labor of editing /etc/hosts?
Below is the detailed workflow we currently have to follow:
We first provision a docker swarm and an overlay network.
Then we create each container using the docker create command and start it using the docker start command. We use the --network flag to attach the container to the overlay network at creation time.
We then use docker container inspect to get the IP address of each container (see the sketch after this list). This involves running n commands and noting down each IP address.
Then we log into each container, edit the /etc/hosts file by hand, and enter the (name, IP) key-value pairs of the other containers. Summed across containers, this means entering n*(n-1) records by hand.
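A sketch of the inspect step without manual note-taking (the container names come from the example above; the loop just prints each name next to its overlay IP):
for c in alice bob larry john; do
  printf '%s ' "$c"
  docker container inspect "$c" --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
done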
Not sure why docker create does not do all this automatically; Docker already knows (or can know) all the IP addresses. Containers provisioned using docker stack, for example, do not have to go through this manual process to "discover" each other. The reasons why we cannot use docker stack are:
it does not allow us to specify container names
we run various commands (mostly docker cp) before starting the container, which is not possible with docker stack
You might have seen this already: DNS on user-defined networks
Have you created your services as in the section "Attach a service to an overlay" in this doc?
It seems that the only thing needed is to refer to the containers by {name}.{network} instead of just {name}. There is no need to edit /etc/hosts, use the --add-host flag, or run an additional DNS server. See https://forums.docker.com/t/need-help-connecting-containers-in-swarm-mode/77944/6
Further details: the official Docker documentation does not mention anywhere the need to add the .{network} suffix to the {containername}. Indeed, in Step #7 of the walk-through on this link, no .{network} suffix is used, so I am not sure why we need to do that. The Docker version we are using is 18.06.1-ce for Linux.
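A sketch of the plain docker create workflow with name-based resolution (the network name my-overlay is illustrative, and the alpine:3.12.3 image is borrowed from the answer below; try both bob and bob.my-overlay, since behaviour differs between Docker versions):
docker network create --driver overlay --attachable my-overlay
docker create --name alice --network my-overlay alpine:3.12.3 sleep infinity
docker create --name bob   --network my-overlay alpine:3.12.3 sleep infinity
docker start alice bob
docker exec alice ping -c 1 bob.my-overlay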
I had a similar issue: I was following this official tutorial to create a Docker swarm overlay network on two Raspberry Pi 3 boards, and pinging was impossible until I found the answer on GitHub. As I understood it, the latest version of alpine (for a reason unknown to me) is not suitable for the Raspberry Pi 3, so the solution is to use version 3.12.3, like this: sudo docker run -dit --name alpine1 --network test1 alpine:3.12.3
Hope that this might help someone :)

Unable to connect outside database from Docker container App

We have two machines: one is a Windows machine and the other is a Linux machine. My application runs in a Docker container on the Linux machine, and our database runs on the Windows machine. Our application needs to get data from the Windows machine's DB.
We have given the proper data source details such as IP, username and password in our application. It works when we do not use a Docker container, but when we use a Docker container it does not work.
Can anyone help us figure out how to connect to an outside DB from a Docker-enabled application? We are totally new to Docker.
Any help would be much appreciated.
A container's default network is "bridge"; you should choose a macvlan or host network instead.
Method 1
docker run -d --net host image
This container will share your host's IP address and will be able to access your database.
Method 2
Use the docker network create command to create a macvlan network (reference here),
then create your container with
docker run -d --net YOURNETWORK image
The container will get an IP address on the same subnet and gateway as its host.
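A minimal macvlan sketch, assuming the host's LAN is 192.168.1.0/24 behind gateway 192.168.1.1 on interface eth0 (adjust the subnet, gateway, parent interface and the network name lan_macvlan to your environment):
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan_macvlan
docker run -d --net lan_macvlan --ip 192.168.1.50 image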
There are a lot of issues that could be affecting your container's ability to communicate with your database. In the future you should compose your question with as much detail as possible. To correctly answer this you will, at a minimum, need to include the following details:
Linux distribution name & version
Docker version
Output of docker inspect from the container
Linux firewall configuration
Network configuration
Is your Windows machine running on the same local network / subnet as your Linux machine? If so, please provide information about the subnet, as the default bridge set up by Docker may restrict access to local resources, whereas those over a wide area network would still be accessible.
You can try passing the --network=host option to your docker run command like so: docker run --network=host <image name>. Doing so eliminates the need to specify port mappings in your run command, as they are ignored when using the host's network.
Please edit your question and include the above requested details to get a complete answer.

localhost not working docker windows 10

I am using VS2017 Docker support. VS created the Dockerfile for me, and when I build the docker-compose file it creates the container and runs the app on a 172.x.x.x IP address. But I want to run my application on localhost.
I tried many things but nothing worked. I followed the Docker docs as a starter and building Microsoft sample app. The second link works perfectly, but I get HTTP Error 404 when I try the first link's approach.
Any help is appreciated.
Most likely a different application already runs on port 80. You'll have to forward your web site to a different port, e.g.:
docker run -d -p 5000:80 --name myapp myasp
And point your browser to http://localhost:5000.
When you start a container you specify which inner ports will be exposed as ports on the host through the -p option. -p 80:80 exposes the inner port 80 used by web sites to the host's port 80.
Docker won't complain though if another application already listens at port 80, like IIS, another web application or any tool with a web interface that runs on 80 by default.
The solution is to:
Make sure nothing else runs on port 80 or
Forward to a different port.
Forwarding to a different port is a lot easier.
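A short sketch of that option (the myapp/myasp names reuse the example above; port 5000 is arbitrary):
docker run -d -p 5000:80 --name myapp myasp    # publish container port 80 on host port 5000
docker run -d -P --name myapp2 myasp           # or let Docker pick a free host port automatically
docker port myapp2 80                          # shows which host port was chosen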
To verify that you can connect to a port, use the telnet command, e.g.:
telnet localhost 5000
If you get a blank window immediately, it means a server is up and running on that port. If you get a message and a timeout after a while, it means nothing is listening. You can use this both to check for free ports and to ensure you can connect to your containerized web app.
PS: I ran into this just a week ago while trying to set up a SQL Server container for tests. I already run 1 default and 2 named instances, and Docker didn't complain at all when I tried to create the container. It took me a while to realize what was wrong.
In order to access the example posted in the Docker docs, which you pointed out as not working, follow the steps below.
1 - List all your Docker containers:
docker ps -a
After you run this command you should see all your Docker containers (the -a flag includes stopped ones), and a container named webserver should be listed there if you have followed the Docker docs example correctly.
2 - Get the IP address on which your webserver container is running. To do that, run the following command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
You should now get the IP address on which the webserver container is running. Hopefully you are familiar with this step, as it is also shown in the 'building Microsoft sample app' example you attached to the question.
Access the IP address returned by the above command and you should see the desired output.
To answer your first question (accessing a Docker container via localhost in Docker for Windows): on a Windows host you cannot access the container through localhost due to a limitation in the default NAT network stack. A more detailed explanation of this issue can be found at this link. It seems the Docker documentation is not yet updated, but this issue only exists on Windows hosts.
There is an issue reported for this as well; follow this link to see it.
Hope this helps you out.
EDIT
The solution for this issue seems to be coming in a future Windows release. Until that release comes out, this limitation remains on Windows hosts. Follow this link: https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/181
For those encountering this issue in 2022, changing localhost to 127.0.0.1 solved the issue for me.
There is another possible problem too:
you must get the parameter order right.
This is WRONG:
docker run container:latest -p 5001:80
This sequence starts the container, but the -p flag is passed as an argument to the container's command instead of to docker run, so the container gets no port mapping.
This is correct:
docker run -p 5001:80 container:latest
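A quick sketch for checking whether the mapping actually took effect (the container name/ID is a placeholder):
docker port <container_name_or_id>               # prints e.g. "80/tcp -> 0.0.0.0:5001" when mapped
docker ps --format '{{.Names}}\t{{.Ports}}'      # shows the published ports of all running containers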

Port binding is not working in docker on windows

I have installed Docker on my Windows machine.
I am trying to install Gerrit on it.
Pulling the image is done successfully.
Running the image is also done:
docker run -d -p 8080:8080 -p 29418:29418 ******/gerrit
I try to connect to it through the browser with my container ID:8080, but it throws an error:
This site can’t be reached
What is going wrong? Please help with suggestions.
BR,
Rash
You need to access your container via the IP of the virtual machine. You can obtain it with the command docker-machine ls, then access the container in the browser at (replace the IP) http://192.168.99.100:8080
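A short sketch for docker-machine / Docker Toolbox setups (the machine name default is the usual one, but yours may differ; the port comes from the run command above):
docker-machine ls              # lists machines and the URLs they listen on
docker-machine ip default      # prints just the VM's IP, e.g. 192.168.99.100
# then browse to http://<that-ip>:8080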
This is a known limitation of Windows containers at the moment, as per the Docker documentation (https://docs.docker.com/docker-for-windows/troubleshoot/#limitations-of-windows-containers-for-localhost-and-published-ports).
As of the Windows 10 Creators Update this has been partially fixed: you can use the host IP with the bound host port (http://<hostIp>:<hostBoundedPort>), but still not localhost or any of its aliases.
Alternatively, you can avoid port mapping and hit the container IP directly. There are numerous ways to get your container's IP. Personally I would use:
docker ps
This lists all the running Docker containers, allowing you to find the container ID of the container that you want to hit, followed by:
docker inspect <initial_part_or_full_id>
This will output low-level information about the container, including its network settings, where you will find the NAT-ed endpoint details containing the IP. Then simply browse to http://<containerIP>:<containerPort>.
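A sketch of the same lookup in a single step (the container name gerrit is an assumption here; use whatever name or ID docker ps reports):
docker inspect gerrit --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
# then browse to http://<that-ip>:8080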
