Can't access localhost from Docker

I'm a beginner in this Docker world, and just as it was painful to set up all the 'localhost' things with Apache and the like, it's the same with Docker.
I don't know if it's just me, but I tried to solve my problem with the help of other posts, and after several hours I give up and am asking for your help, because some posts are simply incomprehensible to me (posts that involve bridges, NAT, iptables, docker-machine, etc.).
After several hours I'm simply trying to access an Apache website on localhost:5000 on Windows; Apache is started with service apache2 start inside a Docker container, and if I run w3m localhost inside that container I can see it running.
But when I try to access it with a browser, there is no response.
I also tried this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' bce97a49b68c
172.17.0.2
The address with :5000 doesn't respond; I even put it in the hosts file. No success.
If someone has a definitive solution for this problem, that would be great; there seem to be plenty of them, and everything seems so simple in blog articles (I even tried something with docker-compose, which broke my Docker install and I had to reinstall the whole thing).

I'm a little unsure what you're asking, but it seems like you may need to publish your ports. When something runs in Docker, it runs in its own little box, unconnected to the outside world, i.e. the rest of your machine. If you want to connect ports, say to access a web server running inside a Docker container, you need to use the -p or --publish option when running your Docker container. There are similar options for mounting drives and such.
Here's an example from the database I run locally in Docker:
docker run \
--publish=7474:7474 \
--volume=/home/me/logs:/logs \
--env=NEO4J_AUTH=none \
neo4j:4.2
This says:
Allow the outside system to access port 7474 inside the Docker container via port 7474 outside the Docker container
Mount the outside system's /home/me/logs folder as /logs inside the Docker container
Set the environment variable NEO4J_AUTH inside the Docker container to the value none
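Applied to the original question, a minimal sketch (assuming Apache inside the container listens on its default port 80, and my-apache-image is a placeholder for your actual image name):
# Publish container port 80 as port 5000 on the host
docker run -d -p 5000:80 my-apache-image
With that, http://localhost:5000 in the Windows browser should reach the Apache site running inside the container.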

Related

Cannot make http requests from docker container to outside

I have set up a new Ubuntu 22.04.1 server with Docker version 20.10.21, using docker images from the exact same dockerfiles that work without any problems on another Ubuntu server (20.04 though).
In my new Docker installation, I have problems reaching into the Docker containers, and I cannot reach the outside world from within the containers either.
For example, issuing this from a bash within the docker container:
# wget google.com
Resolving google.com (google.com)... 216.58.212.142, 2a00:1450:4001:82f::200e
Connecting to google.com (google.com)|216.58.212.142|:80...
That's all, it just hangs there forever. Doing the same in the other installation works just fine. So I suspect there is some significant difference between those installations, but I can't find out what it is.
I'm also running a reverse proxy Docker container within the same Docker network, and it cannot reach the app container in the broken environment. However, I suspect that if I knew what blocks my outgoing requests, it would explain the other issues as well.
How can I find out what causes the docker container requests to be blocked?
This is my docker network setup:
Create the network
docker network create docker.mynet --driver bridge
Connect container #1
docker network connect docker.mynet container1
Run and connect container 2
docker run --name container2 -d -p 8485:8080 \
--network docker.mynet \
$IMAGE:$VERSION
Now
I can always wget outside from container1
I can wget outside from container2 on the old server, but not on the new one
It turned out that, while the default bridge network worked as expected, any user-defined network (although defined with the bridge driver) did not work at all:
requests from container to outside world not possible
requests into container not possible
requests between containers in the same network not possible
Because container1 was created first and then connected to the user-defined network, it was still connected to the default bridge, too, and thus was able to reach the outside while container2 wasn't.
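As a quick check, you can list the networks a container is attached to (a sketch, using the container name from above):
# Print the names of all networks this container is connected to
docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}' container1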
The solution is actually in the Docker docs under Enable forwarding from Docker containers to the outside world:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I don't think I had to make these changes on my Ubuntu 20.04 server, but I'm not 100% sure. However, after applying these changes, the connection issues were resolved.
I'm still looking into how to make these configuration changes permanent (so they survive a reboot). Once I know, I'll update this answer.
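For reference, a common way to persist such settings on Ubuntu (an assumption about this setup, not verified on the server in question) is a drop-in sysctl file plus the iptables-persistent package:
# Persist IP forwarding across reboots
echo 'net.ipv4.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-forwarding.conf
# Save the current iptables rules so they are restored at boot
sudo apt-get install iptables-persistent
sudo netfilter-persistent save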

Docker container talks to docker container in the same local host? [duplicate]

This question already has answers here:
accessing a docker container from another container
(8 answers)
Closed 1 year ago.
I am confused right now. I have tried many things I found on the web, but none solved it. I have Win10 and Docker Desktop installed using WSL 2 to host Linux containers. I use the following command to start the Jenkins website.
docker run --name jenkins-master-c -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
This works fine. I can access the website using http://localhost:8080/
The problem is, I try to curl http://localhost:8080 from another alpine Docker container, but I am not getting the web page back; it says connection refused. I tried my own tiny web service on my Windows machine without Docker. Same thing. I can access the web service using a web browser on Windows 10. However, from inside a container, I can't access the web service on localhost.
I know I am missing something really basic, because the web doesn't seem to cover this topic. I am just on my own computer without anything fancy, so I just want to use localhost. The web says the default bridge network is supposed to let containers talk to each other easily, but it is not working for me. What am I missing? Maybe I shouldn't type localhost? But what else should I use?
thank you
Edit: just to explain what I did to solve my problem. Creating a network with --network my-network-name was what I originally did; it failed because the way I curled the web page was wrong. I used --name jenkins-master-c only to make it easy to locate my container in docker ps. But, as suspected in my question, localhost was the wrong thing to use, which the solution confirms. Instead of localhost, I run curl http://jenkins-master-c:8080, which works. Thanks.
localhost is always a question of perspective; it refers to the current machine. This means that if you call localhost from a container, the container talks to itself, not to the machine you see as localhost. If you want to call a service running on that machine, you have to use its real IP address.
You can imagine that docker containers are individual virtual machines, they have their own localhost. And they are isolated from your host pc and other containers.
Now if you want two or more Docker containers to communicate, you can use a bridge network. From Docker's perspective, a bridge network allows containers connected to the same bridge network to communicate, while providing isolation from containers that are not connected to that bridge network. You can see the Docker docs for bridge networks.
On the other hand, if you want to communicate with your Docker container from your host, you need to port-forward, i.e. publish a port to connect with the container (which you did with -p 8080:8080).
Another way to bring your containers under one shared localhost is Kubernetes: in Kubernetes you can run one or more containers in a pod, and then they will share the same network namespace. See the Kubernetes pod docs.
Probably these two containers are not in the same network, so they cannot see and talk to each other.
First of all, create a network with docker network create SOMENAME, and then run the containers again (both of them):
docker run --name jenkins-master-c --network SOMENAME -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
Now they should be able to talk to each other.
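For example, you can test it from a throwaway second container (a sketch; the alpine image is an assumption, and curl has to be installed in it first):
docker run --rm -it --network SOMENAME alpine sh
# inside the container:
apk add curl
curl http://jenkins-master-c:8080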

Problem accessing jupyter notebook from Docker Toolbox Container

I just set up a Docker container with Docker Toolbox and ran Jupyter Notebook inside the container using
docker run --name container -v %somedirectory%:%someotherdir% -d -p 127.0.0.1:8888:8888 quay.io/fenicsproject/stable:2017.2.0 'jupyter-notebook --ip=0.0.0.0'
Afterwards I can check the log of the container to see the URL and token that jupyter notebook created.
If I now go ahead and copy the link to my browser, it won't be able to connect to localhost. Accessing 127.0.0.1 does not work either.
Since Docker Toolbox relies on VirtualBox VMs, I also tried to use the IP address of the VM, in this case 192.168.99.100:2376. According to the Kitematic UI, this is the IP:port combination that is being published by the docker-machine, and indeed this does not lead to a generic connection error. Instead, the browser's output is:
Client sent an HTTP request to an HTTPS server.
I don't really know what to do from this point on. What does this "error" mean? Does it even make sense to use the VM's IP address? And most importantly: what else can I do in order to finally get access to the Jupyter notebook?
PS: I also tried the suggestions made in the threads Can't access jupyter notebook from docker and Access Jupyter notebook running on Docker container and couldn't make any of them work unfortunately.
I hope someone can help, thank you very much in advance.
You need to do two things to make this work:
Remove the 127.0.0.1 part of the port mapping; docker run -p 8888:8888 ...
Connect to the docker-machine ip address with the published port; http://192.168.99.100:8888.
Docker Toolbox runs Docker in a separate Linux virtual machine. Any docker run -p options will get interpreted from the point of view of that VM. If you docker run -p 127.0.0.1:... then the published port will be bound to the VM's loopback interface, so it won't be reachable from outside the VM.
Once you have the port published, you need to connect to that specific port. Port 2376 is typically the port to reach the Docker daemon inside the VM, with mutual TLS security; you only need this for manual docker commands. To reach services running inside the VM you need to connect to the published port (the first number in the docker run -p option).
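Putting both steps together, a sketch of the corrected command from the question (the %...% placeholders are kept as-is):
docker run --name container -v %somedirectory%:%someotherdir% -d -p 8888:8888 quay.io/fenicsproject/stable:2017.2.0 'jupyter-notebook --ip=0.0.0.0'
Then open http://192.168.99.100:8888 in the browser, appending the token from the container log.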

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead create an overlay network using docker swarm. You can find the full guide here, but in essence you do the following:
# Create the network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# Start container A
docker run -d --name=A --network=my-net producer:latest
# Start container B
docker run -d --name=B --network=my-net consumer:latest
# Magic has occurred
docker exec -it B /bin/bash
> curl A:3000  # MIND BLOWN!
Then inside container B you can just curl the hostname A and it will resolve for you (even when you start doing scaling, etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production...forget about links & overlay networks altogether...use Kubernetes :-) Bit more difficult initial setup but they introduce a bunch of concepts & tools to make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the Docker containers behave as described in the question
Docker is there to provide a lightweight isolation of the host resources to one or several containers.
The Docker network is by default isolated from the host network, and uses a bridge network (again, by default; you can have an overlay network) for inter-container communication.
and how to fix the problem without docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
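Applied to the setup in the question, a sketch (CONT_B's image is a placeholder; port 3000 is the service published by CONT_A in the question):
docker run -d --name CONT_B --add-host=host.docker.internal:host-gateway your-image
docker exec -it CONT_B curl http://host.docker.internal:3000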
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to curl from cron to web from time to time to execute some PHP script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I was always getting host unreachable after some time.
First solution was to update /etc/hosts in cron container, and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container, and it worked. But this is a hack, and as far as I know such manual updates are not encouraged. You should use extra_hosts in docker compose, which requires an explicit IP address rather than a container name.
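(For reference, the plain docker run equivalent of extra_hosts is the --add-host flag; the image name below is a placeholder:)
docker run --add-host app1.example.com:1.2.3.4 my-cron-image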
I tried the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded. If I ever learn how to do it, I promise to update this answer.
Finally I used curl's ability to target a specific server while passing the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work.
(Here web is the name of my nginx container.)

Obtaining the ip address of a docker container

I have an Ubuntu VM where I have installed Docker. I use this machine from my local Windows machine over SSH, opening a terminal to the Ubuntu machine.
Now, I am going to take a Docker image which contains all the necessary software, e.g. Apache, installed in it. Later I am going to deploy a sample application (which is a web application) onto it and save it.
Now, I am confused about how to check whether the deployed application is running properly, i.e., what would be the address of the container that contains the deployed application?
For example, if I type http://127.x.x.x, which is the address of the Ubuntu machine, I just get a timeout.
Can anyone tell me how to verify the deployed application? Printing a program's output on the console works seamlessly fine, as the output gets printed; the only thing I have doubts about is the web application.
There are some possibilities to check whether your app is running.
Remote API
As JimiDini said, one possibility is the Docker remote API. You can use it to see all running containers (which would be your use case, right?), inspect a certain container, or start and stop containers. The API is a REST API with several bindings for programming languages (at https://docs.docker.io/reference/api/remote_api_client_libraries/). Some of them are very outdated. To use the Docker remote API from another machine, I needed to open it explicitly:
docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &
Note that the API is open to the world now! In a real scenario you would need to secure it in some way (e.g. see the example at http://java.dzone.com/articles/securing-docker%E2%80%99s-remote-api).
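Once the API is reachable, listing the running containers from another machine is a single request (a sketch; replace <docker-host> with your host's address):
curl http://<docker-host>:4243/containers/json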
Docker PS
To see all running containers, run docker ps on your host. If you do not see your app, it is not running. The listing also shows the ports your app is exposing. You can also do this via the remote API.
Logs
You can also check the logs. You can run docker attach <container id> to attach to a certain container and see its stdout. You can also run docker logs <container id> to receive the Docker logs. What I prefer is to write the logs to a certain directory, e.g. all logs to /var/log, and mount this folder to my host machine. Then all your logs will end up in /home/ubuntu/docker-logs on your host.
docker run -p 80:8080 -v /home/ubuntu/docker-logs:/var/log:rw my/application
A word about ports and IPs
Every container will get its own IP address. You can check this IP address via the remote API or via Docker on the host machine directly. You can also specify a certain host name for the container (by passing --hostname="test42" to the run command). However, you mostly do not need that.
To access the application in the container, you need to open the port in the container and bind to a port on the host.
In your Dockerfile you need to EXPOSE the port your app runs on:
FROM ubuntu
...
EXPOSE 8080
CMD run-my-app.sh
When you start your container, you need to bind this port to a port of the host:
docker run -p 80:8080 my/application
Now you can access your app on http://localhost:80 or http://127.0.0.1:80.
If your app does not respond, check whether the container is running with docker ps or via the remote API. If it is not running, check the logs for the reason.
(Note: If you run your Ubuntu VM in something like VirtualBox and you try to access it from your Windows machine, make sure you opened the ports in VirtualBox too!).
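For example, with a NAT adapter you can add a port-forwarding rule from the host (a sketch; the VM name ubuntu-vm and port 80 are assumptions, adjust to your setup):
# Forward host port 80 to guest port 80 on a running VM
VBoxManage controlvm "ubuntu-vm" natpf1 "web,tcp,,80,,80"
With a bridged network adapter you would instead connect to the VM's own IP address.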
A Docker container has a separate IP address. By default it is private (accessible only from the host machine).
Docker provides all metadata (including IP address) via its API:
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#inspect-a-container
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#monitor-docker-s-events
You can also take a look at a little tool called docker-gen for inspiration. It monitors Docker events and creates configuration files on the host machine using templates.
To obtain the IP address of a Docker container, if you know its ID (a long hex string) or if you named it:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-or-name>
Docker runs its own networks, and to get information about them you can run the following commands:
docker network ls
docker network inspect <network name>
docker inspect <container id>
In the output, you should be able to find the IP.
But there are also a couple of things you need to be aware of regarding the Dockerfile and the docker run command:
when you only EXPOSE a port in the Dockerfile, the service in the container is not accessible from outside Docker, only from inside other Docker containers
and when you EXPOSE and use the docker run -p ... flag, the service in the container is accessible from anywhere, even outside Docker
So for example, if your Apache is running on port 8080, you should expose it in the Dockerfile and then you can run it as:
docker run -d -p 8080:8080 <image name>
and you should be able to access it from your host at http://localhost:8080.
It is an old question/answer but it might help somebody else ;)
Working as of 2020:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
