I just set up a Docker container with the Docker Toolbox and ran jupyter notebook inside the container using
docker run --name container -v %somedirectory%:%someotherdir% -d -p 127.0.0.1:8888:8888 quay.io/fenicsproject/stable:2017.2.0 'jupyter-notebook --ip=0.0.0.0'
Afterwards I can check the log of the container to see the URL and token that jupyter notebook created.
If I now go ahead and copy the link to my browser, it won't be able to connect to localhost. Accessing 127.0.0.1 does not work either.
Since the Docker Toolbox relies on VirtualBox VMs, I also tried to use the IP address of the VM, in this case 192.168.99.100:2376. According to the Kitematic UI, this is the IP:port combination published by the docker-machine, and indeed it does not lead to a generic connection error. Instead the browser's output is:
Client sent an HTTP request to an HTTPS server.
I don't really know what to do from this point on. What does this "error" mean? Does it even make sense to use the VM's IP address? And most importantly: what else can I do in order to finally get access to the jupyter notebook?
PS: I also tried the suggestions made in the threads Can't access jupyter notebook from docker and Access Jupyter notebook running on Docker container and couldn't make any of them work unfortunately.
I hope someone can help, thank you very much in advance.
You need to do two things to make this work:
Remove the 127.0.0.1 part of the port mapping; docker run -p 8888:8888 ...
Connect to the docker-machine ip address with the published port; http://192.168.99.100:8888.
Docker Toolbox runs Docker in a separate Linux virtual machine. Any docker run -p options get interpreted from the point of view of that VM. If you docker run -p 127.0.0.1:... then the published port is bound to the VM's own loopback interface, so it won't be reachable from outside the VM.
Once you have the port published, you need to connect to that specific port. Port 2376 is typically the port to reach the Docker daemon inside the VM, with mutual TLS security; you only need this for manual docker commands. To reach services running inside the VM you need to connect to the published port (the first number in the docker run -p option).
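Putting both fixes together, a corrected version of the command from the question might look like the following sketch (the volume placeholders are kept exactly as in the question):

docker run --name container -v %somedirectory%:%someotherdir% -d -p 8888:8888 quay.io/fenicsproject/stable:2017.2.0 'jupyter-notebook --ip=0.0.0.0'

Then take the token from docker logs container and open http://192.168.99.100:8888/?token=... in the browser, substituting the output of docker-machine ip if your VM's address differs.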
Related
This may seem trivial, but after some trial and error I come to the SO community for a little help!
I create a network, call it docker-net.
I have a linux container, let's call it LC1, that has a published port of 6789 (so when created it had the parameter -p 6789:6789), and I make it join the docker-net network (--network docker-net).
This works fine, through my host, I can communicate with it no problem.
I switch to the windows containers and check that LC1 is still running. It does! Amazing.
I create a container, let's call it WC1. It also publishes a port of 9000 that maps internally to 80 (-p 9000:80)
The application inside WC1 tries to connect to LC1 using the IP assigned from the network (docker inspect LC1) and I can't communicate.
There's probably a concept here that I can't get my head around.
I understand that the WC1 and LC1 have different gateways and subnets. Could that be the culprit?
Any help getting this to work is appreciated!
EDIT:
Here are the commands I ran for the scenario above:
docker network create docker-net
docker run -d -p 6789:6789 --name LC1 --network docker-net LC1
docker inspect LC1
The IP is 172.18.0.2
switch to the windows container
docker run -d -p 9000:80 --name WC1 WC1
The docker network connect documentation states that you can assign an IP to a container; the same should work with docker run --network <name> --ip <address>. Then use that IP to access the container.
Specify the IP address a container will use on a given network
You can specify the IP address you want to be assigned to the container's interface.
$ docker network connect --ip 10.10.36.122 multi-host-network container2
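Applied to the scenario above, a minimal sketch might look like this (note that user-specified IPs are only accepted on networks created with an explicit subnet; the subnet and address below are assumptions chosen to match the 172.18.0.2 address from the question):

# recreate the network with an explicit subnet so --ip is accepted
docker network create --subnet 172.18.0.0/16 docker-net
# pin LC1 to a known address on that network
docker run -d -p 6789:6789 --name LC1 --network docker-net --ip 172.18.0.22 LC1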
I have found these:
a deleted question on serverfault about the same issue. See the cached-by-google version: Connect Windows container to Linux container running on same Docker host [closed]
an article: Run Linux and Windows Containers on Windows 10
and I think that the only way to make the two containers communicate is through the host and by exposing ports. For example, LC1 will use -p [your app port]:8080 and WC1 -p [your app port]:9090.
By saying [your app port] I mean that it is up to you to decide what to use (a TCP/UDP listening socket, a REST API...).
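As a sketch of that host-mediated pattern (the host address 10.0.0.5 and the port choices are assumptions for illustration):

# LC1 publishes its service on the Docker host
docker run -d -p 6789:6789 --name LC1 --network docker-net LC1
# from inside WC1, call LC1 via the host's reachable address instead of the container IP
curl http://10.0.0.5:6789/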
As docker evolves maybe there will be a better solution in the near future.
I really don't understand what's going on here. I simply want to perform an HTTP request from inside one docker container to another docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE

HOST (Public IP = 111.222.333.444)
  CONT_A (Publish 3000)
  CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just a timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using docker swarm. You can find the full guide here, but in essence you do the following:
# create network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# start container A
docker run -d --name=A --network=my-net producer:latest
# start container B
docker run -d --name=B --network=my-net consumer:latest
# magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production... forget about links & overlay networks altogether... use Kubernetes :-) It's a bit more difficult to set up initially, but it introduces a bunch of concepts & tools that make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can try to curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the docker containers behave as described in the question
Docker is there to provide a lightweight isolation of host resources for one or several containers.
The Docker network is by default isolated from the host network, and uses a bridge network (again, by default; you can have an overlay network) for inter-container communication.
and how to fix the problem without docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
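Applied to the setup in the question, that might look like the following sketch (the image name for CONT_B is a placeholder; 3000 is CONT_A's published port from the question):

# run CONT_B with the extra host mapping (Linux, Docker 20.10+)
docker run -d --name CONT_B --add-host=host.docker.internal:host-gateway <your-b-image>
# from inside CONT_B, reach CONT_A through the host
docker exec -it CONT_B curl http://host.docker.internal:3000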
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some php script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got host unreachable after some time.
First solution was to update /etc/hosts in cron container, and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container, and it worked. But this is a hack, and as far as I know such manual updates are not encouraged; you should use extra_hosts in docker compose instead, which requires an explicit IP address rather than a container name to specify the mapping.
I tried to use the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded here. If I ever learn how to do this I promise to update this answer.
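For reference, the custom networks approach generally looks like the following sketch (container names match the ones above; the network name is a placeholder, and I haven't verified it against this exact compose setup):

# containers on the same user-defined network resolve each other by name
docker network create maintenance-net
docker network connect maintenance-net web
docker network connect maintenance-net cron
# from the cron container, the nginx container is then reachable as "web"
docker exec cron curl -H 'Host: app1.example.com' http://web/some_maintance.php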
Finally I used curl's ability to address the server directly while passing the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work (here web is the name of my nginx container).
I want to be able to access a docker container via its IP, e.g. the one I can see when I do docker container inspect foo.
The reason is that I am using zookeeper inside a docker container to manage two other docker containers running solr. My code (not in docker, and at this stage I don't want it to be) calls zookeeper to get the URLs of the solr servers, which zookeeper reports as the docker containers' IPs. My code then falls over, because calling a docker container's IP from the host fails when it should be calling localhost.
So how can I make a call from the host to the docker container's IP route correctly? (I am using Docker native for Mac.)
I'm not using Docker for Mac, so I'm not sure whether the newest version of Docker for Mac is still based on docker-machine (which is based on VirtualBox) or not.
If you can confirm your Docker for Mac is based on VirtualBox, then you could probably get the inet IP of the vboxnet0 network interface via the ifconfig command. This IP should be used as your calling IP.
Besides, you should know the port number of your zookeeper container. Normally the published port of a container is configured in the docker run command, for example:
docker run -p 5000:5001 -i -t ubuntu /bin/bash
where -p indicates the port mapping: host port 5000 is published and forwarded to container port 5001.
I am new to docker. I am running it on windows. I am trying to get a container named "ghost" (available from the Docker Hub) to work on a Windows 8.1 machine. While the container starts correctly and supposedly exposes its URL at http://localhost:2368, nothing happens when I enter this address. The same has happened when trying other containers from the Hub which expose URLs.
I tried accessing the container's exposed URL from the IP address I get from "docker ip", but that failed too. I also tried running the container with the --net="bridge" option, to no avail. I think I'm missing something pretty basic, but I can't for the life of me figure out what. Can someone point me in the right direction?
When you install Docker on Windows, that most likely means you installed boot2docker.
boot2docker starts a minimal Linux VM (based on VirtualBox) because Docker requires a Linux kernel to run. The Docker daemon is started on that VM and not on your localhost.
You can determine the VM's IP address by typing boot2docker ip on your command line. The standard boot2docker IP address is 192.168.59.103, unless you configured something else or have multiple instances of that VM running.
So when you execute docker run --name ghost -p 2368:2368 -d ghost, port 2368 is opened at 192.168.59.103:2368. That is where you need to connect.
For more information please read the official boot2docker documentation.
You haven't provided the complete 'docker run ...' command you executed, so I'm assuming you ran the one specified on the image's page on Docker Hub (reproduced below).
docker run --name some-ghost -p 8080:2368 -d ghost
The command maps Ghost's exposed port inside the container (2368) to port 8080 in your boot2docker VM. The first thing you need to do is run boot2docker ip to find out the IP address of your boot2docker VM. As for the port number, you have two options:
Access Ghost via port 8080 (http://BOOT2DOCKER-IP:8080)
Change the port mapping to publish port 2368 directly (-p 2368:2368)
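Concretely, assuming the default boot2docker address from above, the two options look like this:

# option 1: keep the image page's mapping and use port 8080
docker run --name some-ghost -p 8080:2368 -d ghost
# then browse to http://192.168.59.103:8080 (substitute your boot2docker ip output)

# option 2: map the container port straight through
docker run --name some-ghost -p 2368:2368 -d ghost
# then browse to http://192.168.59.103:2368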
I have installed docker on a CentOS machine. Now I am trying to run a MapR sandbox on it. After starting I get this:
Starting MapR Services.................
To manage this node go to: https://172.17.0.13:8443
But I am not able to access this URL from the windows machine in the same network as the CentOS machine.
This is an internal docker network, inaccessible from outside the box. In order to access this container you need:
an EXPOSE command in the container image (most likely it is already there)
to run the container with the -p option
If you just specify -p 8443, the host port will be random; you can find it with the inspect command. Or you can use a fixed port: -p hostIp:externalPort:8443, where hostIp is the address of your docker host.
After that you can access the container from the network at https://hostIp:externalPort
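As a sketch (the sandbox image name and the host port 9443 are placeholders; substitute your own values):

# publish the MapR admin port 8443 on a fixed host port
docker run -d -p 9443:8443 <mapr-sandbox-image>
# then, from the Windows machine on the same network, open https://<centos-host-ip>:9443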