Establish conversation between hello-world apps in Docker containers - docker

I'm trying to run my hello-world apps inside Docker: the frontend needs to consume REST services from the backend.
I run
docker run -p 1337:1337 --net=bridge me/p-dockerfile-advanced-backend:latest
docker run -p 1338:1338 --net=bridge me/p-dockerfile-advanced-frontend:latest http://127.0.0.1:1337
I am able to connect to both of them using a browser from the host OS (my desktop, Windows 10 x64).
The http://127.0.0.1:1337 parameter is needed so the frontend application knows where the RESTful services reside. But the app cannot connect to them, and neither can I:
Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
PS C:\Users\user1> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b0852253b8a me/p-dockerfile-advanced-frontend:latest "/usr/bin/java -ja..." 24 minutes ago Up 24 minutes 0.0.0.0:1338->1338/tcp laughing_noyce
e73f8a6efa24 me/p-dockerfile-advanced-backend:latest "/usr/bin/java -ja..." 26 minutes ago Up 26 minutes youthful_chandrasekhar
PS C:\Users\user1> docker exec -it 4b0852253b8a bash
root@4b0852253b8a:/# apt-get install telnet
<...>
root@4b0852253b8a:/# telnet localhost 1337
Trying 127.0.0.1...
Trying ::1...
telnet: Unable to connect to remote host: Cannot assign requested address
root@4b0852253b8a:/#
Unable to connect, but it should work because I specified --net=bridge on both containers and the backend listens on port 1337:
root@e73f8a6efa24:/# netstat -lntu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:1337 0.0.0.0:* LISTEN
root@e73f8a6efa24:/#
PS: I spent almost all day trying to make it work before asking here.

The problem is the 127.0.0.1 address.
Each container is assigned, by default, two interfaces: eth0 and lo (the loopback interface with the 127.0.0.1 address). So 127.0.0.1 inside the frontend container refers to the frontend container itself, not to the backend.
You need to specify the name or address of the backend container instead. For this simple application you may use the --link option.
docker run -p 1337:1337 --name backend me/p-dockerfile-advanced-backend:latest
docker run -p 1338:1338 --link backend:backend me/p-dockerfile-advanced-frontend:latest http://backend:1337
Note that the --link option is deprecated as stated in:
https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
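Since --link is deprecated, the documented replacement is a user-defined bridge network, on which containers can resolve each other by container name. A minimal sketch of the same setup on such a network, assuming the image names and ports from the question (the network name app-net is arbitrary):
docker network create app-net
docker run -p 1337:1337 --name backend --network app-net me/p-dockerfile-advanced-backend:latest
docker run -p 1338:1338 --name frontend --network app-net me/p-dockerfile-advanced-frontend:latest http://backend:1337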

Since these are different containers, you have to expose ports on both of them. Run the first with:
docker run -p 1337:1337 --net=bridge me/p-dockerfile-advanced-backend:latest
Note that bridge is the default network, so specifying --net=bridge is redundant; both containers will be on the same default bridge network anyway.
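To confirm that both containers really ended up on the default bridge network, the standard inspect command lists every attached container and its IP in the Containers section (a quick check, nothing more):
docker network inspect bridge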

Related

Is it possible to bind to an IP other than 127.0.0.1 on macOS?

On macOS 12, using Docker 20.10, I'm not able to start a container on another IP:
% docker run -p 127.123.2.13:80:80 -d nginx
a9216ae29940f7357b9b4826ecddf041f1805c9ee48ba1336361277fc0dcb524
docker: Error response from daemon: Ports are not available: listen tcp 127.0.17.1:80: bind: can't assign requested address.
Is there any other way?
In order to bind to an IP other than 0.0.0.0, you need to have an interface on your system with the desired IP. For example, watch Docker fail to bind to the non-existent IP 127.0.0.2:
docker run -p 127.0.0.2:80:80 -d nginx
cc79b1b60c9f5e245b326bbfcc17d4a1f1abe6fad6fd12f9677b66bbee972a12
docker: Error response from daemon: Ports are not available: listen tcp 127.0.0.2:80: bind: can't assign requested address.
Now I create an alias for my existing interface lo0:
sudo ifconfig lo0 alias 127.0.0.2 netmask 0xff000000
and try again:
docker run -p 127.0.0.2:80:80 -d nginx
05223ecb6ae99a25b7423f014b9b95422c621717705ce1c255bea04072c45263
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc79b1b60c9f nginx "/docker-entrypoint.…" 2 minutes ago Created hardcore_haslett
05223ecb6ae9 nginx "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 127.0.0.2:80->80/tcp pensive_bardeen
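Assuming the nginx container from the second run is still up, the new binding can be verified from the host with a plain HTTP request, for example:
curl -I http://127.0.0.2/
Note that an alias added with ifconfig this way does not survive a reboot, so it has to be re-created (or scripted) after a restart.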

Docker Pgadmin 4

EDIT
Turned out to be a problem with the image; I tried another one and it works fine.
I'm trying to run pgAdmin 4 in server mode using Docker on Debian 9. I have followed the instructions at https://hub.docker.com/r/dpage/pgadmin4/ and I start it with the following command:
docker run -p 5050:5050 -e "PGADMIN_DEFAULT_EMAIL=myemail@gmail.com" -e "PGADMIN_DEFAULT_PASSWORD=a12345678" -d dpage/pgadmin4
I don't get any errors, and docker ps shows the status as below
root@poweredge:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4b11e4bceb7 dpage/pgadmin4 "/bin/bash /entry.sh" 12 seconds ago Up 10 seconds 80/tcp, 443/tcp, 0.0.0.0:5050->5050/tcp upbeat_jackson
But when I go to serverip:5050 nothing loads. Any idea what the problem may be here?
On the local machine, when I execute curl http://localhost:5050 I get "Connection reset by peer" if the Docker instance is running:
root@poweredge:~# curl http://localhost:5050
curl: (56) Recv failure: Connection reset by peer
If I stop the Docker instance, I get:
root@poweredge:~# curl http://localhost:5050
curl: (7) Failed to connect to localhost port 5050: Connection refused
The pgAdmin 4 Docker container exposes ports 80 and 443 by default. You can check the Dockerfile here: https://github.com/postgres/pgadmin4/blob/master/pkg/docker/Dockerfile
So the port mapping parameter in the command has to be updated (-p host_port:container_port).
Below is the updated command to access pgAdmin 4 via HTTP (port 80):
docker run -p 5050:80 -e "PGADMIN_DEFAULT_EMAIL=myemail@gmail.com" -e "PGADMIN_DEFAULT_PASSWORD=a12345678" -d dpage/pgadmin4
After starting the container you should be able to access it via http://localhost:5050
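Alternatively, if you want the process inside the container to listen on port 5050 itself, the image documentation describes a PGADMIN_LISTEN_PORT variable; a sketch, assuming the image version in use supports it:
docker run -p 5050:5050 -e "PGADMIN_DEFAULT_EMAIL=myemail@gmail.com" -e "PGADMIN_DEFAULT_PASSWORD=a12345678" -e "PGADMIN_LISTEN_PORT=5050" -d dpage/pgadmin4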
Are you trying to access it from outside your virtual machine? If so, check whether the port forwarding rules of your virtual machine are set correctly.

How to get my docker centos sshd passwordless server running?

I'm running my docker container with:
docker run -d sequenceiq/hadoop-docker:2.6.0
The Dockerfile is here.
After it is started on my Mac, I run docker ps and get:
6bfa4f2fd3b5 sequenceiq/hadoop-docker:2.6.0 "/etc/bootstrap.sh -d" 4 minutes ago Up 4 minutes 22/tcp, 8030-8033/tcp, 8040/tcp, 8042/tcp, 8088/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp kind_hawking
Then I'm running
ssh -v localhost -p 22
and I'm getting
OpenSSH_7.4p1, LibreSSL 2.5.0
debug1: Reading configuration data /Users/User/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to localhost [::1] port 22.
debug1: connect to address ::1 port 22: Connection refused
debug1: Connecting to localhost [127.0.0.1] port 22.
debug1: connect to address 127.0.0.1 port 22: Connection refused
ssh: connect to host localhost port 22: Connection refused
Assumptions: I think this is not a duplicate of the other CentOS sshd questions, as this is a different CentOS version. (For those that are similar: I am already doing what the potentially similar question asks, and it is not working.)
My question is: How to get my docker centos sshd passwordless server running?
Edit:
@Andrew has been super-helpful in helping me refine my question, so here goes.
Here is my updated Dockerfile
FROM sequenceiq/hadoop-docker:2.6.0
CMD ["/etc/bootstrap.sh", "-d"]
# Hdfs ports
EXPOSE 50010 50020 50070 50075 50090 8020 9000
# Mapred ports
EXPOSE 10020 19888
#Yarn ports
EXPOSE 8030 8031 8032 8033 8040 8042 8088
#Other ports
EXPOSE 49707 2122
EXPOSE 9000
EXPOSE 2022
Now I'm building this with:
sudo docker build -t my-hdfs .
Then I'm running this with:
sudo docker run -d -P my-hdfs
Then I'm checking the processes with:
sudo docker ps
with a result like:
d9c9855cfaf0 my-hdfs "/etc/bootstrap.sh -d" 2 minutes ago
Up 2 minutes 0.0.0.0:32801->22/tcp, 0.0.0.0:32800->2022/tcp,
0.0.0.0:32799->2122/tcp, 0.0.0.0:32798->8020/tcp, 0.0.0.0:32797->8030/tcp,
0.0.0.0:32796->8031/tcp, 0.0.0.0:32795->8032/tcp, 0.0.0.0:32794->8033/tcp,
0.0.0.0:32793->8040/tcp, 0.0.0.0:32792->8042/tcp, 0.0.0.0:32791->8088/tcp,
0.0.0.0:32790->9000/tcp, 0.0.0.0:32789->10020/tcp, 0.0.0.0:32788->19888/tcp,
0.0.0.0:32787->49707/tcp, 0.0.0.0:32786->50010/tcp, 0.0.0.0:32785->50020/tcp,
0.0.0.0:32784->50070/tcp, 0.0.0.0:32783->50075/tcp, 0.0.0.0:32782->50090/tcp
agitated_curran
Then to get the IP address I'm running:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' d9c9855cfaf0
with a result like
172.17.0.3
Then I'm testing it with:
ssh -v 172.17.0.3 -p 32800
This gives a result:
OpenSSH_7.4p1, LibreSSL 2.5.0
debug1: Reading configuration data /Users/User/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to 172.17.0.3 [172.17.0.3] port 32800.
debug1: connect to address 172.17.0.3 port 32800: Operation timed out
ssh: connect to host 172.17.0.3 port 32800: Operation timed out
My question is: How to get my docker centos sshd passwordless server running?
You are trying to connect to your local ssh server instead of the container. To connect to any port inside a container, you need to expose and publish it, and possibly map it to a different host port, especially when you want to run multiple similar containers on different ports on the same host. See Expose.
So in your case the command should be:
docker run -p 2222:22 -d sequenceiq/hadoop-docker:2.6.0
And the ssh command:
ssh -v localhost -p 2222
Exposing a Docker port (as seen in your linked Dockerfile) makes it accessible to other Docker containers, but not to your host machine. To understand the difference between exposed and published ports, see this question.
However, when I tried to connect to port 2222, it didn't work. Looking at the Dockerfile of the 2.6.0 version, I found that it has a bug: sshd is configured to listen on port 2122, but the exposed port is 22, as can be seen here. Also, when I tried to build the latest Dockerfile you provided, it failed at step 31, so you might want to investigate further.
Edit after question update:
Look at the docker ps output you provided, and at the Dockerfile. sshd is configured to listen on port 2122 (unless you have changed that; we don't have your complete Dockerfile), and in the output we see:
0.0.0.0:32799->2122/tcp
0.0.0.0:32800->2022/tcp
You should connect with ssh -v localhost -p 32799 instead of 32800, since nothing is listening on port 2022 inside the container.
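To double-check which ports sshd actually listens on inside the container, something like the following can be run (a sketch; it assumes netstat is available in the image, and reuses the container ID from the docker ps output above):
docker exec -it d9c9855cfaf0 netstat -lnt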

failed: port is already allocated

I use Docker for running Oracle 11g Express on macOS Sierra 10.12.2
https://github.com/wnameless/docker-oracle-xe-11g
This is my error:
Last login: Sat Jan 7 22:42:11 on ttys000
➜ ~ docker run -d -p 49160:22 -p 49161:1521 wnameless/oracle-xe-11g
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
➜ ~ docker run -d -p 49160:22 -p 49161:1521 wnameless/oracle-xe-11g
043d8caecbb45d6e2e5999b69a2f760c20d53ff3aa2fad78cb1eb70acb058a1f
docker: Error response from daemon: driver failed programming external connectivity on endpoint serene_lalande (08bb0bd9684c0f92db7b736986bf894d3a57a714324405823496d13e175e7491): Error starting userland proxy: Bind for 0.0.0.0:49161 failed: port is already allocated.
➜ ~
I ran some diagnostics:
➜ ~ netstat -anp tcp | grep 49161
tcp4 0 0 192.168.1.2.49161 17.188.166.13.5223 ESTABLISHED
➜ ~
➜ ~ docker --version
Docker version 1.12.5, build 7392c3b
My Diagnostic ID: 20EB9506-CC72-4093-8A15-60E05A841ED1
I don't know why. A few weeks ago it ran successfully. Recently my machine got a new DHCP IP. How can I get the Oracle 11g Express Docker instance to run successfully?
You can't launch
docker run -d -p 49160:22
twice, as this means you want to allocate host port 49160 twice; of course, the second time you get your error message. For the second run, try
docker run -d -p 49161:22
You will need to use a different port instead of 49161. Try a port less than 49152.
You have a pre-existing connection between port 49161 on your computer and port 5223 on a remote Apple server. That port therefore cannot be used for anything else until that connection ceases to exist. Port 5223 is used for Apple's push notifications. As best I can tell, your computer just happened to use the random port 49161 to connect to Apple's server this time. Previously, when that Docker container worked, I would bet port 49161 on your computer was not in use.
Whenever you connect to a remote server, your own computer allocates a random port number for that connection. This time around, your computer allocated 49161 when it connected to Apple's push notifications service. Next time, it could be a completely different number. See https://en.wikipedia.org/wiki/Ephemeral_port
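To see what is currently holding a given port on macOS before retrying, something like the following can be used (a quick sketch; lsof ships with macOS):
sudo lsof -nP -iTCP:49161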

Docker inside Linux VM cannot connect to web application

My setup is the following:
Host: Win10
Guest: Ubuntu 15.10 (clean install, only docker and nodejs are added)
Base image: https://hub.docker.com/r/microsoft/aspnet/ 1.0.0-beta8-coreclr
Inside the guest I have installed Docker and created an image (adding a sample web app, generated with Yeoman, to the base image above). When I run the image in a container, I can successfully ping the container IP (e.g. 172.17.0.2) from the Linux guest.
$sudo docker run -d -p 80:5000 --name web myapp
$sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' "web"
172.17.0.2
$ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.060 ms
1 packets transmitted, 1 received, 0% packet loss, time 999ms
$curl 172.17.0.2:80
curl: (7) Failed to connect to 172.17.0.2 port 80: Connection refused
I can also connect to the container and execute commands like ping. However, from the Linux machine (the guest in VirtualBox, the host for Docker) I cannot access the web app that is hosted inside the container, as seen above. I tried several approaches, like mapping to the host IP addresses, etc., but none of them worked. Does anyone have ideas where to start? Does the issue come from the fact that Docker is installed inside a VirtualBox machine?
Thank you in advance.
Edit: Here are the logs from the container:
Could not open /etc/lsb_release. OS version will default to the empty string.
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Your command tells Docker to essentially proxy requests from port 80 of the Linux guest to port 5000 of the container. So the curl command you tried doesn't work because you're hitting port 80 on the container, while the container itself has a service listening on port 5000.
To connect to the container directly, you would use (on the Linux guest):
curl 172.17.0.2:5000
To access via the published port on the Linux guest (from your host):
curl (Linux guest IP)
Or (from the Linux guest):
curl localhost
Edit: This will also prove to be problematic:
Now listening on: http://localhost:5000
You'll want your app inside the container to bind to all interfaces (0.0.0.0) so it listens on the container's assigned IP. With localhost it won't be accessible.
You might find this example useful:
https://github.com/aspnet/Home/blob/dev/samples/1.0.0-beta8/HelloWeb/project.json
This line specifies that the app binds to all interfaces (using "*") on port 5004:
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004"
You'll need similar configuration.
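For the app in this question, a hypothetical equivalent command entry (adapting the sample above to port 5000, which the docker run command maps from host port 80) would look something like:
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5000"
After rebuilding the image with that change, curl 172.17.0.2:5000 from the Linux guest and curl localhost via the published port should both respond.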
