Docker DB connection refused - database-connection

I have Fedora 20 running in my container. I can start the container and point it to a specific port through Docker, and the WebSphere Liberty page loads just fine (that's what I have in it). However, in the same container I have my DB connection string. I can ping the database fine, but in the logs, when the wlp service starts, it throws a DB connection exception: can't connect. Maybe I need to expose the port the DB is running on? Not sure, or maybe I am doing something completely wrong? I just started with Docker and don't have much experience with it... any help would be great! Thanks!

When you run a container, Docker has two methods of assigning ports on the Docker host:
Docker can randomly assign a high port from the range 49000 to 49900 on the Docker host that maps to port 80 on the container. This opens a random port on the Docker host that connects to port 80 on the Docker container.
You can specify a specific port on the Docker host that maps to port 80 on the container.
The -p flag manages which network ports Docker exposes at runtime, and the sudo docker ps -l command lets you view the resulting port mappings.
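As a quick illustration of both styles (nginx here is just a stand-in image, not the one from the question; -P is the flag that triggers the random high-port assignment described above):
# publish container port 80 on a random high host port
$ sudo docker run -d -P nginx
# publish container port 80 on a specific host port, here 8080
$ sudo docker run -d -p 8080:80 nginx
# view the resulting port mappings
$ sudo docker ps -l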

Related

How do I "point" container to port 8080?

I'm learning the ins and outs of Docker from a book and I'm asked to:
"Open a web browser and navigate to the DNS name or IP address of the Docker host
that you're running the container from, and point it to port 8080."
I don't understand what I'm asked to do. I've got a container with an image running on my machine,
but I don't understand how to get the IP address of the Docker host. I can run docker-machine ip [instance], but I've got no instance running in the cloud and the container is up locally.
Can anyone explain to me what I'm asked to do?
0c7d84a472ed test:latest "node ./app.js" 15 minutes ago Up 15 minutes 8080/tcp, 0.0.0.0:8080->80/tcp web1
You need to map the port when running the container by adding the flag -p <hostport>:<serviceport_inside_container>.
Since the container is running locally on your machine (desktop/laptop), open a web browser to the URL http://localhost:8080 or https://localhost:8080.
Container port 80 is mapped to port 8080 on the node (the Docker host, i.e. the host machine that is running the container). In your example the Docker host is localhost itself.
Hence, http://localhost:8080 should work.
The phrase "point it to" in this context simply means to open the web page hosted at that address.
"How do I get the IP address of the Docker host?" Use the server's IP address together with the host port mapped for the specific app or service. See the example below.
e.g. docker run -d --name app1 -p 8080:8080 tomcat (reachable at ServerIPaddress:8080)
docker run -d --name app2 -p 8081:8080 tomcat (reachable at ServerIPaddress:8081)
Note: make sure you have opened the port in the security group.
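A quick check against those two containers (192.0.2.10 is just a placeholder for your server's address):
$ curl http://192.0.2.10:8080    # app1
$ curl http://192.0.2.10:8081    # app2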

Redirect same port of two Docker containers on different ports

I need to run a Java app in several Docker containers in order to isolate their execution.
This app listens on port 12345, and I run my Docker container with "-p 12345:5000" to redirect port 12345 (from the Docker container) to port 5000 of my host. It works fine.
But when I run another Docker container with "-p 12345:50001", I get the error "Bind for 0.0.0.0:12345 failed: port is already allocated."
I don't understand why. Thank you :)
You've mixed up your host and container ports!
The host port comes first and must be unique. The container port comes second. You probably want something like this, if your Java apps both run on the same port in the container:
"-p 12345:50000"
"-p 12346:50000"
Or this if they really expose different ports in the container:
"-p 12345:50000"
"-p 12346:50001"

What does --net=host option in Docker command really do?

I'm a bit of a beginner with Docker. I couldn't find any clear, in-depth description of what this option to the docker run command does, and I'm a bit confused about it.
Can we use it to access the applications running in Docker containers without specifying a port? As an example, if I run a webapp deployed via a Docker image on port 8080 by using the option -p 8080:8080 in the docker run command, I know I will have to access it on port 8080 at the Docker container's IP, /theWebAppName. But I cannot really work out how the --net=host option behaves.
After the Docker installation you have three networks by default:
docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
f3be8b1ef7ce        bridge              bridge              local
fbff927877c1        host                host                local
023bb5940080        none                null                local
I'm trying to keep this simple. If you start a container, by default it will be created inside the bridge (docker0) network.
$ docker run -d jenkins
1498e581cdba jenkins "/bin/tini -- /usr..." 3 minutes ago Up 3 minutes 8080/tcp, 50000/tcp friendly_bell
In the Dockerfile of jenkins, ports 8080 and 50000 are exposed. Those ports are opened for the container on its bridge network, so everything inside that bridge network can access the container on ports 8080 and 50000. Everything in the bridge network sits in the private subnet 172.17.0.0/16. If you want to access the container from the outside, you have to map the ports with -p 8080:8080. This maps the container's port to a port on your real server (the host network), so accessing your server on port 8080 routes to your bridge network on port 8080.
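For example, to make that same jenkins container reachable from outside the host (a sketch; the second -p is only needed if you also want port 50000 reachable):
$ docker run -d -p 8080:8080 -p 50000:50000 jenkins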
Now you also have the host network, which does not containerize the container's networking. If you start a container on the host network, it will look like this (it's the first one):
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                 NAMES
1efd834949b2        jenkins             "/bin/tini -- /usr..."   6 minutes ago       Up 6 minutes                              eloquent_panini
1498e581cdba        jenkins             "/bin/tini -- /usr..."   10 minutes ago      Up 10 minutes       8080/tcp, 50000/tcp   friendly_bell
The difference is in the ports column. Your container is now inside your host network, so if you open port 8080 on your host, you will reach the container immediately.
$ sudo iptables -I INPUT 5 -p tcp -m tcp --dport 8080 -j ACCEPT
I've opened port 8080 in my firewall, and when I now access my server on port 8080 I'm accessing my Jenkins.
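A sketch of starting the same image on the host network (no -p mapping needed, because the container shares the host's ports directly):
$ docker run -d --net=host jenkins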
The --net=host option is used to make the programs inside the Docker container look like they are running on the host itself, from the perspective of the network. It allows the container greater network access than it can normally get.
Normally you have to forward ports from the host machine into a container, but when the containers share the host's network, any network activity happens directly on the host machine - just as it would if the program was running locally on the host instead of inside a container.
While this does mean you no longer have to expose ports and map them to container ports, it means you have to edit your Dockerfiles to adjust the ports each container listens on, to avoid conflicts as you can't have two containers operating on the same host port. However, the real reason for this option is for running apps that need network access that is difficult to forward through to a container at the port level.
For example, if you want to run a DHCP server then you need to be able to listen to broadcast traffic on the network, and extract the MAC address from the packet. This information is lost during the port forwarding process, so the only way to run a DHCP server inside Docker is to run the container as --net=host.
Generally speaking, --net=host is only needed when you are running programs with very specific, unusual network needs.
Lastly, from a security perspective, Docker containers can listen on many ports, even though they only advertise (expose) a single port. Normally this is fine as you only forward the single expected port, however if you use --net=host then you'll get all the container's ports listening on the host, even those that aren't listed in the Dockerfile. This means you will need to check the container closely (especially if it's not yours, e.g. an official one provided by a software project) to make sure you don't inadvertently expose extra services on the machine.
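One way to sanity-check that (a sketch, with nginx standing in for any image you want to audit): run it with host networking and then list what is actually listening on the host.
$ docker run -d --net=host nginx
$ sudo ss -tlnp    # every port the container opened now shows up directly on the host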
Remember one point: the host networking driver only works on Linux hosts; it is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
You can also create your own network with --net="anyname".
This is done to isolate the services in different containers from each other.
Suppose the same service is running in different containers but the port mapping stays the same: the first container starts fine, but the same service in the second container will fail.
To avoid this, either change the port mappings or create a network, as in the sketch below.
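A sketch of that (mynet and the tomcat image are just examples): both containers keep their internal port 8080 and reach each other by name over the user-defined network, without competing for a host port.
$ docker network create mynet
$ docker run -d --net=mynet --name app1 tomcat
$ docker run -d --net=mynet --name app2 tomcat
# app1 can now reach app2 at http://app2:8080 inside mynet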

Run docker with all ports open

I want to start Docker, but the VNC ports keep changing every time I start the Docker container. So I was wondering: is there any way to start a Docker image with ALL ports open?
Use -p <external port>:<internal port> (e.g. -p 80:80) if you want to map port 80 in the container to port 80 on the host OS. See the Docker documentation.
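A sketch of the two options (vncimage is a placeholder, and 5900 is the conventional VNC port): pin the port explicitly with -p, or use -P, which publishes every port the image EXPOSEs, though on random high host ports rather than fixed ones.
$ docker run -d -p 5900:5900 vncimage    # fixed host port for VNC
$ docker run -d -P vncimage              # publish all EXPOSEd ports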

How to make a container visible to the outside network, and handle IP addresses in production

I have:
a Windows server on bare metal with Hyper-V
Ubuntu server running in Hyper-V
a Docker container with an NGINX web application running in Ubuntu server
Every time I run a Docker image it gets a new IP address on the docker0 network interface. For production, I don't know how to make the Docker container visible to the external network. I also don't know how to handle the fact that the IP address changes every time the image is run.
What's the correct way to:
make a Docker container visible to the external network?
handle Docker container IP addresses in a repeatable way in production?
When you run your Docker container with docker run, you should use the -p switch to forward ports, for example:
docker run -p 80:80 nginx
This would route port 80 from the Ubuntu server to port 80 within the Nginx container.
You should check the Docker documentation on this at https://docs.docker.com/reference/run/#expose-incoming-ports.
When you have multiple containers and links, you should use EXPOSE in the Dockerfile as documented here: https://docs.docker.com/reference/builder/#expose.
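A minimal sketch of how the two pieces fit together (the Dockerfile lines are illustrative, not the asker's actual image): EXPOSE documents the container port, and -p publishes it on a fixed host port, so the outside network always reaches the same host port no matter what internal IP the container gets.
# Dockerfile
FROM nginx
EXPOSE 80
# build, then publish the documented port on the host
$ docker build -t mynginx .
$ docker run -d -p 80:80 mynginx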

Resources