Run a .war on a Docker container - docker

I'm running a Java web application in a Docker container with this command:
PS C:\Users\Marco\test_workspace> docker run -v test_web_application.war:/usr/local/tomcat/webapps/TestWebApplication.war -it -p 8080:8080 --network "host" -d tomcat
The output confirms that the container is running.
At this point I want to access the container through its IP address from my host, so I'm using docker inspect to identify the IP:
But, as the screenshot shows, I don't see any IP assigned.
Thus, my questions are:
Why didn't the --network "host" option, which should give the container an IP shared with the host, work?
Finally, how can I access my web application from the host?

The --network="host" option isn't supported on Docker Desktop for Windows (more information: https://docs.docker.com/network/host/).
Since you already publish the port with -p 8080:8080, you can access your application at http://localhost:8080.
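As a minimal sketch of a command that should work on Docker Desktop for Windows (assuming test_web_application.war sits in the current directory; note that the original -v with a bare file name creates a named volume rather than a bind mount):
PS C:\Users\Marco\test_workspace> docker run -d -p 8080:8080 -v ${PWD}/test_web_application.war:/usr/local/tomcat/webapps/TestWebApplication.war tomcat
Tomcat deploys the archive under its file name, so the application should then be reachable at http://localhost:8080/TestWebApplication.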

Related

container not accessible when using --network host

I am writing a simple Node.js container to forward requests on localhost to a port; the container exposes port 4433.
docker build . -t myproxy
When I run the container and publish the port, like
docker run --rm -p 4433:4433 myproxy
I am able to access my server through http://localhost:4433 as expected, but if I try to run the container with --network host, i.e.
docker run --rm --net host myproxy
I cannot access the container and get a "site cannot be reached" error.
Why is the container not binding to my host network?
If I provide both options, i.e.
docker run --rm --net host -p 4433:4433 myproxy
then I do get a warning on the console:
WARNING: Published ports are discarded when using host network mode
which means it does recognize that I am trying to use the host network.
OS: macOS
From the Docker docs:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
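Since the host driver is Linux-only, the working approach on macOS is the one already shown in the question: publish the port and reach the service via localhost, for example:
docker run --rm -p 4433:4433 myproxy
curl http://localhost:4433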

docker local host url not opening

I installed Docker, and with the TensorFlow image I am unable to open the Jupyter notebook in the browser.
What am I missing?
command used: docker run -it -v /home/$USER_NAME/tf_files:/tf_files gcr.io/tensorflow/tensorflow
where "gcr.io/tensorflow/tensorflow" is the tensorflow image and "/home/surya" is $HOME.
(Screenshots: output in terminal, output in browser.)
PS: The Docker installation is correct, as "docker run hello-world" gives the expected message.
You forgot to bind a port. The official TensorFlow documentation provides the exposed port with this command:
docker run -it -p 8888:8888 -v /home/surya/tf_files:/tf_files gcr.io/tensorflow/tensorflow
where -p 8888:8888 means: map port 8888 of my local machine to the port of the service in the container, which is also 8888. Then you can access the service at http://localhost:8888.
Why do I have to map a port?
Your container shows the following:
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=1b3ec72ff1ed67f77a09beaee1dc4b9ad4e7aee26401b6f0
which means that you have to connect to the process running inside the container on port 8888. To make that container port accessible from your local machine, you have to add -p 8888:8888 to your command. Then opening the URL given by the container lets you reach the container's notebook from your local browser.
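To confirm the mapping is in place, you can check the PORTS column of docker ps (a sketch; the exact output varies):
docker ps --format "table {{.Names}}\t{{.Ports}}"
With -p 8888:8888 it should show an entry like 0.0.0.0:8888->8888/tcp.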

How can I run a docker container on localhost over the default IP?

I'm following a tutorial on how to start a basic nginx server in a Docker container. However, the example's nginx container runs on localhost (0.0.0.0), as shown here:
Meanwhile, when I run it, for some reason it runs on the IP 10.0.75.2:
Is there any particular reason why this is happening? And is there any way to get it to run on localhost like in the example?
Edit: I tried using --net=host but it made no difference:
The default network is bridged. The 0.0.0.0:49166->443 shows a port mapping of exposed ports in the container to high-numbered ports on your host because of the -P option. You can manually map specific ports by changing that flag to something like -p 8080:80 -p 443:443 to have ports 8080 and 443 on your host map into the container.
You can also change the default network to be your host network as you've requested. This removes some of the isolation and protections provided by the container, and limits your ability to configure integrations between containers, which is why it is not the default option. That syntax would be:
docker run --name nginx1 --net=host -d nginx
Edit: from your comments and a reread I see you're also asking where the 10.0.75.2 IP address comes from. This is based on how you launch the Docker daemon. That IP binding is assigned when you pass the --ip flag to the daemon (documentation here). If you're running Docker in a VM with docker-machine, I'd expect this to be the IP of your VM.
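If Docker is indeed running inside a docker-machine VM, a quick way to check (assuming the usual machine name, default) is:
docker-machine ip default
If that prints 10.0.75.2, published ports are reachable at the VM's address rather than on localhost.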
A good workaround is to publish the port with the -p flag (short for --publish):
docker run -d -p 3000:80 --name <container_name> nginx:<version_tag>
The nginx welcome page is then reachable at http://localhost:3000.

Docker container doesn't expose ports when --net=host is mentioned in the docker run command

I have a CentOS docker container on a CentOS docker host. When I use this command to run the docker image, docker run -d --net=host -p 8777:8777 ceilometer:1.x, the docker container gets the host's IP but doesn't have ports assigned to it.
If I run the same command without --net=host, docker run -d -p 8777:8777 ceilometer:1.x, Docker exposes the ports but with a different IP. The Docker version is 1.10.1. I want the container to have the same IP as the host with the ports exposed. I have also added the instruction EXPOSE 8777 to the Dockerfile, but it has no effect when --net=host is used in the docker run command.
I was confused by this answer. Apparently my docker image should be reachable on port 8080. But it wasn't. Then I read
https://docs.docker.com/network/host/
To quote
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
That's rather annoying as I'm on a Mac. The docker command should report an error rather than let me think it was meant to work.
Discussion on why it does not report an error
https://github.com/docker/for-mac/issues/2716
Not sure I'm convinced.
The docker version is 1.10.1. I want the docker container to have the same IP as the host with ports exposed.
When you use --net=host it tells the container to use the host's networking stack. So you can't expose ports to the host, because it is the host (as far as the network stack is concerned).
docker inspect might not show the exposed ports, but if you have an application listening on a port, it will be available as if it were running on the host.
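As a quick check on a Linux host (a sketch, assuming the ceilometer service inside the container really listens on 8777):
docker run -d --net=host ceilometer:1.x
curl http://localhost:8777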
On Linux, I have always used --net=host when myapp needed to connect to another docker container hosting PostgreSQL.
myapp reads an environment variable DATABASE in this example.
As Shane mentions, this does not work on macOS or Windows...
docker run -d -p 127.0.0.1:5432:5432 postgres:latest
So my app can't connect to my other docker container:
docker run -e DATABASE=127.0.0.1:5432 --net=host myapp
To work around this, you can use host.docker.internal instead of 127.0.0.1 to resolve your host's IP address.
Therefore, this works:
docker run -e DATABASE=host.docker.internal:5432 -d myapp
Hope this saves someone time!
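On a Linux host, where host.docker.internal is not defined by default, a similar effect can be achieved on Docker 20.10+ by adding the mapping yourself with --add-host; a sketch:
docker run -e DATABASE=host.docker.internal:5432 --add-host=host.docker.internal:host-gateway -d myapp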

Docker in Docker: Port Mapping

I have found a similar thread, but failed to get it to work. So, the use case is:
I start a container on my Linux host
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
Once in that container, I want to start another one with the GAE SDK devserver running.
I also need to access the running app from the host system's browser.
When I start a container in the container as
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I can start the app and curl 0.0.0.0:8080 from inside both containers (when using another mapping, 8080:8080 for example), but I cannot preview the app from the host system, since localhost:8080 is mapped to port 2375 in the first container, and that port cannot be used when launching the second container.
I'm able to do that using the image jpetazzo/dind. The test I have done, and which worked (as an example):
From my host machine I run the container that has Docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I pulled the nginx image and ran it with:
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page at http://localhost:18080.
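To make the port chain explicit: host port 18080 forwards to port 8080 of the outer dind container, which in turn maps to port 80 of the inner nginx container. A quick check from the host:
curl http://localhost:18080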
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to attaching logs).
