Q: Docker: How to make several port ranges available to the outside

I just started playing around with Docker and was able to set up a Docker image using Ubuntu 14.04 / LXDE / VNC, which works fine since I can connect to the VNC server from outside.
Now I am trying to understand Docker networking, but it seems I am completely lost. Since I already had to forward the port for VNC, does that mean no further ports can be forwarded?
Assuming I have an application running under Wine which requires several port ranges, how do I achieve that? Does it mean that I would need to create another container running the Wine application on top of the base image?

You can specify the -p option as often as you want, and each -p can publish a single port or a contiguous port range,
e.g. -p 8080-8085:8080-8085 -p 1234:1234 -p 9000-9005:9000-9005
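As a minimal sketch (the image name and the VNC port are assumptions, not from the question), a full command could look like this:

docker run -d \
  -p 5900:5900 \
  -p 8080-8085:8080-8085 \
  -p 9000-9005:9000-9005 \
  my-wine-app:latest

There is no need for a separate container just to publish more ports; all the ranges the Wine application needs can go on the same docker run.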

Related

Ignite Docker cluster without host network mode

I am trying to set up a two-node Apache Ignite cluster (two server nodes), based on Docker containers hosted on two different hosts.
After several tries, the only way I found to get the nodes communicating was to use "--net=host".
But we are using user namespaces on these hosts, so it's not a solution I can deploy.
Is there a workaround? I have read about BasicAddressResolver, but with no results so far; maybe it's not the right approach.
And overlay networks seem a bit cumbersome for our needs.
Thanks for any help, maybe just a working config file I could adapt.
Regards
BAD
docker run -v "/tmp/apache_ignite_node.xml:/opt/ignite/apache-ignite/config/default-config.xml" -p "10800:10800" -p "11211:11211" -p "47100-47199:47100-47199" -p "47500-47599:47500-47599" -p "49112:49112" apacheignite/ignite:latest
WORKS
docker run --net=host -v "/tmp/apache_ignite_node.xml:/opt/ignite/apache-ignite/config/default-config.xml" -p "10800:10800" -p "11211:11211" -p "47100-47199:47100-47199" -p "47500-47599:47500-47599" -p "49112:49112" apacheignite/ignite:latest
(of course, I could drop the -p options in the working command, since published ports are ignored when --net=host is used)
"For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network." (source)
"Routing at the OS level" I assume means --net=host. So, according to Docker, the answer is an overlay network. It looks like other options are available, but those would need extra software.
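For reference, here is a minimal sketch of that overlay approach (the network name is a placeholder, and it assumes the two hosts are joined into a swarm, which is what lets the overlay driver span daemons):

# on host 1: initialize the swarm and create an attachable overlay network
docker swarm init
docker network create --driver overlay --attachable ignite-net

# on host 2: join the swarm with the token printed by "docker swarm init"
docker swarm join --token <token> <host1-ip>:2377

# on each host: attach the Ignite container to the overlay network instead of --net=host
docker run --network ignite-net -v "/tmp/apache_ignite_node.xml:/opt/ignite/apache-ignite/config/default-config.xml" apacheignite/ignite:latest

The --attachable flag is what allows standalone docker run containers, rather than only swarm services, to join the network.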

Docker cannot connect locally after swarm is set up

I am following part 3 of the Docker tutorial. Because my computer runs Windows, I use Docker Toolbox. Before part 3, I used the command docker run -p 8080:80 test, and I could connect to 192.168.99.100:8080 successfully.
But when I create a swarm and deploy the docker-compose.yml, the deployment itself reports success:
ID            NAME           MODE        REPLICAS  IMAGE                 PORTS
uskmy4zkflhf  testswarm_web  replicated  5/5       ***/get-started:test  *:6666->80/tcp
However, when I use 192.168.99.100:6666 to connect, the page cannot be displayed, even though ping shows that 192.168.99.100 is reachable.
Even after uninstalling and reinstalling the Toolbox and deploying only once, so that no other container could be occupying the port, it still doesn't work.
What's the problem here?
The port publishing mechanism works differently when you use standalone or swarm mode. If you're using a compose file in swarm mode, you should not be using docker-compose up but docker stack deploy instead.
I would suggest taking it step by step: instead of using the stack deploy or compose approach, first learn to use the docker service create command, one service at a time.
Try docker service create --name proxy --publish 8080:80 nginx and see if you can reach NGINX at 192.168.99.100:8080. Once you're there, try scaling it with docker service update --replicas=5 proxy.
Once you feel comfortable with this, you should be able to tell what's going on with more precision.
If you want to delve deeper into how port publishing works in swarm mode, I suggest this docs article.
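When you are ready to go back to the compose file, the swarm-mode equivalent of docker-compose up looks like this (the stack name is a placeholder):

docker stack deploy -c docker-compose.yml mystack
docker stack services mystack

The second command shows the same MODE, REPLICAS and PORTS columns as the output quoted in the question, so you can verify the service is actually publishing the port you expect.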

docker app serving on https and connecting to external rethinkdb

I'm trying to launch a docker container that is running a tornado app in python 3.
It serves a few API calls and writes data to a RethinkDB service on the system. RethinkDB does not run inside a container.
The system it runs on is ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash, saying the connection to localhost:28015 was refused.
I went researching the problem and realized that docker has its own network and that external connections must be configured prior to launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now, the container is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a Docker container that can talk to my RethinkDB database without putting that DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result is that the container will serve requests coming over HTTPS.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it will send back a relevant response.
The container needs to be able to talk on 443 and also on 28015 with the RethinkDB service.
(Since 443 and HTTPS involve the use of certificates, I'd appreciate a solution that handles this on regular HTTP with some random port too, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this so that the container can connect to the RethinkDB service running on the host:
--network="host"
Since this solution works for me right now but isn't the best one, I won't mark this as the answer for now.
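As a sketch of one alternative that keeps port publishing and docker-compose usable (this is not from the original answer; host-gateway requires Docker 20.10+ on Linux, and the port comes from the question):

docker run -d -p 9080:9080 \
  --add-host=host.docker.internal:host-gateway \
  "$image_name"

With this, the app would connect to host.docker.internal:28015 instead of localhost:28015, reaching the RethinkDB service on the host without sharing the host's network namespace.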

localhost not working docker windows 10

I am using VS2017 Docker support. VS created the Dockerfile for me, and when I build the docker-compose file, it creates the container and runs the app on a 172.x.x.x IP address. But I want to run my application on localhost.
I did many things but nothing worked. I followed the Docker docs as a starter and tried building the Microsoft sample app. The second link's approach works perfectly, but I get HTTP Error 404 when I try the first link's approach.
Any help is appreciated.
Most likely a different application is already running at port 80. You'll have to forward your web site to a different port, e.g.:
docker run -d -p 5000:80 --name myapp myasp
And point your browser to http://localhost:5000.
When you start a container you specify which inner ports will be exposed as ports on the host through the -p option. -p 80:80 exposes the inner port 80 used by web sites to the host's port 80.
Docker won't complain though if another application already listens at port 80, like IIS, another web application or any tool with a web interface that runs on 80 by default.
The solution is to:
Make sure nothing else runs on port 80 or
Forward to a different port.
Forwarding to a different port is a lot easier.
To ensure that you can connect to a port, use the telnet command, e.g.:
telnet localhost 5000
If you get a blank window immediately, it means a server is up and running on this port. If you get a message and a timeout after a while, it means nothing is listening there. You can use this both to check for free ports and to ensure you can connect to your container's web app.
PS: I ran into this just a week ago, as I was trying to set up a SQL Server container for tests. I already run one default and two named instances, and Docker didn't complain at all when I tried to create the container. Took me a while to realize what was wrong.
In order to access the example posted on Docker Docs, which you pointed out as not working, follow the steps below.
1 - List all your Docker containers (running or stopped):
docker ps -a
After you run this command you should be able to view all your Docker containers, and you should see a container with the name webserver listed there, if you have followed the Docker docs example correctly.
2 - Get the IP address where your webserver container is running. To do that, run the following command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
You should now get the IP address at which the webserver container is running; hopefully you are familiar with this step, as it was also shown in the 'building the Microsoft sample app' example that you attached to the question.
Access the IP address you get from running the above command, and you should see the desired output.
Answering your first question (accessing a Docker container via localhost in Docker for Windows): on a Windows host you cannot access the container with localhost, due to a limitation in the default NAT network stack. A more detailed explanation of this issue can be obtained by visiting this link. It seems the Docker documentation is not yet updated, but this issue only exists on Windows hosts.
There is an issue reported for this as well - follow this link to see it.
Hope this helps you out.
EDIT
The solution for this issue seems to be coming in a future Windows release. Until that release comes out, this limitation remains on Windows hosts. Follow this link -> https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/181
For those encountering this issue in 2022: changing localhost to 127.0.0.1 solved the issue for me.
There is another problem too:
you must pass the parameters in the correct order.
This is WRONG:
docker run container:latest -p 5001:80
This sequence starts the container, but the -p parameter is ignored (everything after the image name is treated as the command to run inside the container), so the container has no port mappings.
This is correct:
docker run -p 5001:80 container:latest

no route to host between 2 docker containers in same host

I have two Docker containers which run on the same host (a CentOS 6 server):
container 1 >> my web application (ports mapped to some random ports of the host)
container 2 >> Python Selenium test scripts (runs headless Firefox)
My test cases fail, saying "problem loading page".
Basically, the issue is that the second container, or any other container residing on the same host, is not able to access my web application.
But my web app is accessible to the outside world.
I linked both containers and still I am facing the problem.
I tried replicating the same setup on my laptop (Ubuntu) and it's working fine!
Any help appreciated!
Thanks in advance
I think order matters when linking containers. You should start container1 (the web application) first, and then link container2 to webapp.
You need to change your Selenium scripts to use the Docker link id or alias as the hostname.
For example if you did:
$ sudo docker run -d --name webapp my/webapp
$ sudo docker run -d -P --name selenium --link webapp:webapp my/selenium
then your selenium scripts should point to http://webapp/
I had this problem on Fedora (22), for some containers (not all). Upon inspection, it turned out there is a special DOCKER chain in iptables that can make some connections fail. Appending an accept rule to that chain made things work:
sudo iptables -A DOCKER -p tcp -j ACCEPT
(While searching for the problem before hitting this question, I saw suggestions that this also occurs on CentOS and RHEL.)
Yes, the order of container launch does matter, but I am launching my web application container through Jenkins.
Jenkins is configured in container 2.
So I cannot launch my web application (container 1) manually.
Is there any other solution for this, something like a bidirectional linkage?
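One option worth sketching here (not from the original thread; names are placeholders): a user-defined bridge network gives containers name resolution in both directions, with no legacy --link and no dependence on start order:

docker network create testnet
docker run -d --network testnet --name webapp my/webapp
docker run -d --network testnet --name selenium my/selenium

Either container can then reach the other by name, e.g. the Selenium scripts can keep pointing at http://webapp/.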
