Some questions about Docker -p and Dockerfile

1: docker run -d -p 3000:3000 image
If I start a server on localhost:3000 inside the container, how can I open it in my machine's browser? What is the IP?
I've tried localhost:3000 and 0.0.0.0:3000.
2: I used docker pull ubuntu and ran a container from it; after updating and deploying the server, I committed it. So now I have ubuntu and a new image.
Next time I run a container using this new image,
the shell scripts still need to be sourced, and the server needs to be started again.
How can I commit the image so that it sources the scripts and deploys itself when I docker run it?
Thanks.

I don't quite understand question 2; can you add more context?
Regarding your question about using -p, you should be able to visit it in your browser at http://localhost:3000/ . However, that assumes a couple of things are true.
First, you used -p 3000:<container-port> - looks good on this point.
Second, the image you have run exposed port 3000 (EXPOSE 3000).
And third, the service running in the container is listening on 0.0.0.0:3000. If it is listening on localhost inside the container, then the published port will not work. Each container has its own localhost, which is usable only inside that container, so the service needs to listen on all IPs inside the container in order for outside connections to reach it.
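A quick way to see the difference, as a minimal sketch using Python's built-in server in the stock python:3 image (the image and port here are just placeholders):
docker run -d -p 3000:3000 python:3 python -m http.server 3000 --bind 0.0.0.0
# reachable from the host at http://localhost:3000/
docker run -d -p 3000:3000 python:3 python -m http.server 3000 --bind 127.0.0.1
# port is published, but connections from the host fail,
# because the server only listens on the container's own localhost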

Related

Is there a way to know which port has been published from within a docker container?

I have a python-gunicorn web server that runs on docker. I want to run multiple servers on the same machine, so I assigned a "random" port to each container like this:
$ docker run -d -p 0:80 image
If I run $ docker ps, I can see the port being used by the container:
0.0.0.0:32771->80/tcp
Now, I want to retrieve this number (32771) from within the container. Is there any way to do this?
Edit:
I want this information because I need to connect to these servers from another machine, and the way it is implemented requires that the server sends its URL via HTTP POST: IP:Port/path
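One common workaround (a sketch, not from this thread) is to mount the Docker socket into the container and ask the daemon for the container's own port mapping. It assumes curl and python are available in the image, and that the container hostname is the default short container ID:
docker run -d -p 0:80 -v /var/run/docker.sock:/var/run/docker.sock image
# then, from inside the container:
curl -s --unix-socket /var/run/docker.sock "http://localhost/containers/$(hostname)/json" \
  | python -c 'import json,sys; print(json.load(sys.stdin)["NetworkSettings"]["Ports"]["80/tcp"][0]["HostPort"])'
Note that mounting the Docker socket gives the container full control over the daemon, so this is only appropriate for trusted images.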

Docker cannot access exposed port inside container

I have a container for which I expose my port to access a service running within the container. I am not exposing my ports outside the container, i.e. to the host (using host network on Mac). After getting inside the container using exec -t and running curl for a POST request, I get the error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE command in my Dockerfile and do not want to expose ports to my host. My service is also up and running inside the container. I also have the property within the config set as
"ExposedPorts": {"19999/tcp": {}}
(obtained through `docker inspect <container id/name>`). Any idea why this is not working? Using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can ensure that I am exposing my port using 19999:19999. Another weird issue is that on disabling my proxies it would run a very lightweight command for my custom service and wouldn't run it again, returning the same error as above. The issue only occurs on my machine and not on others.
Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE that you're using inside the Dockerfile does nothing.
Usually there is no need to change the default port on which an application listens; since each container has its own IP, you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port on which your app would normally be listening (it's hard to guess what you are trying to run).
If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See the "Known limitations, use cases, and workarounds" in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is what you're trying to attempt.
The docker inspect IP address is basically useless, except in one very specific Docker configuration (on a native-Linux host, calling from outside of Docker, on the same host); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured you still need to separately publish the port when you start the container to reach it from outside of Docker space.
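To make the distinction concrete, a minimal sketch (the image name is a placeholder; the port is the one from this question):
# EXPOSE in the Dockerfile is metadata only; nothing is reachable from the host:
docker run -d --name test1 myimage
curl http://localhost:19999   # connection refused on the host
# Publishing the port at run time is what actually opens it up:
docker run -d --name test2 -p 19999:19999 myimage
curl http://localhost:19999   # reaches the service, provided it listens on 0.0.0.0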

docker app serving on https and connecting to external rethinkdb

I'm trying to launch a docker container that is running a tornado app in python 3.
It serves a few API calls and writes data to a rethinkdb service on the system. RethinkDB does not run inside a container.
The system it runs on is ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash, saying the connection to localhost:28015 was refused.
I went researching the problem and realized that docker has its own network and that external connections must be configured prior to launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now, the container is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a container that can talk to my rethinkdb database without putting that DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result is that the container will serve requests coming over https.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it will send back a relevant response.
The container needs to be able to talk on 443 and also on 28015 with the rethinkdb service.
(Since 443 and https involve certificates, I'd appreciate a solution that handles this on regular http with some random port too, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker, from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this so that the container can connect to the rethinkdb database running on the host:
--network="host"
This solution works for me right now, but since it isn't the best one, I won't mark it as the answer for now.
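A less invasive alternative (a sketch, not from this thread; it needs Docker 20.10+ on Linux) keeps normal port publishing and gives the container a stable name for the host:
docker run -it --name "$container_name" -d -h "$host_name" \
  -p 9080:9080 -p 1522:1522 \
  --add-host=host.docker.internal:host-gateway \
  "$image_name"
# the app then connects to host.docker.internal:28015 instead of localhost:28015
In docker-compose this maps to an extra_hosts: entry, so the compose-based workflow is preserved.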

localhost not working docker windows 10

I am using VS2017 docker support. VS created the Dockerfile for me, and when I build the docker-compose file, it creates the container and runs the app on a 172.x.x.x IP address. But I want to run my application on localhost.
I did many things but nothing worked. I followed the Docker docs as a starter, and the building Microsoft sample app guide. The second link works perfectly, but I get HTTP Error 404 when I try the first link's approach.
Any help is appreciated.
Most likely a different application already runs on port 80. You'll have to forward your web site to a different port, e.g.:
docker run -d -p 5000:80 --name myapp myasp
And point your browser to http://localhost:5000.
When you start a container you specify which inner ports will be exposed as ports on the host through the -p option. -p 80:80 exposes the inner port 80 used by web sites to the host's port 80.
Docker won't complain though if another application already listens at port 80, like IIS, another web application or any tool with a web interface that runs on 80 by default.
The solution is to:
Make sure nothing else runs on port 80 or
Forward to a different port.
Forwarding to a different port is a lot easier.
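To check what is already listening on port 80 (a quick sketch; assumes a Windows host, as in the question):
netstat -ano | findstr :80
# the last column is the PID of the process holding the port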
To ensure that you can connect to a port, use the telnet command, e.g.:
telnet localhost 5000
If you get a blank window immediately, it means a server is up and running on this port. If you get a message and a timeout after a while, it means nothing is listening. You can use this both to check for free ports and to ensure you can connect to your container's web app.
PS: I ran into this just a week ago, as I was trying to set up a SQL Server container for tests. I already run one default and two named instances, and docker didn't complain at all when I tried to create the container. It took me a while to realize what was wrong.
In order to access the example posted in the Docker docs, which you pointed out as not working, follow the steps below.
1 - List all your docker containers (running or stopped):
docker ps -a
After you run this command you should see all your docker containers, and if you have followed the docker docs example correctly, a container with the name webserver should be listed there.
2 - Get the IP address where your webserver container is running. To do that, run the following command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
You should now get the IP address on which the webserver container is running; hopefully you are familiar with this step, as it also appears in the building Microsoft sample app example that you linked in the question.
Access the IP address returned by the above command and you should see the desired output.
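The two steps can also be combined into one (a sketch; it assumes the container is named webserver, sits on the default nat network, and that your shell supports command substitution):
curl "http://$(docker inspect -f '{{ .NetworkSettings.Networks.nat.IPAddress }}' webserver)/"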
Answering your first question (accessing the docker container via localhost in Docker for Windows): on a Windows host you cannot access the container with localhost due to a limitation in the default NAT network stack. A more detailed explanation of this issue can be found at this link. It seems the docker documentation is not yet updated, but this issue only exists on Windows hosts.
There is an issue reported for this as well - follow this link to see it.
Hope this helps you out.
EDIT
The solution for this issue seems to be coming in a future Windows release. Until that release comes out, this limitation remains on Windows hosts. Follow this link -> https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/181
For those encountering this issue in 2022: changing localhost to 127.0.0.1 solved the issue for me.
There is another problem too:
You must have the correct order of parameters.
This is WRONG:
docker run container:latest -p 5001:80
This sequence starts the container, but everything after the image name is treated as the container's command, so the -p parameter is ignored and the container gets no port mappings.
This is good:
docker run -p 5001:80 container:latest
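A quick way to verify that the mapping actually took effect (docker port prints a container's published ports):
docker port <container id/name>
# expected output, roughly: 80/tcp -> 0.0.0.0:5001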

If I use EXPOSE $PORT in a Dockerfile, can I un-expose the port when I use `docker run`?

I'm working on a Dockerfile for a web app that will use an nginx-proxy container. It's also got a CLI for doing app domain stuff (creating/modifying db, running cleanup jobs, etc.)
In 99% of cases, when I boot a container, I want to use the webapp. I've got an EXPOSE 3000 in the Dockerfile and that works perfectly well for NGINX-proxy. Nginx-proxy uses docker-gen, which listens for docker's start and stop events and re-builds the NGINX config based on exposed ports.
The problem comes when I want to run a CLI-based container. I don't want to EXPOSE 3000. I want to un-expose port 3000, so NGINX-proxy doesn't change the NGINX configs.
Is this possible from docker run? Reading the docs didn't give any clarity, and trying docker run -p '' didn't work (I got a docker: No port specified: ::<empty>. error).
Honestly, this is a bit of a nitpick, and it's not that big of a deal. I can take EXPOSE 3000 out of the Dockerfile and just do -p 3000 on the command line. I just like having it in the Dockerfile so it's on by default and I just have to turn it off in the few instances when I'll need it.
I also know that I can use a second Dockerfile that inherits from the first.
I'm just curious if it's possible to un-publish a port exposed in the Dockerfile when I do docker run (or possibly in docker-compose).
Even a couple of years later, the situation hasn't changed much. There is no UNEXPOSE, but at least there is a workaround: "docker save" and "docker load" allow editing the metadata of a docker image. To remove at least volumes and ports, I have created a little script that can automate the task; see docker-copyedit.
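Usage looks roughly like this (an assumption based on the project's README, with placeholder image names; check the README for the exact syntax):
./docker-copyedit.py FROM myimage INTO myimage-cli REMOVE PORT 3000
# rewrites the saved image metadata so port 3000 is no longer exposed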
As far as I know, that's not possible right now. The best thing you can do is to use the -P option to map 3000 to some random port, so it won't conflict with the main container instance, e.g.:
docker run -it -P <some-image>
This will result in the following docker ps output:
main_container 0.0.0.0:3000->3000/tcp
run_container 0.0.0.0:32769->3000/tcp
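To find out which random port -P picked for the second container, docker port gives a quick check:
docker port run_container 3000
# 0.0.0.0:32769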
