I was running the container as shown below. Port 5005 was exposed for IntelliJ's remote debugging.
docker run -d -p 8080:8080 -p 5005:5005 --network=my_network --env JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,address=*:5005,server=y,suspend=n" container_name:latest
For certain reasons I now have to run the container on the host network, which looks like this:
docker run -d --network=host --env JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,address=*:5005,server=y,suspend=n" container_name:latest
Since port mapping is not allowed on the host network, how can I expose port 5005 for remote debugging?
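A minimal sketch of what this means in practice, assuming a Linux host: with --network=host the container shares the host's network namespace, so the JDWP agent's port is opened directly on the host without any -p mapping.
# Verify from the host shell that the debug agent is listening:
ss -lnt | grep 5005
# IntelliJ can then attach to localhost:5005 as a remote JVM debug target.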
Related
I'm learning Docker and testing running containers. It works fine only when I run a container listening on port 80.
Example:
Works OK:
docker run -d --name fastapicontainer_4 -p 8090:80 fastapitest
docker run -d --name fastapicontainer_4 -p 8050:80 fastapitest
Doesn't work:
docker run -d --name fastapicontainer_4 -p 8050:8080 fastapitest
When I change the port the program listens on in the container to something other than 80, the page doesn't work. Does anyone know if it's possible to use a port other than 80, and how can I do it? I'm using FastAPI.
Thanks,
Guillermo
The syntax of the -p argument is <host port>:<container port>. You can make the host port be anything you want, and Docker will arrange for it to redirect to the container port, but you cannot set the container port to an arbitrary value. There needs to be a service in the container listening on that port.
So if you have a web server in the container running on port 80, then the <container port> part of the -p option must always be 80, unless you change the web server configuration to listen on a different port.
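If you are not sure which port the service actually listens on inside the container, one hedged way to check (borrowing the nicolaka/netshoot approach that appears further down, and assuming the container is named fastapicontainer_4) is:
docker run --rm --net container:fastapicontainer_4 nicolaka/netshoot ss -lnt
# Look for a LISTEN entry such as 0.0.0.0:80 (or whatever port the app uses);
# that is the value to put on the container side of -p.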
What you are doing:
docker run -d --name fastapicontainer_4 -p 8050:8080 fastapitest
Explanation: this forwards host port 8050 to container port 8080. If your FastAPI service is not listening on port 8080, the connection will fail.
Host 8050 -> Container 8080
Correct way of doing it:
docker run -d --name fastapicontainer_4 -p 8080:80 fastapitest
Explanation: this forwards host port 8080 to container port 80.
Host 8080 -> Container 80
Note: Docker doesn't validate the connection when you publish a port; it just opens the gate so you can do whatever you want with that open port. Even if your service is not listening on that port, Docker doesn't care.
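As a quick sanity check of the working mapping above, assuming the app serves HTTP on container port 80 and was started with -p 8080:80:
curl -i http://localhost:8080/
# A normal HTTP response means traffic reaches the FastAPI app; a reset or
# empty reply usually means nothing is listening on the container-side port.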
You need to specify the custom port you want FastAPI to run on.
e.g.
uvicorn.run(app, host="0.0.0.0", port=8050)
Now if you map port 8050 (or any other port) on the host to 8050 in the container, it should work:
docker run -d --name fastapicontainer_4 -p 8050:8050 fastapitest
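One way to double-check that the app really picked up the new port (container name as above) is to look at the container logs, where uvicorn reports the address it bound:
docker logs fastapicontainer_4
# Expect a line along the lines of: Uvicorn running on http://0.0.0.0:8050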
I am trying to run a Jenkins container. I used "docker run --restart always --name myjenkins -p 8080:80 jenkins" but cannot access Jenkins at http://localhost:8080 in the browser. If I use docker run --restart always --name myjenkins -p 8080:8080 jenkins, I can access the Jenkins URL.
Thanks in advance
Without Docker
Each application must use a different port.
You can access your applications directly through their ports (if they are available, of course):
APP_A : 192.168.4.5:8080
APP_B : 10.10.10.15:8081
APP_C : www.app.com:8082
With Docker
Applications can use any port, because each container "is a different world".
You cannot access your Docker applications using their internal ports:
APP_A : 192.168.4.5:8080
APP_B : 10.10.10.15:8080
APP_C : www.app.com:8080
This is because, for instance, port 8080 of APP_B is only visible inside the APP_B container. Nobody can access these applications.
In order to access your Docker applications, you must explicitly establish a relationship between:
Linux host ports <-> container ports.
To do that you can use the -p parameter:
docker run -d -p 8080:8080 APP_A ...
docker run -d -p 8081:8080 APP_B ...
docker run -d -p 8082:8080 APP_C ...
After this you can access your Docker applications using their new ports:
APP_A : 192.168.4.5:8080
APP_B : 10.10.10.15:8081
APP_C : www.app.com:8082
Also, a common error when docker-compose and Docker networks are used is to use localhost instead of an IP when one Docker app needs to connect to another. As you can see, you need to use the IP or domain plus the external port instead of localhost:8080, as in the sketch below.
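A rough sketch of both cases, using the APP_B placeholder from above and a made-up network name app_net (nicolaka/netshoot is used only as a convenient image that ships curl):
# Container-to-container: join a shared user-defined network and use the
# container name plus the internal port.
docker network create app_net
docker run -d --name app_b --net app_net -p 8081:8080 APP_B
docker run --rm --net app_net nicolaka/netshoot curl -s http://app_b:8080/
# From outside Docker: use the host's IP (or domain) plus the published port.
curl -s http://10.10.10.15:8081/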
what is the difference between publishing 8080:80 and 8080:8080 in a docker run?
With 8080:80 you expect your application to use (or start on) internal port 80 inside the container.
With 8080:8080 you expect your application to use (or start on) internal port 8080 inside the container.
You just need to find out which internal container port your Jenkins uses and put it in docker run -p ...
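For the official Jenkins image specifically, the web UI listens on 8080 inside the container, which is why the second command in the question works; a hedged example (the extra 50000 mapping for inbound agents is optional):
docker run -d --restart always --name myjenkins -p 8080:8080 -p 50000:50000 jenkins
# The host-side port can be anything free (e.g. -p 9090:8080), but the
# container-side port must stay 8080, because that is where Jenkins listens.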
8080:80 means that in the container you are using port 80 and forwarding that port to the host machine's port 8080. So in the first scenario you are running Jenkins on port 80 inside your container, whereas in the second scenario you are running Jenkins on port 8080 inside the container and exposing it on the same port on the host machine.
For example, if I am running MySQL in a container I may use 8080:3306, so MySQL would be running on port 3306 but exposed on port 8080 of the host machine. If I instead chose 8080:80 for MySQL, it would not work, because MySQL binds itself to port 3306, not port 80. The same applies to Jenkins in your case.
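Spelled out for the MySQL case (container name and password are illustrative; the official mysql image listens on 3306):
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret -p 8080:3306 mysql
# Clients on the host connect to host port 8080, which is forwarded to 3306
# inside the container (requires a mysql client on the host):
mysql -h 127.0.0.1 -P 8080 -u root -p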
When you say 8080:80, it means any request coming in on host port 8080 will be forwarded to the service running on port 80 inside your Docker container.
Similarly, 8080:8080 means any request coming in on port 8080 will be forwarded to the service running on port 8080 inside your container.
You can also think of it as:
Port for the outside world : actual port of the service in the container
Hope this helps
The syntax looks like the following; see the Docker documentation for more details about the -p flag.
docker run -p [ip-on-host:]port-on-host:port-in-container image-name
In your case, -p 8080:80 means routing all traffic on host port 8080 to port 80 in the container. If you check the port status on the host with netstat -lntp | grep 8080, you will see a docker-proxy process listening on port 8080 on the host machine. It manages all traffic routing between port 8080 on the host and port 80 in the container.
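For example, on the host (assuming the container was started with -p 8080:80; sudo may be needed to see process names):
sudo netstat -lntp | grep 8080
# Expect docker-proxy as the owning process for host port 8080.
docker port myjenkins
# Prints the active mappings, e.g. 80/tcp -> 0.0.0.0:8080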
At my workplace Docker is running behind a firewall; only the port that is meant to serve the web page is exempted by a rule.
The container starts but the website does not open on that port.
If I host the website from the machine running the container using python -m SimpleHTTPServer, it works.
docker container run --restart=always -p 8081: 8082 -it vue-js-app: latest
From the Docker documentation:
Publish or expose port (-p, --expose)
$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of
the host machine. You can also specify udp and sctp ports. The Docker
User Guide explains in detail how to manipulate ports in Docker.
$ docker run --expose 80 ubuntu bash
This exposes port 80 of the container without publishing the port to
the host system’s interfaces.
And, from the Docker User Guide:
You also saw how you can bind a container’s ports to a specific port
using the -p flag. Here port 80 of the host is mapped to port 5000 of
the container:
$ docker run -d -p 80:5000 training/webapp python app.py
So, as an example of how to expose the ports you can use:
docker container run --restart always -p 8081:8082 -it vue-js-app:latest
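To confirm what actually got published, a hedged variant of the same command with -d and a --name added here for convenience:
docker container run -d --restart always --name vue-app -p 8081:8082 -it vue-js-app:latest
docker port vue-app
# Should list 8082/tcp -> 0.0.0.0:8081; if the browser still cannot reach it,
# check that the app inside really listens on 8082 and that the firewall
# allows host port 8081.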
I'm having a rather awful issue with running a Redis container. For some reason, even though I have attempted to bind the port and what have you, it won't expose the Redis port it claims to expose (6379). Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
Docker Redis Page (for reference to where I pulled the image from): https://hub.docker.com/_/redis/
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
docker run --name ausbot-ranksync-redis -d redis
docker run --name ausbot-ranksync-redis --expose=6379 -d redis
https://gyazo.com/991eb379f66eaa434ad44c5d92721b55 (The last container I scan is a MariaDB container)
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
Those two should work and make the port available on your host.
Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
You shouldn't be checking the ports directly on the container from outside of docker. If you want to access the container from the host or outside, you publish the port (as done above), and then access the port on the host IP (or 127.0.0.1 on the host in your first example).
For docker networking, you need to run your application listening on all interfaces (not localhost/loopback). The official redis image already does this, and you can verify with:
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot netstat -lnt
or
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot ss -lnt
To access the container from outside of docker, you need to publish the port (docker run -p ... or ports in the docker-compose.yml). Then you connect to the host IP and the published port.
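For instance, once the container is published with -p 6379:6379, a check from the host (assuming redis-cli is installed there) would be:
redis-cli -h 127.0.0.1 -p 6379 ping
# Expected reply: PONG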
To access the container from inside of docker, you create a shared network, run your containers there, and access using docker's DNS and the container port (publish and expose are not needed for this):
docker network create app
docker run --name ausbot-ranksync-redis --net app -d redis
docker run --name redis-cli --rm --net app redis redis-cli -h ausbot-ranksync-redis ping
I am running Docker for Mac. When I run
docker run -d --rm --name nginx -p 80:80 nginx:1.10.3
I can access Nginx on port 80 on my Mac. When I run
docker run -d --rm --name nginx --network host -p 80:80 nginx:1.10.3
I can not.
Is it possible to use both "--network host" and publish a port so that it is reachable from my Mac?
Alternatively, can I access Nginx from my Mac via the IP of the HyperKit VM?
Without the --network flag, the container is added to the default bridge network, which creates a network stack on the Docker bridge (usually via a veth interface).
If you specify --network host, the container is added to the Docker host's network stack. Note that the container will share the networking namespace of the host, with all of the security implications that entails.
This means you don't need to add -p 80:80; instead run...
docker run -d --rm --name nginx --network host nginx:1.10.3
and access the container on http://127.0.0.1
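On a Linux host, where the container genuinely shares the host's network stack, a quick check might be (assuming nothing else already occupies port 80):
curl -I http://127.0.0.1/
# nginx answers directly on the host's port 80; no -p mapping is involved.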
The following link will help answer the HyperKit question and the current limitations:
https://docs.docker.com/docker-for-mac/networking/
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker for Mac, you
cannot see a docker0 interface in macOS. This interface is actually
within HyperKit.