running docker gives Ports are not available error - docker

docker run --rm -it -p 8080:80 mcr.microsoft.com/dotnet/core/runtime:3.1
docker run --rm -it -p 8080:80 mcr.microsoft.com/dotnet/core/sdk:3.1
docker run --rm -it -p 8080:80 mcr.microsoft.com/dotnet/core/aspnet:3.1
When I run any of the above docker commands to create a container, I get the following error, and I get it for both Linux and Windows containers.
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: Ports are not available: listen tcp 0.0.0.0:8080: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
time="2020-03-24T17:20:44+05:30" level=error msg="error waiting for container: context canceled"
I tried the suggestion given in this SO answer to find the process ID and kill it.
I also got Process Hacker, as suggested here, to observe what that process is. It looks like a system process.
Can anybody suggest what can be done?

-p 8080:80 says "forward port 8080 on the host to port 80 in the container". Port 80 is determined by the container image. Port 8080 is arbitrary—it's a port you're choosing.
So instead do -p 8081:80, and now you point your browser at localhost:8081 instead of localhost:8080.
If that doesn't work then maybe it's your firewall?
(See https://pythonspeed.com/articles/docker-connection-refused/ for diagrams of how port forwarding works).
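Before settling on a host port for -p, you can check whether a candidate port is actually free with a quick bind test. A Python sketch (the candidate port numbers are just examples):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if we can bind the given TCP port on the host."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Try candidates for the host side of `-p <host-port>:80` and pick a free one.
for candidate in (8080, 8081, 8082):
    print(candidate, "free" if port_is_free(candidate) else "in use")
```

Note this only detects ports held by another listener; a firewall block (as in the error above) can still make a "free" port unusable.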

You are assigning the same host port 8080 multiple times, which is not allowed on any operating system.
After running this command
docker run --rm -it -p 8080:80 mcr.microsoft.com/dotnet/core/runtime:3.1
the first container immediately gets port 8080 assigned on the host machine, which we can also see in the console screenshot that you provided. The others fail because they simply don't get the port they want. So that all containers can be started, you should use a different host port for each container, e.g.
docker run --rm -it -p 8080:80 mcr.microsoft.com/dotnet/core/runtime:3.1
docker run --rm -it -p 8081:80 mcr.microsoft.com/dotnet/core/sdk:3.1
docker run --rm -it -p 8082:80 mcr.microsoft.com/dotnet/core/aspnet:3.1
You should then be able to access those containers via the respective ports 8080, 8081, and 8082 (on localhost or the local network IP of your machine, e.g. 192.168.1.20).
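If you don't care which host ports you get, you can also let the OS pick free ones and pass those to -p. A small Python sketch (the helper name is mine):

```python
import socket

def find_free_ports(count: int) -> list[int]:
    """Ask the OS for `count` distinct free TCP ports by binding port 0."""
    sockets, ports = [], []
    for _ in range(count):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 0))   # port 0 means: OS, pick any free port
        sockets.append(s)
        ports.append(s.getsockname()[1])
    for s in sockets:              # release them so docker can bind them
        s.close()
    return ports

# e.g. use the three ports as `docker run -p <port>:80 ...`, one per image
print(find_free_ports(3))
```

There is a small race between closing the sockets and docker binding the ports, so treat this as a convenience, not a guarantee.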

I believe this answer solves two errors.
Error 1 (if the wrong port is specified in the Windows Defender Firewall for an existing rule for Docker):
Unable to find image 'docker102tutorial:latest' locally
docker: Error response from daemon: pull access denied for docker102tutorial, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
Error 2 (if there is no Windows Defender Firewall rule at all and #:8080 is specified in the -p parameter of the docker run command):
Error response from daemon: Ports are not available: listen tcp
0.0.0.0:8080: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
In Windows 10, you will need to allow this through the Windows Defender Firewall; you'll get this dialog. You might be able to restrict the port on either the TCP (the default) or UDP line in the Windows Defender Firewall. The table screenshot of rules was taken before the port was modified and the last error was corrected. I believe the client in this case is WSL 2 and the server is Windows, which means the incoming port needs to be opened on the server.
Allow Local Port 8080 in the Windows Defender Firewall so it matches the port after the ":" in the run command:
You will then get this error.
To correct, change from "Defer to user" to "Defer to application"

You can make a port available in the following ways:
Add an EXPOSE instruction in the Dockerfile, such as EXPOSE 8080 (this only documents the port; it does not publish it)
Use the --expose flag at runtime, e.g. docker run --expose=8080 test (this also does not publish the port on the host)
Use the -p flag or -P flag in the docker run command to actually publish a port, as mentioned above, i.e. docker run --rm -it -p 8080:80 mcr.microsoft.com/dotnet/core/runtime:3.1
But a given host port can be bound only ONCE at a time
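The "only ONCE" part is plain TCP: one host address and port can have only one listener at a time, which is exactly why the second and third docker run commands above fail. A minimal Python illustration (an OS-chosen port stands in for 8080):

```python
import socket

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))       # stand-in for the first `-p 8080:80`
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))   # second listener on the same port
    print("bound twice (unexpected)")
except OSError as err:
    # same root cause as docker's "Ports are not available" error
    print("second bind failed:", err)
finally:
    second.close()
    first.close()
```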

Open PowerShell in administrator mode and enter the command
net stop http
The command stops the Windows HTTP service, listing (and giving you the option to stop) the services that depend on it; these are what is holding port 80.
You can now start your docker container with port 80.

Related

Unable to access jenkins on port 8080 when running docker network host

I have Jenkins running in a docker container on a Linux EC2 instance. I am running Testcontainers within it, and I want to expose all ports to the host. For that I am using the host network.
When I run the jenkins container with -p 8080:8080 everything works fine and I am able to access jenkins on {ec2-ip}:8080
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
However, if I run the same image using --network=host, as I want to expose all ports to the host:
docker run -d --network=host jenkins/jenkins:lts
{ec2-ip}:8080 becomes unreachable. I can curl it locally within the container at localhost:8080, but accessing Jenkins from the browser doesn't work.
I am not sure how host networking would change the way I access Jenkins on port 8080. The application should still be available on port 8080 on the host IP address, shouldn't it?
Check whether you have enabled port 8080 in the security group for the instance.
When a Docker container is running in the host network mode using the --network=host option, it shares the network stack with the Docker host. This means that the container is not isolated and uses the same network interface as the host.
In your case, you should be able to access Jenkins from the browser at ec2-ip:8080.
I tested it by running Jenkins with the following command:
docker run -id --name jenkins --network=host jenkins/jenkins:lts
If the issue still persists, you can check the following:
make sure the container is running
make sure that no other process is running on port 8080
make sure that you have enabled port 8080 for your EC2 instance
AFAIU, --network doesn't do what you expect it to do. The --network flag allows you to connect the container to a network. For example, when you do --network=host, your container will be able to use the Docker host's network stack. Not the other way around. Take a look at the official documentation.
Figured it out. I needed to update iptables to allow port 8080 on the host network.
sudo iptables -D INPUT -i eth0 -p tcp -m tcp --dport 8080 -m comment --comment "# jenkins #" -j ACCEPT

Docker windows Ports are not available:

New to Docker. I am running Visual Studio 2019 community on Win 10 machine. Installed Docker desktop and created two solutions (service1 and service2). I am trying to run both of the solutions on their own containers.
I was able to build and run service1 using:
docker run -it --rm -p 3000:80 --name mymicroservicecontainer mymicroservice
Question: what is 3000:80? Is 80 a port? Because I was able to reach my API at http://localhost:3000/api/product/1 from the browser.
Next, I am trying to run service2 in its own container with:
docker run -it --rm -p 2000:80 --name myanotherservicecontainer myanotherservice
Since the port is 2000, I guess it should work; however, I get the following error:
docker: Error response from daemon: Ports are not available: listen tcp 0.0.0.0:2000: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
time="2020-04-08T14:22:41-04:00" level=error msg="error waiting for container: context canceled"
Is that because I have :80 same as service1? What is the solution? I am running commands on admin mode in command prompt.
Please help. Thank you.
docker run -it --rm -p 3000:80 --name mymicroservicecontainer mymicroservice
The answer to your first question is YES, 80 is a port.
Basically what -p 3000:80 does is that it maps TCP port 80 in the container to port 3000 on the Docker host.
The error you are getting for service2 is because port 2000 is occupied by some other process. It's clearly mentioned in the error message as well.
docker: Error response from daemon: Ports are not available
If you try to map it to some other port (one that is free on your machine), then it would work as expected.
Maybe try -p 1111:80 or -p 1234:80
Read this for more detail on docker container networking.
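Under the hood, -p 3000:80 is just TCP forwarding from the host port to the container port. A stripped-down Python sketch of the idea (one connection, loopback only; real Docker uses kernel NAT or its own proxy, not this code):

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Relay bytes from src to dst until src closes its side."""
    while data := src.recv(4096):
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)   # pass the EOF along
    except OSError:
        pass

def forward_once(host_port: int, container_port: int) -> None:
    """Accept one client on host_port and relay it to container_port,
    like `-p <host_port>:<container_port>` does for one connection."""
    with socket.create_server(("127.0.0.1", host_port)) as srv:
        client, _ = srv.accept()
        upstream = socket.create_connection(("127.0.0.1", container_port))
        replies = threading.Thread(target=pipe, args=(upstream, client))
        replies.start()
        pipe(client, upstream)      # host-side bytes -> "container" port
        replies.join()
        client.close()
        upstream.close()
```

A client that connects to host_port talks, without knowing it, to whatever is listening on container_port.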

How to access a specific running host port from inside docker container

I am trying, from within a Docker container, to access/share a port (7497) on the host that is already in use. I am trying to "talk" to a program on the host that has a socket listening on port 7497. This is set up on a Unix host.
How can I expose only that specific port for two-way communication from Docker when the port is already active on the host? Is it possible?
I can't map the port with e.g. -p 7497:7497, as then I get the error "bind: address already in use". This error is correct, as the port is used by the program on the host.
The only way I managed to get access is to use --network host --userns=host in the run command when starting the container, for example:
nvidia-docker run -e HOME=/tmp -it --rm -v /home/kc/Deep_Learning:/projects --network host --userns=host tf_py3_gpu_science:1.4
But this way I am exposing all ports, which is why I am worried about security.

Run docker but get This site can’t be reached 192.168.99.100 refused to connect

I am unable to access a Docker exposed port on a Windows machine. In detail, I do the following:
$ docker build -t abc01 .
$ docker run -d -p 80:4000 abc01
Then I try to reach docker container in browser:
http://192.168.99.100:4000
and get annoying result:
This site can’t be reached 192.168.99.100 refused to connect.
What is the issue?
You are exposing the right ports; however, you need to access the website on port 80 instead of 4000, given that 4000 is the port on which your application is listening inside the container.
The way exposing ports in Docker works is as follows:
docker run -p 80:4000 myImage
where
80 is the outside port: the one exposed on your host, which you will use in your browser
4000 is the inside port: the one used inside the container by the application
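The "refused to connect" message simply means nothing is listening on the address and port you dialed, and here that is the inside port rather than the outside one. A tiny Python probe makes the distinction visible (a sketch; the IP and ports are the examples from the question):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With `docker run -p 80:4000 abc01`, only the outside port answers:
#   can_connect("192.168.99.100", 80)    -> expected True
#   can_connect("192.168.99.100", 4000)  -> expected False ("refused")
```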

Remote access to webserver in docker container

I've started using docker for dev, with the following setup:
Host machine - ubuntu server.
Docker container - webapp w/ tomcat server (using https).
As far as host-container access goes - everything works fine.
However, I can't manage to access the container's webapp from a remote machine (though still within the same network).
When running
docker port <container-id> 443
the output is as expected:
172.16.*.*:<random-port>
so Docker's port binding seems fine.
Any ideas?
Thanks!
I figured out what I missed, so here's a simple flow for accessing a docker container's webapp from remote machines:
Step #1: Bind physical host ports (e.g. 22, 443, 80, ...) to the container's virtual ports.
possible syntax:
docker run -p 127.0.0.1:443:3444 -d <docker-image-name>
(see docker docs for port redirection with all options)
Step #2: Redirect the host's physical port to the container's allocated virtual port. Possible (Linux) syntax:
iptables -t nat -A PREROUTING -i <host-interface-device> -p tcp --dport <host-physical-port> -j REDIRECT --to-port <container-virtual-port>
That should cover the basic use case.
Good luck!
Correct me if I'm wrong, but as far as I'm aware the Docker host creates a private network for its containers which is inaccessible from the outside. That said, your best bet would probably be to access the container at {host_IP}:{mapped_port}.
If your container was built with a Dockerfile that has an EXPOSE statement, e.g. EXPOSE 443, then you can start the container with the -P option (as in "publish" or "public"). The port will be made available to connections from remote machines:
$ docker run -d -P mywebservice
If you didn't use a Dockerfile, or if it didn't have an EXPOSE statement (it should!), then you can also do an explicit port mapping:
$ docker run -d -p 80 mywebservice
In both cases, the result will be a publicly-accessible port:
$ docker ps
9bcb… mywebservice:latest … 0.0.0.0:49153->80/tcp …
Last but not least, you can force the port number if you need to:
$ docker run -d -p 8442:80 mywebservice
In that case, connecting to your Docker host IP address on port 8442 will reach the container.
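If you used -P and need to find the randomly assigned host port programmatically, it can be read from the PORTS column of docker ps shown above. A small Python helper (a sketch, assuming the 'ip:hostport->containerport/proto' format from that output):

```python
import re

def published_host_port(ports_field, container_port):
    """Extract the host port mapped to `container_port` from a `docker ps`
    PORTS column such as '0.0.0.0:49153->80/tcp'. Returns None if absent."""
    match = re.search(rf":(\d+)->{container_port}/tcp", ports_field)
    return int(match.group(1)) if match else None

print(published_host_port("0.0.0.0:49153->80/tcp", 80))   # -> 49153
```

`docker port <container> 80` gives the same information directly on the command line.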
There are some alternatives of how to access docker containers from an external device (in the same network), check out this post for more information http://blog.nunes.io/2015/05/02/how-to-access-docker-containers-from-external-devices.html
