I'm trying to simulate GPS data inside a Docker container using gpsfake, but I'm failing because all ports seem to be closed.
The Docker container is based on the ubuntu:16.04 image, and inside it I've run apt-get install gpsd gpsd-clients.
Then I try to simulate GPS data from a file with
gpsfake -P 3001 file.nmea
but get the error
gpsd:ERROR: can't bind to IPv6 port 3001, Cannot assign requested address
Trying other ports doesn't work either. Running nmap -sTU -O localhost reports "All 2000 scanned ports on localhost (127.0.0.1) are closed". I tried explicitly opening a port with ufw allow <port>, but without luck: nmap -p <port> still returns STATE=closed.
Should I expect ports to be closed? I must be missing something.
I've tried the same. The issue is that, by default, gpsd responds only on localhost. In fact, if you connect from within the container (e.g. with gpspipe), you can see that gpsd inside the container is working correctly.
gpsd does have a flag (-G) to change this behaviour, but to my knowledge there is no option in gpsfake to instruct it to launch gpsd with that flag.
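A possible workaround (a sketch, assuming socat is available in the container) is to relay a second port, bound on all interfaces, to the gpsd instance that gpsfake leaves listening on localhost:

# gpsfake -P 3001 leaves gpsd listening on 127.0.0.1:3001 only;
# relay 0.0.0.0:2948 to it so the port can be published outside the container
socat TCP-LISTEN:2948,fork,reuseaddr TCP:127.0.0.1:3001

You would then publish port 2948 when starting the container (e.g. docker run -p 2948:2948 ...) to reach the simulated feed from the host.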
Related
There is a Python application which I'm trying to run inside a Docker container.
Inside the container, curl shows the output, but when I try to curl from my host machine it says
curl: (56) Recv failure: Connection reset by peer
and I'm not able to see any output in the browser either.
The port is exposed on 8050.
The host machine is CentOS 7.
Firewall and SELinux are disabled.
It would help if you posted the docker command / docker-compose file you use.
From what you say, it looks like you used the expose option (or the image was built exposing that port).
I find the name "expose" a bit misleading.
Exposing a port simply means that the container listens on that port. It does not mean that this port is available ("exposed") to the host.
For that, you need to use publish (-p <host port>:<container port>).
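A minimal sketch (the image name is a placeholder):

# publish container port 8050 on host port 8050
docker run -p 8050:8050 my-python-app

or, in a docker-compose file:

ports:
  - "8050:8050"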
How did you run the container?
Connection Reset to a Docker container usually indicates that you've defined a port mapping for the container that does not point to an application.
So, if you've defined a mapping of 8050:8050, check that the process inside the Docker instance is in fact listening on port 8050 (netstat -an | grep LISTEN).
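For example, a quick check sketch (the container name is a placeholder, and the image must ship netstat or ss):

docker exec my-python-app netstat -an | grep LISTEN
# or, with iproute2:
docker exec my-python-app ss -lnt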
I have a container for which I expose a port to access a service running within the container. I am not exposing the port outside the container, i.e. to the host (using the host network on Mac). On getting inside the container using exec -t and running a curl for a POST request, I get the error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE command in my Dockerfile and do not want to expose ports to my host. My service is also up and running inside the container. I also have the following property set in the config:
"ExposedPorts": {"19999/tcp": {}}
(obtained through `docker inspect <container id/name>`). Any idea why this is not working? I'm using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can confirm that I am exposing my port using 19999:19999. Another odd issue: on disabling my proxies it would run a very lightweight command for my custom service once, then refuse to run it again, returning the same error as above. The issue only occurs on my machine, not on others'.
Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE that you're using inside the Dockerfile does nothing.
Usually there is no need to change the default port on which an application listens; since each container has its own IP address, you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port on which your app would normally be listening (it's hard to guess what you are trying to run).
If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See Known limitations, use cases, and workarounds in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is what you're trying to attempt.
The docker inspect IP address is basically useless, except in one very specific Docker configuration (on a native-Linux host, calling from outside of Docker, on the same host); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured you still need to separately publish the port when you start the container to reach it from outside of Docker space.
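A minimal sketch of what publishing looks like here (the image name is a placeholder):

# map host port 19999 to container port 19999
docker run -p 19999:19999 my-image
# then, from the Mac host:
curl http://localhost:19999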
I'm currently testing an Ansible role using Molecule.
Basically, Molecule launches an Ansible-compliant container and runs the role on it.
In order to test the container, Molecule also embeds unit tests using Testinfra. The Python unit tests are run from within the container so you can check the compliance of the role.
As I'm working on an Nginx-based role, one of the unit tests simply issues a curl http://localhost:80
I get the below error message in response:
curl: (7) Failed to connect to localhost port 80: Connection refused
When I:
launch a Vagrant machine
apply the role with Ansible
connect via vagrant ssh
issue a curl http://localhost command
nginx answers correctly.
Therefore, I believe that:
the role is working properly and Nginx is installed correctly
Docker sets up the network differently. In a way, localhost and 127.0.0.1 are no longer the same.
My questions are the following:
Am I correct?
Can this difference be overcome so the curl would work?
Docker containers start in their own network namespace by default. This namespace includes a separate loopback interface (127.0.0.1) that is distinct from the same interface on the host and any other containers. If you want to access an application from another container or via a published port on the host, you need to listen on all interfaces (0.0.0.0) rather than the loopback interface.
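As an illustration, a sketch using the stock nginx image (which listens on 0.0.0.0:80 by default):

# publish host port 8080 to container port 80
docker run -d -p 8080:80 nginx
# reachable from the host because nginx binds all interfaces inside the container
curl http://127.0.0.1:8080/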
One other issue I often see is that at some layer in the connection (on the host, or inside a container) the name "localhost" is mapped to the IPv6 value ::1 in the /etc/hosts file, while somewhere in that connection only the IPv4 value is valid (either where the port was published, where the application is listening, or because IPv6 isn't enabled on the host or the Docker engine). Therefore, make sure to try connecting to the IPv4 address directly, 127.0.0.1, to eliminate any potential IPv6 issues.
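A quick way to check (a sketch; getent is available on glibc-based images):

cat /etc/hosts              # does "localhost" map to ::1 first?
getent hosts localhost      # what the resolver actually returns
curl http://127.0.0.1/      # bypasses the name, and IPv6, entirely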
Regarding the curl command and how to correct it, I cannot answer that without more details on how you are running the curl (is it in a separate container), how you are running your application, and how the two are joined on the network (did you create a new network in docker for your application and unit tests to run). The typical solution is to create a new network in docker, run both containers on that network, and connect via docker's included DNS to the container or service name of the destination, e.g. curl http://my_app/.
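A sketch of that typical solution, with placeholder names:

docker network create testnet
docker run -d --network testnet --name my_app my_app_image
# the official curl image uses curl as its entrypoint
docker run --rm --network testnet curlimages/curl http://my_app/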
Edit: based on the comments, if your application and curl command are both running inside the same container, then curl http://127.0.0.1/ should work. There's no change I'm aware of needed to curl to make it work inside a container vs. on a VM. The error you are seeing likely means the application is not starting and listening on the port as expected, possibly a race condition where the curl command is run too soon, or a wrong base assumption about how the tool works. Start by changing the unit test to verify the application is up, running, and listening on the port with commands like ps -ef and ss -lt.
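For example (the process name is a placeholder):

ps -ef | grep '[n]ginx'   # is the process actually running?
ss -lnt                   # which addresses and ports are listening?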
It actually has nothing to do with the differences between Docker and Vagrant (i.e. containers vs. VMs).
The Testinfra code is actually run from outside the container / VM, which is why the subprocess.call(['curl', 'http://localhost']) is failing.
In order to run the command from inside the container / VM, I should use:
host.check_output('curl http://localhost')
I am using VS2017 Docker support. VS created the Dockerfile for me, and when I build the docker-compose file, it creates the container and runs the app on a 172.x.x.x IP address. But I want to run my application on localhost.
I did many things but nothing worked. I followed the Docker docs as a starter and the building Microsoft sample app guide. The second link works perfectly, but I get HTTP Error 404 when I try the first link's approach.
Any help is appreciated.
Most likely a different application already runs on port 80. You'll have to forward your web site to a different port, e.g.:
docker run -d -p 5000:80 --name myapp myasp
And point your browser to http://localhost:5000.
When you start a container, you specify which inner ports will be exposed as ports on the host through the -p option. -p 80:80 exposes the inner port 80 used by web sites to the host's port 80.
Docker won't complain, though, if another application already listens on port 80, like IIS, another web application, or any tool with a web interface that runs on 80 by default.
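On a Windows host you can check what is holding port 80 with something like the following (the PID value in the second command is a placeholder taken from the first command's output):

netstat -ano | findstr :80
tasklist /FI "PID eq 1234"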
The solution is to:
Make sure nothing else runs on port 80 or
Forward to a different port.
Forwarding to a different port is a lot easier.
To ensure that you can connect to a port, use the telnet command, e.g.:
telnet localhost 5000
If you get a blank window immediately, it means a server is up and running on this port. If you get a message and a timeout after a while, it means nobody is listening. You can use this both to check for free ports and to ensure you can connect to your container's web app.
PS: I ran into this just a week ago, as I was trying to set up a SQL Server container for tests. I already run one default and two named instances, and Docker didn't complain at all when I tried to create the container. It took me a while to realize what was wrong.
In order to access the example posted in the Docker docs, which you pointed out as not working, follow the steps below.
1 - List all your Docker containers:
docker ps -a
After you run this command you should be able to view all your Docker containers (docker ps -a includes stopped ones as well), and if you have followed the Docker docs example correctly you should see a container with the name webserver listed there.
2 - Get the IP address at which your webserver container is running. To do that, run the following command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
You should now get the IP address at which the webserver container is running; hopefully you are familiar with this step, as it also appears in the building Microsoft sample app example that you attached to the question.
Access the IP address you get from running the above command and you should see the desired output.
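For example (substitute the address printed by the inspect command above):

curl http://<container-ip>:80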
Answering your first question (accessing a Docker container with localhost in Docker for Windows): on a Windows host you cannot access the container via localhost due to a limitation in the default NAT network stack. A more detailed explanation of this issue can be obtained by visiting this link. It seems the Docker documentation is not yet updated, but this issue only exists on Windows hosts.
There is an issue reported for this as well - follow this link to see it.
Hope this helps you out.
EDIT
The solution for this issue seems to be coming in a future Windows release. Until that release comes out, this limitation remains in place on Windows hosts. Follow this link -> https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/181
For those encountering this issue in 2022: changing localhost to 127.0.0.1 solved it for me.
There is another problem, too:
you must pass the parameters in the correct order.
This is WRONG
docker run container:latest -p 5001:80
This sequence starts the container, but everything after the image name is passed to the container as its command, so the -p parameter is ignored and the container gets no port mappings.
This is good
docker run -p 5001:80 container:latest
We have a couple of Docker containers deployed on ECS. The application inside the containers uses remote services, so it needs to access them using their 10.X.X.X private IPs.
We are using Docker 1.13 with CentOS 7 and docker/alpine as our base image. We are also using networkMode: host for our containers. The problem comes when we can successfully run telnet 10.X.X.X 9999 from the host machine, but if we run the same command from inside the container, it just hangs and is not able to connect.
In addition, we have net.ipv4.ip_forward enabled in the host machines (where the container runs) but disabled in the remote machine.
Not sure what could be the issue, maybe iptables?
I have spent the day with the same problem (tried with both network mode 'bridge' and 'host'), and it looks like an issue with using busybox's telnet inside ECS - Alpine's telnet is a symlink to busybox. I don't know enough about busybox/networking to suggest what the root cause is, but I was able to prove the network path was clear by using other tools.
My 'go to' for testing a network path is using netcat as follows. The 'success' or 'failure' message varies from version to version, but a refusal or a timeout (-w#) is pretty obvious. All netcat does here is request a socket - it doesn't actually talk to the listening application, so you need something else to test that.
nc -vz -w2 HOST PORT
My problem today was troubleshooting an app's mongo connection. nc showed the path was clear, but telnet had the same issue as you reported. I ended up installing the mongo client and checking with that, and I could connect properly.
If you need to actually run commands over telnet from inside your ECS container, perhaps try installing a different telnet tool and avoiding the busybox inbuilt one.
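For example, a sketch assuming a Debian/Ubuntu-based image (on Alpine the packaged telnet is still the busybox applet, so a different base image or tool may be needed):

apt-get update && apt-get install -y telnet
telnet HOST PORT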