I am running a docker-compose'd architecture with a registry, gateway (8080), uaa (9999), and 2 microservices (8081 and 8082), and I can see the Swagger API in the gateway app via dropdown selection. I can log in to the gateway as admin and user. I've also modified the code to accept owner, agent, and monitor roles. I can log in just fine.
In a terminal I tried the curl command from Baeldung's blog post to get a token from the uaa server directly for testing the APIs.
[~]$ curl -X POST --data "username=user&password=user&grant_type=password&scope=openid" http://localhost:9999/oauth/token
curl: (7) Failed to connect to localhost port 9999: Connection refused
I opened Kitematic, and the docker container log shows the uaa server as localhost (host) and 9999 (port).
Can someone help me figure out why curl is not working for me?
thanks,
David
This issue is almost certainly related to the network properties of the stack that you are deploying.
If you are issuing the curl command from the host machine to http://localhost:9999, then you need to make sure that the UAA server is mapping its port to the host.
Does your UAA service have this in the docker-compose.yml?
ports:
- "9999:9999"
If not, you need to add it in order to test it from the host.
By default, docker-compose creates a bridge network for your stack, where your containers can talk to each other and resolve each other by container name. But from the host, you will not be able to address the containers unless you explicitly map their exposed ports to ports on the host.
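For example, a minimal sketch of what the uaa service might look like with its port published (the service name and image are assumptions based on your description):

uaa:
  image: my-uaa-image        # placeholder image name
  ports:
    - "9999:9999"            # host:container, so localhost:9999 reaches the container

With that in place, your original curl to http://localhost:9999/oauth/token should at least reach the container instead of being refused.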
Related
I deployed a ghost blogging platform on my server using docker. Now I want to expose it to the internet but I'm having some difficulties doing so.
I opened port 8000 on my router and forwarded it to port 32769, which is the one assigned to that container. Using port 32769 inside my network I can access the website fine, but when I try to access it from the internet it gives a "took too long to respond" error.
Local IP + PORT: http://10.0.0.140:32769/
(Screenshots attached: Docker port config, port tester result, router settings.)
This post was also added to Super User, since I was told it would be better answered there.
Let's say your application inside docker is now working on port 8000
You want to expose your application to internet.
The request would go: internet -> router -> physical computer (host machine) -> docker.
You need to expose your application's port to your host machine. The EXPOSE 8000 instruction in the Dockerfile only documents that port; the actual mapping happens when you run the container.
That port should be accessible from your host machine first, so when starting your docker image as a container you should add the -p parameter, such as
sudo docker run -d -it -p 8000:8000 --name docker_container_name docker_image_name
From now on, your docker application can be accessed from your host machine, i.e. your physical computer.
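A quick way to verify that step (the container name is the placeholder from the command above):

sudo docker ps --format '{{.Names}} -> {{.Ports}}'
# expect something like: docker_container_name -> 0.0.0.0:8000->8000/tcp
curl http://localhost:8000/
# this should now answer from the host itself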
Forward port from your router to your host machine
This is essentially the step you already did in your question; just make sure the router forwards to the port you published (8000 in this example).
Access your application from the internet.
If I am thinking correctly, the IP address 10.0.0.140 is just your computer's LAN IP address; it is not accessible from the internet.
You will only be able to connect to your app via a public (WAN) IP. To find it, check your router to see what WAN IP address your internet service provider has assigned to it, or just google "what is my IP".
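Once the router forwards that WAN port to your host (e.g. WAN 8000 -> 10.0.0.140:8000), a rough test from outside your LAN, say from a phone on mobile data, would be:

curl http://YOUR_WAN_IP:8000/
# YOUR_WAN_IP is a placeholder for whatever "what is my IP" reports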
What works for me, more or less, is setting up Apache2 as a reverse proxy, redirecting a path in Apache2 to the port of the Docker container. This could probably also be done with NGINX, for example.
This way the traffic from the net gets proxied to the container and back to the net, and I see the WordPress site. So regarding the question of OP, the docker container is now exposed to the internet.
However 1: This still doesn't explain why I don't get return traffic from the Docker container if I access it directly from the net.
However 2: Not all the url's in the WordPress site are correct, but that seems to be a WordPress issue and not a Docker / routing issue.
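For reference, a minimal sketch of the kind of Apache2 vhost I mean (the server name and published port are assumptions; mod_proxy and mod_proxy_http need to be enabled first, e.g. with a2enmod proxy proxy_http):

<VirtualHost *:80>
    ServerName blog.example.com
    ProxyPreserveHost On
    # forward incoming traffic to the port the container publishes on the host
    ProxyPass        / http://127.0.0.1:32769/
    ProxyPassReverse / http://127.0.0.1:32769/
</VirtualHost>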
So I have a React service running as a service in a docker-compose stack, on a network defined in that compose file. For that React service I use http-proxy-middleware so that I can use relative endpoints (/api/... instead of localhost:xxxx/api/...) both in development and in production, and also because one of the libraries I depend on requires it (for the same reason).
I also have a Python Flask backend that I want to run on the localhost network, so I can avoid restarting the entire docker-compose stack on every change.
Currently, the proxy (as expected, I suppose) gives an "ECONNREFUSED" error when used, as it cannot connect to the backend.
Does anyone have an idea of how I could get the proxy to be able to access the backend without having to run the backend in the docker-compose?
Thanks in advance, Vidar
So I finally got it working, with help from @Hikash, by setting my frontend proxy to connect to the host through the docker0 bridge IP, which I get from ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+'.
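Concretely, something along these lines (the backend port is an assumption and PROXY_TARGET is just a placeholder name for however your proxy middleware picks up its target):

# grab the docker0 bridge IP on the host
DOCKER0_IP=$(ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+')
# hand it to the frontend container as the proxy target for /api requests
PROXY_TARGET="http://${DOCKER0_IP}:5000" docker-compose up -d frontend
# note: the Flask backend has to listen on 0.0.0.0 (not just 127.0.0.1)
# so that containers can reach it over the docker0 bridge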
I'm a bit confused. Trying to run both an HTTP server listening on port 8080 and an SSH server listening on port 22 inside a Docker container, I managed to accomplish the latter but, strangely, not the former.
Here is what I want to achieve and how I tried it:
I want to access services running inside a Docker container using the IP address assigned to the container:
ssh user@172.17.0.2
curl http://172.17.0.2:8080
Note: I know this is not how you would configure a real web server but I want the container to mimic an embedded device which runs both services and which I don't have available all the time. (So it's really just a local non-production thing with no security requirements).
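To make it concrete, a rough sketch of the kind of image I mean (the base image, packages, and HTTP server command are just placeholders):

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y openssh-server python3
RUN mkdir -p /var/run/sshd
EXPOSE 22 8080
# sshd forks into the background; the toy HTTP server stays in the foreground
CMD /usr/sbin/sshd && python3 -m http.server 8080 --bind 0.0.0.0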
I didn't expect integrating the SSH server to be easy, but to my surprise I just installed and started it and had to do nothing else to be able to connect to the machine via ssh (no EXPOSE 22 or --publish).
Now I wanted to access the container via HTTP on port 8080 and fiddled with --publish and EXPOSE but only managed to make the HTTP server available through localhost/127.0.0.1 on the host. So now I can access it via
curl http://127.0.0.1:8080/
but I want to access both services via the same IP address which is NOT localhost (e.g. the address the container got randomly assigned is totally OK for me).
Unfortunately
curl http://172.17.0.2:8080/
waits until it times out every time I tried it.
I tried docker run together with -p 8080, -p 127.0.0.1:8080:8080, -p 172.17.0.2:8080:8080 and much more combinations, together or without EXPOSE 8080 in the Dockerfile but without success.
Why can I access the container via port 22 without having exposed anything?
And how do I make it accessible via the container's IP address?
Update: looks like I'm experiencing exactly what's described here.
I have deployed a Netflix Hystrix dashboard with Turbine in a Docker container. I can access http://ip:8081/hystrix, but when I try to monitor the Turbine stream it freezes and doesn't return any information. Testing with curl inside the container, curl http://localhost:8081/turbine.stream and curl http://containername:8081/turbine.stream both work perfectly, but when I use the host IP, as in curl http://hostip:8081/turbine.stream, curl throws Failed to connect to hostip port 8081: No route to host. I can't find a solution; can someone help me with this issue?
Thanks in advance.
In order to access the container through Host IP you need to ensure the following:
The port mapping allows connections through the host/public IP itself, not only through localhost.
You can check this by executing docker ps on the docker host and looking at the PORTS column; the default should be something like 0.0.0.0:8081->8081/tcp, which means the port accepts connections from any interface, whether public, private, or localhost.
The firewall is not blocking the connection on port 8081.
By default the firewall of the host should be managed by the Docker daemon itself, so port 8081 will be allowed in the firewall. But your case might be different: either Docker is not managing the host's firewall, or there is an extra layer that prevents the connection. A rough way to check both points is sketched below.
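On the docker host, something like this (the container name is whatever docker ps reports; the firewall command depends on your distribution):

docker ps --format '{{.Names}} -> {{.Ports}}'
# expect something like: turbine -> 0.0.0.0:8081->8081/tcp
# if it shows 127.0.0.1:8081->8081/tcp instead, republish with -p 8081:8081
sudo iptables -L -n | grep 8081      # or: sudo firewall-cmd --list-all / sudo ufw status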
I'm currently testing an Ansible role using Molecule.
Basically, Molecule launches an Ansible-compliant container and runs the role on it.
To test the container, Molecule also embeds unit tests using Testinfra. The Python unit tests are run from within the container so you can check the role's compliance.
As I'm working on an Nginx-based role, one of the unit tests simply issues curl http://localhost:80
I do get the below error message in response:
curl: (7) Failed to connect to localhost port 80: Connection refused
When I:
launch a Vagrant machine
apply the role with Ansible
connect via vagrant ssh
issue a curl http://localhost command
nginx answers correctly.
Therefore, I believe that:
the role is working properly and Nginx is installed correctly
Docker has a different way to set up the network. In a way, localhost and 127.0.0.1 are not the same anymore.
My questions are the following:
Am I correct?
Can this difference be overcome so the curl would work?
Docker containers start in their own network namespace by default. This namespace includes a separate loopback interface (127.0.0.1) that is distinct from the same interface on the host and any other containers. If you want to access an application from another container or via a published port on the host, you need to listen on all interfaces (0.0.0.0) rather than the loopback interface.
One other issue I often see is that at some layer in the connection (the host, or inside a container), the "localhost" name is mapped to the IPv6 value ::1 in the /etc/hosts file, while somewhere in that connection only the IPv4 value is valid (either where the port was published, where the application is listening, or because IPv6 isn't enabled on the host or docker engine). Therefore, try connecting to the IPv4 address directly, 127.0.0.1, to eliminate any potential IPv6 issues.
Regarding the curl command and how to correct it, I cannot answer that without more details on how you are running curl (is it in a separate container?), how you are running your application, and how the two are joined on the network (did you create a new docker network for your application and unit tests?). The typical solution is to create a new network in docker, run both containers on that network, and connect via docker's included DNS to the container or service name of the destination, e.g. curl http://my_app/.
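A minimal sketch of that typical solution (the network, container, and image names are made up; curlimages/curl is just one convenient way to run curl in a container):

docker network create test_net
docker run -d --network test_net --name my_app my_app_image
docker run --rm --network test_net curlimages/curl -s http://my_app/
# "my_app" resolves through docker's embedded DNS because both containers share test_net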
Edit: based on the comments, if your application and curl command are both running inside the same container, then curl http://127.0.0.1/ should work. There's no change I'm aware of that curl needs to work inside a container vs. on a VM. The error you are seeing is likely from the application not starting and listening on the port as expected, possibly a race condition where the curl command is run too soon, or the base assumptions of how the tool works are incorrect. Start by changing the unit test to verify that the application is up and running and listening on the port, with commands like ps -ef and ss -lt.
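For example, quick checks from inside the container (what exactly to look for depends on your role, so these lines are only indicative):

ps -ef | grep [n]ginx      # the nginx master/worker processes should show up
ss -lt                     # expect a LISTEN entry on 0.0.0.0:80 or *:80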
It actually has nothing to do with the differences between Docker and Vagrant (i.e. containers vs VMs).
The Testinfra code is actually run from outside the container / VM, which is why the subprocess.call(['curl', 'http://localhost']) fails.
In order to run a command from the container / VM, I should use:
host.check_output('curl http://localhost')
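Put into a complete Testinfra test, that looks roughly like this (the test name and assertion are just illustrative; host is the fixture Testinfra injects):

def test_nginx_answers(host):
    # runs curl inside the container/VM that Molecule provisioned,
    # not on the machine executing the test suite
    output = host.check_output('curl http://localhost')
    assert 'nginx' in output.lower()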