I have exposed port 80 in my application container's dockerfile.yml and mapped "80:80" in my docker-compose.yml, but I only get "Connection refused" after I run "docker-compose up" and try an HTTP GET on port 80 of my docker-machine's IP address. My Docker Hub-provided RethinkDB instance's admin panel gets mapped just fine through that same dockerfile.yml ("EXPOSE 8080") and docker-compose.yml (ports "8080:8080"), and when I start the application on my local development machine, port 80 is exposed as expected.
What could be going wrong here? I would be very grateful for a quick insight from anyone with more docker experience!
So in my case, my service containers were both bound to localhost (127.0.0.1), and therefore the exposed ports were never picked up by my docker-compose port mapping. I configured each service to bind to 0.0.0.0 instead, and now they work flawlessly. Thank you @creack for pointing me in the right direction.
In my case I was using
docker-compose run app
Apparently
The docker-compose run command does not create any of the ports specified in the service configuration (unless you pass the --service-ports flag).
See https://docs.docker.com/compose/reference/run/
I started using
docker-compose create app
docker-compose start app
and problem solved.
In my case I found that the service I was trying to set up had all of its networks set to internal: true. Strangely, it didn't give me an issue when doing a docker stack deploy.
I have opened https://github.com/docker/compose/issues/6534 to ask for a proper error message, so it will be obvious to other people.
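For reference, the misconfiguration looked roughly like this (the network name is an assumption). An internal network has no external connectivity, so host port mappings on services attached only to it silently do nothing:

```yaml
networks:
  backend:
    internal: true   # remove this, or also attach the service to a
                     # non-internal network, for port mappings to take effect
```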
If you are using the same Dockerfile, make sure you also expose port 80 (EXPOSE 80); otherwise, your compose mapping 80:80 will not work.
Also make sure that your HTTP server listens on 0.0.0.0:80, and not on localhost or a different port.
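Putting both points together, a Dockerfile sketch could look like this (the base image and entrypoint are assumptions):

```dockerfile
FROM node:18
COPY . /app
WORKDIR /app
EXPOSE 80                     # matches the "80:80" mapping in docker-compose.yml
CMD ["node", "server.js"]     # server must listen on 0.0.0.0:80, not 127.0.0.1
```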
I am currently facing the following problem:
I built a Docker container for a Node server (a simple Express server which sends tracing data to Zipkin on port 9411) and want to run it alongside Zipkin.
As I understand it, the Node server should send tracing data to Zipkin using port 9411.
If I run the server with Node directly (not in Docker), it runs alongside Zipkin and everything works fine.
But if Zipkin is already running and I then fire up my Docker container, I get the error
Error starting userland proxy: listen tcp4 0.0.0.0:9411: bind: address already in use.
My understanding is that there is a conflict over port 9411: it is already bound by Zipkin, but apparently the server in the Docker container also needs it to communicate with Zipkin.
I would appreciate it if anybody has an idea how I could solve this problem.
Greetings,
Robert
When you start a docker container, you add a port binding like this:
docker run ... -p 8000:9000
where 8000 is the port you use on the host to reach port 9000 inside the container.
Don't bind the Express server to 9411, as Zipkin is already using that port.
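A hedged compose sketch of that setup (the service names, app port 8000, and the environment variable are all assumptions): the app publishes its own port and reaches Zipkin by service name over the compose network, so nothing besides Zipkin binds host port 9411:

```yaml
services:
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"          # Zipkin alone owns host port 9411
  app:
    build: .
    ports:
      - "8000:8000"          # the app's own port; pick anything free
    environment:
      ZIPKIN_URL: "http://zipkin:9411"   # assumed variable read by the app
```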
I found the solution: using the flag --network="host" does the job, and -p is not needed either.
This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
I have a RESTful application that takes a callback URL as an argument running on port 8000. It does some work and calls the callback URL to make sure that it should continue work and calls if anything goes wrong. In my tests, I just spin up an HTTP server on localhost port 8009 to wait for responses.
When I run the application directly, it works fine because the localhost is the same. However, when I run the application locally in a container, it obviously doesn't work because of the network isolation of the container. Here's part of my docker-compose.yml:
version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
I tried to add - "8009:8009" under ports, but that only goes one direction. I can call port 8000 to call the API, but I can't figure out how to allow the container to call my laptop's http://localhost:8009.
Any idea how to allow the container to call out to another port on my laptop, or if there's a better way to test the callback functionality? I could run the tests inside the container, but I like being able to run them separately.
Use network_mode: "host" in your docker-compose file; then localhost in your docker container will point to your docker host.
If your laptop is running Mac or Windows, use the hostname host.docker.internal, which resolves to the host machine's IP address, and connect to that instead of localhost.
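A sketch of the callback setup using that hostname (the CALLBACK_URL variable is an assumption; the extra_hosts entry makes the same name resolve on Linux with Docker 20.10+):

```yaml
version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      CALLBACK_URL: "http://host.docker.internal:8009"   # test server on the laptop
    extra_hosts:
      - "host.docker.internal:host-gateway"   # needed on Linux only
```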
When trying to build and run my Docker project using docker-compose up, it returns this error output:
I have deleted all the old containers, I'm not in swarm mode, and I have no other Docker images or containers running, so I don't know why there is a problem with a socket on port 5000.
Thanks buddies.
EDIT: It doesn't matter if I change the port in docker-compose.yml; the console throws the same issue.
EDIT 2: After changing the port to 9000:9000 in docker-compose.yml:
The error indicates that port 5000 is already in use on the host. Change the port mapping to bind to some other host port.
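For example (the service name is an assumption), keep the container port and move only the host side of the mapping:

```yaml
services:
  web:
    build: .
    ports:
      - "9000:5000"   # host port 9000 -> container port 5000
```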
I have two docker-compose files set up - one for the frontend application, and one for the backend.
Frontend runs on 3000 port and is exposed on 80: 0.0.0.0:80:3000
Backend runs on 3001 port and is exposed on the same port also publicly: 0.0.0.0:3001:3001
From the host machine, I can easily make a request to the backend:
$ curl 127.0.0.1:3001
But I cannot do it from the frontend container - nothing is listening on that port because those are two different containers in different networks.
I tried connecting both of them to one network - then I can use the IP of the backend container, or a hostname, to make a valid request. But it's still not localhost. How can I solve this?
When using Docker, localhost points to the container itself, not to your computer. There are a few ways to do what you want. But none of them will work with localhost from a container.
The cleanest way to do it is by setting up hostnames for your services within the yml and set up your applications to look for those hostnames instead of localhost.
Let me know if you need examples and I will look for them at home and post it here to you.
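A sketch of that approach across two compose files (every name here is an assumption): declare one shared network, reference it as external from the second file, and have the frontend call the backend by service name instead of localhost:

```yaml
# backend/docker-compose.yml
services:
  backend:
    build: .
    ports:
      - "3001:3001"
    networks: [shared]
networks:
  shared:
    name: app-shared

# frontend/docker-compose.yml (shown here as comments for brevity)
# services:
#   frontend:
#     build: .
#     ports:
#       - "80:3000"
#     networks: [shared]   # the frontend then requests http://backend:3001
# networks:
#   shared:
#     external: true
#     name: app-shared
```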
I've got a swarm set up with two nodes, one manager and one worker. I'd like to publish a port in the swarm so I can access my applications, and I wonder how I achieve this.
version: '2'
services:
  server:
    build: .
    image: my-hub.company.com/application/server:latest
    ports:
      - "80:80"
This exposes port 80 when I run docker-compose up and it works just fine, however when I run a bundled deploy
docker deploy my-service
This won't publish the port, so it just says 80/tcp in docker ps instead of showing a port mapping. Maybe this is because I need to attach a load balancer, run some fancy command, or add another layer of config to actually expose this port in a multi-host swarm.
Can someone help me understand what I need to configure/do to make this expose a port?
My best case scenario would be that port 80 is exposed, and if I access it from different hostnames it will send me to different applications.
Update:
It seems to work if I run the following commands after deploying the application
docker service update -p 80:80 my-service_server
docker kill <my-service_server id>
I found this repository for running an HAProxy; it seems great and is maintained by Docker themselves, but I cannot seem to apply it separately to my services using the new swarm mode.
https://github.com/docker/dockercloud-haproxy
There's a nice description at the bottom describing how the network should look:
Internet -> HAProxy -> Service_A -> Container A
However, I cannot find a way to link services through the docker service create command. Ideally there would be a way to set up a network such that, when I attach it to a service, HAProxy will pick it up.
-- Marcus
As far as I understand, for the moment you can only publish ports by updating the service after creation, like this:
docker service update my-service --publish-add 80:80
Swarm mode publishes ports in a different way. It won't show up in docker ps because it's not publishing the port on a single host; it publishes the port on all nodes so that it can load balance between service replicas.
You should see the port from docker service inspect my-service.
Any other service should be able to connect to my-service:80
docker service ls will display the port mappings.
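With a version 3 compose file, the ports can also be declared up front and published by docker stack deploy itself (the image name is taken from the question; 'my-service' is the stack name):

```yaml
version: '3'
services:
  server:
    image: my-hub.company.com/application/server:latest
    ports:
      - "80:80"   # published via the ingress routing mesh on every swarm node

# deploy with: docker stack deploy -c docker-compose.yml my-service
```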