Getting this error while curling the application IP (when hitting a Docker container):
curl: (56) Recv failure: Connection reset by peer
Do a quick check by running:
docker run --network host -d <image>
If curl works with this setting, make sure that:
You're mapping the host port to the container's port correctly:
docker run -p host_port:container_port <image>
Your application (running inside the container) is listening on 0.0.0.0 and not only on 127.0.0.1 / localhost (see the sketch below).
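A minimal sketch of that check, assuming a web application image (called <image> here) that listens on port 3000 inside the container; the image name and ports are placeholders:
# 1. Sanity check with the host network, so no port mapping is involved
docker run --network host -d <image>
curl -i http://localhost:3000/    # answers only if the app itself is healthy
# 2. Normal run: publish host port 8080 to container port 3000
docker run -d -p 8080:3000 <image>
curl -i http://localhost:8080/    # answers only if the app listens on 0.0.0.0:3000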
I got the same error:
umesh@ubuntu:~/projects1$ curl -i localhost:49161
curl: (56) Recv failure: Connection reset by peer
In my case it was due to a wrong port number:
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 3000
--------|package.json
Then I was running:
docker run -p 49160:8080 -d umesh1/node-web-app1
Since the application was listening on port 3000 in index.js, nothing was listening on the container port I had mapped, so I got the same error you are getting.
To solve the problem:
I deleted the last container/image that was created with the wrong port
and just changed the port number in index.js:
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 8080
--------|package.json
Then I built the new image:
docker build -t umesh1/node-web-app1 .
and ran the image in detached mode with the port published:
docker run -p 49160:8080 -d umesh1/node-web-app1
Thus my application was running without any error, listening on the published host port 49160.
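As a quick verification (a sketch, assuming the image and ports from the commands above):
# confirm the published mapping; should show 0.0.0.0:49160->8080/tcp
docker ps --format '{{.Names}}: {{.Ports}}'
# hit the published host port
curl -i http://localhost:49160/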
I had the same error when binding to a container port that no service inside the container was listening on.
So check the -p option:
-p 9200:9265
-p <port on the host>:<port inside the container>
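If you're not sure which port the service inside the container is actually listening on, a quick check could look like this (a sketch; <container> is a placeholder for your container name or ID):
# list listening TCP sockets inside the running container
docker exec <container> ss -ltn
# or, if ss is not available in the image:
docker exec <container> netstat -ntlp
# show how container ports are published on the host
docker port <container>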
I have this Dockerfile:
FROM nginx:latest
COPY devops/nginx_proxy.conf /etc/nginx/conf.d/default.conf
EXPOSE 8080
and a devops/nginx_proxy.conf:
server {
listen 8080;
client_max_body_size 32M;
underscores_in_headers on;
}
Running the image built from this Dockerfile with docker run -p 8080:80 test and then testing with curl http://localhost/, I see this error:
curl: (7) Failed to connect to localhost port 80: Connection refused
Even more curious, curl http://localhost:8080/ returns this:
curl: (52) Empty reply from server
Why am I getting these errors?
With Docker you can bind container ports to host ports using the -p option.
General rule
docker run -p HOST_PORT:CONTAINER_PORT
Bind the container's port 8080 to port 80 on the host:
docker run -p 80:8080 test
Ports which are not bound to a specific host interface (e.g., -p 80:80 instead of -p 127.0.0.1:80:80) are accessible from the outside.
Bind the port while limiting access to localhost:
docker run -p 127.0.0.1:80:8080 test
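In the question above, -p 8080:80 maps host port 8080 to container port 80, but nginx is configured to listen on 8080 (see the EXPOSE 8080 and listen 8080; lines), so nothing answers on container port 80. A sketch of a mapping that matches that config (the host port choice is arbitrary):
# publish host port 8080 to container port 8080, where nginx actually listens
docker run -d -p 8080:8080 test
curl -i http://localhost:8080/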
My Docker configuration needs to map ports for external access, but when trying to install the Data Hub Central war file, mlDeploy and mlRedeploy run into problems because the ports are unavailable:
Task :mlDeployApp
Creating custom rewriters for staging and job app servers
Loading REST options for staging server
Initializing ExecutorService
Loading default query options from file default.xml
Shutting down ExecutorService
Loading REST options for jobs server
Initializing ExecutorService
Loading traces query options from file traces.xml
Shutting down ExecutorService
Writing traces query options to MarkLogic; port: 8013
Error occurred while loading modules; host: localhost; port: 8013;
cause: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:8013
...
What went wrong:
Execution failed for task ':mlDeployApp'.
Error occurred while loading REST modules: Error occurred while loading modules; host: localhost; port: 8013; cause: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:8013
Dockerfile contents:
FROM store/marklogicdb/marklogic-server:10.0-7-dev-centos
WORKDIR /tmp
EXPOSE 7997-8040
EXPOSE 8080
EXPOSE 9000
CMD /etc/init.d/MarkLogic start && tail -f /dev/null
Original docker run command:
docker run -d --name=marklogic10.0-7_local -p 7997-8040:7997-8040 -p 8080:8080 -p 9000:9000 marklogic-initial-install:10.0-7-dev-centos
Revised docker run command:
docker run -d --name=marklogic10.0-7_local -p 7997-8012:7997-8012 -p 8014-8040:8014-8040 -p 8043:8013 -p 8090:8080 -p 9000:9000 marklogic-initial-install:10.0-7-dev-centos
Note: I originally had the same problem with port 8080 but mapped it to port 8090 which fixed the problem. Doing the same for port 8013 did not work.
The problem was with the installation steps and not the ports.
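For anyone hitting similar symptoms, a quick way to rule out a port-publishing problem before digging into the installation itself (a sketch, assuming the container name and the original 8013:8013 mapping from the first run command above):
# list the ports the container actually publishes
docker port marklogic10.0-7_local
# check from the host whether anything answers on the modules port
curl -i http://localhost:8013/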
I'm following the tutorial at https://docs.docker.com/get-started/part2/.
I start my docker container with docker run -p 4000:80 friendlyhello
and see
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:8088/ (Press CTRL+C to quit)
But it's inaccessible from the expected path of localhost:4000.
$ curl http://localhost:4000/
curl: (7) Failed to connect to localhost port 4000: Connection refused
$ curl http://127.0.0.1:4000/
curl: (7) Failed to connect to 127.0.0.1 port 4000: Connection refused
Okay, so maybe it's not on my localhost. Getting the container ID, I retrieve the IP with:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 7e5bace5f69c
and it returns 172.17.0.2 but no luck! curl continues to give the same responses. I can confirm something is running on 4000....
lsof -i :4000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 94812 travis 18u IPv6 0x7516cbae76f408b5 0t0 TCP *:terabase (LISTEN)
I'm pulling my hair out over this. I've read through the troubleshooting guide and can confirm:
* I'm not on a proxy
* I don't use a custom DNS
* I'm having issues connecting to Docker, not Docker connecting to my pip server.
Running app.py directly with python app.py, the server starts and I'm able to hit it. What am I missing?
Did you accidentally put port=8088 at the bottom of your app.py file? The last line of your output says that your Python app is exposed on port 8088, not 80.
To confirm, you can either modify the app.py file and rebuild the image, or alternatively run: docker run -p 4000:8088 friendlyhello, which would map your local port 4000 to 8088 in the container.
Try to run it using:
docker run -p 4000:8088 friendlyhello
As you can see from the logs, your app starts on port 8088, but you mapped host port 4000 to container port 80, where nothing is actually listening.
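A quick way to confirm the fix (a sketch, assuming the container started from friendlyhello is the only one based on that image):
# the PORTS column should show 0.0.0.0:4000->8088/tcp
docker ps --filter ancestor=friendlyhello --format '{{.Ports}}'
curl -i http://localhost:4000/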
I'm having problems getting my ssh tunnel to work for my container in a Docker Swarm cluster.
ssh connection on my local machine:
ssh -L 7180:test.XXX:7180 user@XXX
In my Dockerfile on the remote machine:
EXPOSE 7180
Container start:
docker -H test:2379 --tlsverify run -d -p 7180:7180 --net=my-net
I tried to connect in Firefox via:
localhost:7180
Unfortunately the connection gets refused on the remote machine:
channel 3: open failed: connect failed: Connection refused
"docker container ls" prints following for the ports:
xxx:7180->7180/tcp
Inside my container "netstat -ntlp | grep LISTEN" prints:
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN -
I'm new to this, but from everything I've read so far this should actually work. I'm using "--net=my-net" because I want to set up my own network later. I had the same issue with "--net=host". What am I doing wrong?
The ssh command should be:
ssh -L 7180:127.0.0.1:7180 user@XXX
And then from your browser, you would go to:
http://127.0.0.1:7180
I've avoided using "localhost" because some machines map this to IPv6 even if you don't have IPv6 configured.
When testing this tunnel, make sure your application is listening on the remote server by sshing to that server and running a curl command directly on the server against 127.0.0.1:7180. If it doesn't work there, repeat your debugging with netstat inside the container and verify that the port is published in the docker ps output.
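A sketch of that debugging sequence (hostnames and the container name are placeholders):
# on your local machine: forward local 7180 to 127.0.0.1:7180 on the remote host
ssh -L 7180:127.0.0.1:7180 user@XXX
# on the remote host: does anything answer locally?
curl -i http://127.0.0.1:7180/
# on the remote host: is the port published and listened on?
docker ps --format '{{.Names}}: {{.Ports}}'
docker exec <container> netstat -ntlp | grep 7180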
I got it working with
ssh -D localhost:7180 -f -C -q -N user@XXX
and using
xxx:7180
in my browser (instead of localhost).
localhost and --net=host did not work for me with ssh -L.
EDIT
Turned out to be a problem with the image; I tried another one and it works fine.
I'm trying to run pgAdmin 4 in server mode using Docker on Debian 9. I have followed the instructions at https://hub.docker.com/r/dpage/pgadmin4/ and I start it with the following command:
docker run -p 5050:5050 -e "PGADMIN_DEFAULT_EMAIL=myemail@gmail.com" -e "PGADMIN_DEFAULT_PASSWORD=a12345678" -d dpage/pgadmin4
I don't get any errors, and docker ps shows the status as below
root#poweredge:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4b11e4bceb7 dpage/pgadmin4 "/bin/bash /entry.sh" 12 seconds ago Up 10 seconds 80/tcp, 443/tcp, 0.0.0.0:5050->5050/tcp upbeat_jackson
But when I go to serverip:5050 nothing loads. Any idea what the problem may be here?
On the local machine, when I execute curl http://localhost:5050 I get Connection reset by peer while the Docker container is running:
root#poweredge:~# curl http://localhost:5050
curl: (56) Recv failure: Connection reset by peer
If I stop the Docker container, I get:
root#poweredge:~# curl http://localhost:5050
curl: (7) Failed to connect to localhost port 5050: Connection refused
The pgAdmin 4 Docker container exposes ports 80 and 443 by default. You can check the Dockerfile here: https://github.com/postgres/pgadmin4/blob/master/pkg/docker/Dockerfile
So the port mapping parameter in the command has to be updated (-p host_port:container_port).
Below is the updated command to access pgAdmin 4 via HTTP (container port 80):
docker run -p 5050:80 -e "PGADMIN_DEFAULT_EMAIL=myemail@gmail.com" -e "PGADMIN_DEFAULT_PASSWORD=a12345678" -d dpage/pgadmin4
After starting the container you should be able to access it via http://localhost:5050
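To verify that the new mapping took effect (a sketch):
# the PORTS column should now show 0.0.0.0:5050->80/tcp
docker ps --format '{{.Names}}: {{.Ports}}'
curl -i http://localhost:5050/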
Are you trying to access it from outside your VirtualBox VM? If yes, check that the port forwarding rules of your virtual machine are set correctly.
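If that is the situation, a NAT port-forwarding rule along these lines may be what's missing (a sketch; the VM name and rule name are placeholders, and this assumes the default NAT adapter and the 5050 host port used above):
# forward host port 5050 to guest port 5050 on the VM's first NAT adapter (running VM)
VBoxManage controlvm "<vm-name>" natpf1 "pgadmin,tcp,,5050,,5050"
# for a powered-off VM, use modifyvm instead
VBoxManage modifyvm "<vm-name>" --natpf1 "pgadmin,tcp,,5050,,5050"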