Connect from docker container to an elasticsearch instance running on the host - docker

I have this simple setup:
A Dockerfile:
FROM centos:latest
RUN yum update -y
RUN yum install -y epel-release
RUN yum install -y java-1.8.0-openjdk
RUN yum install -y curl
RUN mkdir /var/totest
EXPOSE 9200
EXPOSE 9300
And on the host machine I have a running instance of Elasticsearch 5.4.2.
When I run curl http://localhost:9200 on the host I get the correct response:
{
  "name" : "T8apV_J",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "YFkkaM8dSJCnXbDFRFD_aw",
  "version" : {
    "number" : "5.4.2",
    "build_hash" : "929b078",
    "build_date" : "2017-06-15T02:29:28.122Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}
But if I build the docker image and run
docker run -p 9200:9200 -it my-simple-image /bin/bash
I get this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint infallible_brown (71c6fae1275d149b2708a1aa9b737278d340c4e7b073d858b4222eb0268ef285): Error starting userland proxy: listen tcp 0.0.0.0:9200: bind: address already in use.
How can I connect from inside the docker container to the elasticsearch instance running on the host?
All I need is to be able to run, from inside the container:
curl http://localhost:9200/index_name/_search

You need to drop the -p 9200:9200 mapping from your docker run command (the EXPOSE 9200 directive on its own is only documentation and does not bind anything on the host), because port 9200 on the host is already taken by the Elasticsearch service.
You should curl using the IP address of your host machine.
Docker attaches containers to the bridge network by default, so you need a way to get the IP address of the host.
You may need to set an alias depending on whether or not your host is connected to a wider network. I set this alias for my bridge0 interface:
sudo ifconfig bridge0 alias <ip_address>
If your host is connected to a wider network, use the inet address assigned to your Ethernet device. You can get the inet address by running:
ifconfig en0 | grep "inet " | cut -d " " -f2
Either way, you can pass the inet address of your network interface as an environment variable to docker:
docker run -e MY_HOST_IP=<host_ip_address> -it my-simple-image /bin/bash
# or
docker run -e MY_HOST_IP=$(ifconfig en0 | grep "inet " | cut -d " " -f2) -it my-simple-image /bin/bash
curl $MY_HOST_IP:9200
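Putting it together, a quick sketch (the 192.168.1.10 host address and index_name are placeholders of mine, not values from the question):
docker run -e MY_HOST_IP=192.168.1.10 -it my-simple-image /bin/bash
# then, inside the container:
curl http://$MY_HOST_IP:9200/index_name/_search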
See this thread for more information about your question

Elasticsearch uses port 9200, and you are also trying to publish port 9200 from the container to localhost. You cannot have two applications listening on the same port on localhost.
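As an illustration (my sketch, not part of the original answer): the bind error goes away if you publish to a free host port instead, or simply drop -p, since the container itself runs nothing on 9200:
docker run -p 9201:9200 -it my-simple-image /bin/bash
# or, since nothing in the container listens on 9200 anyway:
docker run -it my-simple-image /bin/bash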

Here is a quick hack to get it working. With Elasticsearch accessible on your host via curl -X GET 'http://0.0.0.0:9200', I think docker run --net host --name your_app will allow the containerized app to query the Elasticsearch instance running on your host. So you can omit any EXPOSE or -p settings. But be aware, according to their docs:
--network="host" gives the container full access to local system services such as D-bus and is therefore considered insecure.

Related

Connection between containers [duplicate]

I have an app whose only dependency is flask, which runs fine outside docker and binds to the default port 5000. Here is the full source:
from flask import Flask

app = Flask(__name__)
app.debug = True

@app.route('/')
def main():
    return 'hi'

if __name__ == '__main__':
    app.run()
The problem is that when I deploy this in docker, the server is running but is unreachable from outside the container.
Below is my Dockerfile. The image is Ubuntu with Flask installed. The tar just contains the index.py listed above:
# Dockerfile
FROM dreen/flask
MAINTAINER dreen
WORKDIR /srv
# Get source
RUN mkdir -p /srv
COPY perfektimprezy.tar.gz /srv/perfektimprezy.tar.gz
RUN tar x -f perfektimprezy.tar.gz
RUN rm perfektimprezy.tar.gz
# Run server
EXPOSE 5000
CMD ["python", "index.py"]
Here are the steps I am doing to deploy
$> sudo docker build -t perfektimprezy .
As far as I know the above runs fine, and the image has the contents of the tar in /srv. Now, let's start the server in a container:
$> sudo docker run -i -p 5000:5000 -d perfektimprezy
1c50b67d45b1a4feade72276394811c8399b1b95692e0914ee72b103ff54c769
Is it actually running?
$> sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c50b67d45b1 perfektimprezy:latest "python index.py" 5 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp loving_wozniak
$> sudo docker logs 1c50b67d45b1
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
Yep, seems like the Flask server is running. Here is where it gets weird. Let's make a request to the server:
$> curl 127.0.0.1:5000 -v
* Rebuilt URL to: 127.0.0.1:5000/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 127.0.0.1:5000
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server
Empty reply... But is the process running?
$> sudo docker top 1c50b67d45b1
UID PID PPID C STIME TTY TIME CMD
root 2084 812 0 10:26 ? 00:00:00 python index.py
root 2117 2084 0 10:26 ? 00:00:00 /usr/bin/python index.py
Now let's ssh into the server and check...
$> sudo docker exec -it 1c50b67d45b1 bash
root@1c50b67d45b1:/srv# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:47677 127.0.0.1:5000 TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
root@1c50b67d45b1:/srv# curl -I 127.0.0.1:5000
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5447
Server: Werkzeug/0.10.4 Python/2.7.6
Date: Tue, 19 May 2015 12:18:14 GMT
It's fine... But not from the outside.
What am I doing wrong?
The problem is that you are only binding to the localhost interface; you should bind to 0.0.0.0 if you want the container to be accessible from outside. If you change:
if __name__ == '__main__':
    app.run()
to
if __name__ == '__main__':
    app.run(host='0.0.0.0')
it should work.
Note that this will bind to all interfaces on the host, which may in some circumstances be a security risk - see https://stackoverflow.com/a/58138250/4332 for more information on binding to a specific interface.
When using the flask command instead of app.run, you can pass the --host option to change the host. The line in the Dockerfile would be:
CMD ["flask", "run", "--host", "0.0.0.0"]
or
CMD flask run --host 0.0.0.0
Your Docker container has more than one network interface. For example, my container has the following:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
32: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
If you run docker network inspect bridge, you can see that your container is connected to that bridge through the second interface in the above output. This default bridge network is also reachable from the Docker host.
Therefore, to access your Flask app running in the Docker container from your host machine, you would have to run:
CMD flask run --host 172.17.0.2
Replace 172.17.0.2 with whatever the particular IP address of your container is.
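Rather than hard-coding 172.17.0.2, you can look the address up with docker inspect; a sketch (the container name is a placeholder of mine):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_flask_container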
You need to modify the host to 0.0.0.0 in the Dockerfile. This is a minimal example:
# Example of Dockerfile
FROM python:3.8.5-alpine3.12
WORKDIR /app
EXPOSE 5000
ENV FLASK_APP=app.py
COPY . /app
RUN pip install -r requirements.txt
ENTRYPOINT [ "flask"]
CMD [ "run", "--host", "0.0.0.0" ]
and the file app.py is
# app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello world"

if __name__ == "__main__":
    app.run()
Then build the image with
docker build . -t deploy_flask
and run with
docker run -p 5000:5000 -t -i deploy_flask:latest
You can check the response with curl http://127.0.0.1:5000/ -v
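For completeness, the requirements.txt referenced by that Dockerfile is not shown in the answer; a minimal guess at its contents would be just:
# requirements.txt
flask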
First of all, in your Python script you need to change
app.run()
to
app.run(host="0.0.0.0")
Second, in your Dockerfile the last line should be like
CMD ["flask", "run", "-h", "0.0.0.0", "-p", "5000"]
And on the host machine, if 0.0.0.0:5000 doesn't work, then you should try localhost:5000.
Note - the CMD instruction has to be correct, because CMD provides the defaults for the executing container.
To build on other answers:
Imagine you have two computers. Each computer has a network interface (WiFi, say), which is its public IP. Each computer has a loopback/localhost interface, at 127.0.0.1. This means "just this computer."
If you listened on 127.0.0.1 on computer A, you would not expect to be able to connect to that via 127.0.0.1 when running on computer B. After all, you asked to listen on computer A's local, private address.
Docker is a similar setup; technically it's the same computer, but the Linux kernel gives each container its own isolated network stack. So 127.0.0.1 in a container is like 127.0.0.1 on a different computer than your host—you can't connect to it.
Longer version, with diagrams: https://pythonspeed.com/articles/docker-connection-refused/
For fast readers, three quick things to check (see the sketch after this list):
Make sure you have exposed the port in the Dockerfile.
Run the command in the container using flask run --host=0.0.0.0.
Specify the port in your docker run command: docker run -it -p 5000:5000 yourImageName
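A minimal sketch combining all three checks (the base image, app filename, and image name are placeholders of mine, not taken from the answers above):
# Dockerfile
FROM python:3.8-slim
WORKDIR /app
COPY app.py /app
RUN pip install flask
ENV FLASK_APP=app.py
# 1. expose the port
EXPOSE 5000
# 2. bind to all interfaces
CMD ["flask", "run", "--host=0.0.0.0"]
Then build and run, publishing the port (check 3):
docker build -t yourImageName .
docker run -it -p 5000:5000 yourImageName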
In my case, binding the host to 0.0.0.0 only worked in my local environment, and it failed when deploying on a server.
It started working when I replaced the port mapping with --network=host:
Before:
docker run -d -p 5000:5000 <docker_image>
After:
docker run -d --network=host <docker_image>
P.S. I still used 0.0.0.0:5000 inside the container when running the Flask app.

Docker doesn't expose port [duplicate]


Can't connect to docker container on localhost [duplicate]


curl (56) Recv failure: Connection reset by peer - when hitting docker container

I am getting this error while curling the application IP:
curl: (56) Recv failure: Connection reset by peer
Do a small check by running:
docker run --network host -d <image>
If curl works well with this setting, make sure that:
You're mapping the host port to the container's port correctly:
docker run -p host_port:container_port <image>
Your service (running in the container) is listening on 0.0.0.0 and not on something like 127.0.0.1 (localhost only).
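For example (a sketch with made-up ports): if the app inside the container listens on 8080, the mapping and the curl both have to agree with that:
docker run -p 49160:8080 -d <image>
curl -i localhost:49160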
I got the same error:
umesh@ubuntu:~/projects1$ curl -i localhost:49161
curl: (56) Recv failure: Connection reset by peer
In my case it was due to a wrong port number:
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 3000
--------|package.json
Then I was running:
docker run -p 49160:8080 -d umesh1/node-web-app1
Since the application was actually running on port 3000 in index.js, it was not possible to connect to the application, and I got the same error as you.
To solve the problem:
I deleted the last container/image that was created with the wrong port, then just changed the port number in index.js:
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 8080
--------|package.json
Then I built the new image:
docker build -t umesh1/node-web-app1 .
and ran the image in daemon mode with the exposed port:
docker run -p 49160:8080 -d umesh1/node-web-app1
After this my application was running without any error, listening on port 49161.
I had the same error when binding to a port that no service inside the container was listening on.
So check the -p option:
-p 9200:9265
-p <port on the host to bind to>:<port inside the container>
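For instance (my illustration, reusing the numbers above): if the service inside the container listens on 9265 and you want to reach it on host port 9200:
docker run -p 9200:9265 <image>
# then, on the host:
curl localhost:9200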

Open docker daemon container to outside

The docker container (run as a daemon) is isolated from the outside when we run it as below:
$ docker run -d --name test_container ubuntu/ping \
/bin/sh -c "while true; do echo hello world; sleep 1; done"
$ docker inspect test_container | grep IPAddress
[ip of test_container]
$ ping [ip of test_container]
[timeout]
$ ifconfig docker0 | grep "inet addr"
[ip of docker bridge]
$ ping [ip of docker bridge]
[ok]
$ docker exec -it test_container /bin/bash
# ping [ip of test_container]
[ok]
# ping [ip of docker bridge]
[timeout]
How can I make the container's IP address reachable from the outside?
By default the docker daemon listens on a Unix socket.
You can make it listen on a TCP socket as well by running:
docker daemon -H tcp://validIpOnYourHost:port
The default port is 2375 if you do not provide one.
See this page for more explanation: https://docs.docker.com/v1.11/engine/reference/commandline/daemon/
Be careful: if you expose docker over TCP like this, it is not secured.
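Once the daemon is listening on TCP, you can verify it with a simple Engine API call, e.g. (my example, assuming the default port):
curl http://validIpOnYourHost:2375/version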
After re-reading your question, I probably replied to something else.
Could you run
docker network inspect bridge
and paste the JSON output?
I had similar issues when the attribute
"com.docker.network.bridge.enable_ip_masquerade"
was set to false.
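A quick way to check just that attribute (a sketch using the --format option of docker network inspect):
docker network inspect bridge --format '{{ index .Options "com.docker.network.bridge.enable_ip_masquerade" }}'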

Resources