HTTP Request from Dockerfile not successful - docker

I'm playing around with Docker and am trying to get a Dockerfile working running ubuntu and nginx.
The result from "docker build" is that curl is unable to make an HTTP request to localhost. However, if I later start a container created from the Dockerfile, it works just fine.
What could possibly be the issue here?
See the Dockerfile below:
$ cat Dockerfile
FROM ubuntu:14.10
RUN apt-get update
RUN apt-get install -y curl nginx
RUN service nginx start
RUN echo "niklas9 was here" > /usr/share/nginx/html/index.html
RUN /usr/bin/curl -v "http://localhost/"
Result from "docker build":
$ sudo docker.io build .
...
Step 5 : RUN /usr/bin/curl -v "http://localhost/"
---> Running in 46f773be22a2
* Hostname was NOT found in DNS cache
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying ::1...
* connect to ::1 port 80 failed: Connection refused
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to localhost port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 80: Connection refused
2014/11/26 22:47:38 The command [/bin/sh -c /usr/bin/curl -v "http://localhost/"] returned a non-zero code: 7
Results from starting the container and attaching into it:
root@65c55d5974cb:/# curl -v "http://localhost/"
* Hostname was NOT found in DNS cache
* Trying ::1...
* Connected to localhost (::1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.6.2 (Ubuntu) is not blacklisted
< Server: nginx/1.6.2 (Ubuntu)
< Date: Wed, 26 Nov 2014 21:50:16 GMT
< Content-Type: text/html
< Content-Length: 17
< Last-Modified: Wed, 26 Nov 2014 21:38:11 GMT
< Connection: keep-alive
< ETag: "54764843-11"
< Accept-Ranges: bytes
<
niklas9 was here
* Connection #0 to host localhost left intact
I'm running Ubuntu 14 with docker installed with apt-get, see version below.
$ docker.io --version
Docker version 0.9.1, build 3600720

Fundamentally, you should not be thinking of firing up your service at build time.
Dockerfile RUN commands are intended to create some state for the final container you are trying to make. Each command creates a new container layer, based on the one before, and Docker caches them to speed things up, so any given RUN command may not actually run at all for a build, unless the things that go before it have changed.
Some notes from Docker on how this works

After the line
RUN service nginx start
is executed, only the changes to the file system persist.
The nginx process itself is no longer running when the next line of the Dockerfile is executed.
Combining the start and the check in a single RUN works for a build-time test, but if you want the nginx process running in the final container, you need to add a CMD at the end.
FROM ubuntu:14.10
RUN apt-get update
RUN apt-get install -y curl nginx
RUN echo "niklas9 was here" > /usr/share/nginx/html/index.html
RUN service nginx start && /usr/bin/curl -v "http://localhost/"
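As the answer notes, a CMD is still needed for nginx to actually run in the final container. A minimal last line for the Dockerfile above might look like this; the foreground invocation below is the conventional approach for nginx images, not something taken from the original post:

```dockerfile
# Start nginx when a container is created from the image.
# "daemon off;" keeps nginx as the foreground process so the
# container does not exit immediately after starting.
CMD ["nginx", "-g", "daemon off;"]
```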

Related

curl fails when run inside a script

Trying to communicate with a running docker container by running a simple curl:
curl -v -s -X POST http://localhost:4873/_session -d \'name=some\&password=thing\'
Which works fine from any shell (login/interactive), but miserably fails when doing it in a script:
temp=$(curl -v -s -X POST http://localhost:4873/_session -d \'name=some\&password=thing\')
echo $temp
With error output suggesting a connection reset:
* Trying 127.0.0.1:4873...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 4873 (#0)
> POST /_session HTTP/1.1
> Host: localhost:4873
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Length: 29
> Content-Type: application/x-www-form-urlencoded
>
} [29 bytes data]
* upload completely sent off: 29 out of 29 bytes
* Recv failure: Connection reset by peer <-- this! why?
* Closing connection 0
I'm lost and any hint is appreciated.
PS: tried without the subshell and the same thing happens, so it's something with the script or the way it's executed.
Edit 1
Added the docker-compose file. I don't see why a regular shell works but the script does not. Note that the script is not run inside Docker; it is also running from the host.
version: "2.1"
services:
  verdaccio:
    image: verdaccio/verdaccio:4
    container_name: verdaccio-docker-local-storage-vol
    ports:
      - "4873:4873"
    volumes:
      - "./storage:/verdaccio/storage"
      - "./conf:/verdaccio/conf"
volumes:
  verdaccio:
    driver: local
Edit 2
So doing temp=$(curl -v -s http://www.google.com) works fine in the script. It's some kind of networking issue, but I still haven't managed to figure out why.
Edit 3
Lots of people suggested reformatting the payload data, but even without a payload the same error is thrown. Also note I'm on Linux, so I'm not sure whether any permissions play a role here.
If you are using a bash script, can you update the script with the change below and try to run it again?
address="http://127.0.0.1:4873/_session"
cred="{\"name\":\"some\", \"password\":\"thing\"}"
temp=$(curl -v -s -X POST "$address" -d "$cred")
echo "$temp"
I suspect the issue is within the script and not with docker.
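One detail worth flagging when capturing curl output into a variable: a plain assignment like temp="curl …" only stores the command as text and never runs it; command substitution temp=$(curl …) actually executes the command. A tiny sketch, with echo standing in for curl so it runs anywhere:

```shell
# Plain assignment stores the literal text of the command.
cmd="echo hello"
# Command substitution runs the command and captures its output.
out=$(echo hello)
echo "literal: $cmd"     # prints: literal: echo hello
echo "captured: $out"    # prints: captured: hello
```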
If you run your container in the default mode, the Docker daemon places it on a separate network, so 'localhost' on your host machine and 'localhost' inside your container are different.
If you want to reach the host machine's ports from your container, try running it with the --network="host" option (a detailed description can be found here)

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, this test runs just fine. But when it runs on hosted VSTS, thus a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    ./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
  docker stop test-apache
  docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for Apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running with a hard-coded port first, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded one works. And if the hard-coded port were unavailable, the error would look different: in that case, docker run would fail because the port couldn't be allocated.
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent runs in a Docker container. When the Docker container for Apache is started, it runs on the same level as the VSTS build agent Docker container, not nested inside the VSTS build agent Docker container.
There are two possible solutions:
Replacing localhost with the IP address of the Docker host, keeping the port number 8083
Replacing localhost with the IP address of the Docker container, changing the host port number 8083 to the container port number 80
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
  apt-get update
  apt-get install -y net-tools
  host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup line checks whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, then parses the default gateway out of route's output to obtain the IP address of the host. Note that this only works if the container's network default gateway actually is the host.
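The route-parsing pipeline from the snippet can be exercised in isolation by feeding it a captured sample line; the 172.17.0.1 address below is an illustrative example of a typical Docker default gateway, not taken from the original logs:

```shell
# A sample default-route line, as `route -n` might print it
# (gateway address here is illustrative).
sample='0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0'
# Same extraction as in the answer: keep default-route lines, drop
# the leading 0.0.0.0 field, then cut everything after the gateway.
gw=$(printf '%s\n' "$sample" | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
echo "$gw"   # prints: 172.17.0.1
```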
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first ip address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/
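Whichever address ends up being used, the fixed sleep 10 in the original script can also be replaced with a bounded retry loop, so the test proceeds as soon as the server answers instead of hoping the delay was long enough. A sketch with the probe command passed as a parameter; true and false stand in for the real curl probe here:

```shell
# Retry a probe command up to N times, one second apart, instead of
# sleeping for a fixed interval.
wait_for() {
  attempts=$1
  shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0    # probe succeeded: stop waiting
    i=$((i + 1))
    sleep 1
  done
  return 1              # probe never succeeded
}

wait_for 3 true  && echo "server answered"        # prints: server answered
wait_for 2 false || echo "gave up after 2 tries"  # prints: gave up after 2 tries
```

In the original script the probe would be something like `curl -sf http://localhost:8083/ >/dev/null`.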

docker apt-get got `Cannot initiate the connection to 8000:80 (...) - connect (22: Invalid Argument)`

I was trying to run apt-get update in a docker container.
I got these errors:
W: Failed to fetch
http://ftp.osuosl.org/pub/mariadb/repo/10.2/debian/dists/jessie/Release.gpg
Cannot initiate the connection to 8000:80 (0.0.31.64).
- connect (22: Invalid argument)
W: Failed to fetch
http://security.debian.org/dists/jessie/updates/Release.gpg
Cannot initiate the connection to 8000:80 (0.0.31.64).
- connect (22: Invalid argument)
I googled around, and some problems related to docker apt-get are related to proxy settings or DNS settings. I think I have addressed both but I am still getting the above error. Any ideas?
proxy settings
Error messages would be this -- I am NOT seeing these errors anymore.
W: Failed to fetch
http://ftp.osuosl.org/pub/mariadb/repo/10.2/debian/dists/jessie/Release.gpg
Cannot initiate the connection to ftp.osuosl.org:80 (2600:3404:200:237::2).
- connect (101: Network is unreachable) [IP: 2600:3404:200:237::2 80]
W: Failed to fetch
https://repo.percona.com/apt/dists/jessie/main/binary-amd64/Packages
Connection timed out after 120001 milliseconds
My solution has been to put lines like these in my Dockerfile. Since then the error messages have changed, so I believe this is the right fix for the proxy problem.
ENV http_proxy <myCorpProxy>:8000
ENV https_proxy <myCorpProxy>:8000
dns settings
The error would be this -- I am NOT seeing these errors anymore.
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg
Could not resolve 'my.proxy.net'
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg
Could not resolve 'my.proxy.net'
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg
Could not resolve 'my.proxy.net'
Solutions: Fix Docker's networking DNS config
Other forums
I found a discussion thread on forums.docker.com but have not reached a solution yet:
https://forums.docker.com/t/cannot-run-apt-get-update-successfully-behind-proxy-beta-13-1/14170
Update with correct syntax of proxy settings that solved the problem!
Thanks to Matt's answer, I realized that the syntax I was using was wrong. The settings CANNOT be
ENV http_proxy <myCorpProxy.domain.name>:8000
ENV https_proxy <myCorpProxy.domain.name>:8000
but has to be
ENV http_proxy http://<myCorpProxy.domain.name>:8000
ENV https_proxy http://<myCorpProxy.domain.name>:8000
Once I changed that, my apt-get started to work. Thanks a lot!
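The difference between the two forms can be caught with a simple check before the value ever reaches a Dockerfile. check_proxy below is an illustrative helper, not part of any Docker tooling:

```shell
# Warn when a proxy value lacks the http:// or https:// scheme,
# which is exactly the mistake described above.
check_proxy() {
  case "$1" in
    http://*|https://*) echo "ok: $1" ;;
    *)                  echo "missing scheme: $1" ;;
  esac
}

check_proxy "myCorpProxy.domain.name:8000"
# prints: missing scheme: myCorpProxy.domain.name:8000
check_proxy "http://myCorpProxy.domain.name:8000"
# prints: ok: http://myCorpProxy.domain.name:8000
```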
It looks like your proxy settings are invalid. Running the following command produces the same error message:
docker run -ti --rm \
-e https_proxy=http://8000:80 \
-e http_proxy=http://8000:80 \
debian apt-get update
You need a valid http://hostname:port that you can connect to as a proxy
docker run -ti --rm \
-e https_proxy=http://10.8.8.8:3142 \
-e http_proxy=http://10.8.8.8:3142 \
debian apt-get update
You should be able to get some form of response from the proxy address
→ curl -v http://10.8.8.8:3142
* Rebuilt URL to: http://10.8.8.8:3142/
* Trying 10.8.8.8...
* Connected to 10.8.8.8 (10.8.8.8) port 3142 (#0)
> GET / HTTP/1.1
> Host: 10.8.8.8:3142
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 406 Usage Information
< Connection: close
< Transfer-Encoding: chunked
< Content-Type: text/html
<
I also met this problem on macOS. I tried the solution from @Matt. I used docker run -it -e http_proxy=http://127.0.0.1:1235 -e https_proxy=http://127.0.0.1:1235 ubuntu to set the proxy, and got Could not connect to 127.0.0.1:1235 (127.0.0.1). - connect (111: Connection refused).
I used docker inspect <container_id> to get that container's proxy information, and saw "HTTP_PROXY=docker.for.mac.localhost:1235", "HTTPS_PROXY=docker.for.mac.localhost:1235". I changed the command to docker run -it -e http_proxy=http://docker.for.mac.localhost:1235 -e https_proxy=http://docker.for.mac.localhost:1235 ubuntu. It works.

Can't connect to my Bluemix container (Connection: close)

Hello everybody and happy new year,
I have some issues connecting to my Bluemix container.
I followed this IBM Bluemix guide to learn how to push a Docker image to the Bluemix image repository, which works.
Then I executed the command to open port 9080 on the server side (the container one is 5000 according to my Dockerfile).
PS: I have tried with -P instead of -p 9080, and with -p 9080:5000, but none of them fixes this issue.
cf ic -v run -d -p 9080 --name testdebug registry.ng.bluemix.net/datainjector/esolom python testGUI.py
after a "cf ic ps" I obtain:
CONTAINER ID IMAGE
8457d4bb-247 registry.ng.bluemix.net/datainjector/esolom:latest
COMMAND PORTS NAMES
"python testGUI.py " 9080/tcp testdebug
the debug command (executed while running the image) reports this:
REQUEST: [2017-01-12T11:57:42+01:00]
POST /UAALoginServerWAR/oauth/token HTTP/1.1
Host: login.ng.bluemix.net
Accept: application/json
Authorization: [PRIVATE DATA MASKED]
Connection: close
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.22.2+a95e24c / darwin
grant_type=refresh_token&refresh_token=eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJmMTE5ZTI2MC0zZGE4LTQ5NzctOTI4OS05YjY1ZDcwMmM2OWQtciIsInN1YiI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInNjb3BlIjpbIm9wZW5pZCIsInVhYS51c2VyIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwicGFzc3dvcmQud3JpdGUiLCJjbG91ZF9jb250cm9sbGVyLndyaXRlIl0sImlhdCI6MTQ4NDIxMTE1NSwiZXhwIjoxNDg2ODAzMTU1LCJjaWQiOiJjZiIsImNsaWVudF9pZCI6ImNmIiwiaXNzIjoiaHR0cHM6Ly91YWEubmcuYmx1ZW1peC5uZXQvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJncmFudF90eXBlIjoicGFzc3dvcmQiLCJ1c2VyX25hbWUiOiJlbW1hbnVlbC5zb2xvbUBmci5pYm0uY29tIiwib3JpZ2luIjoidWFhIiwidXNlcl9pZCI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInJldl9zaWciOiI2MWNkZjM4MiIsImF1ZCI6WyJjZiIsIm9wZW5pZCIsInVhYSIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdfQ.PIQlkKPDwxfa0c6951pO52qcAzggfPGrsCMuFl4V-eY&scope=
RESPONSE: [2017-01-12T11:57:42+01:00]
HTTP/1.1 200 OK
Connection: close
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store, max-age=0, must-revalidate,no-store
Content-Security-Policy: default-src 'self' www.ibm.com 'unsafe-inline';
Content-Type: application/json;charset=UTF-8
Date: Thu, 12 Jan 2017 10:57:42 GMT
Expires: 0
Pragma: no-cache,no-cache
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=2592000 ; includeSubDomains
X-Archived-Client-Ip: 169.54.180.83
X-Backside-Transport: OK OK,OK OK
X-Client-Ip: 169.54.180.83,91.151.65.169
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Global-Transaction-Id: 3429006079
X-Powered-By: Servlet/3.1
X-Vcap-Request-Id: 8990fd56-827c-4956-696d-497922464ac0,04094b44-0ace- 4891-6ec0-c4855fd481f7
X-Xss-Protection: 1; mode=block
6f6
{"access_token":"[PRIVATE DATA MASKED]","token_type":"[PRIVATE DATA MASKED]","refresh_token":"[PRIVATE DATA MASKED]","expires_in":1209599,"scope":"cloud_controller.read password.write cloud_controller.write openid uaa.user","jti":"9619d1dd-995f-41b4-8a8a-825af8397ccb"}
0
ae4b3e08-4ba6-47d9-bf1f-30654af7fcfc
Next I bind an IP I requested with:
cf ic ip request // 169.46.18.243 was given
cf ic ip bind 169.46.18.243 testdebug
OK
The IP address was bound successfully.
And the "cf ic ps" command gave me this:
CONTAINER ID IMAGE
1afd6916-718 registry.ng.bluemix.net/datainjector/esolom:latest
COMMAND CREATED STATUS
"python testGUI.py " 5 minutes ago Running 5 minutes ago
PORTS NAMES
169.46.18.243:9080->9080/tcp testdebug
Associated logs:
cf ic logs -ft testdebug
2017-01-12T12:45:54.531512850Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:334: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
32017-01-12T12:45:54.531555759Z SNIMissingWarning
�2017-01-12T12:45:54.531568916Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
92017-01-12T12:45:54.531576871Z InsecurePlatformWarning
�2017-01-12T12:45:54.836875400Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
92017-01-12T12:45:54.836895214Z InsecurePlatformWarning
�2017-01-12T12:45:54.884966459Z WebSocket transport not available. Install eventlet or gevent and gevent-websocket for improved performance.
Y2017-01-12T12:45:54.889150620Z * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
So, is the fact that the same port is attributed to both host and VM responsible for the "Connection: close" in the debug log, or are those two different problems?
Does "Connection: close" explain why I can't connect to the web app?
Do you have an idea how to fix this? (Something to fix in the image, or an additional option in the CLI?)
Thank you for reading and for your answers!
PS:
Clues: I'm looking at modifying the Dockerfile. I read that I have to add two instructions to integrate my Docker image with Bluemix: a sleep command, so that the container is up before it is called, and apparently an "ENV PORT 3000" instruction. Here is my Dockerfile; don't hesitate to review it, a simple error happens easily.
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install python2.7
RUN apt-get -y install python-pip
RUN pip install Flask
RUN pip install ibmiotf
RUN pip install requests
RUN pip install flask-socketio
RUN pip install cloudant
ENV PORT=3000
EXPOSE 3000
ADD ./SIARA /opt/SIARA/
WORKDIR /opt/SIARA/
CMD (sleep 60)
CMD ["python", "testGUI.py"]
I changed the Dockerfile to:
CMD sleep 60 && python testGUI.py
after seeing @gile's comment.
Even if it didn't work immediately, after 3 hours it worked surprisingly well!
EDIT: it seems that was just a lucky shot; I can't access the container again. My idea is that it worked because Docker Hub rebuilt the image after I committed the changes to the Dockerfile, since it worked just after I copied the image from the Docker Hub repo to the Bluemix private repo.
Does this highlight an issue? I don't understand why it didn't work again...
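One note on the original Dockerfile in this thread: Docker only honours the last CMD instruction in a Dockerfile, so the earlier CMD (sleep 60) was silently discarded, which is why the two steps had to be combined into a single CMD. The combined line can also be written in exec form (a sketch, assuming the same testGUI.py entry point):

```dockerfile
# Only the last CMD in a Dockerfile takes effect; combining the
# delay and the server start keeps both. Exec-form equivalent of
# the shell-form "CMD sleep 60 && python testGUI.py":
CMD ["sh", "-c", "sleep 60 && python testGUI.py"]
```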

"404 page not found" when executing `curl --unix-socket /var/run/docker.sock http:/containers/json`

docker version: 1.11.2
curl version: 7.50.3 (x86_64-pc-linux-gnu) libcurl/7.50.3 OpenSSL/1.0.1e zlib/1.2.7
/usr/local/sbin/bin/curl --unix-socket /var/run/docker.sock http://images/json -v
* Trying /var/run/docker.sock...
* Connected to images (/var/run/docker.sock) port 80 (#0)
> GET /json HTTP/1.1
> Host: images
> User-Agent: curl/7.50.3
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 22 Sep 2016 06:11:52 GMT
< Content-Length: 19
<
404 page not found
* Curl_http_done: called premature == 0
* Connection #0 to host images left intact
Is there anything wrong with my docker daemon? How can I get the containers info from the docker unix-socket?
The Docker daemon is definitely started.
I followed this page: https://docs.docker.com/engine/reference/api/docker_remote_api/#/v1-23-api-changes. It suggests using curl 7.40 or later with the command curl --unix-socket /var/run/docker.sock http:/containers/json. Note that the URL http:/containers/json in that command is invalid.
I then downloaded the newest curl, 7.50.3. The key to this problem is curl's version; the request should be executed like this:
curl --unix-socket /var/run/docker.sock http://localhost/images/json
For more detail, see this page: https://superuser.com/questions/834307/can-curl-send-requests-to-sockets. Hope it helps other people who were confused.
