Can't connect to my Bluemix container (Connection: close) - docker

Hello everybody, and happy new year!
I have some issues connecting to my Bluemix container.
I followed this IBM Bluemix guide to learn how to push a docker image to the Bluemix image repository, and that part works.
Then I executed the command to open port 9080 on the server side (the container one is 5000, according to my Dockerfile).
PS: I have tried -P instead of -p 9080, and also -p 9080:5000, but neither fixes this issue.
cf ic -v run -d -p 9080 --name testdebug registry.ng.bluemix.net/datainjector/esolom python testGUI.py
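For clarity, the two -p forms mean different things; here is a sketch of the mapping that matches the Flask port (as noted in the PS, neither variant fixed the issue here):
# -p 9080       exposes container port 9080 only
# -p 9080:5000  maps public port 9080 to container port 5000 (where Flask listens)
cf ic run -d -p 9080:5000 --name testdebug \
  registry.ng.bluemix.net/datainjector/esolom python testGUI.py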
After a "cf ic ps" I obtain:
CONTAINER ID   IMAGE                                                COMMAND               PORTS      NAMES
8457d4bb-247   registry.ng.bluemix.net/datainjector/esolom:latest   "python testGUI.py"   9080/tcp   testdebug
The debug output (from the -v flag while running the image) shows this:
REQUEST: [2017-01-12T11:57:42+01:00]
POST /UAALoginServerWAR/oauth/token HTTP/1.1
Host: login.ng.bluemix.net
Accept: application/json
Authorization: [PRIVATE DATA MASKED]
Connection: close
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.22.2+a95e24c / darwin
grant_type=refresh_token&refresh_token=eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJmMTE5ZTI2MC0zZGE4LTQ5NzctOTI4OS05YjY1ZDcwMmM2OWQtciIsInN1YiI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInNjb3BlIjpbIm9wZW5pZCIsInVhYS51c2VyIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwicGFzc3dvcmQud3JpdGUiLCJjbG91ZF9jb250cm9sbGVyLndyaXRlIl0sImlhdCI6MTQ4NDIxMTE1NSwiZXhwIjoxNDg2ODAzMTU1LCJjaWQiOiJjZiIsImNsaWVudF9pZCI6ImNmIiwiaXNzIjoiaHR0cHM6Ly91YWEubmcuYmx1ZW1peC5uZXQvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJncmFudF90eXBlIjoicGFzc3dvcmQiLCJ1c2VyX25hbWUiOiJlbW1hbnVlbC5zb2xvbUBmci5pYm0uY29tIiwib3JpZ2luIjoidWFhIiwidXNlcl9pZCI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInJldl9zaWciOiI2MWNkZjM4MiIsImF1ZCI6WyJjZiIsIm9wZW5pZCIsInVhYSIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdfQ.PIQlkKPDwxfa0c6951pO52qcAzggfPGrsCMuFl4V-eY&scope=
RESPONSE: [2017-01-12T11:57:42+01:00]
HTTP/1.1 200 OK
Connection: close
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store, max-age=0, must-revalidate,no-store
Content-Security-Policy: default-src 'self' www.ibm.com 'unsafe-inline';
Content-Type: application/json;charset=UTF-8
Date: Thu, 12 Jan 2017 10:57:42 GMT
Expires: 0
Pragma: no-cache,no-cache
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=2592000 ; includeSubDomains
X-Archived-Client-Ip: 169.54.180.83
X-Backside-Transport: OK OK,OK OK
X-Client-Ip: 169.54.180.83,91.151.65.169
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Global-Transaction-Id: 3429006079
X-Powered-By: Servlet/3.1
X-Vcap-Request-Id: 8990fd56-827c-4956-696d-497922464ac0,04094b44-0ace-4891-6ec0-c4855fd481f7
X-Xss-Protection: 1; mode=block
6f6
{"access_token":"[DONNEES PRIVEES MASQUEES]","token_type":"[DONNEES PRIVEES MASQUEES]","refresh_token":"[DONNEES PRIVEES MASQUEES]","expires_in":1209599,"scope":"cloud_controller.read password.write cloud_controller.write openid uaa.user","jti":"9619d1dd-995f-41b4-8a8a-825af8397ccb"}
0
ae4b3e08-4ba6-47d9-bf1f-30654af7fcfc
Next I bound an IP that I had requested:
cf ic ip request   # 169.46.18.243 was given
cf ic ip bind 169.46.18.243 testdebug
OK
The IP address was bound successfully.
And the "cf ic ps" command gave me this:
CONTAINER ID   IMAGE                                                COMMAND               CREATED         STATUS                  PORTS                          NAMES
1afd6916-718   registry.ng.bluemix.net/datainjector/esolom:latest   "python testGUI.py"   5 minutes ago   Running 5 minutes ago   169.46.18.243:9080->9080/tcp   testdebug
Associated logs:
cf ic logs -ft testdebug
2017-01-12T12:45:54.531512850Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:334: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
2017-01-12T12:45:54.531555759Z SNIMissingWarning
2017-01-12T12:45:54.531568916Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
2017-01-12T12:45:54.531576871Z InsecurePlatformWarning
2017-01-12T12:45:54.836875400Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
2017-01-12T12:45:54.836895214Z InsecurePlatformWarning
2017-01-12T12:45:54.884966459Z WebSocket transport not available. Install eventlet or gevent and gevent-websocket for improved performance.
2017-01-12T12:45:54.889150620Z * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
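The last line shows Flask listening on container port 5000, while the binding above maps 9080->9080. A quick external reachability check against the bound address might look like this (a sketch using the IP and public port from above):
curl -v http://169.46.18.243:9080/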
So, is the fact that the same port is used on both the host and the container responsible for the "Connection: close" in the debug log, or are those two different problems?
Does the "Connection: close" explain why I can't connect to the web app?
Do you have an idea of how to fix this? (Something to fix in the image? Or an additional option in the CLI?)
Thank you for reading, for your commitment, and for your answers!
PS:
Clues: I'm looking at modifying the Dockerfile. I read that I have to add two instructions in order to integrate my Docker image with Bluemix: a sleep command, so that I can be sure the container is up before calling it, and apparently the "ENV PORT 3000" instruction. Here is my Dockerfile; don't hesitate to review it, since a simple error happens easily.
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install python2.7
RUN apt-get -y install python-pip
RUN pip install Flask
RUN pip install ibmiotf
RUN pip install requests
RUN pip install flask-socketio
RUN pip install cloudant
ENV PORT=3000
EXPOSE 3000
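# NB: the runtime log above shows Flask listening on 5000, not on the 3000 exposed here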
ADD ./SIARA /opt/SIARA/
WORKDIR /opt/SIARA/
CMD (sleep 60)
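# NB: only the last CMD in a Dockerfile takes effect, so the sleep above is never run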
CMD ["python", "testGUI.py"]

I changed the Dockerfile to:
CMD sleep 60 && python testGUI.py
after seeing @gile's comment.
Even though it didn't work immediately, after 3 hours it worked surprisingly well!
EDIT: It seems it was just a lucky shot; I can't access the container again. My idea is that it worked because Docker Hub rebuilt the image after I committed the changes to the Dockerfile, since it started working just after I copied the image from the Docker Hub repo to the Bluemix private repo.
Does that highlight an issue? I don't understand why it didn't work again...

Related

Docker Swarm on GCP, LB works on only one of the two instances

My architecture is as follows:
2 nodes, docker-dev-1 and docker-dev-2, in a docker-dev VPC
2 nodes, docker-internal-1 and docker-internal-2, in a docker-internal VPC
The firewall allows tcp:2377, 7946, udp:4789, 7946, and esp, as documented here.
All of them are managers, to facilitate testing for the moment. The Docker version is 20.10.16. All the instances are exactly the same (packages, configuration...).
Currently I have a Flask/Jinja application running on docker-dev-X.
To connect to the database, the app goes through a reverse proxy that redirects streams arriving on port 3306 (MySQL) to a Cloud SQL instance in the docker-internal VPC.
The Flask application is exposed via a reverse proxy that listens on port 8082.
Here is the docker daemon.json configuration:
{
  "mtu": 1454,
  "no-new-privileges": true
}
Everything works fine when I have only one docker-dev node. However, as soon as I add the docker-dev-2 node, all requests with a large response passing through docker-dev-2 stop working.
Let me explain:
On docker-dev-1:
dev@docker-dev-1:~$ curl localhost:8082/health
Ok
# With a heavier page
dev@docker-dev-1:~$ curl localhost:8082/auth/login
<!DOCTYPE html>
<html lang="en_GB">
<head>
... # Lots of HTMLs
</html>
No problem everything is working fine.
On docker-dev-2:
dev@docker-dev-2:~$ curl localhost:8082/health
Ok
dev@docker-dev-2:~$ curl -I localhost:8082/health
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 18 May 2022 12:34:57 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 2
...
# With a heavier page
dev@docker-dev-2:~$ curl localhost:8082/auth/login
^C # Timeout
# Same curl, but showing only the headers
dev@docker-dev-2:~$ curl -I localhost:8082/auth/login
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 18 May 2022 10:43:57 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 222967 # Long Content-Length
Connection: keep-alive
...
As you can see: when I curl /health --> no problem.
When I curl /auth/login --> the request times out; I get no answer.
When I curl /auth/login showing only the headers --> the request works.
From inside a container, everything works fine, on docker-dev-1 and on docker-dev-2:
dev@docker-dev-2:~$ docker run -it --rm --name debug --network jinja_flask_network nicolaka/netshoot bash
bash-5.1# curl reverse_proxy_nginx:8082/health
Ok
bash-5.1# curl reverse_proxy_nginx:8082/auth/login
<!DOCTYPE html>
<html lang="en_GB">
<head>
... # Lots of HTMLs
</html>
So the problem doesn't seem to be in the docker network.
The problem seems to appear when the response is too long.
I already reduced the MTU to 1454 a few months ago to resolve a problem... (it seemed to be the same problem, but inside the docker network).
So, when the request lands on docker-dev-1 --> no problem, the website loads normally. But when the request lands on docker-dev-2 --> infinite loading, ending in a timeout.
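Since large responses hanging while small ones pass is the classic signature of an MTU black hole, one sanity check is to compare the MTU of the host NIC with what the swarm overlay is using (a sketch; interface and network names are the defaults and may differ):
# MTUs of the host NIC, docker0, and docker_gwbridge
ip link show | grep mtu
# MTU option on the swarm ingress overlay (empty output means the default of 1450)
docker network inspect ingress --format '{{json .Options}}'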
I hope my explanation was clear. Do you have any idea?

curl fails when run inside a script

Trying to communicate with a running docker container by running a simple curl:
curl -v -s -X POST http://localhost:4873/_session -d \'name=some\&password=thing\'
This works fine from any shell (login/interactive), but fails miserably when done in a script:
temp=$(curl -v -s -X POST http://localhost:4873/_session -d \'name=some\&password=thing\')
echo $temp
With error output suggesting a connection reset:
* Trying 127.0.0.1:4873...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 4873 (#0)
> POST /_session HTTP/1.1
> Host: localhost:4873
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Length: 29
> Content-Type: application/x-www-form-urlencoded
>
} [29 bytes data]
* upload completely sent off: 29 out of 29 bytes
* Recv failure: Connection reset by peer <-- this! why?
* Closing connection 0
I'm lost, and any hint is appreciated.
PS: I tried without the subshell and the same thing happens, so it's something about the script or the way it's executed.
Edit 1
Added the docker compose file. I don't see why a regular shell works but the script does not. Note that the script is not run inside docker; it also runs from the host.
version: "2.1"
services:
  verdaccio:
    image: verdaccio/verdaccio:4
    container_name: verdaccio-docker-local-storage-vol
    ports:
      - "4873:4873"
    volumes:
      - "./storage:/verdaccio/storage"
      - "./conf:/verdaccio/conf"
volumes:
  verdaccio:
    driver: local
Edit 2
So doing temp=$(curl -v -s http://www.google.com) works fine in the script. It's some kind of networking issue, but I still haven't managed to figure out why.
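If it helps isolate things, the TCP connection itself can also be probed from the same script context without curl (a bash-only sketch):
# open fd 3 to the port; succeeds only if the TCP connect works
timeout 3 bash -c 'exec 3<>/dev/tcp/127.0.0.1/4873 && echo open'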
Edit 3
Lots of people suggested reformatting the payload data, but even without a payload the same error is thrown. Also note that I'm on Linux, so I'm not sure whether any permissions play a role here.
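As a side note on the quoting (possibly unrelated, given Edit 3): the backslash-escaped quotes in the original command make the shell pass literal quote characters as part of the body. A small illustration:
# the escaped quotes survive word-splitting as literal characters
printf '%s\n' \'name=some\&password=thing\'
# prints: 'name=some&password=thing' -- the quotes are part of the data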
If you are using a bash script, can you update the script with the change below and try running it again?
address="http://127.0.0.1:4873/_session"
cred="{\"name\":\"some\", \"password\":\"thing\"}"
temp=$(curl -v -s -X POST "$address" -d "$cred")
echo "$temp"
I suspect the issue is within the script and not with docker.
If you run your container in the default mode, the docker daemon places it on another network, so 'localhost' on your host machine and 'localhost' in your container are different.
If you want to see the host machine's ports from your container, try running it with the --network="host" flag (a detailed description can be found here).
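For example, a sketch of that suggestion using the image from the compose file above:
# share the host's network namespace, so localhost inside the container is the host's localhost
docker run --rm --network host verdaccio/verdaccio:4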

docker apt-get got `Cannot initiate the connection to 8000:80 (...) - connect (22: Invalid Argument)`

I was trying to run apt-get update in a docker container.
I got these errors:
W: Failed to fetch
http://ftp.osuosl.org/pub/mariadb/repo/10.2/debian/dists/jessie/Release.gpg
Cannot initiate the connection to 8000:80 (0.0.31.64).
- connect (22: Invalid argument)
W: Failed to fetch
http://security.debian.org/dists/jessie/updates/Release.gpg
Cannot initiate the connection to 8000:80 (0.0.31.64).
- connect (22: Invalid argument)
I googled around; some docker apt-get problems are related to proxy settings or DNS settings. I think I have addressed both, but I am still getting the above error. Any ideas?
proxy settings
The error messages looked like this -- I am NOT seeing these errors anymore.
W: Failed to fetch
http://ftp.osuosl.org/pub/mariadb/repo/10.2/debian/dists/jessie/Release.gpg
Cannot initiate the connection to ftp.osuosl.org:80 (2600:3404:200:237::2).
- connect (101: Network is unreachable) [IP: 2600:3404:200:237::2 80]
W: Failed to fetch
https://repo.percona.com/apt/dists/jessie/main/binary-amd64/Packages
Connection timed out after 120001 milliseconds
My solution has been to put lines like these in my Dockerfile. Since then the error messages have changed, so I believe this is the right fix for the proxy problem.
ENV http_proxy <myCorpProxy>:8000
ENV https_proxy <myCorpProxy>:8000
dns settings
The errors looked like this -- I am NOT seeing these errors anymore.
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg
Could not resolve 'my.proxy.net'
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg
Could not resolve 'my.proxy.net'
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg
Could not resolve 'my.proxy.net'
Solutions: Fix Docker's networking DNS config
Other forums
I found a discussion thread on forums.docker.com but have not reached a solution yet:
https://forums.docker.com/t/cannot-run-apt-get-update-successfully-behind-proxy-beta-13-1/14170
Update with the correct proxy-settings syntax, which solved the problem!
Thanks to Matt's answer, I realized that the syntax I was using was wrong. It CANNOT be
ENV http_proxy <myCorpProxy.domain.name>:8000
ENV https_proxy <myCorpProxy.domain.name>:8000
but has to be
ENV http_proxy http://<myCorpProxy.domain.name>:8000
ENV https_proxy http://<myCorpProxy.domain.name>:8000
Once I changed that, my apt-get started to work. Thanks a lot!
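As a related convenience, the same proxy can also be supplied at build time via the predefined proxy build args, instead of baking ENV lines into the Dockerfile (a sketch; <myCorpProxy.domain.name> is the placeholder from above and myimage is a hypothetical tag):
docker build \
  --build-arg http_proxy=http://<myCorpProxy.domain.name>:8000 \
  --build-arg https_proxy=http://<myCorpProxy.domain.name>:8000 \
  -t myimage .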
It looks like your proxy settings are invalid. Running the following command produces the same error message:
docker run -ti --rm \
-e https_proxy=http://8000:80 \
-e http_proxy=http://8000:80 \
debian apt-get update
You need a valid http://hostname:port that you can connect to as a proxy
docker run -ti --rm \
-e https_proxy=http://10.8.8.8:3142 \
-e http_proxy=http://10.8.8.8:3142 \
debian apt-get update
You should be able to get some form of response from the proxy address
→ curl -v http://10.8.8.8:3142
* Rebuilt URL to: http://10.8.8.8:3142/
* Trying 10.8.8.8...
* Connected to 10.8.8.8 (10.8.8.8) port 3142 (#0)
> GET / HTTP/1.1
> Host: 10.8.8.8:3142
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 406 Usage Information
< Connection: close
< Transfer-Encoding: chunked
< Content-Type: text/html
<
I also met this problem on Mac OS X. I tried the solution from @Matt. I used docker run -it -e http_proxy=http://127.0.0.1:1235 -e https_proxy=http://127.0.0.1:1235 ubuntu to set the proxy, and got Could not connect to 127.0.0.1:1235 (127.0.0.1). - connect (111: Connection refused).
I used docker inspect <container_id> to get the proxy information of that container and got "HTTP_PROXY=docker.for.mac.localhost:1235", "HTTPS_PROXY=docker.for.mac.localhost:1235". I changed the command to docker run -it -e http_proxy=http://docker.for.mac.localhost:1235 -e https_proxy=http://docker.for.mac.localhost:1235 ubuntu. It works.

Cannot retrieve the stats of my docker containers using Docker APIs

I have several containers running on my CentOS 7 VM, and I would like to retrieve their CPU and memory usage using the following command:
echo -e "GET /containers/(container_name)/stats HTTP/1.0\r\n" | \
nc -U /var/run/docker.sock
However, I just receive the following message without any statistics:
HTTP/1.0 200 OK
Server: Docker/1.10.3 (linux)
Date: Sun, 22 Jan 2017 15:53:49 GMT
Content-Type: text/plain; charset=utf-8
The "containers/(container_name)/top" command works fine.
Can you please help me to understand why I don't receive this container's statistics?
The command to use to get the stats of the container:
curl -X GET http://127.0.0.1:6000/containers/<container_id>/stats
The stats will be displayed every second.
Stats can be fetched only for running containers.
Refer to this:
how to configure the docker daemon port.
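Alternatively, a sketch that talks to the default unix socket directly (assumes curl 7.40+; stream=false requests a single sample instead of a once-per-second stream):
curl -s --unix-socket /var/run/docker.sock \
  "http://localhost/containers/<container_id>/stats?stream=false"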

HTTP Request from Dockerfile not successful

I'm playing around with Docker and am trying to get a Dockerfile working that runs Ubuntu and nginx.
The result from "docker build" is that curl is unable to make an HTTP request to localhost; however, if I later start a container created from the Dockerfile, it works just fine.
What could possibly be the issue here?
See the Dockerfile below:
$ cat Dockerfile
FROM ubuntu:14.10
RUN apt-get update
RUN apt-get install -y curl nginx
RUN service nginx start
RUN echo "niklas9 was here" > /usr/share/nginx/html/index.html
RUN /usr/bin/curl -v "http://localhost/"
Result from "docker build":
$ sudo docker.io build .
...
Step 5 : RUN /usr/bin/curl -v "http://localhost/"
---> Running in 46f773be22a2
* Hostname was NOT found in DNS cache
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying ::1...
* connect to ::1 port 80 failed: Connection refused
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to localhost port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 80: Connection refused
2014/11/26 22:47:38 The command [/bin/sh -c /usr/bin/curl -v "http://localhost/"] returned a non-zero code: 7
Results from starting the container and attaching into it:
root@65c55d5974cb:/# curl -v "http://localhost/"
* Hostname was NOT found in DNS cache
* Trying ::1...
* Connected to localhost (::1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.6.2 (Ubuntu) is not blacklisted
< Server: nginx/1.6.2 (Ubuntu)
< Date: Wed, 26 Nov 2014 21:50:16 GMT
< Content-Type: text/html
< Content-Length: 17
< Last-Modified: Wed, 26 Nov 2014 21:38:11 GMT
< Connection: keep-alive
< ETag: "54764843-11"
< Accept-Ranges: bytes
<
niklas9 was here
* Connection #0 to host localhost left intact
I'm running Ubuntu 14 with docker installed with apt-get, see version below.
$ docker.io --version
Docker version 0.9.1, build 3600720
Fundamentally, you should not be thinking of firing up your service at build time.
Dockerfile RUN commands are intended to create some state for the final container you are trying to make. Each command creates a new container layer, based on the one before, and Docker caches them to speed things up, so any given RUN command may not actually run at all for a build, unless the things that go before it have changed.
Some notes from Docker on how this works
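For instance, the caching behavior can be seen directly (a sketch):
docker build .              # RUN steps with unchanged inputs are served from the cache
docker build --no-cache .   # forces every RUN step to execute again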
After the line
RUN service nginx start
is executed, only the changes to the file system are persistent.
The nginx process isn't available anymore when the next line of the Dockerfile is executed.
It would work like this, but if you want the nginx process to be started in the container, you need to add a CMD at the end.
FROM ubuntu:14.10
RUN apt-get update
RUN apt-get install -y curl nginx
RUN echo "niklas9 was here" > /usr/share/nginx/html/index.html
RUN service nginx start && /usr/bin/curl -v "http://localhost/"
# keep nginx in the foreground so the container stays up at runtime
CMD ["nginx", "-g", "daemon off;"]
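A quick usage sketch once the foreground CMD is in place (the image tag is hypothetical):
docker build -t niklas9-nginx .
docker run -d -p 8080:80 niklas9-nginx
curl http://localhost:8080/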
