curl fails when run inside a script - docker

Trying to communicate with a running docker container by running a simple curl:
curl -v -s -X POST http://localhost:4873/_session -d 'name=some&password=thing'
Which works fine from any shell (login or interactive), but fails miserably when run from a script:
temp=$(curl -v -s -X POST http://localhost:4873/_session -d 'name=some&password=thing')
echo $temp
With error output suggesting a connection reset:
* Trying 127.0.0.1:4873...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 4873 (#0)
> POST /_session HTTP/1.1
> Host: localhost:4873
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Length: 29
> Content-Type: application/x-www-form-urlencoded
>
} [29 bytes data]
* upload completely sent off: 29 out of 29 bytes
* Recv failure: Connection reset by peer <-- this! why?
* Closing connection 0
I'm lost and any hint is appreciated.
PS: I tried it without the command substitution and the same thing happens, so it's something about the script or the way it's executed.
Edit 1
Added the docker compose file. I don't see why the regular shell works but the script does not. Note that the script is not run inside Docker; it too runs on the host.
version: "2.1"
services:
verdaccio:
image: verdaccio/verdaccio:4
container_name: verdaccio-docker-local-storage-vol
ports:
- "4873:4873"
volumes:
- "./storage:/verdaccio/storage"
- "./conf:/verdaccio/conf"
volumes:
verdaccio:
driver: local
Edit 2
So doing temp=$(curl -v -s http://www.google.com) works fine in the script. It's some kind of networking issue, but I still haven't managed to figure out why.
Edit 3
Lots of people suggested reformatting the payload data, but even without a payload the same error is thrown. Also note that I'm on Linux, so I'm not sure whether any permissions play a role here.

If you are using a bash script, can you update the script with the change below and try to run it again?
address="http://127.0.0.1:4873/_session"
cred="{\"name\":\"some\", \"password\":\"thing\"}"
temp=$(curl -v -s -X POST "$address" -d "$cred")
echo "$temp"
I suspect the issue is within the script and not with docker.
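If that doesn't help, one way to narrow it down is to compare the environments of the interactive shell and the script; a proxy variable set only in one of them would produce exactly this kind of reset. A minimal sketch (script.sh is a placeholder for the failing script):
bash -x ./script.sh                # trace every command, including the fully expanded curl call
env | grep -i proxy                # in the interactive shell where curl works
bash -c 'env | grep -i proxy'      # in a non-interactive shell like the one running the script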

If you run your container in the default network mode, the Docker daemon will place it on a separate network, so 'localhost' on your host machine and 'localhost' inside your container are different.
If you want to reach the host machine's ports from your container, run it with --network="host" (a detailed description can be found here)
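For example, a minimal sketch reusing the verdaccio service from the compose file above (note that with host networking the ports: mapping becomes unnecessary, since the container shares the host's network stack):
docker run --rm --network=host verdaccio/verdaccio:4
or, in the compose file:
services:
  verdaccio:
    image: verdaccio/verdaccio:4
    network_mode: "host"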


Why no_proxy must be specified for CURL to work in this scenario?

Inside my virtual machine, I have the following docker-compose.yml file:
services:
  nginx:
    image: "nginx:1.23.1-alpine"
    container_name: parse-nginx
    ports:
      - "80:80"
  mongo-0:
    image: "mongo:5.0.6"
    container_name: parse-mongo-0
    volumes:
      - ./mongo-0/data:/data/db
      - ./mongo-0/config:/data/config
  server-0:
    image: "parseplatform/parse-server:5.2.4"
    container_name: parse-server-0
    ports:
      - "1337:1337"
    volumes:
      - ./server-0/config-vol/configuration.json:/parse-server/config/configuration.json
    command: "/parse-server/config/configuration.json"
The configuration.json file specified for server-0 is as follows:
{
  "appId": "APPLICATION_ID_00",
  "masterKey": "MASTER_KEY_00",
  "readOnlyMasterKey": "only",
  "databaseURI": "mongodb://mongo-0/test"
}
After using docker compose up, I execute the following command from the VM:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://localhost:1337/parse/classes/GameScore
The output is:
{"objectId":"yeHHiu01IV","createdAt":"2022-08-25T02:36:06.054Z"}
I use the following command to get inside the nginx container:
docker exec -it parse-nginx sh
Pinging parse-server-0 shows that it does resolve to a proper IP address. I then run the modified version of the curl command above, replacing localhost with that hostname:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
It gives me a 504 error like this:
...
<title>504 DNS look up failed</title>
</head>
<body><div class="message-container">
<div class="logo"></div>
<h1>504 DNS look up failed</h1>
<p>The webserver reported that an error occurred while trying to access the website. Please return to the previous page.</p>
...
However if I use no_proxy as follows, it works:
no_proxy="parse-server-0" curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "X-Parse-Master-Key: MASTER_KEY_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
The output is again something like this:
{"objectId":"ICTZrQQ305","createdAt":"2022-08-25T02:18:11.565Z"}
I am very perplexed by this. Clearly, parse-server-0 is reachable with ping. How can it then throw a 504 error without no_proxy? The parse-nginx container is using default settings and configuration, and I did not set up any proxy. I am using it to test the curl command from another container to parse-mongo-0. Any help would be greatly appreciated.
The contents of /etc/resolv.conf are:
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
Running echo $HTTP_PROXY inside parse-nginx returns:
http://10.10.10.10:8080
This value is null inside the VM.
Your proxy server doesn't appear to be running in this docker network. So when the request goes to that proxy server, it will not query the docker DNS on this network to resolve the other container names.
If your application isn't making requests outside of the docker network, you can remove the proxy settings. Otherwise, you'll want to set no_proxy for the other docker containers you will be accessing.
Please check the value of echo $http_proxy. Note the lowercase here. If this value is set, curl is configured to use the proxy. You're getting a 504 during DNS resolution most probably because your parse-nginx container isn't able to reach the IP 10.10.10.10. Specifying no_proxy tells curl to ignore the http_proxy env var (overriding it) and make the request without any proxy.
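To see which case applies, it may help to inspect the environment inside the container and retry with the proxy explicitly bypassed; a sketch using the container and service names from the compose file above:
docker exec parse-nginx sh -c 'env | grep -i proxy'
docker exec parse-nginx sh -c 'no_proxy=parse-server-0 curl -sv http://parse-server-0:1337/'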
Inside my VM, this is the contents of the ~/.docker/config.json file:
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://10.10.10.10:8080",
      "httpsProxy": "http://10.10.10.10:8080"
    }
  }
}
This was implemented a while back as an ad hoc fix for some network issues. A security certificate was later implemented, and I completely forgot about the fix. Clearing the ~/.docker/config.json file and redoing docker compose up fixes the issue. I no longer need no_proxy to make curl work. Everything is as it should be now. Thank you so much for all the help.
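For reference, if the proxy were still needed for external traffic, the same file also accepts a noProxy key, which would achieve the effect of no_proxy without clearing the configuration; a sketch with the container names from this setup:
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://10.10.10.10:8080",
      "httpsProxy": "http://10.10.10.10:8080",
      "noProxy": "parse-server-0,parse-mongo-0,localhost,127.0.0.1"
    }
  }
}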

ansible over docker hosts does not work as expected

I'm facing a somewhat weird issue targeting Docker containers in Ansible.
Inventory
el7_02 ansible_port=6000 ansible_user=user ansible_host=localhost
el7_03 ansible_port=6001 ansible_user=user ansible_host=localhost
playbook
- shell: hostname
  register: x
- debug: msg="{{ x.stdout }}, {{ ansible_hostname }}, {{ ansible_user }}, {{ ansible_port }}"
output
TASK [Gathering Facts] *************************************************************************************************
ok: [el7_03]
ok: [el7_02]
TASK [x : shell] *************************************************************************************************
changed: [el7_03]
changed: [el7_02]
TASK [x : debug] *************************************************************************************************
ok: [el7_03] => {
"msg": "el7_02, el7_02, user, 6001"
}
ok: [el7_02] => {
"msg": "el7_02, el7_02, user, 6000"
}
As you can see, for some reason I get an unexpected hostname for the container el7_03. I'd expect the debug task for el7_03 to report its own hostname (i.e. el7_03, not el7_02). Why do I receive the "wrong" output?
checking hostnames in docker
~/ $ ssh -p 6000 user@localhost 'hostname'
el7_02
~/ $ ssh -p 6001 user@localhost 'hostname'
el7_03
If I switch to ansible_connection=docker, I get what I expect. However, I cannot use it, because when I interact with anything located outside of my laptop (installing anything or downloading from the internet) I receive timeouts from time to time (quite often, in fact). Maybe there is a way to get rid of the timeouts?
os: macos
ansible: 2.9.11
python: 3.8.5
docker: 19.0.3.8
thank you
You need to work around the issue that Ansible looks up a host via its hostname only, not via the hostname:port pair.
My workaround for this issue is as follows:
$ grep pi. /etc/hosts
127.0.0.1 pi1
127.0.0.1 pi2
127.0.0.1 pi3
# inventory contents:
$ cat all_rpis.ini
pi1:3321
pi2:3322
pi3:3323
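Applied to the inventory from the question, the same trick would look like this (a sketch; the alias names just need to resolve to 127.0.0.1 so that Ansible sees two distinct hosts):
# /etc/hosts
127.0.0.1 el7_02
127.0.0.1 el7_03
# inventory
el7_02 ansible_port=6000 ansible_user=user ansible_host=el7_02
el7_03 ansible_port=6001 ansible_user=user ansible_host=el7_03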

When attempting to Verify my CouchDB installation I get the error "Error: could not resolve http://any:5984/verifytestdb/"

When I install CouchDB, use the GUI, and run Verify, I get the error:
Error: could not resolve http://any:5984/verifytestdb/
And the replication status gets an X saying I can't replicate. Any suggestions on how to fix this problem?
It's running in a Docker container, and the ports column says:
4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp
The GUI should say it works and not show an error.
It feels like port 5986, which is required for replication, might be blocked.
Use the Config section in the CouchDB GUI:
Go to httpd
Then select bind_address
And change its value from "any" to "0.0.0.0"
Run the test again and it should work.
For me, what works is adding the following to the CouchDB config (or changing it in the UI):
[httpd]
bind_address = 0.0.0.0
Tested with Verify and with:
curl -vX POST http://127.0.0.1:5984/_replicate -d '{"source":"albums","target":"albums-replica","create_target":true}' -H "Content-Type: application/json"
{"ok":true,"session_id":"9ab3e4f1a9cae16df05b32866088510c","source_last_seq":"6-g1AAAAILeJyNkU0OgjAQRqto1IVn0CMA_YGu5CZKOzVIsF2o......
with Docker exposing only the port:
services:
  couchdb:
    ports:
      - "5984:5984"

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, this test runs just fine. But when it runs on hosted VSTS, thus a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    ./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
    docker stop test-apache
    docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running with a hard-coded port first, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded port works. And in case the hard-coded port doesn't work because the port is unavailable, the error should look different: in that case, docker run should fail because the port can't be allocated.
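For what it's worth, the ephemeral-port variant is a small change; a sketch (this only changes how the port is chosen, it does not address the connectivity issue itself):
docker run -d --name test-apache -p 80 httpd
# docker port prints e.g. "0.0.0.0:32768"; newer Docker versions may print an IPv6 line too, hence head -n1
port=$(docker port test-apache 80 | head -n1 | cut -d: -f2)
curl -D - http://localhost:$port/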
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent runs in a Docker container. When the Docker container for Apache is started, it runs on the same level as the VSTS build agent Docker container, not nested inside the VSTS build agent Docker container.
There are two possible solutions:
Replacing localhost with the ip address of the docker host, keeping the port number 8083
Replacing localhost with the ip address of the docker container, changing the host port number 8083 to the container port number 80.
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
    apt-get update
    apt-get install net-tools
    host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup line inspects whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway out of the route output to get the IP address of the host. Note that this only works if the container's network default gateway actually is the host.
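On images where net-tools is not available but iproute2 is, the same address can be read from ip route instead; a sketch under the same assumption that the default gateway is the host:
host=$(ip route | awk '/^default/ {print $3}')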
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first ip address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/

docker apt-get got `Cannot initiate the connection to 8000:80 (...) - connect (22: Invalid Argument)`

I was trying to run apt-get update in a docker container.
I got these errors:
W: Failed to fetch
http://ftp.osuosl.org/pub/mariadb/repo/10.2/debian/dists/jessie/Release.gpg
Cannot initiate the connection to 8000:80 (0.0.31.64).
- connect (22: Invalid argument)
W: Failed to fetch
http://security.debian.org/dists/jessie/updates/Release.gpg
Cannot initiate the connection to 8000:80 (0.0.31.64).
- connect (22: Invalid argument)
I googled around, and some problems related to docker apt-get are related to proxy settings or DNS settings. I think I have addressed both but I am still getting the above error. Any ideas?
proxy settings
Error messages would be this -- I am NOT seeing these errors anymore.
W: Failed to fetch
http://ftp.osuosl.org/pub/mariadb/repo/10.2/debian/dists/jessie/Release.gpg
Cannot initiate the connection to ftp.osuosl.org:80 (2600:3404:200:237::2).
- connect (101: Network is unreachable) [IP: 2600:3404:200:237::2 80]
W: Failed to fetch
https://repo.percona.com/apt/dists/jessie/main/binary-amd64/Packages
Connection timed out after 120001 milliseconds
My solution has been putting lines like these in my Dockerfile. Since then the error messages have changed, so I believe this is the right fix for the proxy problem.
ENV http_proxy <myCorpProxy>:8000
ENV https_proxy <myCorpProxy>:8000
dns settings
The error would be this -- I am NOT seeing these errors anymore.
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg
Could not resolve 'my.proxy.net'
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg
Could not resolve 'my.proxy.net'
W: Failed to fetch
http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg
Could not resolve 'my.proxy.net'
Solutions: Fix Docker's networking DNS config
Other forums
I found a discussion thread on forums.docker.com but have not reached a solution yet:
https://forums.docker.com/t/cannot-run-apt-get-update-successfully-behind-proxy-beta-13-1/14170
Update with correct syntax of proxy settings that solved the problem!
Thanks to Matt's answer, I realized that the syntax I was using was wrong. It CANNOT be
ENV http_proxy <myCorpProxy.domain.name>:8000
ENV https_proxy <myCorpProxy.domain.name>:8000
but has to be
ENV http_proxy http://<myCorpProxy.domain.name>:8000
ENV https_proxy http://<myCorpProxy.domain.name>:8000
Once I changed that, my apt-get started to work. Thanks a lot!
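As an alternative to baking the proxy into the image with ENV, the same values can be passed only at build time; http_proxy and https_proxy are predefined build arguments, so no ARG declaration is needed in the Dockerfile (a sketch, using the placeholder proxy name from above):
docker build \
  --build-arg http_proxy=http://<myCorpProxy.domain.name>:8000 \
  --build-arg https_proxy=http://<myCorpProxy.domain.name>:8000 \
  -t myimage .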
It looks like your proxy settings are invalid. Running the following command produces the same error message:
docker run -ti --rm \
-e https_proxy=http://8000:80 \
-e http_proxy=http://8000:80 \
debian apt-get update
You need a valid http://hostname:port that you can connect to as a proxy
docker run -ti --rm \
-e https_proxy=http://10.8.8.8:3142 \
-e http_proxy=http://10.8.8.8:3142 \
debian apt-get update
You should be able to get some form of response from the proxy address
→ curl -v http://10.8.8.8:3142
* Rebuilt URL to: http://10.8.8.8:3142/
* Trying 10.8.8.8...
* Connected to 10.8.8.8 (10.8.8.8) port 3142 (#0)
> GET / HTTP/1.1
> Host: 10.8.8.8:3142
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 406 Usage Information
< Connection: close
< Transfer-Encoding: chunked
< Content-Type: text/html
<
I also met this problem on macOS. I tried the solution from @Matt. I used docker run -it -e http_proxy=http://127.0.0.1:1235 -e https_proxy=http://127.0.0.1:1235 ubuntu to set the proxy, and got Could not connect to 127.0.0.1:1235 (127.0.0.1). - connect (111: Connection refused).
I then used docker inspect <container_id> to get the proxy information of that container and saw "HTTP_PROXY=docker.for.mac.localhost:1235", "HTTPS_PROXY=docker.for.mac.localhost:1235". I changed the command to docker run -it -e http_proxy=http://docker.for.mac.localhost:1235 -e https_proxy=http://docker.for.mac.localhost:1235 ubuntu. It works.
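On current Docker Desktop versions, the special name host.docker.internal plays the same role as docker.for.mac.localhost, so the equivalent command today would be (a sketch):
docker run -it \
  -e http_proxy=http://host.docker.internal:1235 \
  -e https_proxy=http://host.docker.internal:1235 \
  ubuntu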
