I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, it runs just fine. But when it runs on hosted VSTS, i.e. on a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    ./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
    docker stop test-apache
    docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
Ten seconds should be enough for Apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running with a hard-coded port first, because that's simpler than using an ephemeral port, and then switch to an ephemeral port once the hard-coded port works. And if the hard-coded port were unavailable, the error would look different: in that case, docker run should fail because the port can't be allocated.
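For reference, a minimal sketch of what the ephemeral-port variant might look like (docker port reports the host port Docker assigned):
docker run -d --name test-apache -p 80 httpd
# Ask Docker which host port was mapped to container port 80, e.g. "0.0.0.0:32768".
port=$(docker port test-apache 80 | head -n 1 | cut -d: -f2)
curl -D - "http://localhost:${port}/"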
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged: curl cannot connect to localhost port 8083.
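(A readiness poll, sketched below, would be more robust than any fixed sleep, but since even sleep 100 changes nothing, timing is apparently not the problem here.)
for i in $(seq 1 30)
do
    # Poll until Apache answers instead of sleeping blindly.
    if curl -fs http://localhost:8083/ > /dev/null
    then
        break
    fi
    sleep 1
done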
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent itself runs in a Docker container. When the Docker container for Apache is started, it runs as a sibling of the VSTS build agent container, not nested inside it.
There are two possible solutions:
Replacing localhost with the ip address of the docker host, keeping the port number 8083
Replacing localhost with the ip address of the docker container, changing the host port number 8083 to the container port number 80.
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
# Detect whether this script is running inside a Docker container.
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
    # Install net-tools for the route command, then take the default gateway as the host address.
    apt-get update
    apt-get install -y net-tools
    host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup checks whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway out of the route output to obtain the ip address of the host. Note that this only works if the container's network default gateway actually is the host.
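If iproute2 happens to be available in the agent image (an assumption), the same gateway address can be read without installing net-tools:
# The third field of "default via <gateway> dev <interface>" is the gateway, i.e. the Docker host here.
host=$(ip route show default | awk '{print $3}')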
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first ip address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/
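Applied to the test.sh from the question, this approach could look like the following sketch (container port 80 is reached directly, so the -p mapping is not needed for this access path):
docker run -d --name test-apache httpd
sleep 10
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
curl -D - "http://${ips[0]}/"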
Related
I'm trying to build a FreeRADIUS server using Docker and pulled the image "freeradius/freeradius-server". The first time, I used the given command
docker run --name my-radius -t -d freeradius/freeradius-server -X
to build a container, and it successfully started in debug mode. But I didn't know how to quit, so I used Ctrl+C to stop the container. Then I used the commands below to get into the container, wanting to start debug mode again so that I can change the configuration or parameters.
docker start my-radius
docker exec -it my-radius /bin/bash
I got into the container and used freeradius -X, but it failed. It printed:
Failed binding to auth address 127.0.0.1 port 18120 bound to server inner-tunnel: Address already in use
/etc/freeradius/sites-enabled/inner-tunnel[33]: Error binding to port for 127.0.0.1 port 18120
I used Google to look for solutions but failed. I guess it means the RADIUS server started automatically, so that the address 127.0.0.1 and port 18120 were already in use. But I don't know how to stop it inside the container.
The official FreeRADIUS docker image will start FreeRADIUS when the container starts. This means that if you start the container and then exec a shell into it, FreeRADIUS will already be running.
The container will exit as soon as the FreeRADIUS process stops, meaning it is not possible to start the container in this way, stop FreeRADIUS running, and then continue to use the container.
In this situation, trying to run FreeRADIUS a second time in another shell will fail because the ports are already open, as you have discovered.
This can be seen thus:
$ docker run --name my-radius -d freeradius/freeradius-server
106cdbc81e8e5c0257f22bebad221ed1b4ba0a14f40ce1e4110ec388380c7e62
$ docker exec -it my-radius /bin/bash
root@106cdbc81e8e:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
freerad 1 0 1 23:10 ? 00:00:00 freeradius -f
root 12 0 1 23:10 pts/0 00:00:00 /bin/bash
root 22 12 0 23:10 pts/0 00:00:00 ps -ef
root@106cdbc81e8e:/# exit
exit
$ docker stop my-radius
my-radius
$ docker rm my-radius
my-radius
$
To be able to run FreeRADIUS yourself you can do two things. Firstly, don't start the container in the background, but start it in the foreground with FreeRADIUS in debug mode. The docker entrypoint will let you pass arguments directly to the daemon. This is the easiest way if you don't need to actually do anything inside the container, but just run FreeRADIUS in debug mode:
$ docker run --name my-radius -it freeradius/freeradius-server -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on auth address * port 1812 bound to server default
Listening on acct address * port 1813 bound to server default
Listening on auth address :: port 1812 bound to server default
Listening on acct address :: port 1813 bound to server default
Listening on auth address 127.0.0.1 port 18120 bound to server inner-tunnel
Listening on proxy address * port 38640
Listening on proxy address :: port 49445
Ready to process requests
^C$
(note: hit Ctrl-C to quit).
The alternative is to start it in the background, but instead of running FreeRADIUS run some other process. You can then exec into the container and run FreeRADIUS manually. This means you get a full shell inside the container without FreeRADIUS already running. For instance:
$ docker run --name my-radius -d freeradius/freeradius-server sleep 999999999999
23b5ddd4825a31a8fb417e1594028c6533267be4ff20a448d3844203b805dbd9
$ docker exec -it my-radius /bin/bash
root@23b5ddd4825a:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 23:16 ? 00:00:00 sleep 999999999999
root 7 0 0 23:17 pts/0 00:00:00 /bin/bash
root 17 7 0 23:17 pts/0 00:00:00 ps -ef
root@23b5ddd4825a:/# freeradius -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on proxy address * port 46662
Listening on proxy address :: port 40284
Ready to process requests
^Croot@23b5ddd4825a:/# exit
exit
$ docker container kill my-radius
my-radius
$ docker container rm my-radius
my-radius
The sleep command used here will obviously exit at some point, so use a number large enough that it keeps running for as long as you need; when that process exits, the container will shut down.
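If the image's sleep comes from GNU coreutils (an assumption), sleep infinity avoids having to pick a finite number at all:
docker run --name my-radius -d freeradius/freeradius-server sleep infinity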
In my Docker (Spring Boot) application I would like to execute Docker commands. I use the docker-spotify-api (client).
I get different connection errors. I start the application as part of a docker-compose.yml.
This is what I tried so far on an EC2 AWS VPS:
docker = DefaultDockerClient.builder()
.uri(URI.create("tcp://localhost:2376"))
.build();
=> TCP protocol not supported.
docker = DefaultDockerClient.builder()
.uri(URI.create("tcp://localhost:2375"))
.build();
=> TCP protocol not supported.
docker = new DefaultDockerClient("unix:///var/run/docker.sock");
==> No such file
docker = DefaultDockerClient.builder()
.uri("unix:///var/run/docker.sock")
.build();
==> No such file
docker = DefaultDockerClient.builder()
.uri(URI.create("http://localhost:2375")).build();
or
docker = DefaultDockerClient.builder()
.uri(URI.create("http://localhost:2376")).build();
or
docker = DefaultDockerClient.builder()
.uri(URI.create("https://localhost:2376"))
.build();
==> Connect to localhost:2376 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
This is my environment on the EC2 VPS:
$ ls -l /var/run
lrwxrwxrwx 1 root root 6 Nov 14 07:23 /var/run -> ../run
$ groups ec2-user
ec2-user : ec2-user adm wheel systemd-journal docker
$ ls -l /run/docker.sock
srw-rw---- 1 root docker 0 Feb 14 17:16 /run/docker.sock
echo $DOCKER_HOST $DOCKER_CERT_PATH
(empty)
This situation is similar to https://github.com/spotify/docker-client/issues/838#issuecomment-318261710.
You use docker-compose on the host to start up your application; within the container, the Spring Boot application is using docker-spotify-api.
What you can try is to mount /var/run/docker.sock:/var/run/docker.sock in your compose file.
As @Benjah1 indicated, /var/run/docker.sock had to be mounted first.
To do so in a docker-compose / Docker Swarm environment you can do:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Furthermore, the other options resulted in errors because, by default, the Docker daemon does not accept tcp/http connections at all. You can change this, of course, taking a small risk.
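For example, a sketch of exposing the daemon on a local TCP socket in addition to the unix socket (unauthenticated TCP is risky, so keep it bound to 127.0.0.1 or add TLS; if dockerd is managed by systemd, the same flags go into its unit or daemon.json):
sudo dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375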
What are the values of your DOCKER_HOST and DOCKER_CERT_PATH environment variables?
Try the below, as docker-client communicates with your local Docker daemon using the HTTP Remote API:
final DockerClient docker = DefaultDockerClient.builder()
.uri(URI.create("https://localhost:2376"))
.build();
Please also verify the permissions of docker.sock: is it visible to your app? Also check whether your Docker service is running or not; from the listing above your docker.sock looks empty, but if the service is running the daemon should be holding it open.
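A couple of quick checks on the host (assuming systemd):
ls -l /var/run/docker.sock      # does the socket exist, and is its group "docker"?
sudo systemctl status docker    # is the Docker daemon actually running?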
It took me some time to figure out, but I was running https://hub.docker.com/r/alpine/socat/ locally and also wanted to connect to my Docker daemon and couldn't (same errors). Then it struck me: the solution on that page binds to the ip address 127.0.0.1. Instead, start that container bound to 0.0.0.0, and then inside your container you can do this: DockerClient dockerClient = new DefaultDockerClient("http://192.168.1.215:2376"); (use your own ip address, of course).
This worked for me.
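A sketch of that setup, based on the image's documented example (the port numbers and container name are just examples):
docker run -d --name docker-socat \
    -p 0.0.0.0:2376:2375 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    alpine/socat \
    tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock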
Installed Docker on Mac and trying to run Vespa on Docker following steps specified in following link
https://docs.vespa.ai/documentation/vespa-quick-start.html
I didn't have any issues till step 4. I see the vespa container running after step 2, and step 3 returned a 200 OK response.
But step 5 failed to return a 200 OK response. Below is the command I ran on my terminal:
curl -s --head http://localhost:8080/ApplicationStatus
I keep getting
curl: (52) Empty reply from server whenever I run it without the -s option.
So I tried to see the listening ports inside my vespa container, and I don't see anything for 8080 but can see 19071 (used in step 3):
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 8080'
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 19071'
tcp 0 0 0.0.0.0:19071 0.0.0.0:* LISTEN
Below doc has info related to vespa ports
https://docs.vespa.ai/documentation/reference/files-processes-and-ports.html
I'm assuming port 8080 should be active after docker run (step 2 of the quick-start link) and accessible from outside the container, as the port mapping is done.
But I don't see port 8080 active inside the container in the first place.
Am I missing something? Do I need to perform any additional step beyond what's mentioned in the quick start? FYI, I installed Jenkins inside Docker and was able to access it from outside the container via port mapping, but I'm not sure why it's not working with Vespa. I have been trying for quite some time but no progress. Please advise me if I'm missing something here.
You have too little memory for your Docker container: "Minimum 6GB memory dedicated to Docker (the default is 2GB on Macs)." See https://docs.vespa.ai/documentation/vespa-quick-start.html
The deadlock detector warnings and the failure to get configuration from the configuration server (which was likely OOM-killed) indicate that you are too low on memory.
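A quick way to check how much memory the Docker VM actually has (assuming a Docker version whose info format exposes this field):
docker info --format '{{.MemTotal}}'    # memory available to the Docker daemon, in bytes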
My guess is that your jdisc container had not finished initializing, or did not initialize properly. Did you try to check the log?
docker exec vespa bash -c '/opt/vespa/bin/vespa-logfmt /opt/vespa/logs/vespa/vespa.log'
This should tell you if there was something wrong. When it is ready to receive requests you would see something like this:
[2018-12-10 06:30:37.854] INFO : container Container.org.eclipse.jetty.server.AbstractConnector Started SearchServer#79afa369{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
[2018-12-10 06:30:37.857] INFO : container Container.org.eclipse.jetty.server.Server Started #10280ms
[2018-12-10 06:30:37.857] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Switching to the latest deployed set of configurations and components. Application switch number: 0
[2018-12-10 06:30:37.859] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Initializing new set of configurations and components. Application switch number: 1
Pretty straightforward:
christian@christian:~/development$ docker -v
Docker version 1.6.2, build 7c8fca2
I ran these instructions to start docker.
docker run --detach --name neo4j --publish 7474:7474 \
--volume $HOME/neo4j/data:/data neo4j
Nothing exciting here; this should all just work.
But, http://localhost:7474 doesn't respond. When I jump into the container, it seems to respond just fine (see debug session). What did I miss?
christian@christian:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d9e0d5d2f73 neo4j:latest "/docker-entrypoint. 15 minutes ago Up 15 minutes 7473/tcp, 0.0.0.0:7474->7474/tcp neo4j
christian@christian:~$ curl http://localhost:7474
^C
christian@christian:~$ time curl http://localhost:7474
^C
real 0m33.353s
user 0m0.008s
sys 0m0.000s
christian@christian:~$ docker exec -it 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2 /bin/bash
root@2d9e0d5d2f73:/var/lib/neo4j# curl http://localhost:7474
{
"management" : "http://localhost:7474/db/manage/",
"data" : "http://localhost:7474/db/data/"
}root@2d9e0d5d2f73:/var/lib/neo4j# exit
christian@christian:~$ docker logs 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2
Starting Neo4j Server console-mode...
/var/lib/neo4j/data/log was missing, recreating...
2016-03-07 17:37:22.878+0000 INFO No SSL certificate found, generating a self-signed certificate..
2016-03-07 17:37:25.276+0000 INFO Successfully started database
2016-03-07 17:37:25.302+0000 INFO Starting HTTP on port 7474 (4 threads available)
2016-03-07 17:37:25.462+0000 INFO Enabling HTTPS on port 7473
2016-03-07 17:37:25.531+0000 INFO Mounting static content at /webadmin
2016-03-07 17:37:25.579+0000 INFO Mounting static content at /browser
2016-03-07 17:37:26.384+0000 INFO Remote interface ready and available at http://0.0.0.0:7474/
I can't reproduce this. Docker 1.8.2 and 1.10.0 are OK with your case:
docker run --detach --name neo4j --publish 7474:7474 neo4j
curl -i 127.0.0.1:7474
HTTP/1.1 200 OK
Date: Tue, 08 Mar 2016 16:45:46 GMT
Content-Type: application/json; charset=UTF-8
Access-Control-Allow-Origin: *
Content-Length: 100
Server: Jetty(9.2.4.v20141103)
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
Try upgrading Docker and check the netfilter rules for forwarding.
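For the netfilter side, something like this sketch shows whether the expected forwarding rule is in place:
sudo iptables -t nat -L DOCKER -n    # expect a DNAT entry for tcp dpt:7474 pointing at the container IP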
Instead of making the request to localhost you'll want to use the docker-machine VM ip address, which you can determine with this command:
docker-machine inspect default | grep IPAddress
or
curl -i http://$(docker-machine ip default):7474/
The default IP address is 192.168.99.100
OK, basically I removed the volume mount in the args to docker and it works. Ultimately, I don't want an out-of-container mount anyway. Thank you @LoadAverage for cluing me in. It's still not 'right', but for my purposes I don't care.
christian@christian:~/development$ docker run --detach --name neo4j --publish 7474:7474 neo4j
6c94527816057f8ca1e325c8f9fa7b441b4a5d26682f72d42ad17614d9251170
christian@christian:~/development$ curl http://127.0.0.1:7474
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
christian@christian:~/development$
I have a service running in a docker container (local machine). I can see the service URL in the Ambari service config.
Now I want to connect to that service using my local development environment.
I found I can connect to it from within the container, but when I use that URL outside, on my local machine, I get connection refused.
Cause: org.apache.http.conn.HttpHostConnectException: Connect to
xx.xx.xx.com:12008 [xx.xx.xx.com/195.169.98.101] failed: Connection refused
How do I connect to a service running inside a container from outside?
In my case the code executes on my local machine.
If your container has mapped its port to the VM's port 12008, you would need to make sure you have port-forwarded 12008 in your VirtualBox connection settings, as I mention in "How to connect mysql workbench to running mysql inside docker?":
VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port12008,tcp,,12008,,12008"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port12008,udp,,12008,,12008"
The question needs more clarification, but I will answer with some assumptions.
I used an Ambari Docker image (chosen randomly based on popularity).
Then I started a 3-node cluster as mentioned, and my amb-settings and docker ps looked like this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ amb-settings
NODE_PREFIX=amb
CLUSTER_SIZE=3
AMBARI_SERVER_NAME=amb-server
AMBARI_SERVER_IMAGE=hortonworks/ambari-server:latest
AMBARI_AGENT_IMAGE=hortonworks/ambari-agent:latest
DOCKER_OPTS=
AMBARI_SERVER_IP=172.17.0.6
CONSUL=amb-consul
CONSUL_IMAGE=sequenceiq/consul:v0.5.0-v6
EXPOSE_DNS=false
DRY_RUN=false
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2483a74d919 hortonworks/ambari-agent:latest "/usr/sbin/init syste" 20 minutes ago Up 20 minutes amb2
4acaec766eaa hortonworks/ambari-agent:latest "/usr/sbin/init syste" 21 minutes ago Up 20 minutes amb1
47e9419de59f hortonworks/ambari-server:latest "/usr/sbin/init syste" 21 minutes ago Up 21 minutes 8080/tcp amb-server
548730bb1824 sequenceiq/consul:v0.5.0-v6 "/bin/start -server -" 22 minutes ago Up 22 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp amb-consul
27c725af6531 sequenceiq/ambari "/usr/sbin/init" 23 minutes ago Up 23 minutes 8080/tcp awesome_tesla
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$
As of now, I can visit the Ambari server through: http://172.17.0.6:8080/
This also works from my host computer. However, if you want this to be reachable from another computer on the same network, then one option is to have an haproxy which does the redirection from:
localhost:8080 -> 172.17.0.6:8080
So, I created a small haproxy.cfg and Dockerfile to achieve this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat Dockerfile
FROM haproxy:1.6
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat haproxy.cfg
frontend localnodes
    bind *:8080
    mode http
    default_backend ambari

backend ambari
    mode http
    server ambari-server 172.17.0.6:8080 check
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker build --rm -t ambariproxy .
Sending build context to Docker daemon 9.635 MB
Step 1 : FROM haproxy:1.6
---> af749d0291b2
Step 2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
---> Using cache
---> 60cdd2c7bb05
Successfully built 60cdd2c7bb05
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker run -d -p 8080:8080 ambariproxy
63dd026349bbb6752dbd898e1ae70e48a8785e792b35040e0d0473acb00c2834
Now if I go to localhost:8080 or MY_HOST_IP:8080 I can see the Ambari server, and this should also work from computers on the same network.
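A quick sanity check from the host (just a usage example):
curl -I http://localhost:8080/    # should return Ambari's HTTP response headers via the haproxy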
Hope I managed to answer your question :)
Thanks,