How do I run a containerized Cypress runner against a containerized server?

I'm trying to run Cypress tests against containerized Nginx:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7c3efd24e6e6 tdd_nginx "/docker-entrypoint.…" 19 minutes ago Up 19 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp tdd_nginx_1
From the official docs I learned I can use docker run -it -v $PWD:/e2e -w /e2e -e CYPRESS_baseUrl=host.docker.internal cypress/included:7.7.0
That is where I learned about host.docker.internal, which is supposedly how Cypress running in one container knows where to find localhost on the host.
The Nginx container exposes port 80, so I've tried -e CYPRESS_baseUrl=host.docker.internal:80 as well as leaving the port off, since port 80 is the default fallback in most cases.
error output:
Cypress could not verify that this server is running:
> http://host.docker.internal:80
We are verifying this server because it has been configured as your `baseUrl`.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
We will try connecting to it 2 more times...
We will try connecting to it 1 more time...
Cypress failed to verify that your server is running.
Please start this server and then run Cypress again.
Moving the env variable into cypress.json made no difference:
{
"baseUrl": "host.docker.internal",
"video": false
}

Changed the cypress.json to:
{
"CYPRESS_BASE_URL": "host.docker.internal",
"video": false
}
Passing CYPRESS_BASE_URL as an environment variable didn't help, but putting it into the file did the trick. Strangely, it makes a difference.
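For anyone hitting this on Linux: host.docker.internal only resolves out of the box on Docker Desktop for Mac and Windows. On a Linux host you can map it to the host gateway yourself; a sketch, assuming Docker 20.10+ for the host-gateway alias (the explicit scheme and port here are my additions, not from the original question):
docker run -it -v $PWD:/e2e -w /e2e \
  --add-host=host.docker.internal:host-gateway \
  -e CYPRESS_baseUrl=http://host.docker.internal:80 \
  cypress/included:7.7.0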
Thanks go to @jonrsharpe

Related

How to shut down HTTP on httpd docker image

I have created a container running an Apache httpd server and loaded my certificates. https://mydomain works, but http://mydomain works too, and if I type mydomain into my browser, it opens http://mydomain. Is there a way to disable the HTTP protocol? I use only -p 443:443 when starting the container.
This is my Dockerfile
ARG version=2.4.48-alpine
FROM httpd:$version
LABEL version=1.0
COPY ./public_html/ /usr/local/apache2/htdocs/
# run web traffic over SSL/HTTPS
COPY ./cert/srv.crt /usr/local/apache2/conf/
COPY ./cert/srv.key /usr/local/apache2/conf/
RUN ["sed", "-i", "-e", "'s/^#\(Include .*httpd-ssl.conf\)/\1/'", "-e", "'s/^#\(LoadModule .*mod_ssl.so\)/\1/'", "-e", "'s/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/'", "conf/httpd.conf"]
EXPOSE 443/tcp
and this is the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
21678d6321e4 webserver "/bin/sh" 2 hours ago Up About an hour 80/tcp, 0.0.0.0:443->443/tcp webserver
I resolved the issue by redirecting HTTP to HTTPS and also exposing port 80.
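For reference, a minimal sketch of such a redirect, assuming the container is now started with -p 80:80 -p 443:443; the vhost below is hypothetical, not the asker's actual config. Redirect comes from mod_alias, which httpd loads by default:
# appended to conf/httpd.conf; adjust ServerName to your domain
<VirtualHost *:80>
    ServerName mydomain
    Redirect permanent / https://mydomain/
</VirtualHost>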

Issue accessing vespa outside docker container

I installed Docker on a Mac and am trying to run Vespa on Docker following the steps specified at the following link:
https://docs.vespa.ai/documentation/vespa-quick-start.html
I didn't have any issues till step 4. I see the vespa container running after step 2, and step 3 returned a 200 OK response.
But step 5 failed to return a 200 OK response. Below is the command I ran in my terminal:
curl -s --head http://localhost:8080/ApplicationStatus
I keep getting
curl: (52) Empty reply from server whenever I run it without the -s option.
So I tried to check the listening ports inside my vespa container: I don't see anything for 8080, but I do for 19071 (used in step 3).
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 8080'
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 19071'
tcp 0 0 0.0.0.0:19071 0.0.0.0:* LISTEN
The doc below has info related to Vespa ports:
https://docs.vespa.ai/documentation/reference/files-processes-and-ports.html
I'm assuming port 8080 should be active after docker run (step 2 of the quick-start link) and accessible outside the container, since the port mapping is done.
But I don't see port 8080 active inside the container in the first place.
Am I missing something? Do I need to perform any additional steps beyond those mentioned in the quick start? FYI, I installed Jenkins inside Docker and was able to access it outside the container via port mapping, but I'm not sure why it's not working with Vespa. I have been trying for quite some time with no progress. Please advise if I'm missing something here.
You have too little memory for your Docker container: "Minimum 6GB memory dedicated to Docker (the default is 2GB on Macs)." See https://docs.vespa.ai/documentation/vespa-quick-start.html
The deadlock-detector warnings and the failure to get configuration from the configuration server (which was likely OOM-killed) indicate that you are too low on memory.
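A quick way to check what the Docker VM actually has before re-running the quick start (not from the original answer; the exact label may vary by Docker version):
# total memory available to the Docker engine (should be at least 6GB for Vespa)
docker info | grep -i 'total memory'
# live memory usage of the vespa container
docker stats --no-stream vespa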
My guess is that your jdisc container had not finished initializing or did not initialize properly. Did you try checking the log?
docker exec vespa bash -c '/opt/vespa/bin/vespa-logfmt /opt/vespa/logs/vespa/vespa.log'
This should tell you if there was something wrong. When it is ready to receive requests you would see something like this:
[2018-12-10 06:30:37.854] INFO : container Container.org.eclipse.jetty.server.AbstractConnector Started SearchServer#79afa369{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
[2018-12-10 06:30:37.857] INFO : container Container.org.eclipse.jetty.server.Server Started #10280ms
[2018-12-10 06:30:37.857] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Switching to the latest deployed set of configurations and components. Application switch number: 0
[2018-12-10 06:30:37.859] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Initializing new set of configurations and components. Application switch number: 1

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, it runs just fine. But when it runs on hosted VSTS, i.e. on a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    ./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
    docker stop test-apache
    docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running first with a hard-coded port, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded port works. And in case the hard-coded port doesn't work because the port is unavailable, the error should look different: in that case, docker run should fail because the port can't be allocated.
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent runs in a Docker container. When the Docker container for Apache is started, it runs on the same level as the VSTS build agent Docker container, not nested inside the VSTS build agent Docker container.
There are two possible solutions:
Replacing localhost with the ip address of the docker host, keeping the port number 8083
Replacing localhost with the ip address of the docker container, changing the host port number 8083 to the container port number 80.
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
    apt-get update
    apt-get install net-tools
    host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup checks whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway out of the route output to obtain the IP address of the host. Note that this only works if the container's network default gateway actually is the host.
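If the agent image ships iproute2 (an assumption; many modern images do), the same address can be read without installing net-tools:
# the container's default gateway is the Docker host on the bridge network
host=$(ip route | awk '/^default/ {print $3}')
curl -D - "http://$host:8083/"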
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first ip address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/

Docker healthcheck for nginx container

I have a project using the official nginx docker container from Docker Hub, launching via Docker Compose. I have healthchecks configured in Docker Compose for each of my containers, and recently the healthcheck for this nginx container has been behaving strangely; on launching with docker-compose up -d, all my containers launch, and begin running healthchecks, but the nginx container looks like it never runs the healthcheck. I can manually run the script just fine if I docker exec into the container, and the healthcheck runs normally if I restart the container.
Example output from docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
458a55ae8971 my_custom_image "/tini -- /usr/local…" 7 minutes ago Up 7 minutes (healthy) project_worker_1
5024781b1a73 redis:3.2 "docker-entrypoint.s…" 7 minutes ago Up 7 minutes (healthy) 127.0.0.1:6379->6379/tcp project_redis_1
bd405dde8ce7 postgres:9.6 "docker-entrypoint.s…" 7 minutes ago Up 7 minutes (healthy) 127.0.0.1:15432->5432/tcp project_postgres_1
93e15c18d879 nginx:mainline "nginx -g 'daemon of…" 7 minutes ago Up 7 minutes (health: starting) 127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp nginx
Example (partial, for brevity) output from docker inspect nginx:
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 11568,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-02-13T21:04:22.904241169Z",
"FinishedAt": "0001-01-01T00:00:00Z",
"Health": {
"Status": "unhealthy",
"FailingStreak": 0,
"Log": []
}
},
The portion of the docker-compose.yml defining the nginx container:
nginx:
  image: nginx:mainline
  # using container_name means there will only ever be one nginx container!
  container_name: nginx
  restart: always
  networks:
    - proxynet
  volumes:
    - /etc/nginx/conf.d
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - tlsdata:/etc/nginx/certs:ro
    - attachdata:/usr/share/nginx/html/uploads:ro
    - staticdata:/usr/share/nginx/html/static:ro
    - ./nginx/healthcheck.sh:/bin/healthcheck.sh
  healthcheck:
    test: ['CMD', '/bin/healthcheck.sh']
    interval: 1m
    timeout: 5s
    retries: 3
  ports:
    # Make the http/https ports available on the Docker host IPv4 loopback interface
    - '127.0.0.1:80:80'
    - '127.0.0.1:443:443'
The healthcheck.sh I am loading in as a volume:
#!/bin/bash
service nginx status || exit 1
It looks like the problem is just an issue with systemd never returning from the status check when the container initially launches, while at the same time the configured healthcheck timeout never triggers. Everything else works, and nginx is up and responding, but it would be nice for the healthcheck to function properly without needing a manual restart each time I start up.
Is there something missing in my configuration, or a better check I can run?
I think there is no need for a custom script in this case.
Try just changing your healthcheck test to
test: ["CMD", "service", "nginx", "status"]
That works fine for me.
Try to use " instead of ' as well, just in case :)
EDIT
If you really want to force an exit 1, in case of failure, you could use:
test: service nginx status || exit 1
For the official Alpine nginx image you can also do:
healthcheck:
  test: ["CMD-SHELL", "wget -O /dev/null http://localhost || exit 1"]
  timeout: 10s
wget is part of the standard image. What this does is download your index.html/php/whatever to nowhere (/dev/null); it will time out and fail otherwise.
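If your image ships curl instead of wget (an assumption; check your base image), the equivalent check would be:
healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
  timeout: 10s
The -f flag makes curl exit non-zero on HTTP error responses, so an error page from nginx also fails the check.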
I attempted the same script and encountered the same issue. I changed the healthcheck.sh to instead run like this:
#!/bin/bash
if service nginx status; then
    exit 0
else
    exit 1
fi
Running this in the docker container resulted in successful health checks.
Over a year later, I have found a solution. First, an additional clarification on the environment, what I believe is happening, and speculation on a possible bug with the Docker Engine.
The Compose file I am using now launches a lightly modified version of the 'official' Alpine NGINX image, which uses COPY to load in the healthcheck script and adds HEALTHCHECK explicitly in the image. This image backs an nginx service and works in concert with an image running jwilder/docker-gen, which uses container metadata from Docker to generate NGINX configuration files. That container runs as a service named nginx-gen. When containers change, configuration is re-generated, and if there are any changes, a SIGHUP is sent to the nginx service.
What I discovered is the following:
If all services are launched together, the nginx service never runs healthchecks;
If the nginx service is restarted soon after launch, healthchecks complete normally;
If the nginx service is launched by itself, healthchecks complete normally;
If all services other than nginx-gen are launched together, healthchecks complete normally;
If all services are launched together, but nginx-gen is modified to sleep 60 before doing anything, healthchecks complete normally;
So, it appears that there is some obscure interaction with signal processing, Docker, and NGINX. If a SIGHUP is sent to an NGINX process in a container before the first healthcheck runs in that container, no healthchecks ever run.
The final iteration I came up with modifies the nginx-gen container to poll the health of the nginx container: it looks up the health status of a container with a defined label in a loop, with a short sleep between attempts. Once the nginx container reports healthy, nginx-gen proceeds to generate configuration files. I also changed the notification method to docker exec a script that explicitly tests and reloads the configuration in the nginx container, rather than relying on SIGHUP.
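A minimal sketch of that polling loop, assuming the proxy container is simply named nginx and skipping the label lookup; the explicit test-and-reload replaces the SIGHUP:
#!/bin/bash
# wait until Docker reports the nginx container as healthy
while [ "$(docker inspect --format '{{.State.Health.Status}}' nginx 2>/dev/null)" != "healthy" ]; do
    sleep 2
done
# ... generate configuration files here ...
# then test and reload the configuration explicitly instead of sending SIGHUP
docker exec nginx nginx -t && docker exec nginx nginx -s reload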
End result: I can docker-compose up -d, and everything eventually reports healthy without further intervention. Success!

Connect to a Service running inside a docker container from outside

I have a service running in a Docker container (local machine). I can see the service URL in the Ambari service config.
Now I want to connect to that service from my local development environment.
I found I can connect to it from within the container, but when I use that URL outside, on my local machine, I get connection refused.
Cause: org.apache.http.conn.HttpHostConnectException: Connect to
xx.xx.xx.com:12008 [xx.xx.xx.com/195.169.98.101] failed: Connection refused
How to connect to a service running inside a container from outside?
In my case, the code executes on my local machine.
If your container has mapped its port to the VM's port 12008, you need to make sure you have port-forwarded 12008 in your VirtualBox connection settings, as I mention in "How to connect mysql workbench to running mysql inside docker?":
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port12008,tcp,,12008,,12008"
VBoxManage controlvm "boot2docker-vm" natpf1 "udp-port12008,udp,,12008,,12008"
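To verify that the rules took effect (a rough check; the exact output format varies by VirtualBox version):
VBoxManage showvminfo "boot2docker-vm" | grep -i rule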
The question needs more clarification, but I will answer with some assumptions.
I used an Ambari docker image (chosen randomly based on popularity).
Then I started a 3-node cluster as mentioned, and my amb-settings and docker ps output looked like this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ amb-settings
NODE_PREFIX=amb
CLUSTER_SIZE=3
AMBARI_SERVER_NAME=amb-server
AMBARI_SERVER_IMAGE=hortonworks/ambari-server:latest
AMBARI_AGENT_IMAGE=hortonworks/ambari-agent:latest
DOCKER_OPTS=
AMBARI_SERVER_IP=172.17.0.6
CONSUL=amb-consul
CONSUL_IMAGE=sequenceiq/consul:v0.5.0-v6
EXPOSE_DNS=false
DRY_RUN=false
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2483a74d919 hortonworks/ambari-agent:latest "/usr/sbin/init syste" 20 minutes ago Up 20 minutes amb2
4acaec766eaa hortonworks/ambari-agent:latest "/usr/sbin/init syste" 21 minutes ago Up 20 minutes amb1
47e9419de59f hortonworks/ambari-server:latest "/usr/sbin/init syste" 21 minutes ago Up 21 minutes 8080/tcp amb-server
548730bb1824 sequenceiq/consul:v0.5.0-v6 "/bin/start -server -" 22 minutes ago Up 22 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp amb-consul
27c725af6531 sequenceiq/ambari "/usr/sbin/init" 23 minutes ago Up 23 minutes 8080/tcp awesome_tesla
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$
As of now, I can visit the Ambari server through: http://172.17.0.6:8080/
This also works from my host computer. However, if you want it to be reachable from another computer on the same network, one option is to run a haproxy that forwards:
localhost:8080 -> 172.17.0.6:8080
So, I created a small haproxy.cfg and Dockerfile to achieve this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat Dockerfile
FROM haproxy:1.6
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat haproxy.cfg
frontend localnodes
    bind *:8080
    mode http
    default_backend ambari

backend ambari
    mode http
    server ambari-server 172.17.0.6:8080 check
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker build --rm -t ambariproxy .
Sending build context to Docker daemon 9.635 MB
Step 1 : FROM haproxy:1.6
---> af749d0291b2
Step 2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
---> Using cache
---> 60cdd2c7bb05
Successfully built 60cdd2c7bb05
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker run -d -p 8080:8080 ambariproxy
63dd026349bbb6752dbd898e1ae70e48a8785e792b35040e0d0473acb00c2834
Now if I browse to localhost:8080 or MY_HOST_IP:8080 I can see the ambari-server, and this should also work from computers on the same network.
Hope I managed to answer your question :)
Thanks,
