How to shut down HTTP on the httpd Docker image

I have created a container running an Apache HTTP server and loaded my certificates. https://mydomain works, but http://mydomain works too, and if I type mydomain into my browser it opens http://mydomain. Is there a way to disable the HTTP protocol? I only use -p 443:443 when starting the container.
This is my Dockerfile
ARG version=2.4.48-alpine
FROM httpd:$version
LABEL version=1.0
COPY ./public_html/ /usr/local/apache2/htdocs/
# run web traffic over SSL/HTTPS
COPY ./cert/srv.crt /usr/local/apache2/conf/
COPY ./cert/srv.key /usr/local/apache2/conf/
RUN ["sed", "-i", "-e", "'s/^#\(Include .*httpd-ssl.conf\)/\1/'", "-e", "'s/^#\(LoadModule .*mod_ssl.so\)/\1/'", "-e", "'s/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/'", "conf/httpd.conf"]
EXPOSE 443/tcp
and this is the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
21678d6321e4 webserver "/bin/sh" 2 hours ago Up About an hour 80/tcp, 0.0.0.0:443->443/tcp webserver

I resolved the issue by redirecting HTTP to HTTPS and also exposing port 80.
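For reference, a minimal sketch of that redirect, assuming something like the following is appended to httpd.conf (mydomain is a placeholder; Redirect comes from mod_alias, which the stock httpd.conf loads by default):
<VirtualHost *:80>
    ServerName mydomain
    # Send every plain-HTTP request to the HTTPS site
    Redirect permanent / https://mydomain/
</VirtualHost>
The container then needs both ports published, e.g. -p 80:80 -p 443:443.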

Related

How do I run a containerized Cypress runner against a containerized server?

I'm trying to run Cypress tests against containerized Nginx:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7c3efd24e6e6 tdd_nginx "/docker-entrypoint.…" 19 minutes ago Up 19 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp tdd_nginx_1
From the official docs I learned I can use docker run -it -v $PWD:/e2e -w /e2e -e CYPRESS_baseUrl=host.docker.internal cypress/included:7.7.0
That's where I learned about host.docker.internal, which is supposedly how Cypress knows to look for the host's localhost from inside a container.
The Nginx container exposes port 80, so I've tried -e CYPRESS_baseUrl=host.docker.internal:80 as well as omitting the port, since port 80 is the default in most cases.
error output:
Cypress could not verify that this server is running:
> http://host.docker.internal:80
We are verifying this server because it has been configured as your `baseUrl`.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
We will try connecting to it 2 more times...
We will try connecting to it 1 more time...
Cypress failed to verify that your server is running.
Please start this server and then run Cypress again.
Moving the env variable into cypress.json made no difference:
{
  "baseUrl": "host.docker.internal",
  "video": false
}
Changed the cypress.json to:
{
  "CYPRESS_BASE_URL": "host.docker.internal",
  "video": false
}
Passing CYPRESS_BASE_URL as an environment variable didn't help, but putting it into the file did the trick. Strangely, that makes a difference.
Thanks go to @jonrsharpe
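As a side note, host.docker.internal is not available on Linux hosts out of the box; one way to sidestep it entirely is to put both containers on a shared Docker network and point baseUrl at the container name. A rough sketch, with illustrative network and container names:
docker network create tdd-net
docker run -d --network tdd-net --name tdd_nginx tdd_nginx
docker run -it --network tdd-net -v $PWD:/e2e -w /e2e -e CYPRESS_baseUrl=http://tdd_nginx cypress/included:7.7.0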

Docker healthcheck for nginx container

I have a project using the official nginx Docker container from Docker Hub, launched via Docker Compose. I have healthchecks configured in Docker Compose for each of my containers, and recently the healthcheck for this nginx container has been behaving strangely: on launching with docker-compose up -d, all my containers start and begin running healthchecks, but the nginx container looks like it never runs its healthcheck. I can run the script manually just fine if I docker exec into the container, and the healthcheck runs normally if I restart the container.
Example output from docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
458a55ae8971 my_custom_image "/tini -- /usr/local…" 7 minutes ago Up 7 minutes (healthy) project_worker_1
5024781b1a73 redis:3.2 "docker-entrypoint.s…" 7 minutes ago Up 7 minutes (healthy) 127.0.0.1:6379->6379/tcp project_redis_1
bd405dde8ce7 postgres:9.6 "docker-entrypoint.s…" 7 minutes ago Up 7 minutes (healthy) 127.0.0.1:15432->5432/tcp project_postgres_1
93e15c18d879 nginx:mainline "nginx -g 'daemon of…" 7 minutes ago Up 7 minutes (health: starting) 127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp nginx
Example (partial, for brevity) output from docker inspect nginx:
"State": {
    "Status": "running",
    "Running": true,
    "Paused": false,
    "Restarting": false,
    "OOMKilled": false,
    "Dead": false,
    "Pid": 11568,
    "ExitCode": 0,
    "Error": "",
    "StartedAt": "2018-02-13T21:04:22.904241169Z",
    "FinishedAt": "0001-01-01T00:00:00Z",
    "Health": {
        "Status": "unhealthy",
        "FailingStreak": 0,
        "Log": []
    }
},
The portion of the docker-compose.yml defining the nginx container:
nginx:
  image: nginx:mainline
  # using container_name means there will only ever be one nginx container!
  container_name: nginx
  restart: always
  networks:
    - proxynet
  volumes:
    - /etc/nginx/conf.d
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - tlsdata:/etc/nginx/certs:ro
    - attachdata:/usr/share/nginx/html/uploads:ro
    - staticdata:/usr/share/nginx/html/static:ro
    - ./nginx/healthcheck.sh:/bin/healthcheck.sh
  healthcheck:
    test: ['CMD', '/bin/healthcheck.sh']
    interval: 1m
    timeout: 5s
    retries: 3
  ports:
    # Make the http/https ports available on the Docker host IPv4 loopback interface
    - '127.0.0.1:80:80'
    - '127.0.0.1:443:443'
The healthcheck.sh I am loading in as a volume:
#!/bin/bash
service nginx status || exit 1
It looks like the problem is just an issue with systemd never returning from the status check when the container initially launches, and at the same time the configured healthcheck timeout does not trigger. Everything else works, and nginx is up and responding, but it would be nice for the healthcheck to function properly without needing to manually restart each time I start up.
Is there something missing in my configuration, or a better check I can run?
I think that there is no need for a custom script in this case.
Try just changing your healthcheck test to
test: ["CMD", "service", "nginx", "status"]
That works fine for me.
Try using " instead of ' as well, just in case :)
EDIT
If you really want to force an exit 1 in case of failure, you could use:
test: service nginx status || exit 1
For the official Alpine nginx image you can also do:
healthcheck:
  test: ["CMD-SHELL", "wget -O /dev/null http://localhost || exit 1"]
  timeout: 10s
wget is part of the standard image. This downloads your index.html/php/whatever to nowhere (/dev/null), and it should time out and fail otherwise.
I attempted the same script and encountered the same issue. I changed the healthcheck.sh to instead run like this:
#!/bin/bash
if service nginx status; then
    exit 0
else
    exit 1
fi
Running this in the docker container resulted in successful health checks.
Over a year later, I have found a solution. First, an additional clarification on the environment, what I believe is happening, and speculation on a possible bug with the Docker Engine.
The Compose file I am using now is launching a lightly modified version of the 'official' Alpine NGINX image, which uses COPY to load in the healthcheck script and adds HEALTHCHECK explicitly in the image. This image is used for an nginx service, and is used in concert with an image running jwilder/docker-gen to use container metadata from Docker to generate NGINX configuration files. This container is running as a service named nginx-gen. When containers change, configuration is re-generated, and if there are any changes, a SIGHUP is sent to the nginx service.
What I discovered is the following:
If all services are launched together, the nginx service never runs healthchecks;
If the nginx service is restarted soon after launch, healthchecks complete normally;
If the nginx service is launched by itself, healthchecks complete normally;
If all services other than nginx-gen are launched together, healthchecks complete normally;
If all services are launched together, but nginx-gen is modified to sleep 60 before doing anything, healthchecks complete normally;
So, it appears that there is some obscure interaction with signal processing, Docker, and NGINX. If a SIGHUP is sent to an NGINX process in a container before the first healthcheck runs in that container, no healthchecks ever run.
The final iteration I came up with modifies the nginx-gen container to poll the health of the nginx container. It looks up the health status of a container with a defined label in a loop, with a short sleep. Once the nginx container reports healthy, nginx-gen proceeds to generate configuration files. I also changed the notification method to docker exec a script to explicitly test and reload configuration in the nginx container, rather than rely on SIGHUP.
End result: I can docker-compose up -d, and everything eventually reports healthy without further intervention. Success!
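For illustration, the polling step in nginx-gen could be a small shell loop like this (a sketch; the container name nginx comes from the Compose file above, and the sleep interval is arbitrary):
# Wait until the nginx container reports a healthy status before generating config
until [ "$(docker inspect --format '{{.State.Health.Status}}' nginx)" = "healthy" ]; do
    sleep 2
done
# ...then generate the configuration and docker exec the test-and-reload script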

Cannot start Portainer with compose

I don't know why, but I cannot start Portainer.
I downloaded https://github.com/portainer/portainer-compose
I did a docker-compose up
Everything seems fine:
docker ps
CONTAINER ID IMAGE COMMAND CREATED
9c01c18dcc23 portainer/portainer:latest "/portainer --temp..." 5 minutes ago
2de6b22cadb0 portainer_proxy "nginx -g 'daemon ..." 10 minutes ago
1c0166b3f870 v2tec/watchtower "/watchtower --cle..." 10 minutes ago
893a507f62e3 portainer/templates "nginx -g 'daemon ..." 10 minutes ago
And I have this in the logs :
portainer-app | 2017/11/12 15:01:54 Warning: the --templates / -t flag is deprecated and will be removed in future versions.
portainer-app | 2017/11/12 15:01:54 Starting Portainer 1.15.1 on :9000
I should be able to access portainer on port 9000, but nothing happens here.
If I try to access localhost, I get a 404 from nginx.
Do you have any idea?
In the docs of the Portainer compose setup, the link to access Portainer is http://localhost/portainer (replace localhost with the IP of your server if necessary). So it uses port 80.
If you need to use port 9000, replace this line in the docker-compose.yml:
ports:
  - "80:80"
by
ports:
  - "9000:80"
And access it at: http://localhost:9000/portainer
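A quick way to confirm the new mapping from the host (output will vary):
curl -I http://localhost:9000/portainer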
HTH

Docker port mapping not working, "connection refused"

I have a Docker container running on Windows, as per the below.
C:\magento2-devbox>docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
magento2devbox_db_046616a8b9fbb28b8fe4b01a66076f5e docker-entrypoint.sh mysqld Up 0.0.0.0:32776->3306/tcp
magento2devbox_elastic_046616a8b9fbb28b8fe4b01a66076f5e /docker-entrypoint.sh elas ... Up 0.0.0.0:32771->9200/tcp, 9300/tcp
magento2devbox_rabbit_046616a8b9fbb28b8fe4b01a66076f5e docker-entrypoint.sh rabbi ... Up 15671/tcp, 0.0.0.0:32773->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 0.0.0.0:32774->5672/tcp
magento2devbox_redis_046616a8b9fbb28b8fe4b01a66076f5e docker-entrypoint.sh redis ... Up 6379/tcp
magento2devbox_varnish_046616a8b9fbb28b8fe4b01a66076f5e /usr/local/bin/entrypoint.sh Up 0.0.0.0:32775->6081/tcp
magento2devbox_web_046616a8b9fbb28b8fe4b01a66076f5e /usr/local/bin/entrypoint.sh Up 0.0.0.0:32770->22/tcp, 44100/tcp, 0.0.0.0:32768->5000/tcp, 0.0.0.0:32769->80/tcp, 9000/tcp
However, when I try to reach http://localhost:32769, which should map to the container's web server, I get "connection refused". How can I start debugging what's happening?
Thanks.
I've found the solution. On Windows the container doesn't run on Windows per se, but inside the Docker Toolbox VM in VirtualBox. Thus you have to run:
C:\magento2-devbox>docker-machine ip
192.168.99.100
And then use that IP to reach the application: http://192.168.99.100:32769 - now it works. In my specific case above, I needed to reach the Varnish endpoint of the application at http://192.168.99.100:32775.
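As a general debugging step, docker-compose can also print the host binding for a given container port, which confirms the mapping before you test in a browser (the service name web is assumed from the listing above):
docker-compose port web 80
# should print the published address, e.g. 0.0.0.0:32769 per the docker-compose ps output above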

Connect to a Service running inside a docker container from outside

I have a service running in a docker container (local machine). I can see the service URL in the Ambari service config.
Now I want to connect to that service using my local development environment.
I found I can connect to it from within the container, but when I use that URL outside, on my local machine, I get connection refused.
Cause: org.apache.http.conn.HttpHostConnectException: Connect to
xx.xx.xx.com:12008 [xx.xx.xx.com/195.169.98.101] failed: Connection refused
How to connect to a service running inside a container from outside?
In my case the code executes on my local machine.
If your container has mapped its port to the VM's port 12008, you would need to make sure you have forwarded port 12008 in your VirtualBox settings, as I mention in "How to connect mysql workbench to running mysql inside docker?"
VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port12008,tcp,,12008,,12008"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port12008,udp,,12008,,12008"
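If you control how the container is started, the more direct route is to publish the service port when running it, so the mapping shows up under PORTS in docker ps (the image name below is illustrative):
docker run -d -p 12008:12008 my-service-image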
The question needs more clarification, but I will answer with some assumptions.
I used an Ambari docker image (chosen randomly based on popularity).
Then I started a 3-node cluster as mentioned, and my amb-settings and docker ps looked like this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ amb-settings
NODE_PREFIX=amb
CLUSTER_SIZE=3
AMBARI_SERVER_NAME=amb-server
AMBARI_SERVER_IMAGE=hortonworks/ambari-server:latest
AMBARI_AGENT_IMAGE=hortonworks/ambari-agent:latest
DOCKER_OPTS=
AMBARI_SERVER_IP=172.17.0.6
CONSUL=amb-consul
CONSUL_IMAGE=sequenceiq/consul:v0.5.0-v6
EXPOSE_DNS=false
DRY_RUN=false
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2483a74d919 hortonworks/ambari-agent:latest "/usr/sbin/init syste" 20 minutes ago Up 20 minutes amb2
4acaec766eaa hortonworks/ambari-agent:latest "/usr/sbin/init syste" 21 minutes ago Up 20 minutes amb1
47e9419de59f hortonworks/ambari-server:latest "/usr/sbin/init syste" 21 minutes ago Up 21 minutes 8080/tcp amb-server
548730bb1824 sequenceiq/consul:v0.5.0-v6 "/bin/start -server -" 22 minutes ago Up 22 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp amb-consul
27c725af6531 sequenceiq/ambari "/usr/sbin/init" 23 minutes ago Up 23 minutes 8080/tcp awesome_tesla
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$
As of now, I can visit the Ambari server through: http://172.17.0.6:8080/
This also works from my host computer. However, if you want to reach it from another computer on the same network, one option is to have an haproxy that forwards:
localhost:8080 -> 172.17.0.6:8080
So, I created a small haproxy.cfg and Dockerfile to achieve this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat Dockerfile
FROM haproxy:1.6
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat haproxy.cfg
frontend localnodes
    bind *:8080
    mode http
    default_backend ambari
backend ambari
    mode http
    server ambari-server 172.17.0.6:8080 check
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker build --rm -t ambariproxy .
Sending build context to Docker daemon 9.635 MB
Step 1 : FROM haproxy:1.6
---> af749d0291b2
Step 2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
---> Using cache
---> 60cdd2c7bb05
Successfully built 60cdd2c7bb05
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker run -d -p 8080:8080 ambariproxy
63dd026349bbb6752dbd898e1ae70e48a8785e792b35040e0d0473acb00c2834
Now if I visit localhost:8080 or MY_HOST_IP:8080, I can see the ambari-server, and this should also work from computers on the same network.
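As an alternative to the proxy container, if the startup scripts let you pass extra docker run options (see the empty DOCKER_OPTS in amb-settings above), you could publish the server port directly instead; an illustrative invocation if you were starting the server container yourself:
docker run -d -p 8080:8080 hortonworks/ambari-server:latest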
Hope I managed to answer your question :)
Thanks,
