My build does not include my web site directive - docker

I'm not sure where I went off the rails, but I am trying to create a container for my web site. First I start off with a file called 'default':
server {
    root /var/www;
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;
    }
}
/var/www/ points to my web content, with index.html being the default file for the content.
Then I create my very simple Dockerfile:
FROM httpd
MAINTAINER Jay Blanchard
RUN httpd
ADD default /home/OARS/
In my Dockerfile I reference the default file from above, thinking this is what is needed to point to my web content. The default file happens to be in the same directory as the Dockerfile, but I give the path /home/OARS/ as I have seen in some examples.
The build is successful:
foo#bar:/home/OARS$ sudo docker build -t oars-example .
Sending build context to Docker daemon 3.072 kB
Sending build context to Docker daemon
Step 0 : FROM httpd
---> cba1e4bb4caa
Step 1 : MAINTAINER Jay Blanchard
---> Using cache
---> e77807e98c6b
Step 2 : RUN httpd
---> Using cache
---> c0bff2fb1f9b
Step 3 : ADD default /home/OARS/
---> 3b4053fbc8d4
Removing intermediate container e02d27c4309d
Successfully built 3b4053fbc8d4
And the run appears to be successful:
foo#bar:/home/OARS$ sudo docker run -d -P oars-example
9598c176a706b19dd28dfab8de94e9c630e5781aca6930564d15182d21b0f6a5
9598c176a706 oars-example:latest "httpd-foreground" 6 seconds ago Up 5 seconds 0.0.0.0:32776->80/tcp jovial_fermat
Yet when I go to the IP (on port 32776, since something is already running on port 80) I do not get the index page I've specified in /var/www; instead I get the default index page from the Apache server.
Here is the log from the server:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 000.000.000.000. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 000.000.000.000. Set the 'ServerName' directive globally to suppress this message
[Tue May 19 16:59:17.457525 2015] [mpm_event:notice] [pid 1:tid 140053777708928] AH00489: Apache/2.4.12 (Unix) configured -- resuming normal operations
[Tue May 19 16:59:17.457649 2015] [core:notice] [pid 1:tid 140053777708928] AH00094: Command line: 'httpd -D FOREGROUND'
000.000.000.000 - - [19/May/2015:17:00:08 +0000] "GET / HTTP/1.1" 200 45
000.000.000.000 - - [19/May/2015:17:00:08 +0000] "GET /favicon.ico HTTP/1.1" 404 209
I've changed the IP addresses in the logs just to keep things kosher.
Am I missing something obvious to make sure my web site files are being run in the container?

First, you are trying to use an nginx config file within an Apache container.
Then, according to the base container documentation, the correct way to specify a config file is:
# Dockerfile
FROM httpd
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
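Since the goal here is just to serve static files, it may be simpler still to copy the site content into Apache's default document root and skip the config file entirely. A minimal sketch following the official httpd image documentation (./public-html is an assumed local directory containing index.html):
# Dockerfile
FROM httpd
COPY ./public-html/ /usr/local/apache2/htdocs/
Also note that the RUN httpd line in the original Dockerfile does nothing useful: it merely launches httpd in a throwaway build container, while the base image's default command (httpd-foreground, visible in the docker ps output above) already starts Apache when the container runs.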

Related

Custom docker image logs are incomplete

I have a task where I need to build a Docker Compose stack for WordPress with HTTPS support. I have custom-built images based on Ubuntu 20.04, patterned on the official image equivalents. In short, everything works on these custom images besides the log part. I have 2 projects - one for official images and one for custom-built images.
The actual logs in the container (cat /var/log/apache2/error.log) look like this:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 192.168.80.3. Set the 'ServerName' directive globally to suppress this message
[Wed Feb 15 16:36:15.477371 2023] [mpm_prefork:notice] [pid 8] AH00163: Apache/2.4.41 (Ubuntu) configured -- resuming normal operations
[Wed Feb 15 16:36:15.477397 2023] [core:notice] [pid 8] AH00094: Command line: '/usr/sbin/apache2 -D FOREGROUND'
By contrast, docker logs only gives output like this:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 192.168.80.3. Set the 'ServerName' directive globally to suppress this message
I have the same issue for 3 custom-built containers.
Their CMD or ENTRYPOINT lines look like this:
CMD ["apache2ctl", "-D", "FOREGROUND"]
ENTRYPOINT ["sh", "-c", "nginx -g 'daemon off;' & exec tail -f /var/log/nginx/*.log"]
ENTRYPOINT ["sh", "-c", "/usr/sbin/mysqld --init-file=/docker-entrypoint-initdb.d/setup.sql --log-error-verbosity=3 --general_log=1 --general_log_file=/var/log/mysql/general.log --log-error=/var/log/mysql/error.log & exec tail -f /var/log/mysql/*.log"]
I need to use tail for 2 of these 3 containers because without it there are no logs whatsoever.
In the case of apache2ctl there is no need to use tail, but the result is the same for all of these containers: logs are displayed only once and appear to be incomplete, and they do not update in real time the way they do for the official images.
I have built a custom Ubuntu-based image with Python3 and a simple printing script, and in that case it prints forever (the logs keep updating) - the same behavior as in a regular shell.
I have spent a lot of time testing different possibilities and I'm getting nowhere, so I am here to ask why this happens with these custom-built images and how I can make the logging continuous and complete, as it is for the official images.
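For reference, the official images keep docker logs complete by pointing the log files at the container's stdout and stderr instead of tailing them; the official nginx Dockerfile, for example, contains:
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log
A custom Ubuntu-based image can apply the same trick to /var/log/apache2/*.log and /var/log/mysql/*.log, so the daemons write directly to the streams docker logs collects and no backgrounded tail is needed.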

nginx permission denied accessing puma socket that does exist in the correct location

On a DigitalOcean droplet running Ubuntu 21.10 (impish) I am deploying a bare-bones Rails 7.0.0.alpha2 application to production. I am setting up nginx as the reverse proxy server to communicate with Puma acting as the Rails server.
I wish to run Puma as a service using systemctl, without sudo root privileges. To this effect I have a Puma service set up in the user's home folder, located at ~/.config/systemd/user; the service is enabled and runs as I would expect it to.
systemctl status --user puma_master_cms_production
reports the following
● puma_master_cms_production.service - Puma HTTP Server for master_cms (production)
Loaded: loaded (/home/comtechmaster/.config/systemd/user/puma_master_cms_production.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-18 22:31:02 UTC; 1h 18min ago
Main PID: 1577 (ruby)
Tasks: 10 (limit: 2338)
Memory: 125.1M
CPU: 2.873s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/puma_master_cms_production.service
└─1577 puma 5.5.2 (unix:///home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock)
Nov 18 22:31:02 master-cms systemd[749]: Started Puma HTTP Server for master_cms (production).
The rails production.log is empty.
The puma error log shows the following
cat log/puma_error.log
=== puma startup: 2021-11-18 22:31:05 +0000 ===
The pid files exist in the application root's shared/tmp/pids folder
ls tmp/pids
puma.pid puma.state
and the socket that nginx needs, but cannot connect to due to permission denied, does exist:
ls -l ~/apps/master_cms/shared/tmp/sockets/
total 0
srwxrwxrwx 1 comtechmaster comtechmaster 0 Nov 18 22:31 puma_master_cms_production.sock
nginx is up and running and returns a
502 Bad Gateway
response. The nginx error log reports the following error:
2021/11/18 23:18:43 [crit] 1500#1500: *25 connect() to unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock failed (13: Permission denied) while connecting to upstream, client: 86.160.191.54, server: 159.65.50.229, request: "GET / HTTP/2.0", upstream: "http://unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock:/500.html"
sudo nginx -t reports the following
sudo nginx -t
nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Just to be pedantic, both an ls and a sudo ls of the path reported in the error show
ls /home/comtechmaster/apps/master_cms/shared/tmp/sockets/
puma_master_cms_production.sock
as expected, so I am stumped as to why nginx, running as root via sudo service nginx start, is being denied access to a socket that exists and is owned by the local user rather than root.
I expect the solution is going to be something totally obvious, but I cannot see what it is.
This problem ended up being related to the folder permissions of the user's home folder, and specifically to a change in the way Ubuntu 20.10 sets permissions compared to previous versions of Ubuntu, or at least to a difference in the way the DigitalOcean setup scripts behave.
It was resolved with a simple chmod o=rx, run from /home against the user folder concerned, e.g.
cd /home
chmod o=rx the_home_folder_for_user
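A quick way to confirm this is the culprit (using the home folder from the question) is to check the 'other' bits on the home directory itself, since the nginx worker needs execute (traversal) permission on every directory along the socket's path:
ls -ld /home/comtechmaster
If the mode reads drwxr-x--- (the stricter default on recent Ubuntu releases), nginx cannot descend into the tree no matter how permissive the socket's own srwxrwxrwx mode is; after the chmod it should read drwxr-xr-x.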

Why do I get an nginx 404 when pointing a symlink to a path?

I want to access a directory not contained in the docroot from nginx.
The Situation:
I've got a folder which contains files:
Command:
ls -lah /var/www/
Output:
drwxr-x---. 2 docker-www docker-www 6 Jul 24 16:56 some_folder
And I've got a path which should deliver the content from the folder above:
Command:
ls -lah /var/www/typo3/releases/current/typo3-web/web/info/symlink
Output:
lrwxrwxrwx. 1 docker-www docker-www 32 Jul 24 16:56 symlink -> /var/www/some_folder
My nginx config:
root /usr/share/nginx/current/typo3-web/web;
...
location /info/symlink/ {
    allow all;
    autoindex on;
    disable_symlinks off;
}
Problem:
nginx delivers a "404 Not Found".
Things I've tried so far:
Creating a normal folder in /var/www/typo3/releases/current/typo3-web/web/info/ - that works, and nginx delivers the file index.
Checking the file permissions: the user docker-www has access to the files.
Sorry guys. I had to mount the folder into the Docker container using another Docker volume.
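That fix makes sense given how symlinks are resolved: nginx follows the link inside the container's filesystem, so the target /var/www/some_folder must also exist (or be mounted) inside the container, not just on the host. A minimal sketch of such a mount, with a hypothetical image name:
docker run -d \
    -v /var/www/some_folder:/var/www/some_folder:ro \
    my-nginx-image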

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, it runs just fine. But when it runs on hosted VSTS, i.e. on a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    ./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
    docker stop test-apache
    docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for Apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running with a hard-coded port first, because that's simpler than using an ephemeral port, and then switch to an ephemeral port once the hard-coded one works. And if the hard-coded port were unavailable, the error would look different: docker run would fail because the port could not be allocated.
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent itself runs in a Docker container. When the Docker container for Apache is started, it runs as a sibling of the VSTS build agent's container, not nested inside it.
There are two possible solutions:
Replacing localhost with the IP address of the Docker host, keeping the host port number 8083
Replacing localhost with the IP address of the Docker container, changing the host port number 8083 to the container port number 80
Access via the Docker Host
In this case, the solution is to replace localhost with the IP address of the Docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
    apt-get update
    # -y keeps the install non-interactive so the script does not stall in CI
    apt-get install -y net-tools
    host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup test checks whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway out of the route output to obtain the IP address of the host. Note that this only works if the container's default gateway actually is the host.
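If the base image already ships iproute2 (most current Debian/Ubuntu images do), an equivalent that avoids installing net-tools might be the following; note this substitute line is mine, not part of the original answer:
host=$(ip route | awk '/^default/ {print $3}')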
Direct Access to the Docker Container
After launching the Docker container, its IP addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming the first IP address is the right one):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/
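Note that with this direct-access variant the -p 8083:80 mapping plays no role at all: curl talks to the container's own port 80, which is why no port appears in the URL.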

Docker build ignore file permissions

I'm building my Docker image from a Jenkins job.
I ADD an index.html file to the html directory of nginx.
The permissions on the Jenkins host are
-rw-r----- 1 jenkins jenkins 3.3K Nov 10 14:12 index.html
and also the permissions inside the container are set to
-rw-r----- 1 root root 3.2K Nov 10 13:12 index.html
so the webserver serves a 403 Forbidden instead of the file.
Can I omit the permissions on the host and use a default umask (rwxr-xr-x), or do I have to chmod every file I want to serve via nginx?
The Docker Documentation for ADD states the following:
All new files and directories are created with a UID and GID of 0.
This means that you have to run either chown or chmod after copying the files.
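In practice that means an explicit chmod (or chown) step after the ADD. A sketch against the stock nginx image layout (the destination is the image's default html directory; newer Docker releases also support COPY --chown=... to set ownership in one step):
# Dockerfile
FROM nginx
ADD index.html /usr/share/nginx/html/
# make the file world-readable so the nginx worker user can serve it
RUN chmod 0644 /usr/share/nginx/html/index.html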
There are some further discussions here:
https://github.com/docker/docker/issues/7537
https://github.com/docker/docker/pull/9934
