Running HAProxy in a Docker container - docker

I am trying to create a Docker container from the haproxy image, but I ran into some problems. I followed the tutorial from Docker Hub, which says to create a Dockerfile containing:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
I then run docker build -t my-haproxy . and everything looks good, but when I run
docker run -it --rm --name haproxy-syntax-check my-haproxy haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
to test the config file, I get the following errors:
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:7] : cannot find user id for 'haproxy' (0:Success)
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:8] : cannot find group id for 'haproxy' (0:Success)
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:32] : error opening file </etc/haproxy/errors/400.http> for custom error message <400>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:33] : error opening file </etc/haproxy/errors/403.http> for custom error message <403>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:34] : error opening file </etc/haproxy/errors/408.http> for custom error message <408>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:35] : error opening file </etc/haproxy/errors/500.http> for custom error message <500>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:36] : error opening file </etc/haproxy/errors/502.http> for custom error message <502>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:37] : error opening file </etc/haproxy/errors/503.http> for custom error message <503>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:38] : error opening file </etc/haproxy/errors/504.http> for custom error message <504>.
[ALERT] 114/152637 (1) : Error(s) found in configuration file : /usr/local/etc/haproxy/haproxy.cfg
[ALERT] 114/152637 (1) : Fatal errors found in configuration.
I do have a group and a user called haproxy (on the host). I can still create the container, but it does not work. Here is my haproxy.cfg file:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend myfrontend
bind *:80
mode http
default_backend mybackend
backend mybackend
mode http
balance roundrobin
option httpchk HEAD / # checks against the index page
server web1 172.17.0.2:80 check weight 10
server web2 172.17.0.3:80 check weight 20

Whenever I pull the official HAProxy image, I do not see the haproxy user/group inside it. In fact, the whole reason I have a custom image for haproxy is just to add them:
RUN addgroup -g 1000 haproxy && \
adduser -u 1000 -G haproxy -h /app -D haproxy
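Note that addgroup and adduser with those flags are BusyBox/Alpine syntax; the official haproxy images are Debian-based, where the equivalent would be something like this (a sketch, not verified against every image tag):
RUN groupadd --gid 1000 haproxy && \
    useradd --uid 1000 --gid haproxy --no-create-home haproxy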

Change the user and group from haproxy to root. It will work.
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user root
group root
daemon

Add these two lines to the global section.
user haproxy
group haproxy
Remove or comment out these lines in the defaults section.
#errorfile 400 /etc/haproxy/errors/400.http
#errorfile 403 /etc/haproxy/errors/403.http
#errorfile 408 /etc/haproxy/errors/408.http
#errorfile 500 /etc/haproxy/errors/500.http
#errorfile 502 /etc/haproxy/errors/502.http
#errorfile 503 /etc/haproxy/errors/503.http
#errorfile 504 /etc/haproxy/errors/504.http
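Alternatively, if you want to keep the errorfile directives, copy the error pages into the image at the paths the config expects. Assuming you have the .http files locally (on Debian/Ubuntu the haproxy package ships a set in /etc/haproxy/errors), a Dockerfile sketch:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
COPY errors/ /etc/haproxy/errors/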

Related

Core log Lua in Haproxy does not log to the default haproxy log file

I have set up a Lua script to process requests in HAProxy. I am using the Core class to log information to the log file.
Here is my config file:
sudo nano /etc/haproxy/haproxy.cfg
global
lua-load /etc/haproxy/route_req.lua
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats timeout 30s
user haproxy
group haproxy
daemon
#HAProxy for web servers
frontend web-frontend
bind 10.122.0.2:80
bind 139.59.75.106:80
mode http
use_backend %[lua.routeIP]
Here is my route_req.lua file:
local function getIP(txn)
    local clientip = txn.f:src()  -- client source IP
    local backend = ""            -- name of the backend to route to
    -- MY CODE GOES HERE
    core.log(core.info, "This is an example\n")
    return backend
end
core.register_fetches('routeIP', getIP)
I don't see any logging in my log file, /var/log/haproxy.log. There is also no related logging in /var/log/syslog.
Make sure to include log global in your frontend stanza.
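Applied to the config above, that would look like this (a sketch); log global tells the frontend to use the log targets declared in the global section:
frontend web-frontend
log global
bind 10.122.0.2:80
bind 139.59.75.106:80
mode http
use_backend %[lua.routeIP]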

nginx permission denied accessing puma socket that does exist in the correct location

On a Digital Ocean droplet running Ubuntu 21.10 impish I am deploying a bare bones Rails 7.0.0.alpha2 application to production. I am setting up nginx as the reverse proxy server to communicate with Puma acting as the Rails server.
I wish to run puma as a service using systemctl without sudo root privileges. To this effect I have a puma service set up in the user's home folder, located at ~/.config/systemd/user; the service is enabled and runs as I would expect it to run.
systemctl status --user puma_master_cms_production
reports the following
● puma_master_cms_production.service - Puma HTTP Server for master_cms (production)
Loaded: loaded (/home/comtechmaster/.config/systemd/user/puma_master_cms_production.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-18 22:31:02 UTC; 1h 18min ago
Main PID: 1577 (ruby)
Tasks: 10 (limit: 2338)
Memory: 125.1M
CPU: 2.873s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/puma_master_cms_production.service
└─1577 puma 5.5.2 (unix:///home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock)
Nov 18 22:31:02 master-cms systemd[749]: Started Puma HTTP Server for master_cms (production).
The Rails production.log is empty.
The puma error log shows the following
cat log/puma_error.log
=== puma startup: 2021-11-18 22:31:05 +0000 ===
The pid files exist in the application root's shared/tmp/pids folder
ls tmp/pids
puma.pid puma.state
and the socket that nginx needs, but cannot connect to due to permission denied, exists:
ls -l ~/apps/master_cms/shared/tmp/sockets/
total 0
srwxrwxrwx 1 comtechmaster comtechmaster 0 Nov 18 22:31 puma_master_cms_production.sock
nginx is up and running and providing a
502 bad gateway
response. The nginx error log reports the following error
2021/11/18 23:18:43 [crit] 1500#1500: *25 connect() to unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock failed (13: Permission denied) while connecting to upstream, client: 86.160.191.54, server: 159.65.50.229, request: "GET / HTTP/2.0", upstream: "http://unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock:/500.html"
sudo nginx -t reports the following
sudo nginx -t
nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Just to be pedantic, both an ls and a sudo ls of the path reported in the error show
ls /home/comtechmaster/apps/master_cms/shared/tmp/sockets/
puma_master_cms_production.sock
as expected, so I am stumped as to why nginx, running as root via sudo service nginx start, is being denied access to a socket that exists and is owned by the local user rather than root.
I expect the solution is going to be something totally obvious, but I cannot see what.
This problem ended up being related to the folder permissions for the user's home folder, and specifically a change in the way Ubuntu 20.10 sets permissions differently from previous versions of Ubuntu, or at least a difference in the way the DigitalOcean setup scripts behave.
This was resolved with a simple command-line chmod o=rx run from /home against the user folder concerned, e.g.
cd /home
chmod o=rx the_home_folder_for_user
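To see which component of a path is blocking access, one option (assuming util-linux's namei, which is present on stock Ubuntu) is:
namei -l /home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock
Every directory in that chain needs at least execute (x) permission for the nginx worker user (typically www-data) to reach the socket.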

Passenger not running (Ruby on Rails + Nginx)

My AWS instance was working fine with my app. But today the server went down after running out of RAM. Then I ran:
sync; echo 1 > /proc/sys/vm/drop_caches
sudo service nginx start
After that, RAM consumption is OK, but the app is not.
I'm running a Rails 4.2.1 website with Ruby 2.2.2 and nginx/1.8.0 on an Ubuntu 14 AWS instance.
When I access the site, I get the error:
502 Bad Gateway
nginx/1.8.0
When I run passenger-config restart-app I get:
*** ERROR: Phusion Passenger doesn't seem to be running. If you are sure that it
is running, then the causes of this problem could be one of:
1. You customized the instance registry directory using Apache's
PassengerInstanceRegistryDir option, Nginx's
passenger_instance_registry_dir option, or Phusion Passenger Standalone's
--instance-registry-dir command line argument. If so, please set the
environment variable PASSENGER_INSTANCE_REGISTRY_DIR to that directory
and run this command again.
2. The instance directory has been removed by an operating system background
service. Please set a different instance registry directory using Apache's
PassengerInstanceRegistryDir option, Nginx's passenger_instance_registry_dir
option, or Phusion Passenger Standalone's --instance-registry-dir command
line argument.
In the file /var/log/nginx/error.log I have:
2021/06/19 13:21:12 [crit] 26618#0: *48688773 connect() to unix:/tmp/passenger.26EHXct/agents.s/server failed (2: No such file or directory) while connecting to upstream, client: XXX.XXX.34.163, server: www.XXX.com, request: "GET / HTTP/1.1", upstream: "passenger:unix:/tmp/passenger.26EHXct/agents.s/server:", host: "XXX.com"
I already tried this solution and it is not working.
When I run passenger-config validate-install I get:
Use <space> to select.
If the menu doesn't display correctly, press '!'
‣ ⬢ Passenger itself
⬡ Apache
-------------------------------------------------------------------------
* Checking whether this Passenger install is in PATH... ✓
* Checking whether there are no other Passenger installations... ✓
Everything looks good. :-)
When I run sudo passenger-memory-stats I get:
Version: 5.0.10
Date : 2021-06-19 13:31:40 -0300
------------- Apache processes -------------
*** WARNING: The Apache executable cannot be found.
Please set the APXS2 environment variable to your 'apxs2' executable's filename, or set the HTTPD environment variable to your 'httpd' or 'apache2' executable's filename.
---------- Nginx processes ----------
PID PPID VMSize Private Name
-------------------------------------
26615 1 230.7 MB 26.3 MB nginx: worker process
26616 1 230.4 MB 27.4 MB nginx: worker process
26617 1 229.7 MB 25.8 MB nginx: worker process
26618 1 233.3 MB 27.4 MB nginx: worker process
### Processes: 4
### Total private dirty RSS: 106.78 MB
--- Passenger processes ---
### Processes: 0
### Total private dirty RSS: 0.00 MB
Does anyone know how I can solve this?
When I ran sudo service nginx restart, I didn't notice the [fail] flag on the right side of the terminal.
Then I ran sudo service nginx status and got the message nginx is not running.
After running sudo nginx -t I got the message
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
But I saw several nginx processes, so I killed all of them with sudo kill $(ps aux | grep '[n]ginx' | awk '{print $2}') and then ran sudo service nginx start.
Everything works fine again.
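For reference, the recovery sequence collected in one place (the same commands as above):
sudo nginx -t                                            # config syntax check
sudo kill $(ps aux | grep '[n]ginx' | awk '{print $2}')  # kill the stray nginx processes
sudo service nginx start                                 # clean start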

HAProxy Lua logging

I'm getting duplicate HAProxy log messages from my Lua script and don't understand why.
haproxy.cfg
global
log /dev/log local0 warning
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
lua-load /home/tester/hello.lua
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend test_endpoint
bind *:9202
http-request lua.tester
hello.lua
function tester(txn)
    core.log(core.debug, "debug message!\n")
    core.log(core.info, "info message!\n")
    core.log(core.warning, "warning message!\n")
    core.log(core.err, "error message!\n")
end
core.register_action('tester', {'http-req'}, tester)
HAProxy was installed as a package and therefore writes to /var/log/haproxy.log by default on my Ubuntu system. This is what I see in the log:
Jan 25 05:47:23 ubuntu haproxy[65622]: warning message!.
Jan 25 05:47:23 ubuntu haproxy[65622]: error message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [info] 024/054723 (65622) : info message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [warning] 024/054723 (65622) : warning message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [err] 024/054723 (65622) : error message!.
I expected only the top 2 lines. Can anyone explain why the other lines appear in the log and how I can configure them out?
Thanks in advance!
for info:
# haproxy -v
HA-Proxy version 2.2.8-1ppa1~bionic 2021/01/14 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2025.
Known bugs: http://www.haproxy.org/bugs/bugs-2.2.8.html
Running on: Linux 4.15.0-134-generic #138-Ubuntu SMP Fri Jan 15 10:52:18 UTC 2021 x86_64
UPDATE:
Looking at the hlua.c source code, I can see the extra 3 lines come from stderr: the logging is sent both to the configured log and to stderr.
I had to add the "-q" flag to ExecStart in /lib/systemd/system/haproxy.service. It now looks like this:
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE -q $EXTRAOPTS
Note: adding "quiet" to the global section in haproxy.cfg did not work for me. Perhaps broken?
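As a side note, editing /lib/systemd/system/haproxy.service directly can be reverted by a package upgrade; a drop-in override is the upgrade-safe way to make the same change (standard systemd practice, sketched here):
sudo systemctl edit haproxy
# then, in the editor that opens:
[Service]
ExecStart=
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE -q $EXTRAOPTS
The empty ExecStart= line clears the packaged command so the replacement below it takes effect.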

My build does not include my web site directive

I'm not sure where I went off the rails, but I am trying to create a container for my web site. First I start off with a file called 'default':
server {
root /var/www;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}
/var/www/ points to my web content, with index.html being the default file for the content.
Then I create my very simple Dockerfile:
FROM httpd
MAINTAINER Jay Blanchard
RUN httpd
ADD default /home/OARS/
In my Dockerfile I reference the default file from above, thinking this is what is needed to point to my web content. The default file happens to be in the same directory as the Dockerfile, but I give the path /home/OARS/ as I have seen in some examples.
The build is successful:
foo@bar:/home/OARS$ sudo docker build -t oars-example .
Sending build context to Docker daemon 3.072 kB
Sending build context to Docker daemon
Step 0 : FROM httpd
---> cba1e4bb4caa
Step 1 : MAINTAINER Jay Blanchard
---> Using cache
---> e77807e98c6b
Step 2 : RUN httpd
---> Using cache
---> c0bff2fb1f9b
Step 3 : ADD default /home/OARS/
---> 3b4053fbc8d4
Removing intermediate container e02d27c4309d
Successfully built 3b4053fbc8d4
And the run appears to be successful:
foo@bar:/home/OARS$ sudo docker run -d -P oars-example
9598c176a706b19dd28dfab8de94e9c630e5781aca6930564d15182d21b0f6a5
9598c176a706 oars-example:latest "httpd-foreground" 6 seconds ago Up 5 seconds 0.0.0.0:32776->80/tcp jovial_fermat
Yet when I go to the IP (on port 32776; there is something running on port 80 already) I do not get the index page I've specified in /var/www, but the default index page from the Apache server.
Here is the log from the server:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 000.000.000.000. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 000.000.000.000. Set the 'ServerName' directive globally to suppress this message
[Tue May 19 16:59:17.457525 2015] [mpm_event:notice] [pid 1:tid 140053777708928] AH00489: Apache/2.4.12 (Unix) configured -- resuming normal operations
[Tue May 19 16:59:17.457649 2015] [core:notice] [pid 1:tid 140053777708928] AH00094: Command line: 'httpd -D FOREGROUND'
000.000.000.000 - - [19/May/2015:17:00:08 +0000] "GET / HTTP/1.1" 200 45
000.000.000.000 - - [19/May/2015:17:00:08 +0000] "GET /favicon.ico HTTP/1.1" 404 209
I've changed the IP addresses in the logs just to keep things kosher.
Am I missing something obvious to make sure my web site files are being served in the container?
First, you are trying to use an nginx config file within an Apache container.
Then, according to the base image's documentation, the correct way to specify a config file is:
# Dockerfile
FROM httpd
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
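And if the goal is just to serve static files, the same documentation suggests copying the content into Apache's default document root instead of replacing the whole config:
# Dockerfile
FROM httpd
COPY ./public-html/ /usr/local/apache2/htdocs/
Here ./public-html/ stands in for wherever your index.html lives (in this question, the content under /var/www).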
