I'm getting duplicate HAProxy log messages from my Lua script and don't understand why.
haproxy.cfg
global
log /dev/log local0 warning
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
lua-load /home/tester/hello.lua
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend test_endpoint
bind *:9202
http-request lua.tester
hello.lua
function tester(txn)
core.log(core.debug, "debug message!\n")
core.log(core.info, "info message!\n")
core.log(core.warning, "warning message!\n")
core.log(core.err, "error message!\n")
end
core.register_action('tester', {'http-req'}, tester)
HAProxy was installed as a package and therefore writes to /var/log/haproxy.log by default on my Ubuntu system. This is what I see in the log:
Jan 25 05:47:23 ubuntu haproxy[65622]: warning message!.
Jan 25 05:47:23 ubuntu haproxy[65622]: error message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [info] 024/054723 (65622) : info message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [warning] 024/054723 (65622) : warning message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [err] 024/054723 (65622) : error message!.
I expected only the top 2 lines. Can anyone explain why the other lines appear in the log and how I can configure them out?
Thanks in advance!
For info:
# haproxy -v
HA-Proxy version 2.2.8-1ppa1~bionic 2021/01/14 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2025.
Known bugs: http://www.haproxy.org/bugs/bugs-2.2.8.html
Running on: Linux 4.15.0-134-generic #138-Ubuntu SMP Fri Jan 15 10:52:18 UTC 2021 x86_64
UPDATE:
Looking at the hlua.c source code, I can see where the extra 3 lines come from: each Lua log message is sent to the configured log and also written to stderr.
I had to add the "-q" flag to ExecStart in /lib/systemd/system/haproxy.service. It now looks like this:
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE -q $EXTRAOPTS
Note: adding "quiet" to the global section in haproxy.cfg did not work for me. Perhaps broken?
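A more upgrade-proof way to carry that flag is a systemd drop-in instead of editing the packaged unit file directly (package updates can overwrite /lib/systemd/system/haproxy.service). A minimal sketch; the drop-in path follows the standard systemd convention and the ExecStart line mirrors the one above:

# /etc/systemd/system/haproxy.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE -q $EXTRAOPTS

The empty ExecStart= line clears the packaged command before redefining it; apply with systemctl daemon-reload followed by systemctl restart haproxy.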
Related
I'm just testing uWSGI by following the quick-start guide on the official website, but I'm facing a problem.
This is what I did; it's exactly the same as the steps in the quick-start guide.
$ uwsgi --http :9090 --wsgi-file foobar.py
*** Starting uWSGI 2.0.19.1 (64bit) on [Sat Mar 12 09:28:41 2022] ***
compiled with version: Clang 11.0.0 on 18 January 2021 21:53:23
os: Darwin-21.3.0 Darwin Kernel Version 21.3.0: Wed Jan 5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_X86_64
nodename: mac-brian.local
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 16
current working directory: /Users/brian/Documents/project/uwsgi_test
detected binary path: /opt/anaconda3/envs/price_analysis/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 5568
your memory page size is 4096 bytes
detected max file descriptor number: 256
lock engine: OSX spinlocks
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :9090 fd 4
spawned uWSGI http 1 (pid: 10349)
uwsgi socket 0 bound to TCP address 127.0.0.1:59738 (port auto-assigned) fd 3
Python version: 3.9.1 | packaged by conda-forge | (default, Jan 10 2021, 02:52:42) [Clang 11.0.0 ]
Fatal Python error: init_import_site: Failed to import the site module
Python runtime state: initialized
Traceback (most recent call last):
File "/opt/anaconda3/envs/price_analysis/lib/python3.9/site.py", line 73, in <module>
import os
File "/opt/anaconda3/envs/price_analysis/lib/python3.9/os.py", line 29, in <module>
from _collections_abc import _check_methods
File "/opt/anaconda3/envs/price_analysis/lib/python3.9/_collections_abc.py", line 416, in <module>
class _CallableGenericAlias(GenericAlias):
TypeError: type 'types.GenericAlias' is not an acceptable base type
And this is my foobar.py:
def application(env, start_response):
start_response('200 OK', [('Content-Type','text/html')])
return [b"Hello World"]
After this I tried to connect to http://localhost:9090 with curl. This is the response:
curl -v 127.0.0.1:9090
* Trying 127.0.0.1:9090...
* Connected to 127.0.0.1 (127.0.0.1) port 9090 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:9090
> User-Agent: curl/7.77.0
> Accept: */*
>
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (0 retries): Connection refused
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (1 retries): Connection refused
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (2 retries): Connection refused
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (3 retries): Connection refused
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
I expected Hello World, but I get an empty reply from the server.
How can I solve this?
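One way to narrow this down: the traceback above is Python failing during interpreter startup (init_import_site), not a bug in foobar.py, which suggests the uwsgi binary's embedded interpreter doesn't match the conda environment's standard library. As a sanity check you can serve the same app with the stdlib's wsgiref server, bypassing uWSGI entirely. A debugging sketch (run_ref.py is a hypothetical helper, not part of the original setup):

# run_ref.py -- a debugging sketch: serve foobar.py with the stdlib
# WSGI reference server instead of uWSGI
from wsgiref.simple_server import make_server

from foobar import application

with make_server('127.0.0.1', 9090, application) as httpd:
    httpd.serve_forever()

If curl -v 127.0.0.1:9090 returns Hello World here, the app is fine and the mismatch lies between the uwsgi build and the active Python environment.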
On a DigitalOcean droplet running Ubuntu 21.10 (impish) I am deploying a bare-bones Rails 7.0.0.alpha2 application to production. I am setting up nginx as the reverse proxy server to communicate with Puma acting as the Rails server.
I wish to run Puma as a service using systemctl without sudo root privileges. To this end I have a Puma service set up in the user's home folder at ~/.config/systemd/user; the service is enabled and runs as I would expect it to.
systemctl status --user puma_master_cms_production
reports the following
● puma_master_cms_production.service - Puma HTTP Server for master_cms (production)
Loaded: loaded (/home/comtechmaster/.config/systemd/user/puma_master_cms_production.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-18 22:31:02 UTC; 1h 18min ago
Main PID: 1577 (ruby)
Tasks: 10 (limit: 2338)
Memory: 125.1M
CPU: 2.873s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/puma_master_cms_production.service
└─1577 puma 5.5.2 (unix:///home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock)
Nov 18 22:31:02 master-cms systemd[749]: Started Puma HTTP Server for master_cms (production).
The rails production.log is empty.
The puma error log shows the following
cat log/puma_error.log
=== puma startup: 2021-11-18 22:31:05 +0000 ===
The pid files exist in the application root's shared/tmp/pids folder
ls tmp/pids
puma.pid puma.state
and the socket that nginx needs, but cannot connect to due to permission denied, exists:
ls -l ~/apps/master_cms/shared/tmp/sockets/
total 0
srwxrwxrwx 1 comtechmaster comtechmaster 0 Nov 18 22:31 puma_master_cms_production.sock
nginx is up and running but returns a 502 Bad Gateway response. The nginx error log reports the following error:
2021/11/18 23:18:43 [crit] 1500#1500: *25 connect() to unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock failed (13: Permission denied) while connecting to upstream, client: 86.160.191.54, server: 159.65.50.229, request: "GET / HTTP/2.0", upstream: "http://unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock:/500.html"
sudo nginx -t reports the following
sudo nginx -t
nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Just to be pedantic, both an ls and a sudo ls of the path reported in the error show
ls /home/comtechmaster/apps/master_cms/shared/tmp/sockets/
puma_master_cms_production.sock
as expected, so I am stumped as to why nginx, started as root with sudo service nginx start, is denied access to a socket that exists and is owned by the local user rather than root.
I expect the solution is going to be something totally obvious, but I cannot see what it is.
This problem ended up being related to the folder permissions of the user's home folder, and specifically to a change in the way Ubuntu 20.10 sets home-folder permissions compared to previous versions of Ubuntu, or at least a difference in the way the DigitalOcean setup scripts behave.
This was resolved with a simple chmod o=rx run from /home against the user folder concerned, e.g.:
cd /home
chmod o=rx the_home_folder_for_user
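To verify the fix (or diagnose a similar case), namei -l prints the permissions of every component along the socket path; nginx's worker user (www-data on Ubuntu) needs at least execute permission on each directory in the chain. A quick check, using the path from the error above:

namei -l /home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock

Any directory in that listing without x for "other" (or group access for www-data) will produce exactly this kind of "13: Permission denied" error.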
I am trying to create a Docker container from the haproxy image but I ran into some problems. I followed the tutorial from Docker Hub, which says to create a Dockerfile containing:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
I then run docker build -t my-haproxy . and everything looks good, but when I run docker run -it --rm --name haproxy-syntax-check my-haproxy haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg to test the config file I get the following errors:
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:7] : cannot find user id for 'haproxy' (0:Success)
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:8] : cannot find group id for 'haproxy' (0:Success)
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:32] : error opening file </etc/haproxy/errors/400.http> for custom error message <400>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:33] : error opening file </etc/haproxy/errors/403.http> for custom error message <403>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:34] : error opening file </etc/haproxy/errors/408.http> for custom error message <408>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:35] : error opening file </etc/haproxy/errors/500.http> for custom error message <500>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:36] : error opening file </etc/haproxy/errors/502.http> for custom error message <502>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:37] : error opening file </etc/haproxy/errors/503.http> for custom error message <503>.
[ALERT] 114/152637 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:38] : error opening file </etc/haproxy/errors/504.http> for custom error message <504>.
[ALERT] 114/152637 (1) : Error(s) found in configuration file : /usr/local/etc/haproxy/haproxy.cfg
[ALERT] 114/152637 (1) : Fatal errors found in configuration.
I have a group and user called haproxy. I can still create the container, but it does not work. Here is my haproxy.cfg file:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend myfrontend
bind *:80
mode http
default_backend mybackend
backend mybackend
mode http
balance roundrobin
option httpchk HEAD / # checks against the index page
server web1 172.17.0.2:80 check weight 10
server web2 172.17.0.3:80 check weight 20
Whenever I pull the official HAProxy image, I do not see the haproxy user/group. Actually, the whole reason I have a custom image for haproxy is just to add them:
RUN addgroup -g 1000 haproxy && \
adduser -u 1000 -G haproxy -h /app -D haproxy
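Note the flags above are the BusyBox (Alpine) forms of addgroup/adduser. On a Debian-based haproxy image, a rough equivalent using the shadow utilities would be the following sketch; the UID/GID of 1000 and the /app home directory are carried over from the snippet above, not requirements:

RUN groupadd -g 1000 haproxy && \
    useradd -u 1000 -g haproxy -d /app -s /usr/sbin/nologin haproxy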
Change the user and group from haproxy to root; it will work:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user root
group root
daemon
Add these two lines to the global section:
user haproxy
group haproxy
Remove or comment out these lines in the defaults section:
#errorfile 400 /etc/haproxy/errors/400.http
#errorfile 403 /etc/haproxy/errors/403.http
#errorfile 408 /etc/haproxy/errors/408.http
#errorfile 500 /etc/haproxy/errors/500.http
#errorfile 502 /etc/haproxy/errors/502.http
#errorfile 503 /etc/haproxy/errors/503.http
#errorfile 504 /etc/haproxy/errors/504.http
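Alternatively, if you want to keep the custom error pages, ship them into the image instead of commenting the directives out. A sketch, assuming the .http files sit in an errors/ directory next to the Dockerfile:

FROM haproxy:1.7
COPY errors/ /etc/haproxy/errors/
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg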
I set up a Kubernetes cluster with kubeadm; the pod network is flannel. I can get the logs for pods running on the master:
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
httpd-7448fc6b46-fgkp2 1/1 Running 0 1d 10.244.2.39 k8s-node2
httpd-7448fc6b46-njbh8 1/1 Running 0 1d 10.244.0.10 k8smaster
httpd-7448fc6b46-wq4zs 1/1 Running 0 1d 10.244.1.75 k8s-node1
$ kubectl logs httpd-7448fc6b46-njbh8
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.0.10. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.0.10. Set the 'ServerName' directive globally to suppress this message
[Wed Mar 21 10:10:21.568990 2018] [mpm_event:notice] [pid 1:tid 139992519874432] AH00489: Apache/2.4.32 (Unix) configured -- resuming normal operations
[Wed Mar 21 10:10:21.569204 2018] [core:notice] [pid 1:tid 139992519874432] AH00094: Command line: 'httpd -D FOREGROUND'
10.244.0.1 - - [21/Mar/2018:10:21:02 +0000] "GET / HTTP/1.1" 200 45
10.244.0.1 - - [21/Mar/2018:10:22:53 +0000] "GET / HTTP/1.1" 200 45
But I am unable to get the logs of a pod running on a worker node; the result looks like this:
"Error from server: Get https://192.168.18.111:10250/containerLogs/default/httpd-7448fc6b46-6pf7w/httpd?follow=true: cannotconnect"
How can I debug this issue? Any ideas?
The issue has been solved. My cluster is behind a firewall and needs a proxy to download images, so I set the proxy for Docker, but I didn't bypass the worker nodes when I set the proxy, so the requests for the logs were misdirected by the proxy setting in Docker.
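For reference, Docker reads its proxy settings from a systemd drop-in, and the NO_PROXY list is where the node addresses need to appear. A sketch of the shape of the fix; the proxy URL is hypothetical, and the node subnet and pod CIDR below are taken from the addresses shown in this question (192.168.18.x nodes, 10.244.0.0/16 flannel network):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.18.0/24,10.244.0.0/16"

Then apply with systemctl daemon-reload && systemctl restart docker.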
I'm not sure where I went off the rails, but I am trying to create a container for my website. First I start off with a file called 'default':
server {
root /var/www;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}
/var/www/ points to my web content, with index.html being the default file for the content.
Then I create my very simple Dockerfile:
FROM httpd
MAINTAINER Jay Blanchard
RUN httpd
ADD default /home/OARS/
In my Dockerfile I reference the default file from above, thinking this is what is needed to point to my web content. The default file happens to be in the same directory as the Dockerfile, but I give the path /home/OARS/ as I have seen in some examples.
The build is successful:
foo@bar:/home/OARS$ sudo docker build -t oars-example .
Sending build context to Docker daemon 3.072 kB
Sending build context to Docker daemon
Step 0 : FROM httpd
---> cba1e4bb4caa
Step 1 : MAINTAINER Jay Blanchard
---> Using cache
---> e77807e98c6b
Step 2 : RUN httpd
---> Using cache
---> c0bff2fb1f9b
Step 3 : ADD default /home/OARS/
---> 3b4053fbc8d4
Removing intermediate container e02d27c4309d
Successfully built 3b4053fbc8d4
And the run appears to be successful:
foo@bar:/home/OARS$ sudo docker run -d -P oars-example
9598c176a706b19dd28dfab8de94e9c630e5781aca6930564d15182d21b0f6a5
9598c176a706 oars-example:latest "httpd-foreground" 6 seconds ago Up 5 seconds 0.0.0.0:32776->80/tcp jovial_fermat
Yet when I go to the IP (with port 32776, there is something running on port 80 already) I do not get the index page I've specified in /var/www, but I do get the default index page from the Apache server.
Here is the log from the server:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 000.000.000.000. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 000.000.000.000. Set the 'ServerName' directive globally to suppress this message
[Tue May 19 16:59:17.457525 2015] [mpm_event:notice] [pid 1:tid 140053777708928] AH00489: Apache/2.4.12 (Unix) configured -- resuming normal operations
[Tue May 19 16:59:17.457649 2015] [core:notice] [pid 1:tid 140053777708928] AH00094: Command line: 'httpd -D FOREGROUND'
000.000.000.000 - - [19/May/2015:17:00:08 +0000] "GET / HTTP/1.1" 200 45
000.000.000.000 - - [19/May/2015:17:00:08 +0000] "GET /favicon.ico HTTP/1.1" 404 209
I've changed the IP addresses in the logs just to keep things kosher.
Am I missing something obvious to make sure my web site files are being run in the container?
First, you are trying to use an nginx config file within an Apache container.
Then, according to the base container documentation, the correct way to specify a config file is:
# Dockerfile
FROM httpd
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
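And if the goal is simply to serve static files, the same image documentation shows you can skip the custom config entirely and copy the content into the image's default DocumentRoot. A minimal sketch, assuming the site files live in ./public-html/ next to the Dockerfile:

# Dockerfile
FROM httpd
COPY ./public-html/ /usr/local/apache2/htdocs/

With that, docker run -d -P on the resulting image serves index.html from public-html/ without any ADD default or RUN httpd lines (the RUN httpd in the question only runs during the build and has no effect on the final container).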