How to start syslogd in nginx container - docker

I'm using an Nginx docker container as the base image for an application. I'm redirecting Nginx logs to syslog, but I'm not sure of the best way to have the busybox syslogd started. Everything works if I start it manually; I just need it to run as a daemon automatically when the container starts.
Seeing that nginx is in init.d I tried this in my Dockerfile:
RUN ln -s /bin/busybox syslogd /etc/init.d/syslogd || :
But syslogd still didn't run on start-up. Since the documentation says that only one CMD is allowed, I have the following hack:
FROM nginx:mainline-alpine
CMD nginx & busybox syslogd -n
This works, locally at least, but I'm wondering what the proper solution is. By default the container already symlinks its log files to stdout and stderr, but I don't want to use Docker's syslog logging driver: the application will be deployed to Kubernetes, so I need a self-contained solution that works inside the pod. Thank you!
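A slightly tidier variant of that hack is a small entrypoint script that keeps nginx in the foreground as the main process. This is only a sketch; `nginx -g 'daemon off;'` is how the official image runs nginx in the foreground.

```shell
#!/bin/sh
# Sketch of an entrypoint: start busybox syslogd in the background,
# then exec nginx in the foreground as the container's main process.
busybox syslogd -n &
exec nginx -g 'daemon off;'
```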

Have your container log to stdout, but collect the logs elsewhere.
One option is to configure Docker itself to send container logs to syslog:
docker run --log-driver=syslog --log-opt syslog-address=udp://... ... nginx
Since the Docker daemon itself is configuring this, the syslog-address needs to be something that can be reached from the host. If you're running syslogd in a separate container, this option needs to point at a published port.
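For example, if syslogd runs in another container, publish its UDP port and point the host-side driver at it. This is a sketch: the image name and port numbers are assumptions, not a tested setup.

```shell
# Run a syslog server container with its UDP port published on the host.
# The image name "balabit/syslog-ng" and host port 5514 are assumptions.
docker run -d --name syslog -p 5514:514/udp balabit/syslog-ng
# Point the host-side Docker log driver at the published port.
docker run --log-driver=syslog \
  --log-opt syslog-address=udp://127.0.0.1:5514 \
  nginx
```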
Another option is to use the standard Docker JSON-format logging, but use another tool to forward the logs to somewhere else. This has the downside of needing an additional tool, but the upside of docker logs working unmodified. Fluentd is a prominent open-source option. (Logstash is another, but doesn't seem to directly have a Docker integration.)

Related

Make Docker ignore daemon.json configuration on start

We currently have multiple docker containers (host A).
We send the logs from each container to a logger, which runs in a docker container on another server.
Here is my daemon.json:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "tcp://10.*.*.*:12201"
  },
  "dns": [
    "10.*.*.*"
  ],
  "icc": false
}
The problem is that if the logger container is not running and I restart host A, the containers do not start, because they cannot connect to the logger.
Is there any way to configure docker containers to start even if they cannot connect to logger configured in daemon.json?
Thank you.
With this you are not configuring the docker containers but the daemon itself. If you restart your host, you restart the daemon, and on startup it reads the config. If the config is invalid, or parts of it are not working, the daemon doesn't start. You can manually start the docker daemon with an explicit configuration, for example:
dockerd --debug \
--tls=true \
--tlscert=/var/docker/server.pem \
--tlskey=/var/docker/serverkey.pem \
--host tcp://192.168.59.3:2376
see: Docker daemon documentation
Keep in mind that it will keep running with those options until it's restarted.
The logging settings in daemon.json are defaults for newly created containers; changing this file does not affect existing containers, even when they are restarted.
You may want to reconsider your logging design. One option is to swap out the logging driver for a logging forwarder, leaving the logs in the default json driver, and having another process monitor those and forward the logs to the remote server. This avoids blocking at the cost of missing some logs written just as the container is deleted (or very short lived containers). The other option is to improve the redundancy of your logging system since it is a single point of failure that blocks your workloads from running.
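As a sketch of the first option, daemon.json could keep the default json-file driver with log rotation and leave shipping to a separate forwarder process. The max-size/max-file values here are arbitrary examples:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "dns": ["10.*.*.*"],
  "icc": false
}
```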

Scaling filebeat over docker containers

I’m looking for the appropriate way to monitor applicative logs produced by nginx, tomcat, springboot embedded in docker with filebeat and ELK.
In the container strategy, a container should be used for only one purpose.
One nginx per container and one tomcat per container, meaning we can’t have an additional filebeat within a nginx or tomcat container.
Over what I have read over Internet, we could have the following setup:
a volume dedicated for storing logs
an nginx container which mounts the dedicated logs volume
a tomcat / springboot container which mounts the dedicated logs volume
a filebeat container also mounting the dedicated logs volume
This works fine, but when it comes to scaling out the nginx and springboot containers, it gets a little more complex for me.
Which pattern should I use to push my logs using filebeat to logstash if I have the following configuration:
several nginx containers in load balancing with the same configuration (the logs configuration is the same: same path)
several springboot rest api containers behind the nginx containers with the same configuration (the logs configuration is the same: same path)
Should I create one volume by set of nginx + springboot rest api and add a filebeat container ?
Should I create a global log volume shared by all my containers, use a different log filename per container (e.g. with the container name in the filename), and have only one filebeat container?
In the second proposal, how to scale filebeat ?
Is there another way to do that ?
Many thanks for your help.
The easiest thing to do, if you can manage it, is to set each container process to log to its own stdout (you might be able to specify /dev/stdout or /proc/1/fd/1 as a log file). For example, the Docker Hub nginx Dockerfile specifies
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
so the ordinary nginx logs become the container logs. Once you do that, you can plug in the filebeat container input to read those logs and process them. You can also see them from outside the container with docker logs; they are the same logs.
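A minimal filebeat configuration using that container input might look like the following sketch. The logstash host and port are placeholders, and the path is Docker's default JSON log location on Linux:

```yaml
# Sketch of a filebeat.yml reading container logs from the Docker host.
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
output.logstash:
  hosts: ["logstash:5044"]
```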
What if you have to log to the filesystem? Or there are multiple separate log streams you want to be able to collect?
If the number of containers is variable, but you have good control over their configuration, then I'd probably set up a single global log volume as you describe and use the filebeat log input to read every log file in that directory tree.
If the number of containers is fixed, then you can set up a volume per container and mount it in each container's "usual" log storage location. Then mount all of those directories into the filebeat container. The obvious problem here is that if you do start or stop a container, you'll need to restart the log manager for the added/removed volume.
If you're actually on Kubernetes, there are two more possibilities. If you're trying to collect container logs out of the filesystem, you need to run a copy of filebeat on every node; a DaemonSet can manage this for you. A Kubernetes pod can also run multiple containers, so your other option is to set up pods with both an application container and a filebeat "sidecar" container that ships the logs off. Set up the pod with an emptyDir volume to hold the logs, and mount it into both containers. A template system like Helm can help you write the pod specifications without repeating the logging sidecar setup over and over.
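The sidecar pattern can be sketched as a pod spec like the following; the image names, tag, and log path are placeholders:

```yaml
# Sketch: app container and filebeat sidecar sharing an emptyDir log volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
    - name: app
      image: my-app:latest          # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # where the app writes its logs
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.13.0   # example tag
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}
```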

See docker container logs on host while using gelf driver

I am using gelf as the log driver for my docker container. In the log options I provided a UDP endpoint.
When I start the container, everything works as expected.
My question: is it possible to see the container logs on the host where it is running (not at the UDP endpoint)?
This depends on the Docker version.
Docker 20.10 and up introduces "dual logging", which uses a local buffer so that the docker logs command works with any logging driver.
On older versions, docker logs only works with a few drivers (such as json-file and local); with other drivers such as gelf it's not possible to see the logs via docker logs on the machine running the containers.
See the limitations of logging drivers.
If you know where the log file is inside the container, a workaround is a script that copies the log file out of the container and displays it, or simply execs into the container and tails it there. But I really wouldn't recommend that.
Something like:
#!/bin/bash
# Copy the current log out of the container and display it:
docker cp mycontainer:/var/log/mylog.log "$(pwd)/logs/mylog.log"
cat "$(pwd)/logs/mylog.log"
# Or follow it live without copying:
docker exec mycontainer tail -f /var/log/mylog.log

Is it possible run syslog inside Docker and expose that to the host as host's syslog daemon?

I am trying to run syslog inside Docker so that it has access to DNS configuration for the container. Is it possible run syslog inside Docker and expose that to the host as host's syslog daemon?
Yes. I'm doing this at the moment, because I've got a containerised ELK (Elasticsearch/Logstash/Kibana).
My logstash runs a listener on port 514 for syslog traffic, which it forwards to ELK.
Well, more correctly - I'm running a haproxy instance, that I'm redirecting using confd and etcd to wherever my syslog container is, but the principle stands.
My hosts have
*.* @@localhost
in their rsyslog.conf
And it works nicely. (and I can also log from my containers to this syslogd)
I know this is a bit old, but it may help others!
I think the best way is to use a docker logging driver to send logs to syslog instead of running syslog inside the container.
One of the best practices in docker is to run only one process per container.
If you would like to have one container running syslog and forward all logs from the other containers to it, that is also a good idea, because you separate concerns and you can scale the log container.

How to monitor java application memory usage in Docker

I run a Java web application on Tomcat in a Docker container.
Is there any way to monitor the memory usage of the Java application? I tried jconsole with the process id of the docker container, but it tells me "Invalid process id".
I also enabled JMX in Tomcat, but don't know how to bind to it. I can use VisualVM from my local machine to bind to the host machine, but cannot find a way to bind to the JVM inside the container.
Is there any good way to achieve this?
Thanks
To connect to a java process running in a docker container running in boot2docker with visualvm you can try the following:
Start your java process using the following options:
java -Dcom.sun.management.jmxremote.port=<port> \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.rmi.port=<port> \
-Djava.rmi.server.hostname=<boot2docker_ip> \
<Main>
You need to run your image with --expose <port> -p <port>:<port>.
Then "Add JMX Connection" in visualvm with <boot2docker_ip>:<port>.
It shouldn't be much different without boot2docker.
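For example, if you picked port 9010 in the java options above (the image name here is a placeholder):

```shell
# Publish the JMX/RMI port chosen in the java options above;
# 9010 and "my-java-image" are placeholders.
docker run -d --expose 9010 -p 9010:9010 my-java-image
```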
To monitor its usage, you need to get its real process ID. If you are running tomcat directly in the container, then it should be:
DOCKER_ROOT_PROC=`(docker inspect -f "{{ .State.Pid }}" my_container)`
If you are using something like Phusion's baseimage, then your java process will be a child of that process. To see the hierarchy use:
pstree $DOCKER_ROOT_PROC
Once you have that, you can run
ps -o pid,cmd --no-headers --ppid $DOCKER_ROOT_PROC
recursively in your script to find the java process you want to monitor (with some regular-expression filtering, of course). Finally, you can get your java application's memory usage in kilobytes with:
ps -o vsz -p $JAVAPROCESS
I don't know if this can be used with jconsole, but it is a way of monitoring the memory usage.
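Putting those pieces together, a sketch of such a script might look like this. The container name and the "java" filter are placeholders, and for simplicity only direct children of the root process are searched:

```shell
#!/bin/sh
# Resolve the container's root PID on the host ("my_container" is a placeholder).
DOCKER_ROOT_PROC=$(docker inspect -f '{{ .State.Pid }}' my_container)
# Find the first direct child whose command line mentions "java".
JAVAPROCESS=$(ps -o pid=,cmd= --ppid "$DOCKER_ROOT_PROC" | awk '/java/ {print $1; exit}')
# Print that process's virtual memory size in kilobytes.
ps -o vsz= -p "$JAVAPROCESS"
```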
To monitor docker containers I recommend Google's cAdvisor project. That way you have a general solution to monitor docker containers. Just run your app, whatever that is, in a docker container, and check things like cpu and memory usage. Here you have an http API as well as a web ui.
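For reference, cAdvisor's documented invocation looks roughly like this; the mount points and image tag vary between versions, so treat it as a sketch:

```shell
# Approximate cAdvisor invocation; mounts and image tag may differ by version.
docker run -d \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --name=cadvisor \
  gcr.io/cadvisor/cadvisor
```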
I tried Pierre's answer but had no luck.
In the end I managed to connect using an SSH tunnel.
cAdvisor, mentioned above, will not help with monitoring Tomcat running inside the container. You may want to take a look at the SPM Client docker container, which does exactly that! It has agents for monitoring a number of different applications running in Docker - Elasticsearch, Solr, Tomcat, MySQL, and so on: https://github.com/sematext/docker-spm-client
For memory usage monitoring of your application in Docker, you can also launch an ejstatd daemon inside your Docker container: call mvn -Djava.rmi.server.hostname=$HOST_HOSTNAME exec:java -Dexec.args="-pr 2222 -ph 2223 -pv 2224" & from the ejstatd folder before launching your main container process, and expose those 3 ports to the Docker host using docker run -e HOST_HOSTNAME=$HOSTNAME -p 2222:2222 -p 2223:2223 -p 2224:2224 myimage.
Then you will be able to connect to this special jstatd daemon using JVisualVM for example, adding a "Remote Host" specifying your Docker hostname as "Host name" and adding a "Custom jstatd Connections" (in the "Advanced Settings") by setting "2222" to "Port".
Disclaimer: I'm the author of this open source tool.
