My application will send out syslog local0 messages.
When I moved my application into docker, I found it difficult to see its syslog output.
I've tried running docker with --log-driver set to syslog or journald; both behave strangely. /var/log/local0.log shows the console output of the docker container instead of my application's syslog when I run this command inside the container:
logger -p local0.info -t a message
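For reference, a minimal Python stand-in for what the application does (a sketch using the standard syslog module; the tag and message mirror the logger test above):
import syslog
# Log with the local0 facility and tag "a", like `logger -p local0.info -t a message`
syslog.openlog(ident="a", facility=syslog.LOG_LOCAL0)
syslog.syslog(syslog.LOG_INFO, "message")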
So I tried installing syslog-ng inside the docker container.
The host is Arch Linux (kernel 4.14.8 + systemd). The docker container runs CentOS 6. If I install syslog-ng inside the container and start it, it shows the following messages:
# yum install -y syslog-ng # this will install syslog-ng 3.2.5
# /etc/init.d/syslog-ng start
Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
Error initializing source driver; source='s_sys', id='s_sys#0'
Error initializing message pipeline;
I also had problems getting the standard syslog output from my app after it was dockerized, so I attacked the problem from a different direction: I wanted to get the container's syslog messages into the host's /var/log/syslog.
I ran my container with an extra bind mount for the /dev/log device, and voilà, it worked like a charm:
docker run -v /dev/log:/dev/log sysloggingapp:latest
CentOS 6:
1.
Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
You can fix the above error by installing the syslog-ng-libdbi package:
yum install -y syslog-ng-libdbi
2.
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
Error initializing source driver; source='s_sys', id='s_sys#0'
Error initializing message pipeline;
Since syslog-ng inside the container doesn't have direct access to the kernel messages, you need to disable (comment out) that source in its configuration:
sed -i 's|file ("/proc/kmsg"|#file ("/proc/kmsg"|g' /etc/syslog-ng/syslog-ng.conf
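After both fixes, syslog-ng should start cleanly. A quick smoke test inside the container (the destination file depends on your syslog-ng.conf; /var/log/messages is a common default):
/etc/init.d/syslog-ng start
logger -p local0.info -t a message
tail /var/log/messages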
CentOS 7:
1.
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
The system() source is part of the default configuration. This source reads platform-specific sources automatically, and reads /dev/kmsg on Linux if the kernel is version 3.5 or newer. So we need to disable (comment out) the system() source in the configuration file:
sed -i 's/system()/# system()/g' /etc/syslog-ng/syslog-ng.conf
2. When we start it in foreground mode (syslog-ng -F), we get the following:
# syslog-ng -F
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
So we need to run syslog-ng as root, without capability support:
syslog-ng --no-caps -F
Another way is to set up central logging with a syslog/rsyslog server, then use the syslog Docker logging driver. The syntax to use on the docker run command line is:
$ docker run --log-driver=syslog \
--log-opt syslog-address=udp://address:port image-name
The destination syslog server protocol can be udp or tcp, and the server address can be a remote server, a VM, a different container, or a local container address.
Replace image-name with your application's Docker image name.
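For example, a hypothetical invocation pointing at a syslog server listening on UDP port 514 at 192.168.1.10 (address, port, and image name are placeholders):
docker run --log-driver=syslog --log-opt syslog-address=udp://192.168.1.10:514 my-app:latest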
A ready-made rsyslog Docker image is available at https://github.com/jumanjihouse/docker-rsyslog
References: Docker Logging at docker.com,
Docker CLI, https://www.aquasec.com/wiki/display/containers/Docker+Containers+vs.+Virtual+Machines
For anyone trying to figure this out in the future,
The best way I've found is to just set the LOG_PERROR flag in openlog().
That way, your syslog output is also printed to stderr, which docker logs by default (you don't need to run a syslog process in docker for this). This is much easier than trying to figure out how to run a syslog process alongside your application inside your docker container (which docker probably isn't designed to do anyway).
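In Python, for example, the same idea is a one-line change (a minimal sketch using the standard syslog module; the ident and facility here are arbitrary):
import syslog
# LOG_PERROR duplicates every message to stderr, which docker captures by default,
# so no syslog daemon is needed inside the container
syslog.openlog(ident="myapp", logoption=syslog.LOG_PERROR, facility=syslog.LOG_LOCAL0)
syslog.syslog(syslog.LOG_INFO, "a message")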
Related
I am trying to install docker engine inside a container.
wget https://desktop.docker.com/linux/main/amd64/docker-desktop-4.16.2-amd64.deb
apt-get install -y ./docker-desktop-4.16.2-amd64.deb
Everything goes fine until the post-install phase, when it tries to update the /etc/hosts file for Kubernetes. There it fails:
/var/lib/dpkg/info/docker-desktop.postinst: line 42: /etc/hosts: Read-only file system
This is expected behaviour for docker build, which does not allow modifying the /etc/hosts of the container.
Is there a way to solve this? Install docker desktop without doing this step? Or any other way?
I solved this issue by adding this parameter to the build:
--add-host kubernetes.docker.internal:127.0.0.1
Example:
docker build --add-host kubernetes.docker.internal:127.0.0.1 -t stsjava2 .
When the Docker Desktop installation fails with an error related to /etc/hosts, it is usually due to a conflict with the host system's configuration. Here are some steps you can try to resolve the issue:
1. Check the permissions of the /etc/hosts file on your host system to ensure that it is accessible to Docker.
2. Try to start the Docker container with elevated privileges (e.g., using sudo) to see if that resolves the issue.
3. If the above steps do not resolve the issue, try modifying the Docker container's network configuration to use a different network driver that does not conflict with the host system's /etc/hosts file.
4. Try running the Docker container in a different environment (e.g., a virtual machine) that does not have the same conflicts with the host system.
5. If all else fails, try reinstalling Docker or using a different version of Docker to see if that resolves the issue.
I have a dockerized application which only writes logs to syslog. How can I inspect its logs? Should I run a syslog daemon inside Docker?
I can change the Dockerfile, but cannot change the application itself.
Typically, you'd want your messages to end up on either standard out or standard error (stdout and stderr.) Once you can do that, all of the docker tools for managing and searching logs become available to you.
As detailed here, there are quite a few options to force a docker application to write to stdout instead of a file. However, if your application writes only to syslog, it's a little more tricky. None of the solutions in that thread worked for me to redirect syslog to stdout/stderr.
However, I did find a project, syslog2stdout, which is a simple program which listens to the syslog socket, and writes the messages that come through to stdout.
Here's how to use it in Docker.
Here's an example Python program that logs to syslog. If you run this normally under docker, nothing will show up, either on your terminal or in system logs.
import syslog
syslog.syslog("bar123")
Here's a Dockerfile which installs both that program and syslog2stdout:
FROM python:3.10
COPY test.py /
RUN git clone https://github.com/ossobv/syslog2stdout.git \
 && cd syslog2stdout \
 && git checkout 142793f
RUN cd syslog2stdout \
 && make
CMD /syslog2stdout/syslog2stdout /dev/log & python3 test.py
(Note: The base image is not important - I'm just using a Python image because my test application is written in Python.)
If I run that, I get this output:
$ docker build . -t test
$ docker run -it test
user.info: test.py: bar123
...which is the message we wanted to log.
Alternatives
You can also bind mount /dev/log into the container. I don't like this much from a security perspective, but it is simple and it gets the logging into your host's logging daemon.
You can run systemd inside a container, and have it manage your application. Since systemd has a logging daemon integrated with it, you automatically get syslog forwarding. Given the complexity of systemd, I wouldn't go this way, but it's an option.
Inside a Docker container which doesn't have rsyslogd installed, what happens to logs from the command logger error "my error message"?
It seems odd to have the logger command available without anything to capture and process the log events it emits. I would naively have expected both the logger command and the mechanism to process the log messages to be packaged together; it doesn't make sense to me to have one without the other.
============= EDIT with more information =============
I'm using Docker's official MariaDB 10.1 image. I'm entering the container with the command
docker exec -it maria_10_2_test bash
then I'm using the Linux logger utility to try to log to the syslog, e.g. with
logger error "My message here"
The logger command exits successfully with a 0 code, but nothing is written to the syslog (as there is no syslog daemon in the image).
I think this question is more general than Docker; it's really a general Linux question. If, on the host machine (Ubuntu 20.04), I turn off the syslog service and socket, I can still use the logger command: it still gives a zero exit code, and nothing is written to the syslog.
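For what it's worth, here is a rough Python sketch of what logger does under the hood: it sends a datagram to the /dev/log UNIX socket, and if no daemon is listening there, the message simply has nowhere to go:
import socket
# <13> = facility 1 (user) * 8 + severity 5 (notice), logger's default priority
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
try:
    sock.connect("/dev/log")
    sock.send(b"<13>mytag: my error message")
except (FileNotFoundError, ConnectionRefusedError) as exc:
    # with nothing bound to /dev/log, the message is silently dropped or fails here
    print("no syslog listener:", exc)
finally:
    sock.close()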
I have been able to successfully run Apache Ignite with a custom config using the command
docker run -it --net=host -v "pathToLocalDirectory"/config:/opt/ignite/apache-ignite/config -e "CONFIG_URI=file:///opt/ignite/apache-ignite/config/default-config.xml" apacheignite/ignite
But when I run my java project in IntelliJ I get the message
"IP finder returned empty addresses list. Please check IP finder configuration and make sure multicast works on your network...".
Note: the Java client project works if I run the Ignite server using a Windows batch file.
Also, I have published port 47500 as well; the result is the same.
Try running it using docker run -it --net=host (don't mount the volumes).
If that doesn't work, it means that either something is incorrect with your docker setup OR you are configuring discovery differently for clients and servers.
Check the IP addresses listed in your client discovery section.
Then get a shell inside the container and check what is actually mounted:
run docker exec -it container-name /bin/bash
and check that /opt/ignite/apache-ignite/config/default-config.xml is there and contains the correct discovery info.
Check that the ignite log (located in /opt/ignite/apache-ignite/work/log/) specifies that the correct config is being used.
It will have a line like so: [INFO][main][IgniteKernal] Config URL: file:/opt/ignite/apache-ignite/config/default-config.xml
If you don't see the mounted config file, try mounting more simply:
docker run -d -v /local/dir/config.xml:/config-file.xml -e CONFIG_URI=/config-file.xml apacheignite/ignite
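To verify which config the server actually picked up, something like the following should work (the container name is a placeholder):
docker exec -it ignite-server cat /opt/ignite/apache-ignite/config/default-config.xml
docker exec -it ignite-server grep -r "Config URL" /opt/ignite/apache-ignite/work/log/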
more info:
https://apacheignite.readme.io/docs/docker-deployment
https://apacheignite.readme.io/docs/tcpip-discovery
I have a docker container that uses a GStreamer plugin to capture the input of a camera. It runs fine with a Basler camera, but now I need to use an IDS uEye camera. To be able to use this camera, the ueyeusbdrc service must be running. The IDS documentation says that to start it I can run sudo systemctl start ueyeusbdrc or sudo /etc/init.d/ueyeusbdrc start. The problem is that when the docker container runs, that service is not running, and I get a Failed to initialize camera error. This is the same error I get if I run gst-launch-1.0 -v idsueyesrc ! videoconvert ! autovideosink on my PC outside the container while the ueyeusbdrc service is stopped. So this tells me that the issue is that the ueyeusbdrc service is not running inside the container.
How can I run ueyeusbdrc inside the docker container? I tried running /etc/init.d/ueyeusbdrc start in the .sh script that launches the application (which is called using ENTRYPOINT ["<.sh file>"] in the Dockerfile), but it fails. Also, if I try to use sudo, it tells me the command doesn't exist. If I run systemctl, it also tells me the command doesn't exist. BTW, I am running docker with privileged: true (at least that's what is set in the docker-compose.yml file).
I am using Ubuntu 18.04.
Update:
I mapped /run/ueyed and /var/run/ueyed into the container, and that changed the error from Failed to initialize camera to Failed to initialize video capture. It may be that I can run the daemon on the host and there is a way to hook it up to the container. Any suggestions on how to do that?
Finally got this working. I had to add a few options to the docker command (in my case, to the docker-compose.yml file). I based my solution on the settings found here: https://github.com/chalmers-revere/opendlv-device-camera-ueye
Adding these arguments to the docker command solved the issue: --ipc=host --pid=host -v /var/run:/var/run. With these options there is no need to run the service inside the container.
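For reference, a plain docker run equivalent of those compose options might look like this (the image name is a placeholder):
docker run --privileged --ipc=host --pid=host -v /var/run:/var/run my-ueye-app:latest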
The other key part is to install the IDS software inside the docker container. This can easily be done by downloading, extracting, and running the installer (the git repo mentioned above has an outdated version, but the most recent version can be found on the IDS web page).
Also, make sure the system service for the IDS uEye camera is running on the host (sudo systemctl start ueyeusbdrc).