Currently we have multiple Docker containers on host A.
We send the logs from each container to a logger (which runs in a Docker container on another server).
Here is my daemon.json:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "tcp://10.*.*.*:12201"
  },
  "dns": [
    "10.*.*.*"
  ],
  "icc": false
}
The problem is that if the logger container is not running and I restart host A, the containers on host A do not start because they cannot connect to the logger.
Is there any way to configure the Docker containers to start even if they cannot connect to the logger configured in daemon.json?
Thank you.
With this you are not configuring Docker containers, but the daemon itself. If you restart your host, you restart the daemon, and on startup it reads the config. If the config is invalid or parts of it are not working, the daemon doesn't start up. You can manually start the Docker daemon with a manual configuration like:
dockerd --debug \
--tls=true \
--tlscert=/var/docker/server.pem \
--tlskey=/var/docker/serverkey.pem \
--host tcp://192.168.59.3:2376
see: Docker daemon documentation
Keep in mind that it will keep running with those options until it's restarted.
The logging settings in daemon.json are defaults for newly created containers. Changing this file will not change the settings of existing containers when they are restarted.
You may want to reconsider your logging design. One option is to swap out the logging driver for a logging forwarder: leave the logs in the default json-file driver and have another process monitor those files and forward the logs to the remote server. This avoids blocking, at the cost of missing some logs written just as the container is deleted (or logs from very short-lived containers). The other option is to improve the redundancy of your logging system, since it is a single point of failure that blocks your workloads from running.
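A minimal sketch of such a daemon.json, keeping your non-logging settings and switching to the default json-file driver with rotation (the size limits are illustrative):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "dns": [
    "10.*.*.*"
  ],
  "icc": false
}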
Related
I want to restart the Docker container only when the container crashes due to an error, and I don't want to restart the container if the host reboots.
Which restart_policy will work for the above case?
Start containers automatically
on-failure[:max-retries]
Restart the container if it exits due to an error, which manifests as a non-zero exit code. Optionally, limit the number of times the Docker daemon attempts to restart the container using the :max-retries option.
docker run -d --restart on-failure[:max-retries] IMAGE
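If the container already exists, a sketch of changing the policy in place (the container name and retry count are illustrative):
docker update --restart on-failure:5 my-container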
UPDATE
A Docker host is a physical computer system or virtual machine running Linux. This can be your laptop, server or virtual machine in your data center, or computing resource provided by a cloud provider. The component on the host that does the work of building and running containers is the Docker Daemon.
Keep containers alive during daemon downtime
By default, when the Docker daemon terminates, it shuts down running containers. You can configure the daemon so that containers remain running if the daemon becomes unavailable. This functionality is called live restore. The live restore option helps reduce container downtime due to daemon crashes, planned outages, or upgrades.
Enable live restore
There are two ways to enable the live restore setting to keep containers alive when the daemon becomes unavailable. Only do one of the following.
Add the configuration to the daemon configuration file. On Linux, this defaults to /etc/docker/daemon.json. On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Daemon -> Advanced.
Use the following JSON to enable live-restore.
{
  "live-restore": true
}
Restart the Docker daemon. On Linux, you can avoid a restart (and avoid any downtime for your containers) by reloading the Docker daemon. If you use systemd, then use the command systemctl reload docker. Otherwise, send a SIGHUP signal to the dockerd process.
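For example, on a systemd-based Linux host (the SIGHUP variant is for systems without systemd):
sudo systemctl reload docker
# or, without systemd:
sudo kill -SIGHUP $(pidof dockerd)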
If you prefer, you can start the dockerd process manually with the --live-restore flag. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.
I'm using an Nginx Docker container as the base image for an application. I'm redirecting Nginx logs to syslog, but I'm not sure what the best way is to have the busybox syslogd started. It all works if I start it manually; I just need it to run as a daemon automatically when the container runs.
Seeing that nginx is in init.d I tried this in my Dockerfile:
RUN ln -s /bin/busybox syslogd /etc/init.d/syslogd || :
But syslogd still didn't run on start-up. Since the documentation says that only one CMD is allowed, I have the following hack:
FROM nginx:mainline-alpine
CMD nginx & busybox syslogd -n
This works, locally at least, but I'm wondering what the proper solution is. By default the container already symlinks its log files to stdout and stderr, but I don't want to use Docker's syslog logging driver, because the application will be deployed to Kubernetes, so I need a self-contained solution that will work in the pod. Thank you!
Have your container log to stdout, but collect the logs elsewhere.
One option is to configure Docker itself to send container logs to syslog:
docker run --log-driver=syslog --log-opt syslog-address=udp://... ... nginx
Since the Docker daemon itself is configuring this, the syslog-address needs to be something that can be reached from the host. If you're running syslogd in a separate container, this option needs to point at a published port.
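For example, a sketch in which syslogd runs in a container with its UDP port published to the host (the image name and port are illustrative):
# run a syslog daemon with its port published on the host
docker run -d --name syslog -p 514:514/udp some-syslogd-image
# point the logging driver at the published port on the host
docker run -d --log-driver=syslog --log-opt syslog-address=udp://127.0.0.1:514 nginx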
Another option is to use the standard Docker JSON-format logging, but use another tool to forward the logs to somewhere else. This has the downside of needing an additional tool, but the upside of docker logs working unmodified. Fluentd is a prominent open-source option. (Logstash is another, but doesn't seem to directly have a Docker integration.)
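As a sketch of the forwarder approach, assuming Fluentd: a source block that tails Docker's JSON log files (the path, pos_file, and tag are illustrative, and the matching output section is omitted):
<source>
  # tail every container's json-file log on the host
  @type tail
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-docker.pos
  tag docker.*
  <parse>
    @type json
  </parse>
</source>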
I want to see the logs from my Docker Swarm service, not only because I want all my logs to be collected for the usual reasons, but also because I want to work out why the service is crashing with "task: non-zero exit (1)".
I see that there is work to implement docker logs in the pipeline, but is there a way to access logs for production services? Or is Docker Swarm not ready for production with respect to logging?
With Docker Swarm 17.03 you can now access the logs of a multi-instance service via the command line.
docker service logs -f {NAME_OF_THE_SERVICE}
You can get the name of the service with:
docker service ls
Note that this is an experimental feature (not production ready), and in order to use it you must enable experimental mode on the daemon.
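For example, in /etc/docker/daemon.json (a sketch; restart the daemon after changing it):
{
  "experimental": true
}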
Update: docker service logs is now a standard feature of docker >= 17.06. https://docs.docker.com/engine/reference/commandline/service_logs/#parent-command
Similar question: How to log container in docker swarm mode
What we've done successfully is utilize Graylog. If you look at the docker run documentation, you can specify a log-driver and log options that allow you to send all console messages to a Graylog cluster.
docker run... --log-driver=gelf --log-opt gelf-address=udp://your.gelf.ip.address:port --log-opt tag="YourIdentifier"
You can also technically configure it at the global level for the docker daemon, but I would advise against that. It won't let you add the "Tag" option, which is exceptionally useful for filtering down your results.
Docker service definitions also support a log driver and log options, so you can use docker service update to adjust your services without destroying them.
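A sketch of such an update, reusing the address and tag from above (the service name is illustrative):
docker service update \
  --log-driver gelf \
  --log-opt gelf-address=udp://your.gelf.ip.address:port \
  --log-opt tag=YourIdentifier \
  your-service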
As the documentation says:
docker service logs [OPTIONS] SERVICE|TASK
resource: https://docs.docker.com/engine/reference/commandline/service_logs/
When we run the same process in Docker and on the host system, how do we differentiate one from the other from the perspective of audit logs?
Can I view a process running in Docker from the host system?
You would not run the same process (same PID) in Docker and on the host, since the purpose of a container is to provide isolation (of both processes and the filesystem).
I mentioned in your previous question "Docker Namespace in kernel level" that the pid of a process run in a container could be made visible from the host.
But in terms of audit logs, you can configure logging drivers in order to follow only containers and ignore processes running directly on the host.
For instance, in this article, Mark configures rsyslog to isolate the Docker logs into their own file.
To do this create /etc/rsyslog.d/10-docker.conf and copy the following content into the file using your favorite text editor.
# Docker logging
daemon.* {
  /var/log/docker.log
  stop
}
In summary, this will write all logs for the daemon category to /var/log/docker.log, then stop processing that log entry so it isn't written to the system's default syslog file.
That should be enough to clearly differentiate the host processes' logs (in the regular syslog) from those of processes running in containers (in /var/log/docker.log).
Update May 2016: issue 10163 and --pid=container:id was closed by PR 22481 for Docker 1.12, which allows a container to join another container's PID namespace.
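A minimal sketch of that flag (requires Docker 1.12+; the container name web is illustrative):
# view the processes of the container named "web" from a throwaway container
docker run -it --rm --pid=container:web alpine ps aux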
I created a customized Docker image based on Ubuntu 14.04 with the sensu-client package inside.
Everything went fine, but now I'm wondering how I can trigger the checks to run against the host machine.
For example, I want to be able to check the processes that are running on the host machine and not only the ones running inside the container.
Thanks
It depends on what checks you want to run. A lot of system-level checks work fine if you run the Sensu container with the --net=host and --privileged flags.
--net=host not only lets you see the same hostname and IP as the host system; all the TCP connections and interface metrics will also match between container and host.
--privileged gives the container full access to system metrics like HDD, memory, and CPU.
The tricky thing is checking external process metrics, as Docker isolates them even from a privileged container, but you can share the host's root filesystem as a Docker volume (-v /:/host) and patch the check to use chroot, or to use /host/proc instead of /proc.
Long story short: some checks will just work, for others you need to patch them or develop your own approach, but Sensu in Docker is one possible way.
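Putting those flags together, a sketch (the image name is illustrative; the root filesystem is mounted read-only as a precaution):
docker run -d --net=host --privileged \
  -v /:/host:ro \
  your-sensu-client-image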
An unprivileged Docker container cannot check processes outside of its own container, because Docker uses kernel namespaces to isolate it from all other processes running on the host. This is by design: see the Docker security documentation.
If you would like to run a super-privileged Docker container that has this namespace isolation disabled, you can run:
docker run -it --rm --privileged --pid=host alpine /bin/sh
Doing so removes an important security layer that Docker provides and should be avoided if possible. Once in the container, try running ps auxf and you will see all processes on the host.
I don't think this is possible right now.
If the processes on the host are themselves running inside Docker, you can mount the Docker socket and get their status from the Sensu container.
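A sketch of that socket mount (the image name is illustrative):
docker run -d -v /var/run/docker.sock:/var/run/docker.sock your-sensu-client-image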
Add a sensu-client to the host machine? You might want to split it out so you have granularity between problems in the containers vs. problems with your hosts.
Otherwise, you would have to set up some way to report from the inside, either using something low-level (system calls, etc.) or setting up something from the outside to catch the calls and report back status.
HTHs
Most, if not all, Sensu plugins hardcode the path to the proc files. One option is to mount the host's proc files at a different path inside the Docker container and modify the Sensu plugins to support this other location.
This is my base docker container that supports modifying the sensu plugins proc file location.
https://github.com/sstarcher/docker-sensu
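A minimal sketch of that mount (the /host_proc target path and the image name are illustrative):
docker run -d -v /proc:/host_proc:ro your-sensu-client-image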