Rotate logs for an application in a Docker container - docker

I have an application that writes its log to a file (for example /var/log/my_app).
Before Docker I used supervisor to start my app and logrotate.d to rotate the logs.
After rotation, logrotate.d ran a script that restarted my app through supervisor (to release the old log file and create a new one):
supervisor restart my_app
How should I do this with Docker?
As I understand it, if a Docker container runs only one application we should not use supervisor (Docker takes care of starting and restarting). So how can I use logrotate.d in this setup?
Create a volume for the log directory and have logrotate.d restart the Docker container? I don't think that's a good idea.
Use logrotate.d inside the Docker container alongside my app? Then for each Docker image I would have to install logrotate.d and, instead of the supervisor script, run a script that stops my app (kill -9 or something similar).

If you decide to move to Docker you should also adapt your application.
Applications running in containers should write their logs to the console (stdout/stderr). You can achieve that by using a CONSOLE appender in your logger configuration.
Once that is done you can inspect your logs from outside the container with:
docker logs <container_name>
You can also follow the logs (like you would do with "tail -f"):
docker logs -f <container_name>
You can also change the logging driver used by your container and do more fancy stuff with your logs.
See more details here: https://docs.docker.com/config/containers/logging/configure/
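For example, if you keep the default json-file driver, Docker itself can cap and rotate the files it writes for the container. A minimal sketch, with my_app used as a placeholder image and container name:
# keep at most three 10 MB files of this container's stdout/stderr logs
docker run -d --name my_app \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  my_app:latest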

There are two possible options:
Your application writes its logs to the console. The logs can then be managed with the docker logs command.
If the application must write logs to a file, the application itself should rotate them. Most programming languages have logging frameworks that provide this functionality. Alternatively, the file can be rotated in place from the host, as sketched below.
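If the log file is bind-mounted from the host (the question uses /var/log/my_app), another hedged option, beyond the two above, is a host-side logrotate rule with copytruncate, which truncates the open file in place so neither the app nor the container needs a restart (with the usual caveat that a few lines written during the copy can be lost):
# on the host: add a logrotate rule for the bind-mounted log file
# (/var/log/my_app is the path from the question; adjust to your file names)
cat > /etc/logrotate.d/my_app <<'EOF'
/var/log/my_app {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}
EOF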

Related

How to launch a host app using Docker Compose

When running a Docker Compose project, it would be nice to be able to open an app with certain parameters on the host operating system (the one on which docker-compose up is invoked). This would be genuinely useful when running web apps. For example, I would love Docker Compose to automatically open a browser on the host pointing at http://localhost:8080 when I invoke docker-compose run, rather than opening a browser and entering http://localhost:8080 manually. Just the way it works in Minikube (e.g. when running minikube service web-deployment).
I am aware there are parameters to use in docker-compose.yml to pass commands to run in containers, like command and entrypoint, but I don't know if that is possible for applications on the host OS.
Compose can do a pretty limited set of things. It can build Docker images, without any ordering constraints, and it can start (presumably long-running) Docker containers, with very limited ordering constraints. It can create a couple of associated Docker objects like networks and named volumes. That's literally all it can do, though; it cannot do larger-scale orchestration tasks ("run this migration container to completion, then run this application") or launch non-Docker tasks.
You might use some host-based tool to manage this instead. Even a shell script would be enough; possibly something like
#!/bin/sh
# start the container stack
# (assumes the caller has permission to do this)
docker-compose up -d
# wait for the service to be ready
while ! curl --fail --silent --head http://localhost:8080; do
sleep 1
done
# open the browser window
open http://localhost:8080   # macOS; on Linux, xdg-open does the same job

Make Docker Logs Persistent

I have been using docker-compose to set up some Docker containers.
I am aware that the logs can be viewed using docker logs <container-name>.
All logs are printed to STDOUT and STDERR when the containers run; no log file is generated inside the containers.
But these logs (obtained with the docker logs command) are removed when their respective containers are removed by commands like docker-compose down or docker-compose rm.
When the containers are created and started again, there is a fresh set of logs; no logs from the previous run are present.
I am curious whether there is a way to prevent the logs from being removed along with their containers.
Ideally I would like to keep all my previous logs even when the container is removed.
I believe you have two ways you can go:
Make the containers log to files
You can reconfigure the applications inside the container to write to log files rather than stdout/stderr. As you put it, you'd like to keep the logs even when the container is removed, so make sure the files are stored in a (bind) mounted volume.
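For example, a minimal sketch of the bind-mount approach, assuming the application writes its files under /var/log/myapp inside the container (the paths and image name are placeholders):
# bind-mount a host directory over the container's log directory
mkdir -p /srv/myapp/logs
docker run -d --name myapp \
  -v /srv/myapp/logs:/var/log/myapp \
  myapp:latest
# the files under /srv/myapp/logs survive docker rm and docker-compose down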
Reconfigure Docker to store the logs
Reconfigure Docker to use a different logging driver. This is especially helpful because it saves you from changing each and every container.
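A daemon-wide sketch, assuming a Linux host with systemd and that switching the default driver to journald is acceptable (the driver choice is an illustration, not the only possibility):
# set the daemon-wide default (requires root); only containers created
# after the restart pick up the new driver
cat > /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "journald"
}
EOF
systemctl restart docker

# later, even after a container has been removed, its lines remain in the journal:
journalctl CONTAINER_NAME=<container_name>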

How to dump JVM memory in Docker swarm mode?

My Spring Cloud applications run in Docker swarm mode, and recently I found that their memory usage is higher than normal. I want to use a tool like jmap to dump the heap so that I can adjust the JVM parameters and solve the problem.
I have tried tools like Arthas, but they fail because of the PID 1 problem: we cannot attach to a process whose PID is 1-5.
How can I see the heap usage (Eden, Survivor, etc.)?
The easiest way, I think, is to get into the container's shell and install the necessary tools right there. On a node that runs the application container, start an interactive session:
docker exec -u root -it <container_name> sh
Stop the container when you are finished so it is recreated and the changes are cleaned up.
If at some point you need to extract a file from the container (e.g. a heap dump), use docker cp from another console on the node:
docker cp <container_name>:<path_in_container> <local_path>
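Once inside, a hedged sketch of what the tooling could look like, assuming a Debian/Ubuntu-based image and that the JVM runs as PID 1 (the package name and the attach behaviour on PID 1 vary by JDK build, so treat this as a starting point):
# install JDK tooling inside the container
apt-get update && apt-get install -y openjdk-8-jdk-headless
jps -l                                   # confirm the JVM's PID (often 1)
jstat -gc 1 5s                           # Eden/Survivor/Old usage, sampled every 5 seconds
jmap -dump:live,format=b,file=/tmp/heap.hprof 1   # heap dump; attaching to PID 1 can fail on older JDKs
Back on the node, copy the dump out with docker cp as shown above and open it in a tool such as Eclipse MAT or VisualVM.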

How to auto-remove Docker container while persisting logs?

Is there any way to use Docker's --rm option, which auto-removes the container once it exits, while still allowing the container's logs to persist?
I have an application that creates containers to process jobs; once all jobs are complete, the container exits and is deleted to conserve space. However, in case a bug causes the container's process to exit prematurely, I'd like to persist the log files so I can confirm it exited cleanly or diagnose a faulty exit.
The --rm option, though, appears to remove the container's logs along with the container.
Log to somewhere outside of the container.
You could mount a host directory into your container, so the logs are written to the host directory and kept after the container is removed (see the sketch below).
Or you can mount a named volume into your container, which is likewise persisted after removal.
Or you can set up rsyslog, or some similar log collection agent, to export your logs to a remote service. See https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/ for more on this solution.
The first two are hacks but easier to get up and running on your workstation/server. If this is all cloud hosted, there may be a decent log offloading option (CloudWatch on AWS) that saves you the hassle of configuring rsyslog.
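A minimal sketch of the host-directory variant, assuming the process writes its log files to /var/log/myapp inside the container (the image name and paths are placeholders):
# logs land on the host and survive the container, even with --rm
mkdir -p ./job-logs
docker run --rm \
  -v "$(pwd)/job-logs:/var/log/myapp" \
  my-job-image:latest
# after the container auto-removes itself, ./job-logs still holds the files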

Detach container from host console

I am creating a Docker container from the ubuntu:16.04 image using the Python docker package. I pass tty=True and detach=True to the client.containers.run() function. The container starts with the /sbin/init process and is created successfully. The problem is that the login prompt on my host machine's console is replaced with the container's login prompt, so I am no longer able to log in on the console. SSH connections to the machine work fine.
This happens even when I run my Python script after connecting to the machine over SSH. I tried different options such as setting tty to False, setting stdout to False, and setting the environment variable TERM to xterm in the container, but nothing helps.
It would be really great if someone could suggest a solution to this problem.
My script is very simple:
import docker

client = docker.from_env()
container = client.containers.run(
    'ubuntu:16.04', '/sbin/init', privileged=True,
    detach=True, tty=True, stdin_open=True, stdout=False, stderr=False,
    environment=['TERM=xterm'],
)
I am not using any dockerfile.
I have been able to figure out that this problem happens when I start the container in privileged mode. In that case the /sbin/init process launches /sbin/agetty processes, which causes /dev/tty to be attached to the container. I need to figure out a way to start /sbin/init so that it does not create the /sbin/agetty processes.
/sbin/init in Ubuntu is a service called systemd. It does a ton of things: it configures various kernel parameters, mounts filesystems, configures the network, launches getty processes, and so on. Many of these things require changing host-global settings, and if you launch a container with --privileged you're allowing systemd to do that.
I'd give two key recommendations on this command:
Don't run systemd in Docker. If you really need a multi-process init system, supervisord is popular, but prefer single-process containers. If you know you need some init(8) (process ID 1 has some responsibilities) then tini is another popular option.
Don't directly run bare Linux distribution images. Whatever software you're trying to run, it's almost assuredly not in an alpine or ubuntu image. Build a custom image that has the software you need and run that; you should set up its CMD correctly so that you can docker run the image without any manual setup.
Also remember that the ability to run any Docker command at all implies unrestricted root-level access over the host. You're seeing some of that here where a --privileged container is taking over the host's console; it's also very very easy to read and edit files like the host's /etc/shadow and /etc/sudoers. There's nothing technically wrong with the kind of script you're showing, but you need to be extremely careful with standard security concerns.
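If all you need from an init process is signal forwarding and zombie reaping, a hedged alternative to systemd is Docker's built-in --init flag, which runs tini as PID 1 in front of your command (the image name below is a placeholder):
# tini becomes PID 1; your application runs as an ordinary child process
docker run -d --init --name myapp myapp:latest
# the Python SDK exposes the same switch:
#   client.containers.run('myapp:latest', init=True, detach=True)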
