Capturing logs when a Docker container crashes

My app uses a third-party JAR which is liable to cause a system crash at random times. We don't want the issue to kill our service, so the Docker container uses the "always" restart policy.
What I am trying to do now is find a way to capture the logs to see what caused the crash. Because the container is restarted, I can't just log onto the crashed container when the crash happens: the old container that crashed is removed and a new container starts up. Is there any way to push log files onto, say, Amazon S3 after the app crashes but before the Docker container is destroyed?
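One common pattern for this situation (a sketch, not taken from the question itself): wrap the app in an entrypoint script that uploads the log directory to S3 whenever the process exits, before the restart policy brings the container back. The JAR path, log directory, and bucket name below are placeholders, and it assumes the AWS CLI and credentials are available inside the image.

#!/bin/sh
# Hypothetical entrypoint wrapper. Signal forwarding to the JVM is not
# handled here; this only illustrates the "upload on exit" idea.

# Run the service in the foreground and remember how it exited.
java -jar /opt/app/app.jar
EXIT_CODE=$?

# Upload whatever the app wrote before the container is restarted.
aws s3 cp /var/log/myapp/ "s3://my-log-bucket/crash-$(hostname)-$(date +%s)/" --recursive

exit "$EXIT_CODE"

Note that this only covers the case where the app process itself dies; if the whole host goes down, the hook never runs, so continuously shipping logs (for example to a mounted volume) is the safer variant.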

Related

Why does Docker randomly throw a 'Permission Denied' error when trying to stop a container?

I am trying to stop a Docker container and get the following error:
This happens randomly on occasion, and it is very frustrating to have to restart the Docker service and relaunch all my containers.
Would anyone know what could be causing this? As far as I have seen or know, no changes have been made to the containers since they were launched, apart from perhaps some changes to the data inside them. If anyone needs more information, I would be happy to provide it.
FYI, everything I am doing, I am doing as the root user.
ALSO -- I ABSOLUTELY CANNOT STOP THE DOCKER DAEMON OR RESTART IT; THIS MUST BE RESOLVED WHILE KEEPING THE CURRENT CONTAINERS OPEN AND RUNNING.

Why did my Docker container restart?

I have an application running under docker-compose. On startup, it produces a lot of log messages. When I look at the logs in the morning, I can see those startup messages, and just before them the container was still logging normal production messages, so it must have restarted overnight.
In the past, the application has crashed because of a bug and restarted. Another time there was a memory problem because a lot of data was accidentally loaded; there were error messages from Go indicating out-of-memory errors, and then the container restarted.
But from time to time the container restarts without any indication why. How can I find out the reason it restarts?
Assuming you have access to the host, I would suggest using volumes to persist the container's entire /var/log somewhere on the host. You can look at these log files to discover reasons for shutdown. Check out this unix.stackexchange post for details on how to do that.
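A minimal docker-compose sketch of that suggestion; the service name, image, and host path are placeholders:

services:
  app:
    image: my-app:latest        # placeholder image
    restart: always
    volumes:
      # Keep the container's /var/log on the host so the files survive
      # restarts and can be read after a crash.
      - ./container-logs:/var/log

Separately, "docker inspect" on the container reports the last exit code and an OOMKilled flag under .State, which often narrows down why it restarted.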

Docker logs from go container (log and fmt) stop after init

I'm working on an application which consists of a number of Go containers, which I manage with docker compose. Recently I've been having trouble getting logs out of them. When I run "docker logs [container-name]", I only see logs that were created during package init and early in main, before the service starts listening. Subsequent calls to log.Println or fmt.Println do not appear in the output of "docker logs".
Do you know what could be going on?
You may want to write your logs to /dev/stdout,
or simply use
log.SetOutput(os.Stdout)
from the log package.
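A minimal Go sketch of that answer (the log message is just an illustration):

package main

import (
	"log"
	"os"
)

func main() {
	// Send the standard logger's output to stdout, one of the two streams
	// (stdout/stderr) that `docker logs` collects from the main process.
	log.SetOutput(os.Stdout)

	// Anything logged after this point shows up in `docker logs <container>`.
	log.Println("service listening")
}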

Docker Logs filling up on running container

I've got a Docker container currently running in production on a CentOS 7 VM. We have encountered a problem where the container's logs (the log files found at /var/lib/docker/{container_name}) are filling up the host drive over time, causing the container to become unresponsive and forcing us to clear the logs on the host so it can continue processing.
We can't take the container down, which means I can't just bring it back up with the --log-opt flag to set up some log rotation options.
We've tried using logrotate, but the container writes to its logs constantly, so we often find that the logs are rotated yet the original file does not decrease in size because it is still being written to while the rotation is underway.
I'm trying to find a solution to this problem where we can set up some kind of task that will clear the logs down to a specific file size. Any help is greatly appreciated.
I would suggest mounting the container's logs directory to a host directory; there you can schedule whatever task you need to zip/move/delete the log files...
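As a rough sketch of that kind of scheduled task (the path, size threshold, and schedule are placeholders), a cron job on the host can truncate oversized JSON logs in place, so the file genuinely shrinks even while the container keeps writing:

# Hypothetical host-side cron entry: empty any container JSON log larger
# than 500 MB without moving it, so Docker's open file handle stays valid.
find /var/lib/docker/containers -name '*-json.log' -size +500M -exec truncate -s 0 {} \;

If logrotate stays in the picture instead, its copytruncate option is the usual answer to the "rotated but not shrinking" behaviour, since it copies the log and then truncates the original in place rather than moving it.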

Cannot deploy apps in OpenShift Online: `Error syncing pod: FailedSync`

I'm new to OpenShift Online, and I'm looking to deploy a test image that simply runs a native C++ executable printing "Hello world!".
After pushing the built Docker image to Docker Hub and creating an app that uses that image, I waited for it to deploy. At some point in the process, a warning event arises stating:
Error syncing pod, Reason: FailedSync
Then, the deployment stalls at pending until the deadline runs out, and it reports the deployment failed.
As far as I know, I can't have done anything wrong. I simply created an app with the default settings that uses an image.
The only thing I can think of is that the image runs as root, which produced a warning when I created the app:
WARNING: Image "me/blahblah:test" runs as the 'root' user which may not be permitted by your cluster administrator.
However, this doesn't seem to be the cause, since the app hasn't even been deployed by the time the process stalls and hits the deadline.
I'll add any extra information that could lead to the problem being solved.
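For narrowing it down, the pod's event stream and logs are the usual first stop; the pod name below is a placeholder:

# Hypothetical pod name; list the pods first to find the real one.
oc get pods
oc describe pod blahblah-1-abcde      # the Events section usually explains FailedSync
oc get events --sort-by=.metadata.creationTimestamp
oc logs blahblah-1-abcde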
