So I need to dockerize my parse server for a project, and I'm a bit new to Docker.
I'm used to Heroku, where I could just use heroku logs or connect Papertrail to see the Parse logs and debug things, but I have no clue how to see my Parse-specific logs when it's running in a Docker container.
I'm able to do a test curl and get data back, so I know it's working, but I have no idea how to find the log data.
Searching around doesn't really lead to any results specific to Docker. I also tried to figure out how to write to the logs folder, but the folder always seems to be empty?
Docker collects each container's logs from its stdout and stderr. As described in the twelve-factor app methodology (which Heroku standardized), an application should send its logs to stdout:
A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout. During local development, the developer will view this stream in the foreground of their terminal to observe the app’s behavior.
That being said, everything a container streams to stdout is stored in the file /var/lib/docker/containers/<container id>/<container id>-json.log (with the default json-file logging driver). You can find the container ID with the docker ps command. You don't have to read that file directly: docker logs <container-id> shows the logs Docker has stored for that container.
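For example (the container ID is a placeholder you get from docker ps):

docker ps                                   # list running containers and their IDs
docker logs <container-id>                  # print everything the container wrote to stdout/stderr
docker logs -f --tail 100 <container-id>    # follow the stream, starting from the last 100 lines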
If you need the log files on disk, you can have the application write them inside the container and mount that directory onto your host machine. You can do something like this:
docker run -v <host directory>:<log directory in container> ...
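For instance, assuming parse-server writes its files to a logs directory at /parse-server/logs inside the container (that path is an assumption about your image):

# Bind-mount a host directory over the container's log directory;
# anything the app writes there shows up in ./logs on the host
docker run -v "$(pwd)/logs":/parse-server/logs my-parse-server-image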
Hope it helps.
Related
I have been using docker-compose to set up some Docker containers.
I am aware that the logs can be viewed using docker logs <container-name>.
All logs are being printed to stdout and stderr when the containers run; there is no log file being generated inside the containers.
But these logs (the ones shown by the docker logs command) are removed when their respective containers are removed by commands like docker-compose down or docker-compose rm.
When the containers are created and started again, there is a fresh set of logs; no logs from the previous run are present.
I am curious whether there is a way to prevent the logs from being removed along with their containers.
Ideally I would like to keep all my previous logs even when a container is removed.
I believe you have two ways you can go:
Make the containers log to files
You can reconfigure the applications inside the containers to write to log files rather than stdout/stderr. As you put it, you'd like to keep the logs even when the container is removed, so make sure the files are stored on a (bind) mounted volume.
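A minimal sketch of that, assuming the app writes its files to /var/log/myapp inside the container (both paths are illustrative; with docker-compose the same mapping goes under the service's volumes: key):

# Bind-mount a host directory over the app's log directory so the files
# survive docker rm / docker-compose down
docker run -v /srv/myapp/logs:/var/log/myapp my-image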
Reconfigure Docker to store the logs
Reconfigure Docker to use a different logging driver. This can be especially helpful because it saves you from changing each and every container.
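For example, you could switch the daemon-wide default to the journald driver, which stores the entries in the host journal so they outlive the containers (a sketch; adjust to your setup):

# /etc/docker/daemon.json sets the default logging driver for all new containers
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "journald"
}
EOF
sudo systemctl restart docker   # restart the daemon so the setting takes effect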
The application that I run in a container sends its logs to stdout, and this can't be reconfigured. I need these logs written to a file in order to keep them. Is there a way to automatically redirect a container's stdout to a file as soon as the container starts?
(I know about the docker logs command, but it has to be run manually, and it is no good if a container stops before the logs are saved that way.)
Thanks in advance.
Modify the entrypoint to redirect stdout and stderr to a volume mount:
command > /volumemount/out 2>&1
Now everything the container used to send to stdout will end up in the shared volume on the host. See https://askubuntu.com/questions/625224/how-to-redirect-stderr-to-a-file/625230 for the redirection syntax.
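A minimal entrypoint sketch, assuming /logs is a directory you mount from the host and my-app stands in for your real command:

#!/bin/sh
# entrypoint.sh: send everything the app prints to a file on the mounted volume.
# exec keeps the app as PID 1 so it still receives signals from docker stop.
exec my-app "$@" > /logs/app.log 2>&1

Start it with something like docker run -v /host/logs:/logs my-image.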
I know that Docker by default logs to a file whose path is shown by docker inspect --format='{{.LogPath}}' my_container. This file is gone as soon as I kill the container. As I deploy a new version of the image frequently, I lose a lot of log entries.
What I'd like is to have my container's log entries stored in a persistent log file, just like regular Linux processes use. I can have my deployment script do something like the following, but I'm thinking there's a less hackish way of doing this:
docker-compose logs -t -f >> "output-$(date +%Y-%m-%d_%H%M).log"
One option would be to configure docker-compose to log to syslog, but for the time being I'd like to log to a dedicated file.
How have others dealt with the issue of persistent logging?
So Docker has a concept called logging drivers: https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers
The default is the json-file driver writing to the file you mentioned. The idiomatic way to change this is to pass --log-driver <driver-name> to your run command, then have another process on the same machine pick the entries up and push them to your central logging system.
The most popular of these are probably fluentd and splunk, but you can also choose to write to json-file or journald.
The Docker manuals for these are below:
Splunk - https://docs.docker.com/config/containers/logging/splunk/
Fluentd - https://docs.docker.com/config/containers/logging/fluentd/
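For example, with the journald driver a container's stream lands in the host journal and survives the container's removal (the container name web is illustrative):

docker run -d --name web --log-driver=journald my-image
journalctl CONTAINER_NAME=web    # read the logs back on the host, even after docker rm web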
I need the logs from the running container persisted on the host, so we don't lose them when the container restarts.
The logs that go to the standard Apache logs are handled fine with the --log-driver=syslog --log-opt syslog-tag="app_name" run options. However, each application also has a custom debug.log output.
I tried the --log-opt syslog-address=unix://infra/py/appinstance/app/log/debug.log run parameter, but that doesn't work. I would like to plug the debug logs into the standard syslog, but I don't see how to do it. Any ideas?
The docker run --log-driver option specifies where to store your Docker container's log. The log we are talking about here is the one you get from the docker logs command.
The content of that log is gathered from the container process's standard output and standard error.
The debug.log file you mention isn't sent to either of those streams, and as such won't be handled by Docker.
You have at least two options to persist those debug messages:
writing to stdout or stderr
You can make your application write its debug messages to standard output or standard error instead of to the debug.log file. That way those messages will be handled by Docker and, given the --log-driver=syslog option, will end up in your host's syslog service.
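If the application's log path can't be changed, a common trick (used by the official nginx image for its access and error logs) is to symlink the file to the container's stdout during the image build or in the entrypoint; the debug.log path below is adapted from your question:

# Anything the app writes to debug.log now goes to the container's stdout,
# and from there into the configured log driver
ln -sf /dev/stdout /infra/py/appinstance/app/log/debug.log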
mount a volume
You can also use the docker run -v option to mount a directory from your Docker host into your container.
Then configure your application to write the debug.log file under that mount point.
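Putting it together with your existing run options (the host path /var/log/app_name is illustrative; the container path is adapted from your question):

# Apache logs still go to syslog via the log driver;
# debug.log lands on the host through the bind mount
docker run \
  --log-driver=syslog --log-opt syslog-tag="app_name" \
  -v /var/log/app_name:/infra/py/appinstance/app/log \
  my-image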
I have an application running inside a Docker container. The application writes log messages into local log files. How can I make the log files persistent in case the Docker container stops or crashes?
Since containers are runtime entities, when I stop the container my logs/data are gone.
Thanks,
Sohan
You can do this using docker volumes:
https://docs.docker.com/userguide/dockervolumes/
For example:
docker run -v /var/log/docker:/var/log your-image
will mount /var/log/docker from your local file system over the container's /var/log directory, so the log files persist on the host. You can also get much fancier, e.g. creating containers just for data. It's all explained in the link above.
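If you'd rather not manage a host path yourself, a named volume also survives the container's removal (names here are illustrative):

docker volume create app-logs           # create a named volume managed by Docker
docker run -v app-logs:/var/log your-image
docker volume inspect app-logs          # shows where the volume's data lives on the host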