I'm using Docker with my web service.
When I deploy with Docker, I lose some log files (nginx access log, service log, system log, etc.), because the deployment tears the old container down and brings a new one up.
So I thought about this problem: the logging server and the service (API) server should be separated.
I'm considering these methods:
First, use Logstash (from the ELK stack) and attach all my log files to it.
Second, use a batch job that moves the log files to another server every midnight.
Is this approach okay?
I'd welcome a better answer.
Thanks.
There are several approaches that admins commonly use for container logging:
1) Mount the log directory to the host, so the logs persist on the host even when the container goes down and comes back up (a sketch follows after this list).
2) An ELK stack, using Logstash/Filebeat to tail the log files and push new log entries to the Elasticsearch server.
3) For application logs, e.g. in Maven-based projects, there are many plugins/appenders that push logs directly to a server.
4) A batch system, which is not recommended, because if the container dies before midnight the logs are lost.
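For option 1, a minimal sketch (the image name and the paths are assumptions, not from this answer):
# Bind-mount a host directory over the container's log directory,
# so the log files survive container restarts and redeployments.
docker run -d --name web \
  -v /var/log/myservice:/var/log/nginx \
  myorg/myservice:latest
# The logs are now readable directly on the host, e.g.:
tail -f /var/log/myservice/access.log
Option 2 can then build on this: point Filebeat or Logstash at the host directory and let it tail the files it finds there.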
Related
I've got a simple docker development setup for Airflow that includes separate containers for the Airflow UI and Worker. I'm encountering a 403 Forbidden error whenever I attempt to view the log for a task in the Airflow UI.
So far I've ensured they all have the same secret key (in fact, using Docker Volumes they're all reading the exact same configuration file) but this doesn't seem to help. I haven't done anything about time sync, but I'd expect that docker containers would effectively be sharing the system clock anyway so I don't see how they'd get out of sync in the first place.
I can find the log file on the Airflow worker, and the task has run successfully, but something is obviously missing that should allow the Airflow UI to display it (and it would be much more convenient for my workflow to see the logs in the UI rather than having to rummage around on the worker).
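For context, a hypothetical sketch of the setup described above (the container names, image, and paths are assumptions, not from the question): both containers mount the same airflow.cfg, which contains the shared secret key.
# UI (webserver) container, reading the shared configuration file
docker run -d --name airflow-webserver \
  -v "$PWD/airflow.cfg:/opt/airflow/airflow.cfg:ro" \
  apache/airflow webserver
# Worker container, reading the exact same configuration file
docker run -d --name airflow-worker \
  -v "$PWD/airflow.cfg:/opt/airflow/airflow.cfg:ro" \
  apache/airflow celery worker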
I have an application (let's call it Master) which runs on linux and starts several processes (let's call them Workers) using fork/exec. Therefore each Worker has its own PID and writes its own logs.
When running directly on a host machine (without Docker), each process uses syslog for logging, and rsyslog writes the output from each Worker to a separate file, using a config like this:
$template workerfile,"/var/log/%programname%.log"
:programname, startswith, "worker" ?workerfile
:programname, isequal, "master" "/var/log/master"
Now, I want to run my application inside a docker container. Docker starts Master process as the main process (in CMD section of the Dockerfile), and then it forks the Workers at runtime (not sure if it's a canonical way to use docker, but that's what I have). Of course I'm getting only the stdout for the Master process from docker, and logs of Workers get lost.
So my question is, any way I could get the logs from the forked processes?
To be precise, I want the logs from different processes to appear in individual files on the host machine eventually.
I tried running the rsyslog daemon inside the Docker container (just like I do when running without Docker), writing logs to a mounted volume, but it doesn't seem to work. I guess it requires a workaround such as supervisord to run the Master process and rsyslogd at the same time, which looks like overkill to me.
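For reference, a minimal sketch of that kind of workaround, using a small entrypoint script instead of supervisord (the binary path and image name are assumptions, not part of my actual setup):
#!/bin/sh
# entrypoint.sh: start rsyslogd in the background (it writes the per-worker
# files according to the rsyslog rules above), then exec the Master process
# so it becomes PID 1 and receives signals from Docker.
rsyslogd
exec /app/master
# Run with /var/log mounted to the host so the per-process files end up there:
# docker run -v "$PWD/logs:/var/log" myimage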
I couldn't find any simple solution for that, though my problem seems to be trivial.
Any help is appreciated, thanks
Suppose I create a Docker image for one of my applications and publish it on Docker Hub.
Many users download this image and run the application in their containers, and the application writes its logs to a folder.
Now, as the developer, how can I see those application logs from my machine when the container runs on a remote computer that I don't have access to?
If it were a virtual machine, I could ssh to that machine, go to that folder, and see the logs for that particular application; how is this possible with Docker?
I am not talking about Docker event logs, but about the logs generated by my Python application with the logging module. Could you please help me with how to handle this case with Docker?
I don't have any experience working with Docker.
docker exec can be used to run commands in a Docker container. But in your case the containers are running on a remote machine, not on your local machine. So you have two options:
1. ssh into the remote machine and then use the docker exec command to check the logs (see the sketch after this list).
2. ssh directly into the Docker container.
But in both scenarios you will need SSH access to the remote machines, granted by the end users.
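A sketch of option 1 (the host, container name, and log path are hypothetical):
# Log into the remote machine, then read the application log inside the container.
ssh user@remote-host
docker exec -it my-app-container tail -f /app/logs/app.log
# Or as a one-liner from your own machine:
ssh user@remote-host "docker exec my-app-container cat /app/logs/app.log"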
I hope this helps.
If your application writes log files to the container filesystem, this is one of a couple of good uses for Docker bind mounts. If the operator (the person running the container; not you, the original software author) starts the container with
docker run -v $PWD/logs:/app/logs ... you/yourimage
then they will be able to read the log files directly on their host system.
As the original application developer, you have no access to these logs. This is the same as every other (non-SaaS) application: the end user installs software on their system and runs it, but it's on a system you can't log into, so you can't directly see things like log files. The techniques for dealing with this are the same as anything else: when a user files a bug report make sure they provide a sufficient reproduction, log files, and relevant configuration, and reproduce the issue yourself locally.
I am trying to debug a production failure involving (multiple) nginx and tomcat logs. I have copied the logs to my dev machine. What is the easiest way for me to import these logs into an elastic/ELK stack to sift through quickly? (Currently, I'm making do with less commands across multiple windows)
So far I've found only generic docker containers (like https://elk-docker.readthedocs.io/) that require me to install filebeat and configure it. However, since my data is static, I would prefer a simpler installation.
What I did earlier is create the ELK stack with docker-compose and ingest the data via 'nc' (netcat). An example can be found at: https://github.com/deviantony/docker-elk
You might want to adjust the Logstash config so that it reads and parses your data correctly. If the number of files is not too big, you can nc them one by one; otherwise you can write a small script around it, in bash for example, to loop through the files (see the sketch below).
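A small sketch of such a loop, assuming the Logstash TCP input from the docker-elk example is listening on localhost:5000 (the port, the log path, and the netcat flags are assumptions; some netcat variants need -q or -N to close the connection after EOF):
# Send every log file to the Logstash TCP input, one after another.
for f in /path/to/logs/*.log; do
  nc -q 1 localhost 5000 < "$f"
done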
Say I have a container that has everything I need to run my web application (such as https://github.com/grigio/docker-stringer for example). How would I go about inspecting the logs for the different services (web server, application server, database server)? With all of the tutorials so far I have only been able to view the logs for the specific command run when starting the container.
One method would be to configure your logs to write to stdout and to use docker logs to retrieve them.
Another option would be to use a bind mount to link the log directories to your host file system.
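Rough sketches of both options (the container name, image, and paths are hypothetical):
# Option 1: the services write to stdout/stderr, so docker logs shows them.
docker logs -f my-stringer-container
# Option 2: bind-mount the log directories and read them on the host.
docker run -d -v /srv/app-logs:/var/log my-stringer-image
tail -f /srv/app-logs/nginx/access.log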