Say I have a container that has everything I need to run my web application (such as https://github.com/grigio/docker-stringer for example). How would I go about inspecting the logs for the different services (web server, application server, database server)? With all of the tutorials so far I have only been able to view the logs for the specific command run when starting the container.
One method would be to configure your logs to write to stdout and to use docker logs to retrieve them.
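For example, the official nginx image does exactly this by symlinking its log files to the container's standard streams (a minimal sketch of the same idea, done inside the image):

ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log

Then, on the host, docker logs -f <container-name> shows the output of all the services that write to those streams.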
Another option would be to use a bind mount to link the container's log directory to your host file system.
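A minimal sketch, assuming your services write their logs under /var/log inside the container (the image name and paths are hypothetical):

docker run -d -v $PWD/logs:/var/log myimage
tail -f logs/nginx/access.log    # read the logs directly on the host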
I have deployed an OWIN-hosted web application in AKS (Windows node pool). The container is in the Running state, but I am not able to hit the application. There might be runtime exceptions or errors, but I am not able to figure out where I can see such errors on an AKS Windows node.
Please help me.
So, the ideal way here would be to use kubectl logs (which goes to Monitor, if you have that enabled). However, Windows containers don't pass their logs on to stdout by default; you have to use Log Monitor for that. Essentially, you have to enable Log Monitor in your container image to be able to get the logs out of the container just like you do with Linux containers. I blogged about it here: https://techcommunity.microsoft.com/t5/itops-talk-blog/troubleshooting-windows-containers-apps-on-azure-kubernetes/ba-p/3269767
The other thing you can try is to use kubectl exec to run a command inside the container and get its output.
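For example (the pod name, namespace, and log path below are placeholders):

kubectl logs <pod-name> -n <namespace>
kubectl exec <pod-name> -n <namespace> -- cmd /c type C:\app\logs\app.log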
Say I am creating a Docker image for one of my applications and publishing it on Docker Hub.
Many users download this image and run the application in their containers, and the application writes its logs to a folder.
Now, as the developer, how can I see those application logs from my machine when the container is on a remote computer I don't have access to?
If it were a virtual machine, I could ssh into that machine, go to that folder, and see the logs for that particular application; how is this possible with Docker?
I am not talking about Docker event logs, but the logs generated by my Python application with the logging module. Could you please help me with how to handle this case in Docker?
I don't have any experience working with Docker.
docker exec can be used to run commands inside a Docker container. But in your case the containers are running on a remote machine, not on your local machine. So you have two options:
1. SSH into the remote machine and then use docker exec to check the logs (see the example below).
2. SSH directly into the Docker container (this requires the container to run an SSH server).
But in both scenarios, the end users will need to give you SSH access to their remote machines.
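For example, combining the two (the host name, container name, and log path are hypothetical):

ssh user@remote-host "docker exec myapp tail -n 100 /app/logs/app.log"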
I hope this helps.
If your application writes log files to the container filesystem, this is one of a couple of good uses for Docker bind mounts. If the operator (the person running the container; not you, the original software author) starts the container with
docker run -v $PWD/logs:/app/logs ... you/yourimage
then they will be able to read the log files directly on their host system.
As the original application developer, you have no access to these logs. This is the same as every other (non-SaaS) application: the end user installs software on their system and runs it, but it's on a system you can't log into, so you can't directly see things like log files. The techniques for dealing with this are the same as for anything else: when a user files a bug report, make sure they provide a sufficient reproduction, log files, and relevant configuration, and reproduce the issue yourself locally.
I have an app that dynamically creates Docker containers, and I can't intercept how they are created.
I want to see logs from all the containers that are up, no matter whether they were started via docker-compose or the plain docker command line. I need to see all the logs.
Is it possible?
Right now I have to run docker ps, see all the created containers, and run docker logs <container> for each one.
I can't really monitor what is going on inside.
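Today that loop looks roughly like this (a sketch of the manual approach):

for c in $(docker ps -q); do docker logs --tail 50 "$c"; done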
Thanks
One approach is to use a dedicated logging container that gathers log events from other containers, aggregates them, and then stores or forwards the events to a third-party service; this approach removes the dependency on the host.
Further, a dedicated logging container can automatically collect, monitor, and analyze log events, scale with your log volume without extra configuration, and retrieve logs through multiple streams of log events, stats, and Docker API data.
You can also check this link for some help:
Docker Logging Best Practices
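For example, logspout is one well-known dedicated logging container: it attaches to the Docker socket and streams the logs of every running container to a remote endpoint (the syslog address below is a placeholder):

docker run -d --name logspout -v /var/run/docker.sock:/var/run/docker.sock gliderlabs/logspout syslog+tls://logs.example.com:5000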
I'm using Docker with my web service.
When I deploy using Docker, I lose some log files (nginx access log, service log, system log, etc.).
This is because the Docker deployment workflow tears the old container down and brings a new one up.
So I thought about this problem.
The logging server and the service server (for the API) must be separated!
I am considering these methods:
First, using Logstash (from the ELK stack) and attaching all my log files to it.
Second, using a batch system that moves the log files to another server every midnight.
Is that okay?
I'd welcome a better answer.
Thanks.
There are several approaches admins commonly use for container logging (a sketch of the first follows the list):
1) Mount the log directory to the host, so even if the container goes down and comes back up, the logs persist on the host.
2) An ELK server, using Logstash/Filebeat to push logs to an Elasticsearch server with file tailing, so new log content is pushed to the server as it appears.
3) For application logs, e.g. in Maven-based projects, there are many plugins that push logs to a server.
4) A batch system, which is not recommended, because if the container dies before midnight the logs are lost.
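A minimal sketch of option 1 (the image name and paths are hypothetical):

docker run -d -v /srv/myapp/logs:/app/logs myimage
# the container can be destroyed and recreated; the files under /srv/myapp/logs survive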
I’m having problems admin’ing my cluster. I can run ‘standalone -c clustered.xml’ on Windows and everything looks ok. However, if I run ‘domain.bat’ I can’t see how to configure the domain.xml file so that it can see anything else on my local server. Is this somehow related to the host.xml file?
Domain mode is for administration; in it you can configure servers to run in cluster mode.
If you start with domain.[bat|sh], there are at least two Java processes running:
a HostController for administration/configuration, and a ProcessController to start/stop or restart the configured processes.
The domain.xml contains the configuration, and this is where the profile is stored, i.e. caches, endpoints, ports, security, etc.
The host.xml contains the servers for this host machine and the link to the domain master.
Configuration is possible via the console GUI or the CLI command line.
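A sketch of the CLI route (the management port varies by version; master and server-one are the defaults in the sample domain configuration, so adjust to your install):

./bin/jboss-cli.sh --connect --controller=localhost:9990
/host=master/server-config=server-one:read-attribute(name=status)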