How can I view the startup logs of a Docker container (i.e., the events that occur while the container is coming up, similar to boot.log in JBoss)?
As of now I can view events in the logs once the container is up, but I cannot find any mechanism to view logs while the container is still starting.
Any idea?
Ok, I got a way to do that.
1) First run "docker events &" on the host where you will run your container.
2) Then run your container like:
docker run -d .... (Full command)
It will print event lines containing the container's hex ID (look near the end), for example:
(container=f1b76ae5a75a1443c01181de46767gbb03621167d019f5d26d3e5131d9158843511a69, name=bridge, type=bridge)
3) Now go in another window and see the logs:
docker logs <container-id> (using the ID from the previous step)
This is especially useful if your container is not coming up properly.
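The steps above can be sketched end to end; the image name my-app:latest and the split across terminals are illustrative assumptions, not part of the original answer:

```shell
# Terminal 1: stream daemon events (create, start, die, OOM, ...)
docker events &

# Terminal 2: start the container; substitute your real run command
docker run -d my-app:latest

# Terminal 1 now prints event lines that include the new container's ID.
# Terminal 3: follow the container's stdout/stderr as it boots
docker logs -f <container-id-from-events>
```

If the container dies during startup, the events stream still shows the die event and the ID, so you can run docker logs on the exited container afterwards.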
Related
I succeeded in connecting to a remote server configured with Docker through VS Code. However, a list of old containers was fetched in the VS Code Remote Explorer. Looking at this list, they are clearly containers created from images I downloaded a few days ago. I don't know why this is happening.
Presumably, it is a problem with the settings.json file or a problem with some log.
I pressed F1 in VS Code and selected Remote-Containers: Attach to Running Container...
Then a docker command was entered automatically in the terminal. Here, a container (b25ee2cb9162) appeared that I don't know the origin of.
After running this container, a new window opened with the message Starting Dev Container.
This is the list of containers that, as I said, were created a few days ago. This is what VS Code showed me.
What's the reason that this happened?
Those containers you are seeing are the same ones listed by docker container ls -a. They have exited and are not automatically cleaned up by Docker unless you pass the --rm option on the CLI.
The docs for the --rm option explain the reason for this nicely:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag:
As explained in this answer about non-running containers taking up system resources, you don't have to worry about these consuming much beyond minimal disk space.
To remove those containers, you have a few options:
[Preemptive] Use --rm flag when running container
You can pass the --rm flag when you run a container so that Docker removes it after it exits and old containers don't accumulate.
As the docs mention, the downside is that after the container exits, it's difficult to debug why something failed inside it, since you can no longer inspect its final state.
See the docs here if using docker run: https://docs.docker.com/engine/reference/run/#clean-up---rm
See this answer if using docker-compose run
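A quick illustration of the difference, using the public alpine image as an example:

```shell
# Without --rm: the exited container lingers in `docker ps -a`
docker run alpine echo hello

# With --rm: the container is removed as soon as it exits
docker run --rm alpine echo hello
docker ps -a   # the --rm container no longer appears here
```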
Clean up existing containers from the command line
Use the docker container prune command to remove all stopped containers.
See the docs here: https://docs.docker.com/engine/reference/commandline/container_prune/
See this related SO answer if you're looking for other options:
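For the command-line cleanup, a short sketch (the filter value 24h is an example):

```shell
# Remove all stopped containers (asks for confirmation first)
docker container prune

# Non-interactive, and only containers that stopped more than 24h ago
docker container prune -f --filter "until=24h"
```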
Clean up containers from VSCode
The VS Code Docker Containers extension lets you clean up containers: open the command palette and enter Docker Containers: Remove.
Or you can simply right click those containers.
I have an application which writes log into file (example /var/log/my_app).
Before Docker I used supervisor to start my app and logrotate.d for rotation logs.
Logrotate.d runs a script that tells supervisor to restart my app after rotating the logs (to delete the old log file and create a new one):
supervisor restart my_app
How should I do this with Docker?
As I understand it, if a Docker container runs only one app, we should not use supervisor (Docker handles start and restart). So how can I use logrotate.d in this setup?
Create a "volume" for the log dir and set up logrotate.d to restart the Docker container? But I don't think that's a good idea.
Use logrotate.d inside the Docker container with my app? Then for each Docker image I would have to install logrotate.d, and instead of running a script for supervisor I would run a script that stops my app (kill -9 or something else).
If you decide to move to Docker you should also adapt your application.
Applications running in containers should write their logs to the console (system out). You can achieve that by using a CONSOLE appender in your logger configuration.
Once that is done you can inspect your logs from outside the container with:
docker logs <container_name>
You can also follow the logs (like you would do with "tail -f"):
docker logs -f <container_name>
You can also change the logging driver used by your container and do more fancy stuff with your logs.
See more details here: https://docs.docker.com/config/containers/logging/configure/
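If the app cannot be reconfigured to log to the console, a common workaround (used by, e.g., the official nginx image) is to symlink the log file to the container's stdout. A minimal sketch using a scratch directory; in the real image this would be a RUN step, executed as root, targeting /var/log/my_app, the path from the question:

```shell
# Demo in a scratch dir; in a Dockerfile this would be:
#   RUN ln -sf /dev/stdout /var/log/my_app
mkdir -p /tmp/log-demo
ln -sf /dev/stdout /tmp/log-demo/my_app
# Anything the app now writes to the "file" goes to the
# container's stdout, where `docker logs` can pick it up.
```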
There are two possible options
Your application should write logs to the console. The logs can then be viewed using the docker logs command.
If the application must write logs to a file, it should rotate them itself. Most programming languages have logging frameworks that provide this functionality.
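If you do keep file logging (option 2) but the rotation is handled by logrotate on the host via a volume, the copytruncate directive avoids having to restart or signal the app at all: logrotate copies the file and truncates the original in place. A sketch of a hypothetical /etc/logrotate.d/my_app entry (not from the original answers):

```
/var/log/my_app {
    daily
    rotate 7
    compress
    copytruncate
}
```

The trade-off is that a few log lines written between the copy and the truncate can be lost.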
I am very new to Docker. I created containers, and when I start one of them it suddenly goes to the exited state. I am trying to assign port 7050 to that container. All the other containers I created start fine, but this one orderer container goes to the exited state as soon as I turn it on.
You can refer image :
Error: Container goes to exited state
Please guide me through this; I don't understand what the problem is. I removed all the Docker containers and created them again, but I get the same problem.
Thanks in advance.
Have you tried with
docker run -d ?
This will prevent the container from exiting.
If, for example, you initialize a service that runs in the background, the container will probably consider its job finished and exit.
You can check this post for more information Docker container will automatically stop after "docker run -d"
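If the main process genuinely finishes (e.g. a service that forks itself to the background), one common debugging workaround is to give the container a long-running foreground command; the image and container names below are examples:

```shell
# Keep the container alive with a no-op foreground process
docker run -d --name debug-me my-image:latest tail -f /dev/null

# Then open a shell inside it to investigate
docker exec -it debug-me sh
```

For production, the proper fix is to run the service itself in the foreground rather than papering over the exit.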
First thing would be to check the logs of the container with:
docker logs $id
where you swap $id for the container id as you could see in the image you linked
If that doesn't tell you enough you can also call:
docker inspect $id
I have looked at sonata project demo page: http://demo.sonata-project.org
There is something wonderful on this page: they can start a container from a web page.
How can we do that ?
What I also want to do is wait until the container is ready before redirecting to it.
And how can they automatically delete the container after 10 minutes?
Thanks
You can build a frontend API over Docker commands and customize it as needed. In the backend, plain docker commands are run: when you press the start button, it runs docker run to start a container, and so on. For removing containers, you can easily filter Docker containers by timestamp: https://docs.docker.com/engine/reference/commandline/system_prune/#filtering.
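One way to sketch the "delete after 10 minutes" part with plain Docker (the label name and image are assumptions, not taken from the demo site): start each demo container with a label, then periodically prune by age.

```shell
# Start a demo container tagged with a label (image name hypothetical)
docker run -d --label demo=true sonata-demo:latest

# From a cron job running every minute: remove demo containers
# created more than 10 minutes ago
docker container prune -f --filter "label=demo=true" --filter "until=10m"
```

Note that prune only removes stopped containers; a still-running demo container would first need docker stop (or docker rm -f) from the same cron job.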
I'm trying to pull and set a container with an database (mysql) with the following command:
docker-compose up
But for some reason it is failing to start the container, showing me this:
some_db exited with code 0
But that doesn't give me any information about why that happened so I can fix it. How can I see the details of that error?
Instead of looking for how to find an error in your composition, you should change your process. With Docker it is easy to rely on all the lovely tools that make things work seamlessly together, but this sounds like an issue with an individual container, not with docker-compose. Try building a container from the base image you want, then use docker exec -it <somecontainername> sh to go into it and run the commands from your entrypoint manually to find the specific failure point.
Try at least docker logs <exited_container_name/id> (or more recently: docker container logs <exited_container_name/id>)
That way, you can check if there was any error message.
An exited container means the main process stopped: you need to make sure that main process remains up in order for the container to not exit immediately.
Try also docker inspect <exited_container_name/id> (or, again, docker container inspect <exited_container_name/id>)
If you see for instance "OOMKilled": true, that would mean the container memory was exceeded. You can also look for the "ExitCode" for clues.
See more at "Ten tips for debugging Docker containers" from Mark Betz.
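To pull just those two fields out of the inspect output, a Go template works (substitute your real container name or ID):

```shell
docker inspect --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' <exited_container_name/id>
```

An ExitCode of 0 with no error, as in the question, usually means the main process simply ran to completion, which points back at the entrypoint/command rather than a crash.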