How to view logs for a docker image? - docker

In the Docker world, one can easily see the logs of a Docker container (that is, a running image). But during image creation one usually issues multiple commands, for example npm install commands in Node projects. It would be beneficial to see the logs of those commands as well. I quickly searched the documentation, but didn't find how one can obtain logs for a Docker image. Is it possible?

I had the same problem; I solved it using:
docker build --no-cache --progress=plain -t my-image .

Update: since this question was asked, it seems everyone is finding it after noticing the output changes from BuildKit. BuildKit includes the following options (run docker build --help to see them all):
--build-arg list           Set build-time variables
--cache-from strings       Images to consider as cache sources
-f, --file string          Name of the Dockerfile (Default is 'PATH/Dockerfile')
--no-cache                 Do not use cache when building the image
-o, --output stringArray   Output destination (format: type=local,dest=path)
--platform string          Set platform if server is multi-platform capable
--progress string          Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
--pull                     Always attempt to pull a newer version of the image
-q, --quiet                Suppress the build output and print image ID on success
-t, --tag list             Name and optionally a tag in the 'name:tag' format
--target string            Set the target build stage to build.
The option many want with buildkit is --progress=plain:
docker build -t my-image --progress=plain .
If you really want to see the previous build output, you can disable buildkit with an environment variable, but I tend to recommend against this since there are a lot of features from buildkit you'd lose (skipping unused build steps, concurrent build steps, multi-platform images, and new syntaxes for the Dockerfile for features like RUN --mount...):
DOCKER_BUILDKIT=0 docker build -t my-image .
The OP is asking to include the logs of their build within the image itself. Generally I would recommend against this; you'd want those logs outside of the image.
That said, the easiest method for that is to use tee to send a copy of all your command output to a logfile. If you want it attached to the image, send the output of your RUN commands to a logfile inside the image with something like:
RUN my-install-cmd | tee /logs/my-install-cmd.log
Then you can run a quick one-off container to view the contents of those logs:
docker run --rm my-image cat /logs/my-install-cmd.log
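Putting that together, a minimal Dockerfile sketch for a Node project might look like this (node:18 and npm install are just stand-ins for whatever your build actually runs; the SHELL line enables pipefail so the build still fails when the install fails, even though tee succeeds):
FROM node:18
# use bash with pipefail so a failing install isn't masked by tee
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN mkdir -p /logs
WORKDIR /app
COPY package*.json ./
RUN npm install 2>&1 | tee /logs/npm-install.log
COPY . .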
If you don't need the logs attached to the image, you can log the output of every build with a single change to your build command (instead of lots of changes to the run commands) exactly as JHarris says:
docker build -t my-image . | tee my-image.build.log
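One note on that: BuildKit writes its progress output to stderr, so to capture everything in the log file you'll likely want to redirect stderr as well:
docker build -t my-image --progress=plain . 2>&1 | tee my-image.build.log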
With the classic docker build command, if you build with --rm=false, then you keep all the intermediate containers, and each one of those has a log you can review with
docker logs $container_id
And lastly, don't forget there's a history of the layers in the image. It doesn't show the output of each command, but it is useful for all of those commands that don't log any output, and for knowing which build each layer comes from, particularly when there's lots of caching being used.
docker history my-image
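The commands shown by docker history are truncated by default; add --no-trunc to see them in full:
docker history --no-trunc my-image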

You can see the logs in PowerShell with this command:
docker logs --details <containerId>
There are other options for logs here.
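For example, you can add timestamps, limit the history, and follow the stream:
docker logs --details --timestamps --tail 100 --follow <containerId>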

Use This: https://github.com/jcalles/docker-wtee
Read instructions and please give me feedback.
Or...
If you need to get logs from a running container, and the container has volumes exposed, run this:
docker run --rm -it --name testlogs --link <CONTAINERNAME/ID> --network CONTAINERNETWORK -p PORT:8080 --volumes-from CONTAINERNAME/ID javiercalles/wtee sh

Related

Is it safe to run the command "yes | docker image prune" on a Linux Docker host while another image is being built?

At work there is a Docker host with a pretty small /var/lib/docker which fills up pretty fast whenever a few docker build commands fail in a row, in particular because not all of the docker build commands use the flags --no-cache --force-rm --rm=true, the point of which (in my understanding) is to try to delete extra junk after successful or unsuccessful builds. You can find these flags if you visit https://docs.docker.com/engine/reference/commandline/build/ and scroll down.
One issue we are having is that not everybody does docker build with the flags --no-cache --force-rm --rm=true, and it is kind of hard to track down (silly, I know), but there may also be some other causes for filling up /var/lib/docker that we have not caught. IT would not give us permission to look inside that directory for a better understanding, but we are able to run docker image prune or docker system prune, and that seems to be a good solution to our problems, except for the fact that we run it manually for now, whenever things go bad.
We are thinking of getting ahead of the problem by a) running yes | docker image prune just about every time after an image is built. I wrote "just about" because it is hard to track down every repo that builds an image (successfully or not), but that is a separate story. Even if this command has some side effect (such as breaking somebody else's simultaneous docker build on the same Docker host), it would only run once in a while, so the probability of a clash is low. The other approach being discussed is b) pretty much blindly adding yes | docker image prune to a cron job that runs, say, every 2 hours. If this command has potential negative side effects, then the damage would be more likely.
Why do I even think that another docker build might break? Well, I do not know it for a fact, or else I would not be asking this question. In an attempt to better understand the so-called <none>:<none> images that we sometimes end up with after a broken docker build, I read this often-cited article: https://projectatomic.io/blog/2015/07/what-are-docker-none-none-images/
My understanding is that a docker build that has not finished yet ends up leaving some images on disk, which it could then clean up at the end, depending on the flags. However, if something (such as a yes | docker image prune issued in parallel) deletes some of these intermediate image layers, then the overall build would also fail.
Is this true? If so, what is a good way to keep /var/lib/docker clean when building many images?
P.S. I am not a frequent user of S.O. so please suggest ways of improving this question if it violates some rules.
I tried to reproduce the described behavior with the following script. The idea is to start several docker build processes in parallel and, while they are running, also run several docker system prune processes in parallel.
Dockerfile:
FROM centos:7
RUN echo "before sleep"
RUN sleep 10
RUN echo "after sleep"
RUN touch /myfile
test.sh:
#!/bin/bash
docker build -t test1 --no-cache . &
docker build -t test2 --no-cache . &
docker build -t test3 --no-cache . &
docker build -t test4 --no-cache . &
sleep 5
echo Prune!
docker system prune -f &
docker system prune -f &
sleep 15
docker run --rm test1 ls -la /myfile
docker run --rm test2 ls -la /myfile
docker run --rm test3 ls -la /myfile
docker run --rm test4 ls -la /myfile
Running bash test.sh I get successful builds and prunes. There was an error from the second prune process: Error response from daemon: a prune operation is already running, which means that prune recognizes this conflict situation.
Tested on Docker version 19.03.12, host system CentOS 7.
A docker image prune (without the -a option) will remove only dangling images, not unused images.
As explained in "What is a dangling image and what is an unused image?"
Dangling images are images which do not have a tag and do not have a child image (e.g. an old image that used a different version of FROM busybox:latest) pointing to them.
They may have had a tag pointing to them before and that tag later changed.
Or they may have never had a tag (e.g. the output of a docker build without including the tag option).
Intermediate images produced by a docker build should not be considered dangling, as they have a child image pointing to them.
As such (to be tested), it should be safe to use yes | docker image prune while images are being built.
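As an aside, docker image prune already has a -f/--force flag to skip the confirmation prompt, so piping yes into it isn't necessary:
docker image prune -f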
Plus, BuildKit is now the default (moby v23.0.0) on Linux, and is made to avoid side effects with the rest of the API (intermediate images and containers):
At the core of BuildKit is a Low-Level Build (LLB) definition format. LLB is an intermediate binary format that allows developers to extend BuildKit.
LLB defines a content-addressable dependency graph that can be used to put together very complex build definitions.
Yes, it is safe, because image layers are locked while they are in use: during a build, as base layers of other images, or by running containers.
I have done such things many times in parallel with running automated build pipelines, with a running Kubernetes cluster, etc.
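If you do put this in a cron job, a slightly more conservative sketch is to prune only things older than some age with a filter (the 24h window here is just an example):
docker image prune -f --filter "until=24h"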

Is there a way to tell shell commands are available in a docker image?

I'm using the node docker images as a container for my build pipelines.
An issue I frequently run into is that a binary I expect to exist doesn't, and I have to wait for it to fail in the build pipeline. The zip command is one such example.
I can run the docker image on my local machine and ssh in to test commands.
Is there a way to summarise what commands are available for a given image?
Is there a way to summarise what commands are available for a given image?
You could look at the contents of /bin:
$ docker run --rm -it --entrypoint=ls node /bin
or /usr/local/bin:
$ docker run --rm -it --entrypoint=ls node /usr/local/bin
etc...
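If you're after one specific binary (zip in the question), you can also just probe for it; a quick sketch:
$ docker run --rm --entrypoint=sh node -c 'command -v zip || echo "zip not found"'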

Can I use docker-compose without a bash script to inspect a container then copy files to host from the container?

This is the workflow I'm trying to parallelize:
docker build
docker run
docker inspect container -> returns false
docker cp container:file /host/
I was wanting to use docker-compose to do this on a single host, then transition to Kubernetes later so that I can orchestrate this on multiple hosts.
Should I create a bash script and have it RUN in the Dockerfile?
I'm looking for a solution that the community accepts as the best practice.
In single-host-Docker land, you can try to arrange things so that docker run does everything you need it to. Avoid docker inspect (which dumps out low-level diagnostics that usually aren't interesting) and docker cp. Depending on how you build things, you could build the artifact in your Dockerfile and copy it out:
docker build -t the-image .
# use "cat" to get it out?
docker run --rm the-image \
cat /app/file \ # from the container
> file # via the container's stdout, to the host
# use volumes to get it out?
docker run --rm -v $PWD:/host the-image \
cp /app/file /host
Depending on what you're building, you might extend this further to pass both the inputs and outputs in volumes, so the image is just a toolchain. For a minimal Go application, using the Docker Hub golang image, for example:
# don't docker build anything, but run a container with the toolchain
docker run --rm \
-v $PWD:/app \
-w /app \
golang:1.15 \
go build -o the_app ./cmd/the_app
In this last setup the -w working directory is the bind-mounted /app directory, so go build -o ./the_app writes out to the host.
Since this setup is a little more oriented towards single-use containers (note the docker run --rm option) it's not a good match for Compose, which generally expects long-running server-type containers.
This setup also will not translate well to Kubernetes. Kubernetes isn't really good at sharing files between containers or with systems outside the cluster. If you happen to have an NFS server you can use that, but there aren't native options; most of the volume types that it's straightforward to get are ReadWriteOnce volumes that can't be reused between multiple Kubernetes Pods (containers).
You could in principle write a Kubernetes Job that did a single compilation. It can't run docker build, so the "run" step would have to do the actual building. You can't kubectl cp out of a completed pod (see e.g. kubernetes/kubectl#454), so it needs to send its content somewhere specific when it's done.
A better high-level approach here would be to find or install some sort of network-accessible storage, especially to hold the results (an Artifactory server; object storage like AWS S3). Rewrite your build sequence as a "task" that takes the locations of the inputs and outputs and runs the build, ignoring the local filesystem. Set up a job queue like RabbitMQ, and inject the tasks into the queue. Finally, run the builder-worker as a Kubernetes Deployment; it will build as many things in parallel as the number of replicas in the Deployment.

Using docker pull & run to build dockerfile

I'm learning how to use docker.
I want to deploy a microservice for swagger. I can do
docker pull schickling/swagger-ui
docker run -p 80:8080 -e API_URL=http://myapiurl/api.json swaggerapi/swagger-ui
To deploy it, I need a Dockerfile I can run.
How do I generate the Dockerfile in a way that I can run it with docker build?
The original question asks for a Dockerfile, perhaps for some CI/CD workflow, so this answer addresses that requirement:
Create a very simple Dockerfile beginning with
FROM schickling/swagger-ui
Then from that directory run
$ docker build -t mycontainername .
Which can then be run:
$ docker run -p 80:8080 -e API_URL=http://myapiurl/api.json mycontainername
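If you'd rather bake the API_URL into the image instead of passing it with -e every time, the Dockerfile can set a default (using the URL from the question; -e at run time still overrides it):
FROM schickling/swagger-ui
ENV API_URL=http://myapiurl/api.json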
Usually docker pull pulls the image, not the Dockerfile. The Dockerfile for swagger-ui is on its Docker Hub repo if you want to edit or customize it.
(https://hub.docker.com/r/schickling/swagger-ui/~/dockerfile/)
That one should work with the build command. The build command builds the image, the run command turns the image into a container. The docker pull command should pull the image in. You don't need to run docker build for it as you should already have the image from the pull. You only need to do docker run.

Docker image versioning and lifecycle management

I am getting into Docker and am trying to better understand how it works out there in the "real world".
It occurs to me that, in practice:
You need a way to version Docker images
You need a way to tell the Docker engine (running on a VM) to stop/start/restart a particular container
You need a way to tell the Docker engine which version of a image to run
Does Docker ship with built-in commands for handling each of these? If not what tools/strategies are used for accomplishing them? Also, when I build a Docker image (via, say, docker build -t myapp .), what file type is produced and where is it located on the machine?
docker has all you need to build images and run containers. You can create your own image by writing a Dockerfile or by pulling it from Docker Hub.
In the Dockerfile you specify another image as the basis for your image and use RUN commands to install things. Images can have tags; for example the ubuntu image can have the latest or 12.04 tag, which can be specified with the ubuntu:latest notation.
Once you have built the image with docker build -t image-name . you can create containers from that image with docker run --name container-name image-name.
docker ps to see running containers
docker rm <container name/id> to remove containers
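To make the versioning part concrete, a typical flow looks something like this (registry.example.com and the version numbers are placeholders):
docker build -t myapp:1.0.0 .
docker tag myapp:1.0.0 registry.example.com/myapp:1.0.0
docker push registry.example.com/myapp:1.0.0
docker run -d --name myapp registry.example.com/myapp:1.0.0
# to move to a new version: stop and remove the old container, then run the new tag
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:1.1.0
As for what docker build produces: it isn't a single file you can point at; the image is stored as layers under Docker's storage directory (/var/lib/docker by default), and docker save myapp:1.0.0 > myapp.tar will export it to a tar archive if you need one.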
Suppose we have a Dockerfile; we can build it as below:
->Build from git without versioning:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments
Here, fecomments is the branch name and comments is the folder name.
->Building from git with a tag and version:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments -t lordash/comments:v1.0
->Now if you want to build from a directory: first go to the comments directory, then run the command sudo docker build .
->If you want to add a tag, you can use the -t or --tag flag to do that:
sudo docker build -t lordash . or sudo docker build -t lordash/comments .
->Now you can version your image with the help of a tag:
sudo docker build -t lordash/comments:v1.0 .
->You can also apply multiple tags to an image:
sudo docker build -t lordash/comments:latest -t lordash/comments:v1.0 .
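Once tagged, the images can be pushed to Docker Hub (assuming lordash is your Docker Hub account and you are logged in with docker login):
sudo docker push lordash/comments:v1.0
sudo docker push lordash/comments:latest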
