How to reduce docker output in CI

When running Docker in CI we get lots and lots (and lots) of lines like
abc123 Extracting [=================================================> ] 237.3MB/239.3MB
Is it possible to pass a switch or something to Docker to signal that we're running in CI and would like non-interactive output?

I'm assuming you are referring to docker pull or docker build commands. If so, you have the --quiet (or -q) flag:
docker pull --quiet ubuntu
docker build --quiet . -t 'something'
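For builds there is also a middle ground worth noting (assuming a reasonably recent Docker client with BuildKit): --progress=plain keeps the full log but drops the interactive progress bars. A sketch of a CI step:

```shell
# Suppress progress bars entirely:
docker pull --quiet ubuntu
docker build --quiet -t something .

# Or keep the full build log, just without the TTY progress animation:
docker build --progress=plain -t something .
```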

Is there a way to tell which shell commands are available in a docker image?

I'm using the node docker images as a container for my build pipelines.
An issue I frequently run into is that a binary I expect to exist doesn't, and I have to wait for it to fail in the build pipeline. The zip command is one such example.
I can run the docker image on my local machine and ssh in to test commands.
Is there a way to summarise what commands are available for a given image?
You could look at the contents of /bin:
$ docker run --rm -it --entrypoint=ls node /bin
or /usr/local/bin:
$ docker run --rm -it --entrypoint=ls node /usr/local/bin
etc...
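Listing /bin and /usr/local/bin covers the common cases; as a sketch, you could also walk every directory on the image's PATH in one go (this assumes the image provides a POSIX sh, which the node image does):

```shell
# List every executable on the image's $PATH, deduplicated.
# The loop runs inside the container, so it sees the image's own PATH.
docker run --rm --entrypoint=sh node -c \
  'IFS=:; for d in $PATH; do ls "$d" 2>/dev/null; done | sort -u'
```

To check a single command such as zip, `docker run --rm --entrypoint=sh node -c 'command -v zip'` is a quicker probe: it prints the path if the command exists and exits non-zero if it doesn't.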

Docker cp requires two arguments

I am running a docker image from deepai/densecap on my Windows machine using Docker Toolbox. When I run the image using the docker CLI and pass the arguments to the cp command as shown in the picture below,
it says that "docker cp" requires exactly 2 arguments. The various commands I tried to copy my image from the local file system to the container are:
docker cp C:\Users\piyush\Desktop\img1.jpg in1
docker cp densecap:C:\Users\piyush\Desktop\image1.jpg in1
docker cp C:\Users\piyush\Desktop\img1.jpg densecap:/shared/in1
I have just started using docker. Any help will be highly appreciated. I am also posting the container log:
It would seem on some versions of Docker, docker cp does not support parameter expansion...
For example
WORKS Docker version 19.03.4-ce, build 9013bf583a
CTR_ID=$(docker ps -q -f name=containername)
docker cp patches $CTR_ID:/home/build
FAILS Docker version 19.03.4-ce, build 9013bf583a
BUILDHOME=/home/build
docker cp patches containeridliteral:$BUILDHOME
In your case, maybe the pwd is not expanding properly.
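As a sketch of why that error appears: the shell expands variables before docker ever sees them, so a value containing spaces splits into multiple arguments. Here count_args is a hypothetical stand-in for docker cp that just counts what it actually receives:

```shell
# count_args stands in for docker cp: it reports how many arguments
# it received after shell expansion.
count_args() { echo $#; }

SRC="C:/Users/piyush/My Pictures/img1.jpg"
count_args $SRC densecap:/shared/in1     # unquoted: the space splits it into 3 args
count_args "$SRC" densecap:/shared/in1   # quoted: the path stays intact, 2 args
```

Quoting the expansion ("$SRC", "$(pwd)") is usually the fix.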

How to view logs for a docker image?

In the docker world, one can easily see logs for a docker container (that is, a running image). But during image creation one usually issues multiple commands, for example npm install commands in node projects. It would be beneficial to see logs for those commands as well. I quickly searched the documentation but didn't find how one can obtain logs for a docker image. Is it possible?
Had the same problem, I solved it using
docker build --no-cache --progress=plain -t my-image .
Update: Since this question has been asked, it seems everyone is finding it after seeing the output changes from buildkit. Buildkit includes the following options (docker build --help to see them all):
--build-arg list Set build-time variables
--cache-from strings Images to consider as cache sources
-f, --file string Name of the Dockerfile (Default is 'PATH/Dockerfile')
--no-cache Do not use cache when building the image
-o, --output stringArray Output destination (format: type=local,dest=path)
--platform string Set platform if server is multi-platform capable
--progress string Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
--pull Always attempt to pull a newer version of the image
-q, --quiet Suppress the build output and print image ID on success
-t, --tag list Name and optionally a tag in the 'name:tag' format
--target string Set the target build stage to build.
The option many want with buildkit is --progress=plain:
docker build -t my-image --progress=plain .
If you really want to see the previous build output, you can disable buildkit with an environment variable, but I tend to recommend against this since there are a lot of features from buildkit you'd lose (skipping unused build steps, concurrent build steps, multi-platform images, and new syntaxes for the Dockerfile for features like RUN --mount...):
DOCKER_BUILDKIT=0 docker build -t my-image .
The OP is asking to include the logs of their build within the image itself. Generally I would recommend against this; you'd want those logs outside of the image.
That said, the easiest method for that is to use tee to send a copy of all your command output to a logfile. If you want it attached to the image, output your run commands to a logfile inside of the image with something like:
RUN my-install-cmd | tee /logs/my-install-cmd.log
Then you can run a quick one-off container to view the contents of those logs:
docker run --rm my-image cat /logs/my-install-cmd.log
If you don't need the logs attached to the image, you can log the output of every build with a single change to your build command (instead of lots of changes to the run commands) exactly as JHarris says:
docker build -t my-image . | tee my-image.build.log
With the classic docker build command, if you build with --rm=false, then you keep all the intermediate containers, and each one of those has a log you can review with
docker logs $container_id
And lastly, don't forget there's a history of the layers in the image. It doesn't show the output of each command, but it is useful for all of those commands that don't log any output, and for knowing which build each layer comes from, particularly when there's lots of caching being used.
docker history my-image
You can see the logs in powershell with this command
docker logs --details <containerId>
There are other options for logs here.
Use This: https://github.com/jcalles/docker-wtee
Read instructions and please give me feedback.
Or...
If you need to get logs from a running container, and the container has volumes exposed, run this:
docker run --rm -it --name testlogs --link <CONTAINERNAME/ID> --network CONTAINERNETWORK -p PORT:8080 --volumes-from CONTAINERNAME/ID javiercalles/wtee sh

Docker image versioning and lifecycle management

I am getting into Docker and am trying to better understand how it works out there in the "real world".
It occurs to me that, in practice:
You need a way to version Docker images
You need a way to tell the Docker engine (running on a VM) to stop/start/restart a particular container
You need a way to tell the Docker engine which version of an image to run
Does Docker ship with built-in commands for handling each of these? If not what tools/strategies are used for accomplishing them? Also, when I build a Docker image (via, say, docker build -t myapp .), what file type is produced and where is it located on the machine?
docker has all you need to build images and run containers. You can create your own image by writing a Dockerfile or by pulling it from the docker hub.
In the Dockerfile you specify another image as the basis for your image, and run commands to install things. Images can have tags; for example, the ubuntu image can have the latest or 12.04 tag, which can be specified with the ubuntu:latest notation.
Once you have built the image with docker build -t image-name . you can create containers from that image with docker run --name container-name image-name.
docker ps to see running containers
docker rm <container name/id> to remove containers
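Putting those pieces together, a minimal version-and-run workflow might look like the sketch below (the image name myapp and container name myapp-prod are made up). The docker save line also touches the "what file is produced" part of the question: an image is stored as layers under Docker's data directory (e.g. /var/lib/docker on Linux), not as a single file, but docker save can export it as a tar archive:

```shell
# Build with an explicit version tag, then point "latest" at the same image.
docker build -t myapp:1.0.0 .
docker tag myapp:1.0.0 myapp:latest

# Run a container from a specific version; manage its lifecycle by name.
docker run -d --name myapp-prod myapp:1.0.0
docker restart myapp-prod
docker stop myapp-prod

# Export the image as a single tar file if you need a file artifact.
docker save -o myapp-1.0.0.tar myapp:1.0.0
```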
Suppose we have a Dockerfile like below:
->Build from git without versioning:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments
in here:
fecomments is branch name and comments is the folder name.
->building from git with tag and version:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments -t lordash/comments:v1.0
->Now if you want to build from a directory: first go to the comments directory, then run the command sudo docker build .
->if you want to add a tag you can use the -t or --tag flag to do that:
sudo docker build -t lordash . or sudo docker build -t lordash/comments .
-> Now you can version your image with the help of tag:
sudo docker build -t lordash/comments:v1.0 .
->you can also apply multiple tag to an image:
sudo docker build -t lordash/comments:latest -t lordash/comments:v1.0 .

What is the best way to iterate while building a docker container?

I'm trying to build a few docker containers and I found the iteration process of editing the Dockerfile, and the scripts run within it, clunky. I'm looking for best practices and to find out how others go about it.
My initial process was:
docker build -t mycontainer mycontainer
docker run mycontainer
docker exec -i -t < container id > "/bin/bash" # get into container to debug
docker rm -v < container id >
docker rmi mycontainer
Repeat
This felt expensive for each iteration, especially if it was just a typo.
This alternate process required a little bit less iteration:
Install vim in dockerfile
docker run mycontainer
docker exec -i -t < container id > "/bin/bash" # get into container to edit scripts
docker cp to copy edited files out when done.
If I need to run any command, I carefully remember and update the Dockerfile outside the container.
Rebuild image without vim
This requires fewer iterations, but is not painless since everything's very manual and I have to remember which files changed and got updated.
I've been working with Docker in production since 0.7 and I've definitely felt your pain.
Dockerfile Development Workflow
Note: I always install vim in the container when I'm in active development. I just take it out of the Dockerfile when I release.
Setup tmux/gnu screen/iTerm/your favorite vertical split console utility.
On the right console I run:
$ vim Dockerfile
On the left console I run:
$ docker build -t username/imagename:latest . && docker run -it --name dev-1 username/imagename:latest
Now split the left console horizontally, so that the run STDOUT is above and a shell is below. Here you will run:
docker exec -it dev-1 bash
and edit internally, or do tests with:
docker exec -it dev-1 <my command>
Every time you are satisfied with your work on the Dockerfile, save it (:wq!) and then run the command above in the left console. Test the behavior. If you are not happy, run:
docker rm dev-1
and then edit again and repeat step #3.
Periodically, when I've built up too many images or containers I do the following:
Remove all containers: docker rm $(docker ps -qa)
Remove all images: docker rmi $(docker images -q)
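As a side note, on Docker 1.13 and newer (an assumption about your environment) the same cleanup can be done with a single command:

```shell
# Remove all stopped containers, unused networks, build cache and images.
# -a removes all unused images, not just dangling ones; -f skips the prompt.
docker system prune -a -f
```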
I assume the files you're editing in your alternate process are files that make up part of the application you're deploying, such as a Bash or Python script?
That being the case, you could mount them as a volume during your debugging process, rather than copying them into the image, so that when you edit them, the changes are immediately visible both in the container and on the host.
So for example, if your code is at /home/dragonx/codefiles, do
docker run -v /home/dragonx/codefiles:/opt/codefiles mycontainer
Then when you edit those files, either from the host or within the container, they are available in the container and you don't need to copy them out before killing it.
Here is the simplest way to "build a few docker containers":
docker run -it --name=my_cont1 --hostname=my_host1 ubuntu:15.10
docker run -it --name=my_cont2 --hostname=my_host2 ubuntu:15.10
...
...
docker run -it --name=my_contn --hostname=my_hostn ubuntu:15.10
That would create 'n' number of containers.
After the very first "docker run ..." command, you will be put in a Bash shell. You can do your things there, exit and run the next "docker run ..." command.
Exiting from the Bash shell does not remove the containers. They are all still there in the "Exited" status. You can list them with the docker ps -a command. And you can always get back on to them by:
docker start -ia my_cont1
