Docker - max depth exceeded

So I am using this example:
https://github.com/mcmoe/mssqldocker
in order to create a SQL Server image and load it with data. I have several SQL scripts which I run when I run the container.
However, I started getting this error when building the image:
Step 7/9 : ENTRYPOINT ./entrypoint.sh
---> Running in c8c654f6a630
max depth exceeded
I'm not sure how to fix this; I restarted docker and even updated it.
I read something about 125 layers? Can anyone explain the cause of this and a potential fix?
I found this command to run:
docker history microsoft/mssql-server-linux:latest | wc -l
312
My docker-compose yml:
version: "3"
services:
mssql:
build: .
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=Abcgfgh123!
volumes:
- db_volume:/var/lib/mssql/data
volumes:
db_volume:

The image parameter for a service in a docker-compose.yml definition has dual meanings depending on the existence of a build parameter.
If there is no build stanza, the image will just be pulled and run.
If you have a build stanza, image will be the name your built image is tagged as, and then run.
By naming the built image microsoft/mssql-server-linux, which is the same as the FROM microsoft/mssql-server-linux image, Docker was layering the build on top of itself each time.
The original build started from the "official" microsoft/mssql-server-linux, but each subsequent build would start from your local microsoft/mssql-server-linux image, which had been appended to, until eventually you hit the maximum number of layers for your storage driver.
Use your own namespace for all images you build:
version: "3"
services:
mssql:
build: .
image: 'user3437721/mssql-server-linux'

Delete all local Docker images related to your Dockerfile using the following, and try again. Note that this command removes all local images, not just the ones related to this Dockerfile:
$ docker rmi -f $(docker images -a -q)
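If you'd rather not wipe every image on the machine, a more targeted sketch (using the image name from the compose file above) is to remove just the self-layered image and any dangling leftovers:
docker rmi -f microsoft/mssql-server-linux
docker image prune -f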

I was already using a custom image name like in the accepted answer, and I still had this issue.
In order to get past this error, I found that I needed to remove all unused (not associated with any container) and dangling (an old image left behind after a new one is built) docker images with the following command:
docker image prune -a
See https://linuxhandbook.com/remove-docker-images/

If this error pops up for specific images, you might want to take a look at the number of layers in the image in question. Each change to a Docker image is added as a new layer, regardless of how much it increases the image size. I have come across scenarios where my Docker images had many layers and failed to push in Jenkins jobs. The workaround is to keep the same content in the Docker image but with fewer layers: take the content of the image and port it into a new image with a new name, which results in the same content in a new image with a smaller number of layers.
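One hedged way to do that flattening (my-container and my-image:flattened are placeholder names) is to pipe docker export into docker import, which collapses a container's filesystem into a single-layer image. Note this drops image metadata such as ENV, CMD, and EXPOSE, which you would have to re-add:
docker export my-container | docker import - my-image:flattened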

In my situation the error was due to \r characters. I am running a Windows host, and the Linux image needs to run a script file as its entrypoint, which is copied from Windows into the Docker image. During this process the script's line endings were changed from \n to \r\n, which resulted in this error. Undoing this made the error go away.
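If you hit the same thing, a minimal sketch of normalizing the line endings (assuming the script is entrypoint.sh), plus a .gitattributes rule so Git doesn't re-introduce CRLF on Windows checkouts:
dos2unix entrypoint.sh          # or: sed -i 's/\r$//' entrypoint.sh
echo '*.sh text eol=lf' >> .gitattributes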

Prefix the image: in docker-compose.yml with a unique custom name like "my-custom-prefix/..."
# docker-compose.yml
services:
  mssql:
    build: .
    image: my-custom-prefix/mssql-server-linux
    ...
Just make sure it is not the same as the FROM image at the top of your Dockerfile (the build file):
# Dockerfile
FROM microsoft/mssql-server-linux
...
The same goes for a .lando.yml file. That was the case for me at least.

Related

docker-compose wait on other service before build

There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first (basically ignoring depends_on, or interpreting depends_on as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
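For reference, a minimal sketch of what such a wait script might look like (hypothetical; it assumes the postgres service name from the compose file and that pg_isready from the postgres client tools is available in the image):
#!/bin/sh
# Poll until postgres accepts connections, then exit so bootstrap.sh can run
until pg_isready -h postgres -p 5432 -q; do
  echo "waiting for postgres..."
  sleep 1
done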
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. And while the up --build option will cause the image to be rebuilt, the layer cache will skip all of the steps and produce the same image as originally, so whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
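If a shared base image is what you need, one common workaround (just ordering the builds yourself, not a Compose feature) is to build the base explicitly before invoking Compose; the names here are hypothetical:
docker build -t shared-base:latest -f Dockerfile.base .
docker-compose build
docker-compose up -d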

Is there a way to automatically "Rebase" an image in Docker?

I have a docker-compose script that brings up a service
version: '2.0'
services:
  orig-db:
    image: web-url:{image_tag}
  custom-db:
    image: local_image:latest
where the image used in custom-db is the result of bringing up a container with orig-db, performing some basic bash commands, and doing a docker commit. I want the custom-db image to always be the original image plus these commands, even if the original image is updated. Is there a way to "rebase" off the original image?
You can think of a Dockerfile as a simple form of a "rebase".
# Content of subdir/Dockerfile
FROM orig_image:latest
RUN some.sh
RUN basic.sh
RUN bash_commands.sh
When you build an image based on this file, it will always run the bash commands on top of the base image. Inside the compose file you can use the build property to instruct docker-compose to build the image instead of using a pre-made image.
version: '2.0'
services:
  orig-db:
    image: web-url:{image_tag}
  custom-db:
    build: subdir
If the base image changes, you need to tell docker-compose to rebuild the custom-db image again, running the bash commands again on top of the updated original image.
docker-compose up -d --build custom-db
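To make sure the rebuild actually starts from the updated original image rather than a stale local copy, you can combine this with the --pull flag; a sketch based on the services above:
docker-compose build --pull custom-db
docker-compose up -d custom-db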

Docker commit is not saving my changes to image

I'm new to the docker world: I'm at a point where I can deploy docker containers and do some work.
Trying to get to the next level of saving my changes and moving my containers/images to another pc/server.
Currently, I'm using docker on windows 10, but I do have access to Ubuntu 16.04 server to test my work.
This is where I'm stuck: I have Wordpress and MariaDB images deployed on Docker.
My WP is running perfectly OK. I have installed a few themes and created a few pages with images.
At this point, I like to save my work and send it to my friend who will deploy my image and do further work on this same Wordpress.
What I have read online is: I should run docker commit command to save and create my docker image in .tar format and then send this image file (.tar) to my friend. He will run docker load -i on my file to load it as image into his docker and then create container from it which should give him all of my work on Wordpress.
Just to clarify, I'm committing both Wordpress and Mariadb containers.
I don't have any external volumes mounted so all the work is being saved in containers.
I do remember putting a check mark on drives C and D in docker settings, but I don't know if that has anything to do with volumes.
I don't get any errors in my commit or while moving the .tar files. But once my friend creates his containers from my committed images, he gets a clean Wordpress (like a new installation of Wordpress starting from the wp setup pages).
Another thing I noticed is that the image I create has the same file size as the original image I pulled. When I run docker images, I see my image is 420MB, the same as the original Wordpress image at 420MB. I think my image should be a little bigger, since I have installed themes and plugins and uploaded images to Wordpress. It should be at least 3 to 5 MB more than the original image. Please help. Thank you.
Running docker system df gives me this.
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          5       3        1.259GB   785.9MB (62%)
Containers      3       3        58.96kB   0B (0%)
Local Volumes   2       2        311.4MB   0B (0%)
Build Cache     0       0        0B        0B
Make sure, as shown here, to commit a running container (to avoid any data cleanup)
docker commit CONTAINER_ID yourImage
After the docker commit command, you can use docker save to save your image in a tar, and docker load to import it back, as shown here.
You should never run docker commit.
To answer your immediate question, containers that run databases generally store their data in volumes; they are set up so that the data is stored in an anonymous volume even if there was no docker run -v option given to explicitly store data in a named volume or host directory. That means that docker commit never persists the data in a database, and you need some other mechanism to copy the actual data around.
At a more practical level, your colleague can ask questions like "where did this 400 MB tarball come from, why should I trust it, and how can I recreate it if it gets damaged in transit?" There are also good questions like "the underlying database has a security fix I need, so how do I get the changes I made on top of a newer base image?" If you're diligent you can write down everything you do in a text file. If you then have a text file that says "I started from mysql:5.6, then I ran ..." that's very close to being a Dockerfile. The syntax is straightforward, and Docker has a good tutorial on building and running custom images.
When you need a custom image, you should always describe what goes into it using a Dockerfile, which can be checked into source control, and can rebuild an image using docker build.
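As a rough sketch of what that might look like for the database half (schema.sql is a hypothetical file; the mysql image does run scripts placed in /docker-entrypoint-initdb.d/ on first start):
FROM mysql:5.6
# Any setup you'd otherwise do by hand becomes a repeatable instruction
COPY schema.sql /docker-entrypoint-initdb.d/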
For your use case it doesn't sound like you actually need a custom image. I would probably suggest setting up a Docker Compose YAML file that describes your setup and actually stores the data in local directories. The database half of it might look like
version: '3'
services:
  db:
    image: 'mysql:8.0'
    volumes:
      - './mysql:/var/lib/mysql'
    ports:
      - '3306:3306'
The data will be stored on the host, in a mysql subdirectory. Now you can tar up this directory tree and send that tar file to your colleague, who can then untar it and recreate the same environment with its associated data.
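For example (a sketch, with a placeholder archive name, assuming the file layout above):
tar czf wordpress-site.tar.gz docker-compose.yml mysql/
# ...and on your colleague's machine:
tar xzf wordpress-site.tar.gz && docker-compose up -d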
Use docker build (changes to the images should be stored in the Dockerfile).
Now if you have multiple services, just use docker's brother, docker-compose. One extra step you have to do is create a docker-compose.yml (don't be afraid yet, my friend, it's nothing complicated). All you're doing in this file is listing out your images (along with defining where each image's Dockerfile is, which could be in a subfolder per image). You can also define some other properties there if you'd like.
Notice that certain directories are considered volume directories by docker, meaning that they are container specific and therefore never saved in the image. The /data directory is such an example. When docker commit my_container my_image:my_tag is executed, all of the container's filesystem is saved, except for /data. To work around it, you could do:
mkdir /data0
cp -r /data/* /data0
Then, outside the container:
docker commit my_container my_image:my_tag
Then you would perhaps want to copy the data on /data0 back to /data, in which case you could make a new image:
On the Dockerfile:
FROM my_image:my_tag
CMD cp -r /data0/* /data && my_other_CMD
Notice that trying to copy content to /data in a RUN command will not work, since a new container is created for every layer and, in each of them, the contents of /data are discarded. After the container has been instantiated, you could also do:
docker exec -d my_container /bin/bash -c "cp /data0/* /data"
You have to use the volumes to store your data.
Here you can find the documentation: https://docs.docker.com/storage/volumes/
For example, you can do something like this in your docker-compose.yml:
version: '3.1'
services:
  wordpress:
    image: wordpress:php7.2-apache
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: databasename
      WORDPRESS_DB_USER: username
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: namedatabase
    volumes:
      - name_volume:/var/www/html
volumes:
  name_volume:
or
    volumes:
      - ./yourpath:/var/www/html

Docker Always force to use a cached image

I'm using docker compose to build my application using docker.
Version of docker-compose is 2.2
I have all the containers running well at the moment where one of the container has nginx running.
I need to change some configuration on this container.
The way I need to do it (because of a special scenario) is to update the config inside the container.
Then I commit the container to build a new image.
docker commit <container> <image-name>
Now I have new image with tag latest.
What I want is to use this image the next time I run:
docker-compose down && docker-compose up --build -d
With the --build option, docker-compose will go through the steps in the Dockerfile and run them, and all my changes will be reverted.
Question:
Is there anyway that I can tell docker-compose to use the newly created image as cache and ignore Dockerfile for this one container?
Solution Tried:
I have tried a docker-compose override with the cache_from option, and it's not working.
docker-compose.override.yml
container:
  build:
    cache_from:
      - new-image:latest
Thanks in advance.
I don't understand why you would want to build an image from docker-compose even though you have already built it with docker commit.
Now I have new image with tag latest.
What I want is to use this image when I run, docker-compose down && docker-compose up
If you have already built image, skip the build phase in docker-compose. Just specify which image should be used like so:
container:
  image: new-image:latest
  container_name: "Foo bar"
  .....(other options)
Image
Specify the image to start the container from. Can either be a repository/tag or a partial image ID.
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
If you have any other images that you build from inside docker-compose run:
docker-compose build && docker-compose up
If not simple docker-compose up will suffice.

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose in a dockerized project path there is a docker-compose.yml file:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation in Python, as said in the comments) in order to save time and avoid 500-character-long lines (and also to start multiple containers at the same time). It uses a file called docker-compose.yml in order to retrieve parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
To conclude with an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web target will need an image to be built. The docker build command would look like:
docker build -t web_myproject -f Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a better way to use docker than just a docker command.
If the question here is whether the docker-compose build command will build a zip kind of thing containing multiple images, which would otherwise have been built separately with the usual Dockerfile, then the thinking is wrong.
docker-compose build will build individual images, by going through each individual service entry in docker-compose.yml.
With the docker images command, we can see all the individual images being saved as well.
The real magic is docker-compose up.
This one will basically create a network of interconnected containers, that can talk to each other with name of container similar to a hostname.
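For example, with the compose file above, the web container can reach the database by its service name (a sketch; it assumes getent is available in the web image):
docker-compose exec web getent hosts database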
Adding to the first answer...
You can give the image name and container name under the service definition.
e.g. for the service called 'web' in the below docker-compose example, you can give the image name and container name explicitly, so that docker does not have to use the defaults.
Otherwise the image name that docker will use will be the concatenation of the folder (directory) and the service name, e.g. myprojectdir_web.
So it is better to explicitly put the desired image name that will be generated when the docker build command is executed.
e.g.
image: mywebserviceImage
container_name: my-webServiceImage-Container
example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceImage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers.
With docker build, the option is --cache-from <image>.
With docker-compose, there is a cache_from tag in the build section.
Unfortunately, up until now, at this level, images made by one are not compatible with the other as a cache of layers (the IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18), introduces an experimental feature COMPOSE_DOCKER_CLI_BUILD so that docker-compose uses native docker builder (therefore, images made by docker build can be used as a cache of layers for docker-compose build)
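To try it, you enable the feature through an environment variable before building (enabling BuildKit alongside it is optional):
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_BUILDKIT=1
docker-compose build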
