How to serve multiple versions of a model via the standard TensorFlow Serving docker image? - docker

I'm new to TensorFlow Serving.
I just tried TensorFlow Serving via Docker with this tutorial and succeeded.
However, when I tried it with multiple versions, it serves only the latest version.
Is it possible to serve several versions at once, or do I need to try something different?

This requires a ModelServerConfig, which is supported by the tensorflow/serving docker image as of release 1.11.0 (available since 5 Oct 2018). Until then, you can create your own docker image, or use tensorflow/serving:nightly or tensorflow/serving:1.11.0-rc0 as stated here.
See that thread for how to serve multiple models.
If, on the other hand, you want to enable multiple versions of a single model, you can use the following config file, called "models.config":
model_config_list: {
  config: {
    name: "my_model",
    base_path: "/models/my_model",
    model_platform: "tensorflow",
    model_version_policy: {
      all: {}
    }
  }
}
here "model_version_policy: {all:{ } }" make every versions of the model available.
Then run the docker image:
docker run -p 8500:8500 -p 8501:8501 \
--mount type=bind,source=/path/to/my_model/,target=/models/my_model \
--mount type=bind,source=/path/to/my/models.config,target=/models/models.config \
-t tensorflow/serving:nightly --model_config_file=/models/models.config
Edit:
Now that version 1.11.0 is available, you can start by pulling the new image:
docker pull tensorflow/serving
Then run the docker image as above, using tensorflow/serving instead of tensorflow/serving:nightly.
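For completeness, that is (a direct tag substitution, nothing else changed):
docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model/,target=/models/my_model \
  --mount type=bind,source=/path/to/my/models.config,target=/models/models.config \
  -t tensorflow/serving --model_config_file=/models/models.config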

I found a way to achieve this by building my own docker image which uses the --model_config_file option instead of --model_name and --model_base_path.
So I'm running TensorFlow Serving with the command below.
docker run -p 8501:8501 -v {local_path_of_models.conf}:/models -t {docker_image_name}
Of course, I wrote 'models.conf' to cover multiple models as well.
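A models.conf covering multiple models might look like this (a sketch; the model names and paths are illustrative, not from the original answer):
model_config_list: {
  config: {
    name: "model_a",
    base_path: "/models/model_a",
    model_platform: "tensorflow"
  },
  config: {
    name: "model_b",
    base_path: "/models/model_b",
    model_platform: "tensorflow"
  }
}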
Edit:
Below is what I modified from the original Dockerfile.
original version:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
modified version:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_config_file=${MODEL_BASE_PATH}/models.conf \

Related

Avoiding duplicated arguments when running a Docker container

I have a tensorflow training script which I want to run using a Docker container (based on the official TF GPU image). Although everything works just fine, running the container with the script is horribly verbose and ugly. The main problem is that my training script allows the user to specify various directories used during training, for input data, logging, generating output, etc. I don't want to have to change what my users are used to, so the container needs to be informed of the location of these user-defined directories, so it can mount them. So I end up with something like this:
docker run \
-it --rm --gpus all -d \
--mount type=bind,source=/home/guest/datasets/my-dataset,target=/datasets/my-dataset \
--mount type=bind,source=/home/guest/my-scripts/config.json,target=/config.json \
-v /home/guest/my-scripts/logdir:/logdir \
-v /home/guest/my-scripts/generated:/generated \
train-image \
python train.py \
--data_dir /datasets/my-dataset \
--gpu 0 \
--logdir ./logdir \
--output ./generated \
--config_file ./config.json \
--num_epochs 250 \
--batch_size 128 \
--checkpoint_every 5 \
--generate True \
--resume False
In the above I am mounting a dataset from the host into the container, and also mounting a single config file config.json (which configures the TF model). I specify a logging directory logdir and an output directory generated as volumes. Each of these resources is also passed as a parameter to the train.py script.
This is all very ugly, but I can't see another way of doing it. Of course I could put all this in a shell script, and provide command line arguments which set these duplicated values from the outside. But this doesn't seem a nice solution, because if I want to do anything else with the container, for example check the logs, I would have to use the raw docker command.
I suspect this question will likely be tagged as opinion-based, but I've not found a good solution for this that I can recommend to my users.
As user Ron van der Heijden points out, one solution is to use docker-compose in combination with environment variables defined in an .env file. Nice answer.
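A minimal sketch of that approach, reusing the paths from the question (the service name and variable names are my own assumptions, not from the original answer):
# .env
DATA_DIR=/home/guest/datasets/my-dataset
LOG_DIR=/home/guest/my-scripts/logdir
# docker-compose.yml
version: "3"
services:
  train:
    image: train-image
    volumes:
      - ${DATA_DIR}:/datasets/my-dataset   # host paths come from .env
      - ${LOG_DIR}:/logdir
    command: python train.py --data_dir /datasets/my-dataset --logdir /logdir
Users then run docker-compose up (or docker-compose run train) and only edit the .env file when their directories differ.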

How to use -showvariable with gitversion Docker

I can successfully get the full json string with:
docker run --rm -v `pwd`:`pwd` gittools/gitversion-dotnetcore:linux-4.0.0 `pwd` -output json
which outputs to something like:
{
"Major":0,
"Minor":1,
"Patch":0,
"SemVer":"0.1.0-dev-2.1",
.
.
.
"CommitsSinceVersionSource":20,
"CommitsSinceVersionSourcePadded":"0020",
"CommitDate":"2020-05-28"
}
Since I am only interested in the SemVer variable, I try to use -showvariable FullSemVer with:
docker run --rm -v `pwd`:`pwd` gittools/gitversion-dotnetcore:linux-4.0.0 `pwd` -output json -showvariable FullSemVer
But it fails with a quite long and nasty error log.
INFO [05/28/20 18:23:12:10] End: Loading version variables from disk cache (Took: 76.31ms)
ERROR [05/28/20 18:23:12:13] An unexpected error occurred:
System.NotImplementedException: The method or operation is not implemented.
I wonder if there is a way to use the -showvariable flag with the gitversion Docker container?
I think the problem is the path argument passed to GitVersion. pwd will give you the working directory on your host, not within the container. GitVersion is unfortunately not aware of the fact that it's executing within a container, so it needs to be provided with the volume directory /repo as the path to calculate a version number for. This is something we should consider changing in version 6.
I also can't remember when -showvariable was implemented, so to be on the safe side you should try with a newer version of our Docker containers. I can also recommend using the alpine container, as it's the smallest one we offer (only 83.9 MB). This works:
docker run \
--rm \
--volume "$(pwd):/repo" \
gittools/gitversion:5.3.4-linux-alpine.3.10-x64-netcoreapp3.1 \
/repo \
-output json \
-showvariable FullSemVer
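If you only need the bare value for scripting (my own usage sketch, not part of the original answer), the output of -showvariable can be captured directly into a shell variable:
# Capture the version; -showvariable prints just the single value
FULL_SEMVER=$(docker run --rm --volume "$(pwd):/repo" \
  gittools/gitversion:5.3.4-linux-alpine.3.10-x64-netcoreapp3.1 \
  /repo -showvariable FullSemVer)
echo "Building version $FULL_SEMVER"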

Docker error: invalid reference format: repository name must be lowercase

Ran into this Docker error with one of my projects:
invalid reference format: repository name must be lowercase
What are the various causes for this generic message?
I already figured it out after some effort, so I'm going to answer my own question in order to document it here as the solution doesn't come up right away when doing a web search and also because this error message doesn't describe the direct problem Docker encounters.
A "reference" in docker is a pointer to an image. It may be an image name, an image ID, include a registry server in the name, use a sha256 tag to pin the image, and anything else that can be used to point to the image you want to run.
The invalid reference format error message means docker cannot convert the string you've provided to an image. This may be an invalid name, or it may be from a parsing error earlier in the docker run command line if that's how you run the image.
If the name itself is invalid, the repository name must be lowercase means you have uppercase characters in your registry or repository name, e.g. YourImageName:latest should be yourimagename:latest.
With the docker run command line, this is often the result of not quoting parameters with spaces, missing the value for an argument, or mistaking the order of the command line. The command line is ordered as:
docker ${args_to_docker} run ${args_to_run} image_ref ${cmd_to_exec}
The most common error in passing args to run is a volume mapping where an expanded path name includes a space, and the path is neither quoted nor the space escaped. E.g.
docker run -v $(pwd):/data image_ref
Where if you're in the directory /home/user/Some Project Dir, that would define an anonymous volume /home/user/Some in your container, and try to run Project:latest with the command Dir:/data image_ref. And the fix is to quote the argument:
docker run -v "$(pwd):/data" image_ref
Other common places to miss quoting include environment variables:
docker run -e SOME_VAR=Value With Spaces image_ref
which docker would interpret as trying to run the image With:latest and the command Spaces image_ref. Again, the fix is to quote the environment parameter:
docker run -e "SOME_VAR=Value With Spaces" image_ref
With a compose file, if you expand a variable in the image name, that variable may not be expanding correctly. So if you have:
version: '2'
services:
  app:
    image: ${your_image_name}
Then double-check that your_image_name is defined as an all-lowercase string.
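A quick way to verify what the variable actually expands to (assuming docker-compose is installed) is to print the fully resolved file:
# Renders the compose file with all variables interpolated
docker-compose config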
In my case it was a missing -e before the parameters for the mysql docker image; the working command is:
docker run --name mysql-standalone -e MYSQL_ROOT_PASSWORD=hello -e MYSQL_DATABASE=hello -e MYSQL_USER=hello -e MYSQL_PASSWORD=hello -d mysql:5.6
Also check whether there is any missing whitespace.
Let me emphasise that Docker doesn't even allow mixed-case characters.
Good:
docker build -t myfirstechoimage:0.1 .
Bad:
docker build -t myFirstEchoImage:0.1 .
I had a space in the current working directory and was using $(pwd) to map volumes. Docker doesn't like spaces in directory names.
In my case, the image name defined in docker-compose.yml contained uppercase letters. The fact that the error message mentioned repository instead of image did not help describe the problem and it took a while to figure out.
In my case the problem was the parameter arrangement. Initially I had the --name parameter after the environment parameters, then the volume and attach_dbs parameters, and the image at the end of the command, like below.
docker run -p 1433:1433 -e sa_password=myComplexPwd -e ACCEPT_EULA=Y --name sql1 -v c:/temp/:c:/temp/ attach_dbs="[{'dbName':'TestDb','dbFiles':['c:\\temp\\TestDb.mdf','c:\\temp\\TestDb_log.ldf']}]" -d microsoft/mssql-server-windows-express
After rearranging the parameters as below, everything worked fine (basically putting all options, including the missing -e for attach_dbs, before the image name).
docker run -d -p 1433:1433 -e sa_password=myComplexPwd -e ACCEPT_EULA=Y -e attach_dbs="[{'dbName':'TestDb','dbFiles':['C:\\temp\\TestDb.mdf','C:\\temp\\TestDb_log.ldf']}]" -v C:/temp/:C:/temp/ --name sql1 microsoft/mssql-server-windows-express
On macOS, when you are working on an iCloud drive, your $PWD will contain the directory "Mobile Documents". Docker does not seem to like the space!
As a workaround, I copied my project to a local drive where there is no space in the path to my project folder.
I do not see a way to get around changing the default iCloud path, which is ~/Library/Mobile Documents/com~apple~CloudDocs
The space in the "Mobile Documents" path seems to be what docker run does not like.
If you encounter this problem with go-swagger (Windows):
@echo off
echo.
docker run --rm -it --env GOPATH=/go -v %CD%:/go/src -w /go/src quay.io/goswagger/swagger %*
Use this instead (add quotes):
@echo off
echo.
docker run --rm -it --env GOPATH=/go -v "%CD%:/go/src" -w /go/src quay.io/goswagger/swagger %*
A reference in Docker is what points to an image. This could be in a remote registry or the local registry. Let me describe the error message first and then show the solutions for this.
invalid reference format
This means that the reference we have used is not in a valid format; that is, the reference (pointer) we used to identify an image is invalid. Generally, it is followed by a description like the one below, which makes the error much clearer.
invalid reference format: repository name must be lowercase
This means the reference we are using should not have uppercase letters. Try running docker run Ubuntu (wrong) vs docker run ubuntu (correct). Docker does not allow any uppercase characters in an image reference. Simple troubleshooting steps:
1) The Dockerfile contains capital letters in an image name.
FROM Ubuntu (wrong)
FROM ubuntu (correct)
2) The image name defined in docker-compose.yml has uppercase letters.
3) If you are using Jenkins or GoCD to deploy your docker container, please check the run command to see whether the image name includes a capital letter.
Please read this document written specifically for this error.
Sometimes you miss the -e flag when specifying multiple env vars inline,
e.g.
bad: docker run --name somecontainername -e ENV_VAR1=somevalue1 ENV_VAR2=somevalue2 -d -v "mypath:containerpath" <imagename e.g. postgres>
good: docker run --name somecontainername -e ENV_VAR1=somevalue1 -e ENV_VAR2=somevalue2 -d -v "mypath:containerpath" <imagename e.g. postgres>
In my case I had a naked --env switch, i.e. one without an actual variable name or value, e.g.:
docker run \
--env \ <----- This was the offending item
--rm \
--volume "/home/shared:/shared" "$(docker build . -q)"
Replacing image: ${DOCKER_REGISTRY}notificationsapi
with image: notificationsapi
or image: ${docker_registry}notificationsapi
in docker-compose.yml solved the issue.
file with error
version: '3.4'
services:
  notifications.api:
    image: ${DOCKER_REGISTRY}notificationsapi
    build:
      context: .
      dockerfile: ../Notifications.Api/Dockerfile
file without error
version: '3.4'
services:
  notifications.api:
    image: ${docker_registry}notificationsapi
    build:
      context: .
      dockerfile: ../Notifications.Api/Dockerfile
So I think the error was due to the non-lowercase letters it contained.
For me the issue was a space in a volume mapping that was not escaped. The Jenkins job which ran the docker run command had a space in its path, and as a result the docker engine was not able to understand the docker run command.
Indeed, the docker registry as of today (sha 2e2f252f3c88679f1207d87d57c07af6819a1a17e22573bcef32804122d2f305) does not handle paths containing uppercase characters. This is obviously a poor design choice, probably due to wanting to maintain compatibility with certain operating systems that do not distinguish case at the file level (i.e., Windows).
If one authenticates for a scope and tries to fetch a non-existing repository with an all-lowercase name, the output is:
(auth step not shown)
curl -s -H "Authorization: Bearer $TOKEN" -X GET https://$LOCALREGISTRY/v2/test/someproject/tags/list
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"repository","Class":"","Name":"test/someproject","Action":"pull"}]}]}
However, if one tries to do this with an uppercase component, only a 404 is returned:
(authorization step done but not shown here)
$ curl -s -H "Authorization: Bearer $TOKEN" -X GET https://docker.uibk.ac.at:443/v2/test/Someproject/tags/list
404 page not found
I solved this by changing some uppercase words in my Dockerfile, like:
FROM Base as Build
RUN npm run Build:prod
to
FROM base as build
RUN npm run build:prod
Another place:
FROM Base as Release
COPY --from=Build /usr/path/here/dist/ ./dist
to
FROM base as Release
COPY --from=build /usr/path/here/dist/ ./dist
I encountered the same issue while using Docker with MLflow.
In my case, the directory name containing my Dockerfile was "My Project", which I changed to myproject or my_project, and it worked for me.
Also, follow the same naming format for all the parent directories under which the Dockerfile resides.
Not only for Docker; it's also good practice (especially on Unix-based systems) to avoid the following when naming a directory:
white spaces
camel-case
upper-case
I had the same error, and for some reason it appears to have been caused by uppercase letters in the Jenkins job that ran the docker run command.
This was happening because of spaces in the current working directory coming from $(pwd) in the volume mappings. So I used docker-compose instead.
The docker-compose.yml file:
version: '3'
services:
  react-app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
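With that file in place, the container can be built and started without ever expanding $(pwd) on the command line:
docker-compose up --build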
"docker build -f Dockerfile -t SpringBoot-Docker ."
As in the above commend, we are creating an image file for docker container. commend says create image use file(-f refer to docker file) and -t for the target of the image file we are going to push to docker. the "." represents the current directory
solution for the above problem: provide target image name in lowercase
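For example, a corrected form of the command above would simply lowercase the tag:
docker build -f Dockerfile -t springboot-docker .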
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
example:
FROM python:3.7-alpine
The 'python' should be in lowercase
In my case I was trying to run postgres through docker. Initially I was running it as:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test_password POSTGRES_USER=test_user POSTGRES_DB=test_db --rm -v ~/docker/volumes/postgres:/var/lib/postgresql/data --name pg-docker postgres
I was missing -e before each environment variable. Changing the above command to the one below worked:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test_password -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --rm -v ~/docker/volumes/postgres:/var/lib/postgresql/data --name pg-docker postgres
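A related convenience (standard docker CLI behavior, not part of the original answer) is --env-file, which avoids repeating -e for every variable and makes this class of mistake harder:
# env.list
POSTGRES_PASSWORD=test_password
POSTGRES_USER=test_user
POSTGRES_DB=test_db
docker run -d -p 5432:5432 --env-file env.list --rm \
  -v ~/docker/volumes/postgres:/var/lib/postgresql/data --name pg-docker postgres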
I wish the error message would output the problem string. I was getting this due to a weird copy-and-paste problem with a "docker run" command. A space-like character had been inserted before the repo and image name.
Most of the answers above did not work for my case, so I will document this in case somebody finds it helpful. In my case the first line of the Dockerfile was FROM NODE:10; the word node should not be uppercase, i.e. it must be FROM node:10. I made that change and it worked.
In my case the Dockerfile contained the image name in mixed case instead of lowercase.
The earlier line in my Dockerfile was
FROM CentOs
and when I changed it to FROM centos, it worked smoothly.
You need to enter the name of the Docker image, not your file name :P
$ docker run {your image}
Another possible cause of this error is that in your Dockerfile you have mixed capitalization in the syntax declaration itself. For example:
# syntax=docker/Dockerfile:1
instead of
# syntax=docker/dockerfile:1
If you come here after encountering this error in your GitHub Actions workflows…
Make sure to use docker/metadata-action action to handle repository naming for you. Just call it before docker/build-push-action:
# Add this
- id: docker-metadata
  uses: docker/metadata-action@v4
  with:
    images: ghcr.io/${{ github.repository }}

# Use the extracted metadata
- uses: docker/build-push-action@v3
  with:
    tags: ${{ steps.docker-metadata.outputs.tags }}
    labels: ${{ steps.docker-metadata.outputs.labels }}
    … other properties …

How to run swagger-ui with local code changes AND my own swagger.json?

The Readme on https://github.com/swagger-api/swagger-ui specifies that Swagger-UI can be run with your own file like this
docker run -p 80:8080 -e SWAGGER_JSON=/foo/swagger.json -v /bar:/foo swaggerapi/swagger-ui
which works if I translate it to
docker build . -t swagger-ui-local && \
docker run -p 80:8080 -e SWAGGER_JSON=/foo/my-file.json -v $PWD:/foo swagger-ui-local
This, however, ignores my local changes.
I can run my local changes with
npm run dev
but I can't figure out how to get this dev server to run anything else than the Petstore example.
Can anyone help me combine the two, so I can run swagger-ui with local code changes AND my own swagger.json?
Make sure you are volume mounting the correct local directory.
Locally, I had my swagger config in $PWD/src/app/swagger/swagger.yaml. Running the following worked fine:
docker run -p 80:8080 -e SWAGGER_JSON=/tmp/swagger.yaml -v `pwd`/src/app/swagger:/tmp swaggerapi/swagger-ui
Simply refreshing the Swagger-UI page or clicking the "Explore" button in the header triggered a refresh of the data from my YAML file.
You can also specify BASE_URL; excerpt from the swagger installation docs:
docker run -p 80:8080 -e BASE_URL=/swagger -e SWAGGER_JSON=/foo/swagger.json -v /bar:/foo swaggerapi/swagger-ui
I found this topic because I wanted to see a visual representation of my local swagger file, but could not seem to get swagger-ui (running in docker) to display anything other than the petstore.
Ultimately, my issue was with understanding the -e SWAGGER_JSON and -v flags, so I wanted to explain them here.
-v <path1>:<path2>
This option says "Mount the path <path1> from my local file system within the swagger-ui docker container on path <path2>"
-e SWAGGER_JSON=<filepath>
This option says "By default, show the swagger for the file at <filepath> using the docker container's file system." The important part here, is that this filepath should take into account how you set <path2> above
Putting it all together, I ended up with the following:
docker run -p 8085:8080 -e SWAGGER_JSON=/foo/swagger.json -v `pwd`:/foo swaggerapi/swagger-ui
This says, in English: "Run my swagger-ui instance on port 8085. Mount my current working directory as '/foo' in the docker container. By default, show the swagger file at '/foo/swagger.json'."
The important thing to note is that I have a file called swagger.json in my current working directory. This command mounts my current working directory as /foo in the docker container. Then, swagger UI can pick up my swagger.json as /foo/swagger.json.
Here's how I ended up solving this; it also allows you to have multiple YML files:
docker run -p 80:8080 \
-e URLS_PRIMARY_NAME=FIRST \
-e URLS="[ \
{ url: 'docs/first.yml', name: 'FIRST' } \
, { url: 'docs/second.yml', name: 'SECOND' } \
]" \
-v `pwd`:/usr/share/nginx/html/docs/ \
swaggerapi/swagger-ui
I figured it out for npm run dev:
Place my-file.json in the dev-helpers folder. Then it's available from the search bar on http://localhost:3200/.
To load it automatically when opening the server, alter dev-helpers/index.html by changing
url: "http://petstore.swagger.io/v2/swagger.json"
to
url: "my-file.json"
Just in case you are running a Maven project with the Play Framework, the following steps solved my issue:
1.) Alter the conf/routes file. Add the below line :
GET /swagger.json controllers.Assets.at(path="/public/swagger-ui",file="swagger.json")
2.) Add the swagger.json file to your Swagger-UI folder
So when you run the mvn project on a port, for example 7777, start the Play server using mvn play2:run, and then localhost:7777/docs will automatically pull the JSON file that was added locally.
Docker compose solution:
Create a .env file and add the following:
URLS_PRIMARY_NAME=FIRST
URLS=[ { url: 'docs/swagger.yaml', name: 'FIRST' } ]
And create a docker-compose file with the contents below:
version: "3.3"
services:
swagger-ui:
image: swaggerapi/swagger-ui
container_name: "swagger-ui"
ports:
- "80:8080"
volumes:
- /local/tmp:/usr/share/nginx/html/docs/
environment:
- URLS_PRIMARY_NAME=${URLS_PRIMARY_NAME}
- URLS=${URLS}
The swagger.yaml is at /local/tmp.
For people facing this issue on a Mac: it's a permission problem. By default, since Catalina, Docker doesn't have permission to let its images read local files on your system. Once the permission was given, it worked for me and picked up my local swagger JSON file.
To grant the privileges, go to System Preferences > Security & Privacy > Files and Folders, and add Docker for Mac and your shared directory.
Another solution, if you want to provide multiple URLs served from a specific folder (not the default /usr/share/nginx/html/docs/):
docker run -p 80:8080 \
-e SWAGGER_JSON=/docs/api.yaml \
-e URLS="[ \
{ url: '/api1.yaml', name: 'API 1' }, \
{ url: '/api2.yaml', name: 'API 2' } \
]" \
-v `pwd`/docs:/docs \
swaggerapi/swagger-ui
Or for docker compose:
version: '3.8'
services:
  swagger-ui:
    image: swaggerapi/swagger-ui
    volumes:
      - ./docs:/docs
    environment:
      SWAGGER_JSON: /docs/api.yaml
      URLS: '[{ url: "/api1.yaml", name: "API 1" }, { url: "/api2.yaml", name: "API 2" }]'
Please note: SWAGGER_JSON requires an absolute path, while the URLs in URLS are relative to the specified volume.

How can I let the gitlab-ci-runner DinD image cache intermediate images?

I have a Dockerfile that starts with installing the texlive-full package, which is huge and takes a long time. If I docker build it locally, the intermediate image created after installation is cached, and subsequent builds are fast.
However, if I push to my own GitLab install and the GitLab-CI build runner starts, this always seems to start from scratch, redownloading the FROM image, and doing the apt-get install again. This seems like a huge waste to me, so I'm trying to figure out how to get the GitLab DinD image to cache the intermediate images between builds, without luck so far.
I have tried using the --cache-dir and --docker-cache-dir for the gitlab-runner register command, to no avail.
Is this even something the gitlab-runner DinD image is supposed to be able to do?
My .gitlab-ci.yml:
build_job:
  script:
    - docker build --tag=example/foo .
My Dockerfile:
FROM php:5.6-fpm
MAINTAINER Roel Harbers <roel.harbers@example.com>
RUN apt-get update && apt-get install -qq -y --fix-missing --no-install-recommends texlive-full
RUN echo Do other stuff that has to be done every build.
I use GitLab CE 8.4.0 and gitlab/gitlab-runner:latest as runner, started as
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/local/gitlab-ci-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest \
; \
The runner is registered using:
docker exec -it gitlab-runner gitlab-runner register \
--name foo.example.com \
--url https://gitlab.example.com/ci \
--cache-dir /cache/build/ \
--executor docker \
--docker-image gitlab/dind:latest \
--docker-privileged \
--docker-disable-cache false \
--docker-cache-dir /cache/docker/ \
; \
This creates the following config.toml:
concurrent = 1
[[runners]]
  name = "foo.example.com"
  url = "https://gitlab.example.com/ci"
  token = "foobarsldkflkdsjfkldsj"
  tls-ca-file = ""
  executor = "docker"
  cache_dir = "/cache/build/"
  [runners.docker]
    image = "gitlab/dind:latest"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    cache_dir = "/cache/docker/"
(I have experimented with different values for cache_dir, docker_cache_dir and disable_cache, all with the same result: no caching whatsoever)
I suppose there's no simple answer to your question. Before adding some details, I strongly suggest reading this blog article from the maintainer of DinD, which was originally named "do not use Docker in Docker for CI".
What you might try is declaring /var/lib/docker as a volume for your GitLab runner. But be warned: depending on your file-system drivers, you may end up running AUFS in the container on top of an AUFS filesystem on your host, which is very likely to cause problems.
What I'd suggest instead is creating a separate Docker VM, only for the runner(s), and bind-mounting docker.sock from the VM into your runner container.
We are using this setup with GitLab with great success (>27.000 builds in about 12 months).
You can take a look at our runner with docker-compose support which is actually based on the shell-executor of GitLab's runner.
Currently you cannot cache intermediate layers in GitLab Docker-in-Docker, although there are plans to add that (mentioned in the link below). What you can do today to speed up your DinD build is to use the overlay filesystem. To do this you need to be running a Linux kernel >= 3.18 and make sure you load the overlay kernel module. Then you set this variable in your gitlab-ci.yml:
variables:
  DOCKER_DRIVER: overlay
For more information, see this issue and in particular this comment on "The state of optimising Docker Builds!" (the "Using docker executor with dind" section):
https://gitlab.com/gitlab-org/gitlab-ce/issues/17861#note_12991518
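A common workaround in the meantime (my suggestion, not from the linked issue) is to seed the layer cache from the previously pushed image with --cache-from, assuming the job can pull example/foo from a registry:
build_job:
  script:
    - docker pull example/foo:latest || true
    - docker build --cache-from example/foo:latest --tag example/foo:latest .
    - docker push example/foo:latest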
For build dependencies that do not change so often, you can do a kind of manual caching with the GitLab image registry.
In the CI script you do not explicitly call docker build, but rather wrap it in a shell script:
# cat build_dependencies.sh
registry=registry.example.com
project=group/project
imagebase=$registry/$project/linux
docker pull $imagebase/devbase:1.0
if [ $? -ne 0 ]; then
  docker build -f devbase.dockerfile -t $imagebase/devbase:1.0 .
  docker push $imagebase/devbase:1.0
fi
...
and call that script in your CI
...
script:
  - ./build_dependencies.sh
The downside to this is that when your devbase.dockerfile is updated, this goes unnoticed by CI, so you need to force a build and push of a new image. So for dynamically changing images this does not work well, but for your use case it seems like a possible way to go.
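One way to close that gap (a sketch of my own, building on the script above) is to derive the tag from a hash of devbase.dockerfile, so that any edit to it produces a cache miss and triggers a rebuild:
# Tag derived from the Dockerfile contents; an edit changes the tag
tag=$(sha256sum devbase.dockerfile | cut -c1-12)
if ! docker pull $imagebase/devbase:$tag; then
  docker build -f devbase.dockerfile -t $imagebase/devbase:$tag .
  docker push $imagebase/devbase:$tag
fi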
