Shared volume during build? - docker

I have a docker-compose environment setup like so:
Oracle
Filesystem
App
...
etc...
The filesystem container downloads the latest code from our repo and exposes its volume for other containers to mount. This works great except that containers that need to use the code to do builds can't access it since the volume isn't mounted until the containers are run.
I'd like to avoid checkout/downloading the code since the codebase is over 3 gig right now... Hence trying to do something spiffier.
Is there a better way to do this?

As you mentioned, Docker volumes won't work here, because volumes are only mounted when a container starts, not at build time.
The best solution for your situation is to use Docker multi-stage builds. The idea is to have one image that contains the code base, so that other images can copy the code directly from it at build time.
You basically have an image, that is responsible for pulling the code:
FROM alpine/git
RUN git clone ...
You then build this image, either separately or as the first image in a compose file.
Other images can then use this image as such:
FROM code-image AS code

FROM <your-build-base-image>
COPY --from=code /git/<code-repository> /code
This will make the code available to all the images, and it will only be pulled once from the remote repo.
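For example, you could build the code image once and then build the images that copy from it (the directory and image names here are illustrative):
# build the image that holds the code once
docker build -t code-image ./code
# then build any image whose Dockerfile copies from it (as shown above)
docker build -t app-image ./app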

Related

Dockerfile FROM command - Does it always download from Docker Hub?

I just started working with Docker this week and came across a 'Dockerfile'. I was reading up on what this file does, and the official documentation basically mentions that the FROM keyword is needed to specify a "base image", and that these base images are pulled (downloaded) from Docker Hub.
Silly question - are base images always pulled from Docker Hub?
If so, and if I understand correctly, I am assuming that running the Dockerfile to create an image is not done very often (only when an image needs to be created), and once the image is created, that image is what's run all the time?
So the Dockerfile can then be migrated to whichever environment and things can be set up all over again quickly?
Pardon the silly question, I am just trying to understand the overall flow and how the Dockerfile fits into things.
If the local Docker daemon (on your host) already has a copy of the container image specified by FROM in a Dockerfile (i.e. it has been docker pull'd before), then it's cached and won't be re-pulled.
Container images include a tag (be wary of ever using latest). The image name, e.g. foo, combined with the tag (which defaults to latest if not specified) is the full name that's checked: if you have foo:v0.0.1 locally and write FROM foo:v0.0.1, the local copy is used, but FROM foo:v0.0.2 will pull foo:v0.0.2.
There's also an implicit registry prefix, docker.io (Docker Hub): foo:v0.0.1 is really docker.io/foo:v0.0.1 (or docker.io/library/foo:v0.0.1 for official images), and the prefix identifies which registry is being used.
You could repeatedly docker build container images on each machine where the container is run, but this is inefficient. The more common mechanism is that, once a container image is built, it is pushed to a registry (e.g. Docker Hub) and then pulled from there by whatever machines need it.
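As a rough sketch of that workflow (the registry, image name, and tag below are placeholders):
# on the machine that builds the image
docker build -t registry.example.com/myteam/app:v0.0.1 .
docker push registry.example.com/myteam/app:v0.0.1
# on any machine that needs to run it
docker pull registry.example.com/myteam/app:v0.0.1
docker run registry.example.com/myteam/app:v0.0.1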
There are many container registries: DockerHub, Google Artifact Registry, Quay etc.
There are tools other than docker that can be used to interact with containers e.g. (Red Hat's) Podman.

Nexus 3: create a Docker image with pre-defined configuration

I want to create a Nexus 3 Docker image with pre-defined configuration (a few repos and dummy artifacts) for testing my library.
I can't call the Nexus API from the Dockerfile, because it requires a running Nexus.
I tried bringing up the Nexus 3 container, configuring it manually, and creating an image from the container:
docker commit ...
The new image is created, but when I start a new container from it, it doesn't contain all the manual configuration that I did before.
How can I customize the nexus 3 image?
If I understand correctly, you are trying to create a portable, standalone, customized Nexus 3 installation in a self-contained Docker image for testing/distribution purposes.
Doing this by extending the official nexus3 docker image will not work. Have a look at their Dockerfile: it defines a volume for /nexus_data and there is currently no way of removing this from a child image.
It means that when you start a container without any specific options, a new volume is created for each new container. This is why your committed image starts with blank data. The best you can do is to name the data volume when you start the container (option -v nexus_data:/nexus_data for docker run) so that the same volume is reused. But the data will still live in your local Docker installation, not in the image.
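For example, roughly (this assumes the official sonatype/nexus3 image and its default port; the volume and container names are up to you):
# start Nexus with a named data volume so the configuration persists across containers
docker run -d --name nexus -v nexus_data:/nexus_data -p 8081:8081 sonatype/nexus3
# configure repos and dummy artifacts via the UI or REST API...
# the data now lives in the nexus_data volume on the host, not inside any image,
# so a docker commit of this container will still not capture it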
To do what you wish, you need to build your own Docker image without a data volume. You can do this by starting from the official Dockerfile mentioned above and simply removing the VOLUME line. Then you can customize the container and commit it to an image, which will contain the data.

Syncing docker images

I have 2 machines (separate hosts) running Docker, and I am using the same image on both machines. How do I keep the images on both hosts in sync? For example, suppose I make changes to the image on one of the hosts and want the changes to be reflected on the other host as well. I can commit the image and copy it over to the other host. Is there a more efficient way of doing this?
Some ways I can think of:
1. with a Docker registry
the workflow here is:
HOST A: docker commit, docker push
HOST B: docker pull
2. by saving the image to a .tar file
the workflow here is:
HOST A: docker save
transfer the .tar file to HOST B (e.g. with scp)
HOST B: docker load
(a concrete command sketch for options 1 and 2 follows this list)
3. with a Dockerfile and by building the image again
the workflow here is:
provide a Dockerfile together with your code / files required
every time your code has changed and you want to make a release, use docker build to create a new image.
on the hosts that should take the update, you will have to get the updated source code (e.g. using a version control system like Git), and then docker build the image
4. CI/CD pipeline
you can see a video here: docker.com/use-cases/cicd
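A rough sketch of options 1 and 2 (image, container, and host names are illustrative):
# option 1: via a Docker registry
# HOST A
docker commit my_container myrepo/myimage:v2
docker push myrepo/myimage:v2
# HOST B
docker pull myrepo/myimage:v2
# option 2: via a .tar file
# HOST A
docker save -o myimage.tar myrepo/myimage:v2
scp myimage.tar hostB:/tmp/
# HOST B
docker load -i /tmp/myimage.tar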
Keep in mind that containers are considered to be ephemeral. This means that updating an image on another host will then require:
to stop and remove any old container (running with the outdated image)
to run a new one (with the updated image)
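For example, roughly (the container and image names are placeholders):
# on the host taking the update
docker pull myrepo/myimage:v2
docker stop app && docker rm app
docker run -d --name app myrepo/myimage:v2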
I quote from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
You can perform a docker push to upload your image to a Docker registry and then perform a docker pull to get the latest image on the other host.
For more information please look at this

Dealing with data in Docker Containers with Gitlab-Ci

So I am using gitlab-ci to deploy my websites in Docker containers. Because the gitlab-ci Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my Docker container: ./:/var/www/html/ (the content of my git repo, i.e. the files I want to replace on each build) and a mount that is "inside" of this mount, /srv/data:/var/www/html/software/permdata (which is a persistent mount on my server).
When the gitlab-ci runner starts, it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker container and only mount my permdata (seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
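A minimal sketch of that approach (the base image below is an assumption; the paths come from the question):
# Dockerfile: bake the repo contents into the image at build time
FROM php:7-apache            # assumed base image for the site
COPY . /var/www/html/

# docker-compose.yml: only the persistent data is still mounted
services:
  web:
    build: .
    volumes:
      - /srv/data:/var/www/html/software/permdata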

What are the different ways of implementing Docker FROM?

Inheritance of an image is normally done using Docker's FROM instruction, for example:
FROM centos:centos7
In my case, I have a Dockerfile which I want to use as a base image builder, and I have two sub-Dockerfiles which customize that base.
I do not want to push the image built from the original Dockerfile to Docker Hub, so for example, I would like to do:
Dockerfile
slave/Dockerfile
master/Dockerfile
Where slave/Dockerfile would look something like this:
FROM ../Dockerfile
Is this (or anything similar) possible? Or do I have to actually build the top-level Dockerfile into an image and push it to Docker Hub before I can reference it with the FROM directive?
You don't have to push your images to Docker Hub to be able to use them as base images, but you do need to build them locally so that they are stored in your local Docker repository. You cannot reference a base image via a relative path such as ../Dockerfile; you must base your own images on images that exist in your local repository.
Let's say that your base image uses the following (in the Dockerfile):
FROM centos:centos7
# More stuff...
And when you build it you use the following:
docker build -t my/base .
What happens here is that the image centos:centos7 is downloaded from Docker Hub. Then your image my/base is built and stored (tagged latest by default) in your local repository. You can also version your image by simply providing a tag, like this:
docker build -t my/base:2.0 .
When an image has been built as in the example above it can then be used as a FROM-image to build other sub-images, at least on the same machine (the same local repository). So, in your sub-image you can use the following:
FROM my/base
Basically, you don't have to push anything anywhere. All images live locally on your machine. However, if you attempt to build your sub-image without a previously built base image, you will get an error.
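A rough sketch of the full local flow for the layout in the question (image names are illustrative):
# build the base image from the top-level Dockerfile
docker build -t my/base .
# slave/Dockerfile and master/Dockerfile each start with: FROM my/base
docker build -t my/slave ./slave
docker build -t my/master ./master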
For more information about building and tagging, check out the docs:
docker build
docker tag
create your own image
