Dockerfile or Registry? Which is the preferred strategy for distribution?

If you are making a service with a Dockerfile, is it preferred to build an image from the Dockerfile and push it to a registry, rather than distribute the Dockerfile (and repo) for people to build their own images?
What use cases favour Dockerfile+repo distribution, and what use cases favour registry distribution?

I'd imagine the same question could be applied to source code versus binary package installs.
Pushing to a central shared registry allows you to freeze and certify a particular configuration and then make it available to others in your organisation.

At DevTable we were initially using a Dockerfile that was run when we deployed our servers in order to generate our Docker images. As our Docker image became more complex and picked up more dependencies, it was taking longer and longer to generate the image from the Dockerfile. What we really needed was a way to generate the image once and then pull the finished product to our servers.
Normally, one would accomplish this by pushing the image to index.docker.io; however, we have proprietary code that we couldn't publish to the world. You may also end up in such a situation if you're planning to build a hosted product around Docker.
To address this need in the community, we built Quay, which aims to be the GitHub of Docker images. Check it out and let us know if it solves a need for you.

Private repositories on your own server are also an option.
To run the server, clone the https://github.com/dotcloud/docker-registry repository to your own server.
To use your own server, prefix the tag with the address of the registry's host. For example:
# Tag to create a repository with the full registry location.
# The location (e.g. localhost.localdomain:5000) becomes
# a permanent part of the repository name
$ sudo docker tag 0u812deadbeef your_server.example.com:5000/repo_name
# Push the new repository to its home location on your server
$ sudo docker push your_server.example.com:5000/repo_name
(see http://docs.docker.io.s3-website-us-west-2.amazonaws.com/use/workingwithrepository/#private-registry)
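For reference, the registry itself can also be run as a container on your server; a minimal sketch, assuming the stock registry image from the public index and the default port 5000:
# Run a private registry as a container (image name and port are the usual defaults; adjust as needed)
$ sudo docker run -d -p 5000:5000 --name registry registry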

I think it depends a little bit on your application, but I would prefer the Dockerfile:
A Dockerfile...
... in the root of a project makes it super easy to build and run; it is just one command (see the example below).
... can be changed by a developer if needed.
... is documentation about how to build your project.
... is very small compared with an image, which could be useful for people with a slow internet connection.
... is in the same location as the code, so when people check out the code, they will find it.
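For example, for a project whose Dockerfile sits in the repository root, building and running is roughly (the image name here is just a placeholder):
$ docker build -t myproject .
$ docker run --rm myproject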
An Image in a registry...
... is already built and ready!
... must be maintained. If you commit new code or update your application, you must also update the image.
... must be crafted carefully: Can the configuration be changed? How do you handle the logs? How big is it? Do you package an NGINX within the image or is that part of the outer world? As Mark O'Connor said, you will freeze a certain configuration, but that's maybe not the configuration someone else wants to use.
This is why I would prefer the Dockerfile. It is the same with a Vagrantfile - I would prefer the Vagrantfile over the VM image. And it is the same with an Ant or Maven script - I would prefer the build script over the packaged artifact (at least if I want to contribute code to the project).

Related

How to instruct docker or docker-compose to automatically build image specified in FROM

When processing a Dockerfile, how do I instruct docker build to build the image specified in FROM locally using another Dockerfile if it is not already available?
Here's the context. I have a large Dockerfile that starts from the base Ubuntu image, installs Apache, then PHP, then some custom configuration on top of that. Whether this is a good idea is another question; let's assume the build steps cannot be changed. The problem is that every time I change anything in the config, everything has to be rebuilt from scratch, and this takes a while.
I would like to have a hierarchy of Dockerfiles instead:
my-apache : based on stock Ubuntu
my-apache-php: based on my-apache
final: based on my-apache-php
The first two images would be relatively static and can be uploaded to Docker Hub, but I would like to retain the option to build them locally as part of the same build process. Only one container will exist, based on the final image. Thus, putting all three as "services" in docker-compose.yml is not a good idea.
The only solution I can think of is to have a manual build script that, for each image, checks whether it is available on Docker Hub or locally, and if not, invokes docker build.
Are there better solutions?
I have found this article on automatically detecting dependencies between Dockerfiles and building them in the proper order:
https://philpep.org/blog/a-makefile-for-your-dockerfiles
The actual Makefile from Philippe's Git repo provides even more functionality:
https://github.com/philpep/dockerfiles/blob/master/Makefile
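Alternatively, here is a minimal sketch of the manual build script mentioned in the question; it assumes the three image names above stand in for your real repository names and that ./my-apache, ./my-apache-php and ./final each contain a Dockerfile:
#!/bin/sh
# Ensure each base layer exists (locally or pulled from the registry), otherwise build it,
# then always build the final image.
set -e
for img in my-apache my-apache-php; do
  if ! docker image inspect "$img" >/dev/null 2>&1 && ! docker pull "$img" >/dev/null 2>&1; then
    docker build -t "$img" "./$img"
  fi
done
docker build -t final ./final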

Docker - Upgrading base Image

I have a base image which is being used by 100 applications. All 100 applications have this common base image in their Dockerfile. Now I am upgrading the base image for an OS upgrade or some other upgrade, bumping up the version, and I also tag it as latest.
Here the problem is: whenever I change the base image, all 100 applications need to change the base image in their Dockerfile and rebuild the app in order to use the latest base image.
Is there any better way to handle this?
Note: I am running my containers in Kubernetes, and the Dockerfile for each application is in Git.
You can use a Dockerfile ARG directive to modify the FROM line (see Understand how ARG and FROM interact in the Dockerfile documentation). One possible approach here would be to have your CI system inject the base image tag.
ARG base=latest
FROM me/base-image:${base}
...
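For example, the CI job could pin the tag explicitly at build time (the tag value and application image name below are only illustrative):
# CI build: inject the base image tag for this build only
$ docker build --build-arg base=20191031 -t me/app:20191031 .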
This has the risk that individual developers would build test images based on an older base image; if the differences between images are just OS patches then you might consider this a small and acceptable risk, so long as only official images get pushed to production.
Beyond that, there aren't a lot of alternatives beyond modifying the individual Dockerfiles. You could script it:
# Individually check out everything first
BASE=$(pwd)
TAG=20191031
for d in *; do
  cd "$BASE/$d"
  # Rewrite the FROM line to point at the new base image tag
  sed -i.bak "s#FROM me/base-image.*#FROM me/base-image:$TAG#" Dockerfile
  git checkout -b "base-image-$TAG"
  git commit -am "Update Dockerfile to base-image:$TAG"
  git push -u origin "base-image-$TAG"
  hub pull-request --no-edit
done
There are automated dependency-update tools out there too and these may be able to manage the scripting aspects of it for you.
You don't need to change the Dockerfile for each app if it uses base-image:latest. You will still have to rebuild the app images after the base image update, and after that you'll need to redeploy the apps so they use the new image.
For example, using the advice from this answer:
whenever I change the base image, all 100 application needs to change the base image in their dockerfile and rebuild the app for using the latest base image.
That's a feature, not a bug; all 100 applications need to run their tests (and potentially fix any regressions) before going ahead with the new image...
There are tools out there to scan all the repos and automatically submit pull requests to the 100 applications (or you can write a custom one, if you don't have just plain "FROM" lines in Dockerfiles).
If you need to deploy the latest version of the base image, then yes, you need to build, tag, push, pull and deploy each container again. If your base image is not properly tagged, you'll need to change the Dockerfile in all 100 repositories.
But you have some options, like using sed to replace all occurrences in your Dockerfiles and executing all the build commands from a shell script that points to each app directory.
With docker-compose you may update your 100 running apps with one command:
docker stack deploy --compose-file docker-compose.yml <stack-name>
but you still need to rebuild the images first.
Edit:
With docker-compose you can also build your 100 images with one command; you need to define all of them in a compose file. Check the docs for the compose file format.
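As a rough sketch, assuming every service in docker-compose.yml has both a build: context and an image: name, rebuilding and redeploying the whole fleet could look like this:
# Rebuild and push every image defined in docker-compose.yml, then redeploy the stack as above
$ docker-compose build
$ docker-compose push
$ docker stack deploy --compose-file docker-compose.yml <stack-name>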

Install ansible application to docker container

I have an application which can be installed with Ansible. Now I want to create a Docker image that includes the installed application.
My idea is to bring up a Docker container from some base image, then run the installation from an external machine against this container, and after that create an image from the container.
I am just starting with Docker; could you please advise whether this is a good idea and how I can do it?
This isn't the standard way to create a Docker image and it isn't what I'd do, but it will work. Consider looking at a tool like HashiCorp's Packer that can automate this sequence.
Ignoring the specific details of the tools, the important thing about the docker build sequence is that you have some file checked into source control that an automated process can use to build a Docker image. An Ansible playbook coupled with a Packer JSON template would meet this same basic requirement.
The important thing here though is that there are some key differences between the Docker runtime environment and a bare-metal system or VM that you’d typically configure with Ansible: it’s unlikely you’ll be able to use your existing playbook unmodified. For example, if your playbook tries to configure system daemons, install a systemd unit file, add ssh users, or other standard system administrative tasks, these generally aren’t relevant or useful in Docker.
I’d suggest making at least one attempt to package your application using a standard Dockerfile to actually understand the ecosystem. Don’t expect to be able to use an Ansible playbook unmodified in a Docker environment; but if your organization has a lot of Ansible experience and you can easily separate “install the application” from “configure the server”, the path you’re suggesting is technically fine.
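If you do go that route, the manual sequence is roughly the sketch below; the image names, the SSH setup and the inventory details are assumptions, not something prescribed by Docker or Ansible:
# Start a throwaway container from a base image that runs sshd (an assumption here)
$ docker run -d --name app-target -p 2222:22 my-base-with-sshd
# From the external machine, run the existing playbook against the container over SSH
$ ansible-playbook -i '127.0.0.1,' -e ansible_port=2222 playbook.yaml
# Freeze the container's filesystem into a new image
$ docker commit app-target myorg/myapp:1.0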
You can use multi-stage builds in Docker, which might be a nice solution:
FROM ansible/centos7-ansible:stable as builder
COPY playbook.yaml .
RUN ansible-playbook playbook.yaml

# Include whatever image you need for your application
FROM alpine:latest
# Add required setup for your app
# Copy the files built in the Ansible stage, i.e. your app
COPY --from=builder . .
CMD ["<command to run your app>"]
Hopefully the example is clear enough for you to create your own Dockerfile.

How to deploy an application running in docker - best practice?

We are discussing how we should deploy our application running in a Docker container. At the moment, we build our application image in the pipeline containing the application code, which means we have to rebuild the Docker image every time the application is updated.
Another approach we are considering is putting the application code in a volume on the server and pulling the latest release with Git on the server, so the image does not have to be rebuilt.
So our discussed options are:
Build the image containing the application code
Use a volume and store the application code on the server
What is best practice to do and why?
While the other answers here have explained the point of building code into your image, I'd like to go one step further and show you how to get the benefits of both worlds while following this best practice.
Docker best practices call for building source code into your image before deployment, rather than deploying an image with dependencies installed and then source code mounted in as a volume.
This gives you a self-contained, portable container that is straightforward to test, deploy, or rollback.
May I take a stab at why you are considering hot-mounting code?
Hot-mounting code is appealing for several reasons — and they're all easy to achieve without sacrificing this best practice of building a self-contained image:
Building Docker images can be slow, so why rebuild for a minor change when you can just hot-mount the code?
A complementary best practice is to use a "base image" that installs all dependencies -- usually the slow part of building a docker image. The key insight is that this base image won't change often!
But the image that derives from it -- your application image, which installs source code -- will change with every commit you want to deploy. That derived image will be very fast to build. The Dockerfile could be as simple as:
# all dependencies are installed in the base image
FROM myapp/base
# automatic untarring!
ADD code.tar.gz /src
# whatever it takes to run your app
CMD [...]
Hot-mounting enables faster development cycles, because a developer won't need to flush their docker container, rebuild, and run a new container just to see a change.
This is a fair point. I recommend making a "dev" image (which will also derive from your base image) that enables code mounting via a volume rather than the source code installation steps you'd have in your testing and deployment images.
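For instance, the day-to-day dev loop could then be just a bind mount into that dev image (the image name and paths here are placeholders):
# Run the dev image with the working copy mounted over the image's source directory
$ docker run --rm -it -v "$PWD/src:/src" myapp/dev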
When you build an image every time the application changes, you get an easy way to deploy it later to a customer or to your production server. Once the Docker image is ready you can keep it in a repository. Additionally, you have full control over exactly which version of the application the container is running.
In case of keeping the application in a mounted volume you have to keep in mind the following problems:
the life cycle of the application - what to do with the container when you have to update the application: gently stop it, overwrite the code and run it again
how you deploy your application - do you do it manually over SSH, or do you just want to run a simple docker run command that picks up the latest version from your repository
Mounted volumes are rather for the following cases:
you want externally exposed settings for the container - which is also not a good idea
you want external access to the data produced by the application, such as logs, a database, etc.
To automate it completely, you can:
build an image for each application and push it to the repository
use, for example, watchtower to automatically update the system on your production servers
I believe you should follow the first approach, i.e. rebuilding the Docker image every time there are changes in the code. The reasons are:
Firstly, if you are using a volume, every time you have to manage the clean shutdown and removal of the previous version of the application and check whether the new version is running correctly. Your new application might also be affected by leftover dependencies of the previous version; that needs to be taken care of too.
Secondly, there might be version updates of the installed frameworks, and some new frameworks may need to be installed alongside the current application. In this case, the first approach seems to be the only option.
Thirdly, when you use a Docker volume you give up the most important feature of Docker, i.e. abstraction from the outside environment. The image might also become machine dependent because of it, which matters if you want to publish the app in multiple environments.
My suggestion would be to create a pipeline using some continuous integration tool and fully automate the process, starting from building the code, through building the Docker image, to deploying it to your environment.

Why doesn't Docker Hub cache Automated Build Repositories as the images are being built?

Note: It appears the premise of my question is no longer valid since the new Docker Hub appears to support caching. I haven't personally tested this. See the new answer below.
Docker Hub's Automated Build Repositories don't seem to cache images. As it is building, it removes all intermediate containers. Is this the way it was intended to work, or am I doing something wrong? It would be really nice to not have to rebuild everything for every small change. I thought that was supposed to be one of the biggest advantages of Docker, and it seems weird that their builder doesn't use it. So why doesn't it cache images?
UPDATE:
I've started using Codeship to build my app and then run remote commands on my DigitalOcean server to copy the built files and run the docker build command. I'm still not sure why Docker Hub doesn't cache.
Disclaimer: I am a lead software engineer at Quay.io, a private Docker container registry, so this is an educated guess based on the same problem we faced in our own build system implementation.
Given my experience with Dockerfile build systems, I would suspect that Docker Hub does not support caching because of the way caching is implemented in the Docker Engine. Caching for Docker builds operates by comparing the commands to be run against the existing layers found on the build machine.
For example, if the Dockerfile has the form:
FROM somebaseimage
RUN somecommand
ADD somefile somefile
Then the Docker build code will:
Check to see if an image matching somebaseimage exists
Check if there is a local image with the command RUN somecommand whose parent is the previous image
Check if there is a local image with the command ADD somefile somefile + a hashing of the contents of somefile (to make sure it is invalidated when somefile changes), whose parent is the previous image
If any of the above steps match, then that command will be skipped in the Dockerfile build process, with the cached image itself being used instead. However, the one key issue with this process is that it requires the cached images to be present on the build machine, in order to find and verify the matches. Having all of everyone's images on build nodes would be highly inefficient, making this a harder problem to solve.
At Quay.io, we solved the caching problem by creating a variation of the Docker caching code that could precompute these commands/hashes and then ask our registry for the cached layers, downloading them to the machine only after we had found the most efficient caching set. This required significant data model changes in our registry code.
If you'd like more information, we gave a technical overview into how we do so in this talk: https://youtu.be/anfmeB_JzB0?list=PLlh6TqkU8kg8Ld0Zu1aRWATiqBkxseZ9g
The new Docker Hub came out with a new Automated Build system that supports Build Caching.
https://blog.docker.com/2018/12/the-new-docker-hub/

Resources