Docker: setting up `docker build` as a service

I'm looking to be able to build Docker containers on the fly given a Dockerfile. That is, given a Dockerfile, I want to be able to send the file to a remote "build service", which builds it for me programmatically. Are there any hosted or open source services that provide this?
Note that I'm not looking to build images on CI, but rather to do something in code like this:
def build_image(dockerfile_contents):
    resp = docker_service.build(dockerfile_contents)
    ...
A simple REST API would be perfect. Any thoughts or ideas?

I don't think such a hosted service exists at the moment. But you can use the Docker SDK to build images for you; they provide an official Python library.
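A minimal sketch, assuming the Docker SDK for Python (pip install docker) can reach a local or remote Docker daemon (point DOCKER_HOST at a remote one to make it a "build service"); the tag name is just a placeholder:
import io
import docker  # Docker SDK for Python: pip install docker

def build_image(dockerfile_contents, tag="myapp:latest"):
    # Talks to whatever daemon DOCKER_HOST points at (local by default).
    client = docker.from_env()
    image, build_logs = client.images.build(
        fileobj=io.BytesIO(dockerfile_contents.encode("utf-8")),
        tag=tag,
        rm=True,  # remove intermediate containers after the build
    )
    return image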
You can also look at how the Docker CLI itself triggers builds, for inspiration.
If you need the build to be remote and cloud-friendly, check out the way this article uses a Kubernetes cluster to build your images: you can use the Kubernetes API to trigger a Job whose container builds your Docker image.
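As a rough sketch of that idea with the official Kubernetes Python client, assuming a kaniko builder image, a Git build context and a reachable registry (all names and URLs below are placeholders, and registry credentials are omitted):
from kubernetes import client, config

def build_image_in_cluster(git_context, destination):
    # Creates a one-off Job whose container (kaniko) builds and pushes the image.
    config.load_kube_config()
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="image-build"),
        spec=client.V1JobSpec(
            backoff_limit=0,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="kaniko",
                            image="gcr.io/kaniko-project/executor:latest",
                            args=[
                                "--context=" + git_context,
                                "--dockerfile=Dockerfile",
                                "--destination=" + destination,
                            ],
                        )
                    ],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

# e.g. build_image_in_cluster("git://github.com/your-org/app.git",
#                             "registry.example.com/app:latest")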

Seems like the best solution is to self-host something like kaniko. The approach described by Ali Tou above is a more involved, but viable, option.

Related

How to create a single project out of multiple docker images

I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought about joining everything into one container, but I have read that running several processes in one container is not recommended. Secondly, I thought about wrapping everything up in a VM, but that is not really a "program" that a user can launch. My third idea was to write a script that would download each image from Docker Hub separately and launch the webpage. But I am not sure if that is best practice, or maybe there are better ideas.
When you need to deploy a full project composed of several containers, you can use a specialized tool.
A well-known one for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file
your application's Docker images (e.g. through Docker Hub)
For clusters/cloud deployments, you would instead look at orchestrators such as Docker Swarm, Kubernetes, or Nomad.
The Kubernetes documentation is here:
https://kubernetes.io/
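If you prefer the plain-script idea from the question over Compose, a rough sketch with the Docker SDK for Python could pull and start each image on a shared network (the image names, tags and ports below are placeholders):
import docker

# Placeholder service definitions: one OSRM server, Nominatim and the web app.
SERVICES = {
    "osrm-car":  {"image": "osrm/osrm-backend:latest", "ports": {"5000/tcp": 5001}},
    "nominatim": {"image": "mediagis/nominatim:4.2",   "ports": {"8080/tcp": 8080}},
    "webapp":    {"image": "myorg/webapp:latest",      "ports": {"80/tcp": 8000}},
}

def launch():
    client = docker.from_env()
    # One user-defined network so the containers can reach each other by name.
    try:
        client.networks.get("myproject")
    except docker.errors.NotFound:
        client.networks.create("myproject")
    for name, cfg in SERVICES.items():
        client.images.pull(cfg["image"])
        client.containers.run(
            cfg["image"],
            name=name,
            network="myproject",
            ports=cfg["ports"],
            detach=True,
        )

if __name__ == "__main__":
    launch()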

Can I have extra slash "/" in Docker (and Containerd) image name?

I need to copy images from Docker Hub into a private registry. For example, I need redislabs/rebloom:2.2.2. Then, can I name it my-private-registry.com/my-organization/redislabs/rebloom:2.2.2? (Notice there is my-organization which I cannot modify.)
In other words, is a.com/b/c/d:v1.0 ok or not?
I read this post and see that Docker can parse it. However, will some tools reject it? Will Containerd reject it? I am afraid that a tool might accept the name but fail somewhere later, which could be very difficult to debug.
Thank you very much!
My day job uses image names with a similar structure (hosted on Amazon ECR) and they work fine with plain Docker, Compose, and Kubernetes. I would not expect to run into any trouble with this, unless the specific image repository has stricter rules.
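As a quick sanity check, a sketch with the Docker SDK for Python that mirrors the question's example (the registry and organization names come from the question; a docker login against the private registry is assumed to have happened already):
import docker

client = docker.from_env()

# Pull the upstream image, re-tag it with the nested repository path,
# and push it to the private registry.
src = client.images.pull("redislabs/rebloom", tag="2.2.2")
target = "my-private-registry.com/my-organization/redislabs/rebloom"
src.tag(target, tag="2.2.2")
client.images.push(target, tag="2.2.2")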

How to automatically deploy and run docker images on server?

I'm trying to set up the deployment of Docker images to a Linux server (Debian 10).
I looked around the internet to find an easy solution for deploying images from a Docker repository onto a server automatically.
I know that Docker Hub has webhooks.
Also, there is an option to use Kubernetes, but it seems to be a bit too much for a simple application running on one server.
What I am looking for is a way for the server to detect that a Docker image has been updated, so that it downloads and runs the newest version.
Currently, I have set up automatic builds of Docker images on Azure DevOps that are pushed to a private repository on Docker Hub (I will most likely move to a privately hosted Nexus repository).
I am looking for suggestions on how to do it with relatively low complexity (e.g. should I use docker-compose for it or some sort of bash script on a server).
The closest thing to what I am looking for is this solution: How to auto deploy Docker Image on own server with GitLab?
I would like to know if this is the recommended way to do it, or whether there are other, possibly easier, ways to approach it.
I found this project, which looks like a good solution for my case:
https://containrrr.github.io/watchtower/
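For completeness, Watchtower itself runs as a container watching the local Docker daemon; a minimal sketch of starting it with the Docker SDK for Python (the interval value is just an example):
import docker

client = docker.from_env()

# Watchtower needs the Docker socket so it can poll for newer image tags
# and restart the corresponding containers.
client.containers.run(
    "containrrr/watchtower",
    command=["--interval", "300"],  # poll the registry every 5 minutes
    volumes={"/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"}},
    name="watchtower",
    restart_policy={"Name": "unless-stopped"},
    detach=True,
)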

How do I setup a docker image to dynamically pull app code from a repository?

I'm using Docker Cloud at the moment. I'm trying to figure out a development-to-production workflow using Docker with Docker Compose, pulling application code for multiple applications of the same type while simply changing the repository each one pulls from. I understand the concept of mounting a volume, but all the examples show the source code in the same repo as the Dockerfile and docker-compose file (example). I want the app code from this example to come from a remote, dynamic repo. Would I set an environment variable in the Docker image? If so, how?
Any example or link to a workflow example is appreciated.
If done right, the code "baked" into Docker images should be immutable, and the only things that should change at runtime are configuration parameters like environment variables (e.g. to set the port the app will listen on).
Ideally, you should bake your code into the image. Otherwise you're losing a lot of the benefit of using Docker in the first place.
The problem is..
.. your use case does not match the best practice. You want an image without any code embedded in it, with the code fetched at each update instead. If you browse Docker Hub you'll find many images named service:version; that is one of the benefits of Docker, offering different versions of the same service. If you always want the most up-to-date code, your workflow has some downsides.
One solution could be
Webhooks, especially if your code is versioned on GitHub, or any continuous integration tool.
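To make the "bake the code in" advice concrete, one hedged sketch is to keep a single Dockerfile and pass the repository URL as a build arg, producing one immutable image per application; everything below (base image, paths, repo URL, tag) is a placeholder:
import io
import docker

# The same Dockerfile template serves every application; only APP_REPO changes.
DOCKERFILE = """
FROM python:3.11-slim
ARG APP_REPO
RUN apt-get update && apt-get install -y --no-install-recommends git \\
    && git clone "$APP_REPO" /app \\
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
CMD ["python", "main.py"]
"""

client = docker.from_env()
image, _ = client.images.build(
    fileobj=io.BytesIO(DOCKERFILE.encode("utf-8")),
    buildargs={"APP_REPO": "https://github.com/your-org/app-one.git"},
    tag="your-org/app-one:latest",
)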

How to idiomatically access sensitive data when building a Docker image?

Sometimes there is a need to use sensitive data when building a Docker image. For example, an API token or SSH key to download a remote file or to install dependencies from a private repository. It may be desirable to distribute the resulting image and leave out the sensitive credentials that were used to build it. How can this be done?
I have seen docker-squash, which can squash multiple layers into one, removing any deleted files from the final image. But is there a more idiomatic approach?
Regarding an idiomatic approach, I'm not sure; Docker is still quite young to have many established idioms.
We have had this same issue at our company, however, and have come to the following conclusions. These are our best efforts rather than established Docker best practices.
1) If you need the values at build time: Supply a properties file in the build context with the values, which can be read during the build; the properties file can then be deleted afterwards. This isn't as portable, but it will do the job.
2) If you need the values at run time: Pass the values as environment variables (see the sketch below). They will be visible to someone who has access to ps on the box, but this can be restricted via SELinux or other methods (honestly, I don't know that process; I'm a developer and the operations team will deal with that part).
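A minimal sketch of option 2 with the Docker SDK for Python; the image name and variable name are placeholders, and the value is taken from the host environment rather than hard-coded:
import os
import docker

client = docker.from_env()

# The secret is injected at run time only; it never appears in the image layers.
client.containers.run(
    "myorg/myapp:latest",
    environment={"API_TOKEN": os.environ["API_TOKEN"]},
    detach=True,
)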
Sadly, there is still no proper solution for handling sensitive data while building a docker image.
This bug has a good summary of what is wrong with every hack that people suggest:
https://github.com/moby/moby/issues/13490
And most advice seems to confuse secrets that need to go INTO the container with secrets that are used to build the container, like several of the answers here.
The current solutions that actually seem to be secure all center around writing the secret out to disk or memory, starting a silly little HTTP server, and then having the build process pull the secret from that HTTP server, use it, and not store it in the image.
The best I've found without going to that level of complexity is to (mis)use the built-in predefined args feature of Docker Compose files, as described in this comment:
https://github.com/moby/moby/issues/13490#issuecomment-403612834
That does seem to keep the secrets out of the image build history.
Matthew Close talks about this in this blog article.
Summarized: You should use docker-compose to mount sensitive information into the container.
2019, and I'm not sure there is an idiomatic approach or best practice regarding secrets when using Docker: https://github.com/moby/moby/issues/13490 remains open so far.
Secrets at runtime:
So far, the best approach I could find was using environment variables in a container:
with the docker run -e option... but then your secrets are available in your command-line history
with the docker run --env-file option or the docker-compose env_file option. At least secrets are not passed on the command line.
Problem: in any case, secrets are now available to anyone able to run docker commands on your Docker host (using the docker inspect command).
Secrets at build time (your question):
I can see 2 additional (partial?) solutions to this problem:
Multistage build:
use a multi-stage Docker build: basically, your Dockerfile will define 2 images:
A first, intermediate image (the "build image") in which:
you add your secrets to this image: either use build args or copy secret files (be careful with build args: they have to be passed on the docker build command line)
you build your artefact (you now have access to your private repository)
A second image (the "distribution image") in which:
you copy the built artefact from the "build image"
you distribute this image on a Docker registry
This approach is explained by several comments in the quoted github thread:
https://github.com/moby/moby/issues/13490#issuecomment-408316448
https://github.com/moby/moby/issues/13490#issuecomment-437676553
Caution
This multi-stage build approach is far from ideal: the "build image" is still lying around on your host after the build command (and it contains your sensitive information), so there are precautions to take. A minimal sketch of the pattern follows below.
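Here is a hedged sketch of that multi-stage pattern, driven from the Docker SDK for Python; the private index URL, package name, tag and token value are placeholders, and, as the caution above notes, the build arg still ends up on the build host:
import io
import docker

DOCKERFILE = """
# --- build image: has access to the secret ---
FROM python:3.11-slim AS build
ARG REPO_TOKEN
RUN pip install --no-cache-dir \\
        --index-url "https://user:${REPO_TOKEN}@pypi.example.com/simple" \\
        myprivatepackage -t /artefact

# --- distribution image: only the built artefact, no secret ---
FROM python:3.11-slim
COPY --from=build /artefact /app
ENV PYTHONPATH=/app
CMD ["python", "-m", "myprivatepackage"]
"""

client = docker.from_env()
image, _ = client.images.build(
    fileobj=io.BytesIO(DOCKERFILE.encode("utf-8")),
    buildargs={"REPO_TOKEN": "asd123poi54mnb"},  # placeholder token
    tag="myorg/myapp:latest",
)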
A new --secret build option:
I discovered this option today and therefore have not experimented with it yet... What I know so far:
it was announced in a comment from the same thread on github
this comment leads to a detailed article about this new option
the Docker documentation (Docker v19.03 at the time of writing) is not verbose about this option: it is listed with the description below, but there is no detailed section about it:
--secret
API 1.39+
Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret
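A hedged sketch of that BuildKit flow, shelling out from Python because the Docker SDK's images.build() does not expose BuildKit secrets; the Dockerfile side is shown in the comments, and the secret id, environment variable, command and tag are placeholders:
import os
import subprocess
import tempfile

# Dockerfile side (for reference):
#   # syntax=docker/dockerfile:1
#   RUN --mount=type=secret,id=mysecret \
#       TOKEN="$(cat /run/secrets/mysecret)" && use-the-token "$TOKEN"

# Write the secret to a temporary file and hand it to `docker build --secret`;
# it is exposed to that single RUN step only and is not stored in a layer.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write(os.environ["MY_SECRET"])
    secret_path = f.name

try:
    subprocess.run(
        ["docker", "build",
         "--secret", "id=mysecret,src=" + secret_path,
         "-t", "myorg/myapp:latest", "."],
        env={**os.environ, "DOCKER_BUILDKIT": "1"},
        check=True,
    )
finally:
    os.unlink(secret_path)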
The way we solve this issue is that we have a tool written on top of docker build. Once you initiate a build using the tool, it downloads a Dockerfile and alters it. It changes all instructions which require "the secret" into something like:
RUN printf "secret: asd123poi54mnb" > /somewhere && tool-which-uses-the-secret run && rm /somewhere
However, this leaves the secret data available to anyone with access to the image, unless the layer itself is removed with a tool like docker-squash. The command used to generate each intermediate layer can be found with the docker history command.
