Why do we build "inside" Docker?

When I first learned Docker I expected a config file, an image producer, a CLI, and options for mounting and networks. All of that is there.
What I did not expect was putting build commands inside a Dockerfile. I thought Docker would wrap/tar/include a prebuilt artifact I made. Why put build commands in Docker?
Surely it could just import an artifact, keeping Jenkins/Bazel etc. distinct and separate from the job of making an image/container?

I guess we are dealing with a misconception here. Docker is NOT a lightweight version of VMware/Xen/KVM/Parallels/FancyVirtualization.
Disclaimer: The following is heavily simplified for the sake of comprehensibility.
So what is Docker?
In one sentence: Docker is a system to isolate processes from the other processes within an operating system as much as possible while still providing all means to run them. Put differently:
Docker is a package manager for isolated processes.
Its closest ancestors are chroot and BSD jails. What those basically do is isolate (more in the case of BSD jails, less in the case of chroot) a part of your OS resources and run a complete environment independently from the rest of the OS - except for the kernel.
In order to be able to do that, a Docker image obviously needs to contain everything except for a kernel. So you need to provide a shell (if you choose to do so), standard libraries like glibc and even resources like CA certificates. For reference: In order to set up chroot jails, you did all this by hand once upon a time, preinstalling your chroot environment with each and every piece of software required. Docker is basically taking the heavy lifting from you here.
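To get a sense of scale, here is a minimal, hedged sketch (apk and the ca-certificates package are real Alpine pieces; the application name and paths are placeholders):
FROM alpine
# Alpine already ships a shell (busybox) and a libc (musl);
# CA certificates still have to be added explicitly
RUN apk add --no-cache ca-certificates
# The application itself, plus anything else it needs at runtime
COPY yourapp /usr/local/bin/yourapp
ENTRYPOINT ["/usr/local/bin/yourapp"]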
The mentioned isolation, even down to the installed (and usable) software, sounds cumbersome, but it gives you several advantages as a developer. Since you provide basically everything except for a (compatible) kernel, you can develop and test your code in the same environment it will run in later down the road. Not a close approximation, but literally the same environment, bit for bit. A rather famous proverb in relation to Docker is:
"Runs on my machine" is no excuse any more.
Another advantage is that you can add static resources to your Docker image and access them via quite ordinary file system semantics. While it is true that you can do that with virtualisation images as well, they usually do not come with a language for provisioning. Docker does - the Dockerfile:
FROM alpine
LABEL maintainer="you@example.com"
# Copy a static resource from the build context into the image
COPY file/in/host /destination/on/image
Ok, got it, now why the build commands?
As described above, you need to provide all dependencies (and transitive dependencies) your application has. The easiest way to ensure that is to build your application inside your Docker image:
FROM somebase
# Install the build dependencies, then compile and install the application,
# all inside the image (the package manager and targets are placeholders)
RUN yourpackagemanager install long list of dependencies && \
    make yourapplication && \
    make install
If the build fails, you know you have missing dependencies. Now you can tweak and tune your Dockerfile until it compiles and is tested. Once your Docker image is finished, you can confidently distribute it, since you know that as long as the Docker daemon runs on the machine somebody tries to run your image on, your image will run.
In the Go ecosystem, you basically make sure your go.mod and go.sum are up to date and working, and your work stays reproducible.
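A minimal sketch of what that can look like for a Go project (the base image tag, module layout and binary name are illustrative, not prescribed by the answer):
FROM golang:1.22
WORKDIR /src
# Copying go.mod and go.sum first lets Docker cache the dependency layer
COPY go.mod go.sum ./
RUN go mod download
# Then copy the source and build inside the image
COPY . .
RUN go build -o /usr/local/bin/yourapplication .
CMD ["/usr/local/bin/yourapplication"]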
Again, this works with virtualisation as well, so what is the big deal?
A (good) docker image only runs what it needs to run. In the vast majority of docker images, this means exactly one process, for example your Go program.
Side note: It is very bad practice to run multiple processes in one Docker container, say your application and a database server and a cache and whatnot. That is what docker-compose is there for, or more generally container orchestration. But that is far too big a topic to explain here.
A virtualised OS, however, needs to run a kernel, a shell, drivers, log systems and whatnot.
So the deal basically is that you get all the good stuff (isolation, reproducibility, ease of distribution) without the resource waste of running 5 copies of the same OS with all its shenanigans.

Because we want an environment for reproducible builds. We don't want to depend on the language version, the existence of a compiler, library versions and so on.

Building inside a Dockerfile allows you to have all the tools and the environment you need inside the container, independently of your platform and ready to use. From a development perspective it is easier to have everything you need inside the container.
But you have to think about the objective of building inside a Dockerfile: if you have a very complex build process with a lot of dependencies, you have to worry about having all those tools inside, and that is reflected in the final size of your resulting image. Building to generate an artifact is not the same as building to produce the final container.
With these two aspects in mind, you should learn to use the multi-stage build process in Docker. The main idea is close to your question, because you can have as many stages as you need depending on your build process and use a different FROM image for each, to ensure you have the correct requirements and dependencies at each stage and finally generate the image with the minimum dependencies and the smallest size.
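A hedged sketch of such a multi-stage Dockerfile, sticking with a Go program as the example (image tags, paths and the binary name are illustrative):
# Build stage: full toolchain, only used to compile
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .
# Final stage: only the compiled binary on a minimal base
FROM alpine
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]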

I'll add to the answers above:
Doing builds in or out of Docker is a choice that depends on your goal. In my case I am more interested in Docker containers for Kubernetes, and in addition we have mature builds already.
This link shows how you take prebuilt artifacts and add them to an image. This strategy, together with adding libs, env, etc., leverages Docker well and shows that Docker is indeed flexible. https://medium.com/@chemidy/create-the-smallest-and-secured-golang-docker-image-based-on-scratch-4752223b7324
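A hedged sketch of that strategy, where the binary is produced by the existing build system (Jenkins/Bazel/etc.) and Docker only packages it (all paths are illustrative):
FROM scratch
# Nothing is inherited from a base image, so ship the CA bundle yourself if needed
COPY ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
# The statically linked binary built outside of Docker
COPY build/output/yourapp /yourapp
ENTRYPOINT ["/yourapp"]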

Related

Why do docker containers rely on uploading (large) images rather than building from the spec files?

Having needed several times in the last few days to upload a 1 GB image after some micro change, I can't help but wonder why there isn't a deploy path built into Docker and related tech (e.g. k8s) to push just the application files (Dockerfile, docker-compose.yml and app-related code) and have it build out the infrastructure from within the (live) Docker host?
In other words, why do I have to upload an entire linux machine whenever I change my app code?
Isn't the whole point of Docker that the configs describe a purely deterministic infrastructure output? I can't even see why one would need to upload the whole container image unless they make changes to it manually, outside of Dockerfile, and then wish to upload that modified image. But that seems like bad practice at the very least...
Am I missing something or this just a peculiarity of the system?
Good question.
Short answer:
Because storage is cheaper than processing power, and building images "live" would be complex, time-consuming and unpredictable.
On your Kubernetes cluster, for example, you just want to pull the "cached" layers of an image that you know works, and run it in seconds, instead of compiling binaries and downloading things (as you would specify in your Dockerfile).
About building images:
You don't have to build these images locally; you can have your CI/CD runners execute docker build and docker push in the pipelines that run when you push your code to a git repository.
Also, if the image is too big, you should look into ways of reducing its size: use multi-stage builds, use lighter/minimal base images, use fewer layers (for example, multiple RUN apt install lines can be grouped into one apt install command listing several packages), and use .dockerignore so unnecessary files are not shipped into your image. Finally, read more about caching in Docker builds, as it may reduce the size of the layers you push when making changes.
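For example, a hedged sketch of grouping package installs into a single layer (the base image and package list are purely illustrative):
FROM debian:bookworm-slim
# One RUN (one layer) instead of several, and the apt cache is cleaned up
# in the same layer so it never ends up in the image
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl git ca-certificates && \
    rm -rf /var/lib/apt/lists/*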
Long answer:
Think of the Dockerfile as the source code, and the Image as the final binary. I know it's a classic example.
But just consider how long it would take to build/compile the binary every time you want to use it (either by running it, or by importing it as a library in a different piece of software). Then consider how nondeterministic it would be to download that software's dependencies, or to compile them, on different machines every time you run it.
You can take for example Node.js's Dockerfile:
https://github.com/nodejs/docker-node/blob/main/16/alpine3.16/Dockerfile
Which is based on Alpine: https://github.com/alpinelinux/docker-alpine
You don't want your application to perform all the operations specified in these files (and their scripts) at runtime before actually starting, as that would be unpredictable, time-consuming, and more complex than it should be (for example, you'd need firewall exceptions for egress traffic from the cluster to the internet to download dependencies that you can't even be sure will still be available).
Instead, you ship an image based on the base image you tested and built your code to run on. That image is built and sent to the registry, and then k8s runs it as a black box, which is predictable and deterministic.
Then about your point of how annoying it is to push huge docker images every time:
You can cut that size down by following some best practices and designing your Dockerfile well, for example:
Reduce your layers; for example, pass multiple arguments to a command whenever possible instead of running it several times.
Use multi-stage builds, so you only push the final image, not the stages you needed to compile and configure your application.
Avoid injecting data into your images; you can pass it to the containers at runtime instead.
Order your layers so that the ones that change least often come first; then untouched layers don't have to be rebuilt when you make changes.
Don't include unnecessary files, and use .dockerignore.
And last but not least:
You don't have to push images from your machine; you can do it with CI/CD runners (for example the build-push GitHub Action), or you can use your cloud provider's "Cloud Build" products (like Cloud Build on GCP or AWS CodeBuild).

Docker, update image or just use bind-mounts for website code?

I'm using Django but I guess the question is applicable to any web project.
In our case, there are two types of code: the first is Python code (run by Django), and the other is static files (HTML/JS/CSS).
I could publish new image when there is a change in any of the code.
Or I could use bind mounts for the code. (For django, we could bind-mount the project root and static directory)
If I use bind mounts for code, I could just update the production machine (probably with git pull) when there's code change.
Then the Docker image would only handle updates that are not strictly our own code changes (such as a library update or new setup such as adding Elasticsearch).
Does this approach imply any obvious drawback?
For security reasons it is advised to keep an operating system up to date with the latest security patches, but Docker images are meant to be released in an immutable fashion so that we can always reproduce production issues outside production; thus the OS inside the image will not update itself as security patches are released. This means we need to rebuild and deploy our Docker image frequently in order to stay on the safe side.
So I would prefer to release a new Docker image with my code and static files, because they are bound to change more often and thus require frequent releases, meaning you keep the OS up to date in terms of security patches without needing to rebuild Docker images in production just for that purpose.
Note that I assume here you release new code or static files at least on a weekly basis; otherwise I still recommend rebuilding the Docker images at least once a week in order to get the latest security patches for all the software being used.
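A hedged sketch of such an image, assuming a requirements.txt that includes gunicorn and a Django project configured with STATIC_ROOT (project and file names are illustrative):
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer stays cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Then copy the application code and collect the static files into the image
COPY . .
RUN python manage.py collectstatic --noinput
CMD ["gunicorn", "yourproject.wsgi:application", "--bind", "0.0.0.0:8000"]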
Generally the more Docker-oriented solutions I've seen to this problem lean towards packaging the entire application in the Docker image. That especially includes application code.
I'd suggest three good reasons to do it this way:
If you have a reproducible path to docker build a self-contained image, anyone can build and reproduce it. That includes your developers, who can test a near-exact copy of the production system before it actually goes to production. If it's a Docker image, plus this code from this place, plus these static files from this other place, it's harder to be sure you've got a perfect setup matching what goes to production.
Some of the more advanced Docker-oriented tools (Kubernetes, Amazon ECS, Docker Swarm, Hashicorp Nomad, ...) make it fairly straightforward to deal with containers and images as first-class objects, but trickier to say "this image plus this glop of additional files".
If you're using a server automation tool (Ansible, Salt Stack, Chef, ...) to push your code out, then it's straightforward to also use those to push out the correct runtime environment. Using Docker to just package the runtime environment doesn't really give you much beyond a layer of complexity and some security risks. (You could use Packer or Vagrant with this tool set to simulate the deploy sequence in a VM for pre-production testing.)
You'll also see a sequence in many SO questions where a Dockerfile COPYs application code to some directory, and then a docker-compose.yml bind-mounts the current host directory over that same directory. In this setup the container environment reflects the developer's desktop environment and doesn't really test what's getting built into the Docker image.
("Static files" wind up in a gray zone between "is it the application or is it data?" Within the context of this question I'd lean towards packaging them into the image, especially if they come out of your normal build process. That especially includes the primary UI to the application you're running. If it's things like large image or video assets that you could reasonably host on a totally separate server, it may make more sense to serve those separately.)

DevOps Simple Setup

I'm looking to start creating proper isolated environments for django web apps. My first inclination is to use Docker. Also, it's usually recommended to use virtualenv with any python project to isolate dependencies.
Is virtualenv still necessary if I'm isolating projects via Docker images?
If your Docker container is relatively long-lived or your project dependencies change, there is still value in using a Python virtual environment. Beyond (relatively) isolating a codebase's dependencies from other projects, from the underlying system, and notably from the project itself at a given state, it gives you a way of recording the state of your requirements at a given time.
For example, say that you make a Docker image for your Django app today, and end up using it for the following three weeks. Do you see your requirements.txt file being modified between now and then? Can you imagine a scenario in which you put out a hotpatch that comes with environmental changes?
As of Python 3.3, venv is part of the standard library, which means it's very cheap to use, so I'd continue using it, just in case the Docker container isn't as disposable as you originally planned. Stated another way, even if your Docker-image pipeline is quite mature and the version of Python and the dependencies are "pre-baked", it's such low-hanging fruit that, while not explicitly necessary, it's worth sticking with as a best practice.
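If you do keep a virtual environment inside the image, a commonly seen pattern looks roughly like this (image tag and paths are illustrative):
FROM python:3.12-slim
# Create the virtual environment inside the image
RUN python -m venv /opt/venv
# Put it first on PATH so pip and python below refer to the venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]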
No, not really, if each Python/Django app is going to live in its own container.

What's the difference between Docker and Chef's new Habitat tool?

Does Chef's new Habitat tool somehow work with Docker? If so, what problem is Habitat trying to solve or is it just trying to replace tools in the Docker toolset (e.g., Docker Swarm, Docker Machine, Docker Compose, etc.)?
This is skirting the limits of StackOverflow's policy on open-ended questions, but I'll answer anyway:
Docker and Habitat don't really overlap much. The main point of competition is on building release artifacts. Docker has Dockerfiles and docker build, Habitat has plans and the Studio. The output of both can be a Docker image though, which is basically a tarball of a filesystem along with some metadata. Habitat is aimed more at building super minimal artifacts, i.e. not including a Linux distro of any kind, no package manager, just statically compiled executable code and whatever support files you need for that specific app.
As for runtime, they are 100% orthogonal. Docker is a way to run a process inside a bunch of Linux security features collectively called a "container" now. Habitat is a little stub that surrounds your process and handles things like runtime config distribution, secrets transfer, and service discovery. Those features are more overlapping with higher-level tools like Kube but even there it's only barely overlapping. You need something to actually start hab-sup, which could be docker run (possibly via Swarm), Nomad, Kube, or even a non-container system like Upstart or Runit if you wanted to. The only interaction point between those is those tools all start an entrypoint process, and hab-sup is a generic entrypoint process that gives whatever app it runs underneath some cool features if they want to use 'em.

Does Docker reduce or mitigate the need for Puppet/Chef et al?

I'm not au fait with any of these technologies (embarrassing really), but at my present gig, the company badly needs to automate.
So as I begin to read up on Puppet and Chef and PowerShell DSC, I then remember that Docker and containerisation are coming to Windows.
Does Docker do away with the need for these tools, or do they work together?
I understand that Docker uses virtualisation technology in the OS, so I get the feeling that Docker solves a different problem, and a configuration tool is still needed but I've no certain, practical knowledge.
Does Docker do away with the need for these tools, or do they work together?
They work together: provisioning and containerization solve different issues, and you actually can provision docker containers themselves with a provisioning tool.
See for instance "Docker: Using Puppet"
Tools like Chef & Puppet are important for configuration, but they do have one weakness that Docker helps to shore up. They are not always fully idempotent (hype notwithstanding). In other words, running Chef twice on the same virtual machine may cause unexpected and hard-to-find changes on that machine, and you'd be restoring a backup to get to a known good state.
By contrast, a Docker deployment involves building an entirely new image and swapping it out with your old image. Rollback involves simply unswapping them and comparing them to diagnose the problems in the new image.
Note that you still might very well use Chef to build your Docker container. But you might very well not. Since containers are supposed to run just one process in a particular way, I've found that a series of simple shell commands is way preferable to the overhead entailed by Chef.
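A hedged illustration of that point: for a single-process container, a few RUN lines in the Dockerfile usually cover what a cookbook would otherwise do (the package and config file are illustrative):
FROM ubuntu:22.04
# A couple of shell commands in place of a Chef cookbook
RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]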
In short no, you don't need anything like Chef or Puppet. Of course you can use them if you like, but they're not required.
If you build your system in such a way that everything is containerized, then all you need is a tiny OS like CoreOS or Atomic.
So you just configure your VM via cloud-config if needed and deploy your containers either with cloud-config or with the Docker CLI itself. The idea is that your machines should have a static state; they can be created whenever you want a new one and destroyed when you don't need them.
There are other tools that can help with Docker orchestration, which is another story by itself.
Tools like Swarm, Kubernetes and Mesosphere.
docker-machine is also very helpful for development purposes (maybe deployment too).
Here is CoreOS example:
https://coreos.com/os/docs/latest/cloud-config.html
Resource: I do it in production for different apps.
UPDATE:
BTW, Docker is not only a virtualization technology. It does some sort of containerization (you can call it virtualization too) and that's only a small part of what Docker can do. Docker can configure, build, ship and run applications while eliminating their dependencies on the host machine. And that's why you don't need those classic configuration tools.
Puppet and Chef are configuration management tools, whereas Docker is a containerization tool, comparable to LXC.
Usually you'd be using Chef or Puppet to manage Docker containers. For example, take a look at the Chef docs.
EDIT as per @ptierno's comment.
Docker is three things: a cool way to run a process, a decent image-based deploy system, and a mediocre system image builder.
The first is not related to config management, as those tools aren't involved in running a process, at least not directly. The second takes the place of some amount of config management in production by doing it ahead of time, when you build the image. There is still often some need for last-mile config for things like service discovery and secrets, but this can be handled by lighter tools like consul-template or confd. The last is where the rub lies. docker build is simple, easy to get started with, and mostly unhelpful for complex situations. You get, at most, a single inheritance tree between Dockerfiles, which makes things like multi-axis matrix builds ({app1 app2 app3} x {prod qa dev}) more difficult than they could be. Building composable abstractions for other groups to use is also difficult, though again not impossible. Using something like Packer to drive image builds can sometimes produce simpler code, and supports the full suite of CAPS (Chef, Ansible, Puppet, Salt) tools. This is mostly aimed at the use case where you are treating Docker images like tiny VMs, which I wish fewer people would do, but it's a thing, so here we are.
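As a hedged aside (not claimed by the answer above): one partial workaround for the single-inheritance limitation is to parameterise the base image with a build argument, so one Dockerfile can serve several build variants, selected with docker build --build-arg:
# The base image becomes a build-time parameter (the default is illustrative)
ARG BASE=python:3.12-slim
FROM ${BASE}
WORKDIR /app
COPY . .
CMD ["python", "app.py"]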
