What does Docker scratch contain by default?

There is an option to use FROM scratch, which to me looks like a really attractive way of building my Go containers.
My question is: what does it still contain natively for running binaries? Do I need to add anything in order to reliably run Go binaries? A compiled Go binary seems to run, at least on my laptop.
My goal is to keep the image size to a minimum, both for security and infra management reasons. In an optimal situation, my container would not be able to execute binaries or shell commands outside of the build phase.

The scratch image contains nothing. No files. But actually, that can work to your advantage. It turns out that Go binaries built with CGO_ENABLED=0 require absolutely nothing other than what they actually use. There are a couple of things to keep in mind:
With CGO_ENABLED=0, you can't use any C code. That's usually not too hard to live with.
With CGO_ENABLED=0, your app will not use the system DNS resolver. I don't think it does by default anyway, because the system resolver is blocking and Go's native DNS resolver is non-blocking.
Your app may depend on some things that are not present (see the sketch after this list):
Apps that make HTTPS calls (as in, to other services, e.g. Amazon S3 or the Stripe API) will need ca-certs in order to confirm HTTPS certificate authenticity. These also have to be updated over time. This is not needed for serving HTTPS content.
Apps that need timezone awareness will need the timezone info files.
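To make that concrete, here is a minimal sketch of a multi-stage Dockerfile that copies both of those into a scratch image. It assumes a module-based Go project, and the paths are the standard Alpine locations for the ca-certificates and tzdata packages:
FROM golang:alpine AS build
# Pull in the CA bundle and timezone database in the build stage
RUN apk add --no-cache ca-certificates tzdata
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .
FROM scratch
# CA bundle so outbound HTTPS calls can be verified
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Timezone database for timezone-aware apps
COPY --from=build /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=build /app /app
ENTRYPOINT ["/app"]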
A nice alternative to FROM scratch is FROM alpine, which gives you a base Alpine image. It is very tiny (around 5 MiB, I believe) and includes musl libc, which is compatible with Go and will allow you to link to C libraries, as well as compile without setting CGO_ENABLED=0. You can also leverage the fact that alpine is regularly updated, using its tzdata and ca-certificates packages.
(It's worth noting that the overhead of Docker layers is amortized a bit because of Docker's deduplication, though of course that is offset by how often your base image is updated. Still, it helps sell the idea of using the quite small Alpine image.)
You may not need tzinfo or ca-certs now, but it's better to be safe than sorry; you can accidentally add such a dependency without realizing it and only discover the breakage at runtime. So I recommend using alpine as your base. alpine:latest should be fine.
Bonus: If you want the advantages of reproducible builds inside Docker, but with small image sizes, you can use the new Docker multi-stage builds available in Docker 17.06+.
It works a bit like this:
FROM golang:alpine
# May need some "go get"-ing here if you don't vendor
COPY . /go/src/github.com/some/gorepo
RUN go build -o /app github.com/some/gorepo
FROM scratch
# or: FROM alpine
COPY --from=0 /app /app
ENTRYPOINT ["/app"]
(I apologize if I've made any mistakes, I'm typing that from memory.)
Note that when using FROM scratch you must use the exec form of ENTRYPOINT, because the shell form won't work (it depends on the Docker image having /bin/sh, which it won't). On Alpine, either form will work fine.
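For illustration, the two forms look like this (the shell form is shown commented out, since on scratch it would fail):
# Exec form: runs the binary directly, no shell involved (works on scratch)
ENTRYPOINT ["/app"]
# Shell form: equivalent to running /bin/sh -c "/app", which scratch lacks
# ENTRYPOINT /app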

Related

Docker best practice: use OS or application as base image?

I would like to build a Docker image which contains Apache and Python 3. What is the suggested base image to use in this case?
There is an official apache:2.4.43-alpine image I can use as the base image, or I can install Apache on top of an alpine base image.
What would be the best approach in this case?
Option 1:
FROM apache:2.4.43-alpine
<Install python3>
Option 2:
FROM alpine:3.9.6
<Install Apache>
<Install Python3>
Here are my rules.
Rule 1: If the image is an official image (such as node, python, apache, etc.), it is fine to use it directly as your application's base image, rather than building your own.
Rule 2: If the image is built by the product's owner, such as hashicorp/terraform (HashiCorp is the owner of Terraform), then it is better to use it than to build your own.
Rule 3: If you only want to save time, choose the most-downloaded image with similar applications installed as your base image.
Make sure you can view its Dockerfile. Otherwise, don't use it at all, no matter how many downloads it has.
Rule 4: Never pull images from public registry servers if your company has security compliance concerns; build your own.
Another reason to build your own is that the existing images may not be built on the operating system you prefer. For example, some images provided by AWS are built on Amazon Linux 2; in most cases I would rebuild those myself.
Rule 5: When building your own, no matter which base image you start from, there's no need to reinvent the wheel; reuse an existing image's Dockerfile from github.com if you can.
Avoid Alpine; it will often make Python library installs much slower (https://pythonspeed.com/articles/alpine-docker-python/).
In general, the Python version matters more than the Apache version. The latest Apache from a stable Linux distro is fine even if it's not the newest release, but the Python that the distro ships might be annoyingly old. For example, when 3.9 comes out, do you want to be stuck on 3.7?
As such, I would recommend python:3.8-slim-buster (or whatever Python version you want), and installing Apache with apt-get.
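A minimal sketch of that recommendation, assuming the Debian apache2 package is what you want (everything below the FROM line is an assumption about your setup):
FROM python:3.8-slim-buster
# Install Apache from the Debian repositories
RUN apt-get update && \
    apt-get install -y --no-install-recommends apache2 && \
    rm -rf /var/lib/apt/lists/*
# Run Apache in the foreground so the container keeps running
CMD ["apache2ctl", "-D", "FOREGROUND"]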

Why do we build "inside" docker?

When I first learned Docker I expected a config file, image producer, CLI, and options for mounting and networks. That's all there.
I did not expect to put build commands inside a Dockerfile. I thought docker would wrap/tar/include a prebuilt task I made. Why give build commands in Docker?
Surely it could just import a prebuilt task, keeping Jenkins/Bazel etc. distinct and separate from making an image/container?
I guess we are dealing with a misconception here. Docker is NOT a lightweight version of VMware/Xen/KVM/Parallels/FancyVirtualization.
Disclaimer: The following is heavily simplified for the sake of comprehensibility.
So what is Docker?
In one sentence: Docker is a system to isolate processes from the other processes within an operating system as much as possible while still providing all means to run them. Put differently:
Docker is a package manager for isolated processes.
Its closest ancestors are chroot and BSD jails. What those basically do is isolate (more in the case of BSD jails, less in the case of chroot) a part of your OS resources and have a complete environment running independently from the rest of the OS - except for the kernel.
In order to be able to do that, a Docker image obviously needs to contain everything except for a kernel. So you need to provide a shell (if you choose to do so), standard libraries like glibc and even resources like CA certificates. For reference: In order to set up chroot jails, you did all this by hand once upon a time, preinstalling your chroot environment with each and every piece of software required. Docker is basically taking the heavy lifting from you here.
The mentioned isolation, even down to the installed (and usable) software, sounds cumbersome, but it gives you several advantages as a developer. Since you provide basically everything except for a (compatible) kernel, you can develop and test your code in the same environment it will run in later down the road. Not a close approximation, but literally the same environment, bit for bit. A rather famous proverb in relation to Docker is:
"Runs on my machine" is no excuse any more.
Another advantage is that you can add static resources to your Docker image and access them via quite ordinary file system semantics. While it is true that you can do that with virtualisation images as well, they usually do not come with a language for provisioning. Docker does - the Dockerfile:
FROM alpine
LABEL maintainer="you@example.com"
COPY file/in/host destination/on/image
Ok, got it, now why the build commands?
As described above, you need to provide all dependencies (and transitive dependencies) your application has. The easiest way to ensure that is to build your application inside your Docker image:
FROM somebase
RUN yourpackagemanager install long list of dependencies && \
    make yourapplication && \
    make install
If the build fails, you know you have missing dependencies. Now you can tweak and tune your Dockerfile until it compiles and is tested. Once your Docker image is finished, you can confidently distribute it: you know that as long as the Docker daemon runs on the machine somebody tries to run your image on, your image will run.
In the Go ecosystem, you basically make sure your go.mod and go.sum are up to date and working, and your build stays reproducible.
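As a hedged sketch of that in a Dockerfile (the Go version and output path are assumptions), you can copy go.mod and go.sum first so the dependency download is cached as its own layer:
FROM golang:1.20-alpine
WORKDIR /src
# Copy only the module files first; this layer stays cached until they change
COPY go.mod go.sum ./
RUN go mod download
# Then copy the rest of the source and build
COPY . .
RUN CGO_ENABLED=0 go build -o /app .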
Again, this works with virtualisation as well, so what is the deal?
A (good) docker image only runs what it needs to run. In the vast majority of docker images, this means exactly one process, for example your Go program.
Side note: It is very bad practise to run multiple processes in one Docker container, say your application and a database server and a cache and whatnot. That is what docker-compose is there for, or more generally container orchestration. But this is far too big a topic to explain here.
A virtualised OS, however, needs to run a kernel, a shell, drivers, log systems and whatnot.
So the deal basically is that you get all the good stuff (isolation, reproducibility, ease of distribution) with less waste of resources (running 5 versions of the same OS with all its shenanigans).
Because we want to have an environment for reproducible builds. We don't want to depend on the version of the language, the existence of a compiler, the versions of libraries, and so on.
Building inside a Dockerfile allows you to have all the tools and environment you need inside the container, independently of your platform and ready to use. From a development perspective, it is easier to have everything you need inside the container.
But you have to think about the objective of building inside a Dockerfile: if you have a very complex build process with a lot of dependencies, you have to worry about having all the tools inside, and that is reflected in the final size of your resulting image. Building to generate an artifact is not the same as building to produce the final container.
With these two aspects in mind, you should learn to use the multi-stage build process in Docker (see here). The main idea is close to your question, because you can have as many stages as you need depending on your build process, and use different FROM images to ensure you have the correct requirements and dependencies at each stage, to finally generate the image with the minimum dependencies and smallest size.
I'll add to the answers above:
Doing builds in or out of docker is a choice that depends on your goal. In my case I am more interested in docker containers for kubernetes, and in addition we have mature builds already.
This link shows how you can take prebuilt tasks and add them to an image. This strategy, together with adding libs, env, etc., leverages Docker well and shows that Docker is indeed flexible. https://medium.com/@chemidy/create-the-smallest-and-secured-golang-docker-image-based-on-scratch-4752223b7324

Building docker containers for golang applications without calling go build

I'm building my first Dockerfile for a Go application and I don't understand why go build or go install are considered a necessary part of building the Docker container.
I know this can be avoided using multi-stage builds, but I don't know why it was ever put in the container image in the first place.
What I would expect:
I have a go application 'go-awesome'
I can build it locally from cmd/go-awesome
My Dockerfile contains not much more than
COPY go-awesome .
CMD ["./go-awesome"]
What is the downside of this configuration? What do I gain by instead doing
COPY . .
RUN go get ./...
RUN go install ./...
Links to posts showing building go applications as part of the Dockerfile
https://www.callicoder.com/docker-golang-image-container-example/
https://blog.codeship.com/building-minimal-docker-containers-for-go-applications/
https://www.cloudreach.com/blog/containerize-this-golang-dockerfiles/
https://medium.com/travis-on-docker/how-to-dockerize-your-go-golang-app-542af15c27a2
You are correct that you can compile your application locally and simply copy the executable into a suitable docker image.
However, there are benefits to compiling the application inside the docker build, particularly for larger projects with multiple collaborators. Specifically the following reasons come to mind:
There are no local dependencies (aside from Docker) required to build the application source. Someone wouldn't even need to have Go installed. This is especially valuable for projects in which multiple languages are in use. Consider someone who might want to edit an HTML template inside of a Go project and see what that looks like in the container runtime.
The build environment (version of Go, dependency management, file paths...) is constant. Any external dependencies can be safely managed and maintained via the Dockerfile.
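To make that concrete, a sketch of such a build for the question's go-awesome app might look like this (the Go version and the cmd/go-awesome layout are assumptions taken from the question):
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
# Built inside the container: no local Go toolchain required
RUN CGO_ENABLED=0 go build -o /go-awesome ./cmd/go-awesome
FROM scratch
COPY --from=build /go-awesome /go-awesome
ENTRYPOINT ["/go-awesome"]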

How to compose it?

Target: build an OpenCV Docker image
Dockerfile creation:
FROM ubuntu:14.04
or
FROM python:3.7
Which to choose and why?
I was trying to write the Dockerfile from scratch, without copy-pasting from others' Dockerfiles.
I would usually pick the highest-level Docker Hub library image that matches what I need. It's also worth using the https://hub.docker.com/ search box, which will often find relevant things, though of rather varied ownership and maintenance levels.
The official Docker Hub images tend to have thought through a lot of issues around persistence and configuration and first-time setup. Compare "I'll just apt-get install mysql-server" with all of the parts that go into the official mysql image; just importing that real-world experience and reusing it can save you some trouble.
I'd consider building my own from an OS base like ubuntu:16.04 if:
There is a requirement that Docker images must be built from some specific distribution base ("my job requires everything to be built off of CentOS so I need a CentOS-based MySQL image")
I need a combination of software versions or patches that the Docker Hub image no longer supports (jruby:9.1.16.0 is no longer being built, so if I need OS updates, I need to build my own base image)
I need an especially exotic set of build options for whatever reason ("I have a C extension that only works if the interpreter is specifically built with UTF-16 Unicode support")
I need or want very detailed control over what version(s) of software are embedded; for example if it's something Java-based where there's a JVM version and a runtime version and an application version that all could matter
In my opinion you should choose FROM python:3.7.
Since you are writing a Dockerfile for OpenCV, which is an open-source computer vision and machine learning library, you may also require Python in your container.
Now, if you use FROM ubuntu:14.04 you will need to add Python in the Dockerfile yourself, whereas with FROM python:3.7 that becomes redundant, and it also makes the Dockerfile a bit shorter.
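As a minimal sketch of that route (opencv-python-headless is the PyPI build of OpenCV without GUI dependencies; whether it covers your use case is an assumption):
FROM python:3.7
# Install the prebuilt OpenCV bindings from PyPI
RUN pip install --no-cache-dir opencv-python-headless
# Sanity check that the module imports
RUN python -c "import cv2; print(cv2.__version__)"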

Which docker base image to use in the Dockerfile?

I have a web application which consists of two projects:
using VueJS as a front-end part;
using ExpressJS as a back-end part;
I now need to dockerize my application, but I'm not sure about the very first line in my Docker files (which refers to the environment being used, I guess; source).
What I need to do now is create separate Docker images for both projects, but since I'm very new to this, I can't figure out what the very first lines should be for both of the Dockerfiles (in both of the projects).
I was developing the project on Windows 10, with node version v8.11.1 and expressjs version 4.16.3.
I tried some of the versions I found (such as node:8.11.1-alpine), but I got a warning:
SECURITY WARNING: You are building a Docker image from Windows against
a non-Windows Docker host.
This made me think that I should not only care about the node version, but about the OS as well. So I'm not sure which base images to use now.
node:8.11.1-alpine is a perfectly correct tag for a Node image. This particular one is based on Alpine Linux - a lightweight Linux distro, which is often used when building Docker images because of its small footprint.
If you are not sure which base image you should choose, just read the documentation at Docker Hub. It lists all currently supported tags and describes the different flavours of the Node image ('Image Variants' section).
Quote:
Image Variants
The node images come in many flavors, each designed for a specific use case.
node:<version>
This is the defacto image. If you are unsure about what your needs are, you probably want to use this one. It is designed to be used both as a throw away container (mount your source code and start the container to start your app), as well as the base to build other images off of. This tag is based off of buildpack-deps. buildpack-deps is designed for the average user of docker who has many images on their system. It, by design, has a large number of extremely common Debian packages. This reduces the number of packages that images that derive from it need to install, thus reducing the overall size of all images on your system.
node:<version>-alpine
This image is based on the popular Alpine Linux project, available in the alpine official image. Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use musl libc instead of glibc and friends, so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See this Hacker News comment thread for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
To minimize image size, it's uncommon for additional related tools (such as git or bash) to be included in Alpine-based images. Using this image as a base, add the things you need in your own Dockerfile (see the alpine image description for examples of how to install packages if you are unfamiliar).
node:<version>-onbuild
The ONBUILD image variants are deprecated, and their usage is discouraged. For more details, see docker-library/official-images#2076.
While the onbuild variant is really useful for "getting off the ground running" (zero to Dockerized in a short period of time), it's not recommended for long-term usage within a project due to the lack of control over when the ONBUILD triggers fire (see also docker/docker#5714, docker/docker#8240, docker/docker#11917).
Once you've got a handle on how your project functions within Docker, you'll probably want to adjust your Dockerfile to inherit from a non-onbuild variant and copy the commands from the onbuild variant Dockerfile (moving the ONBUILD lines to the end and removing the ONBUILD keywords) into your own file so that you have tighter control over them and more transparency for yourself and others looking at your Dockerfile as to what it does. This also makes it easier to add additional requirements as time goes on (such as installing more packages before performing the previously-ONBUILD steps).
node:<version>-slim
This image does not contain the common packages contained in the default tag and only contains the minimal packages needed to run node. Unless you are working in an environment where only the node image will be deployed and you have space constraints, we highly recommend using the default image of this repository.
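For instance, a hedged sketch of a Dockerfile for the Express back-end on the Alpine variant (the server.js entry point and port are assumptions):
FROM node:8.11.1-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]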

Resources