Do I need to share the docker image if I can just share the docker file along with the source code? - docker

I am just starting to learn about Docker. Is a Docker repository (like Docker Hub) useful? I see a Docker image as a package of source code and environment configuration (the Dockerfile) for deploying my application. Well, if it's just a package, why can't I just share my source code with the Dockerfile (via GitHub, for example)? Then the user downloads it all and runs docker build and docker run, and there is no need to push the Docker image to a repository.

There are two good reasons to prefer pushing an image somewhere:
As a downstream user, you can just docker run an image from a repository, without additional steps of checking it out or building it.
If you're using a compiled language (C, Java, Go, Rust, Haskell, ...) then the image will just contain the compiled artifacts and not the source code.
Think of this like any other software: for most open-source things you can download its source from the Internet and compile it yourself, or you can apt-get install or brew install a built package using a package manager.
By the same analogy, many open-source things are distributed primarily as source code, and people who aren't the primary developer package and redistribute binaries. In this context, that's the same as adding a Dockerfile to the root of your application's GitHub repository, but not publishing an image yourself. If you don't want to set up a Docker Hub account or CI automation to push built images, but still want to have your source code and instructions to build the image be public, that's a reasonable decision.
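To make the trade-off concrete, here is what each distribution style looks like from the consumer's side (the image and repository names here are hypothetical):

```shell
# If a built image is published, a user needs a single step:
docker pull example/myapp:1.0
docker run --rm example/myapp:1.0

# If only the source and Dockerfile are shared, the user builds locally:
git clone https://github.com/example/myapp.git
cd myapp
docker build -t myapp:local .
docker run --rm myapp:local
```

Both end with the same running container; publishing an image simply moves the build cost from every consumer to the publisher.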

That is how it works. You keep the configuration files in your source code repository, i.e., the Dockerfile and docker-compose.yml.
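As a minimal sketch of what those two checked-in files might contain (all file contents here are hypothetical, for a generic Python web app):

```dockerfile
# Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

```yaml
# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "8000:8000"
```

Anyone who clones the repository can then run docker compose up --build without needing any image from a registry.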

Related

Docker dealing with images instead of Dockerfiles

Can someone explain to me why the normal Docker process is to build an image from a Dockerfile and then upload it to a repository, instead of just moving the Dockerfile to and from the repository?
Let's say we have a development laptop and a test server with Docker.
If we build the image, that means uploading and downloading all of the packages inside the Dockerfile. Sometimes this can be very large (e.g. PyTorch > 500MB).
Instead of transporting the large imagefile to and from the server, doesn't it make sense to, perhaps compile the image locally to verify it works, but mostly transport the small Dockerfile and build the image on the server?
This started out as a comment, but it got too long. It is unlikely to be a comprehensive answer, but it may contain useful information regardless.
Often the Dockerfile will form part of a larger build process, with output files from previous stages being copied into the final image. If you want to host the Dockerfile instead of the final image, you’d also have to host either the (usually temporary) processed files or the entire source repo & build script.
The latter is often done for open source projects, but for convenience pre-built Docker images are also frequently available.
One tidy solution to this problem is to write the entire build process in the Dockerfile using multi-stage builds (introduced in Docker CE 17.05 & EE 17.06). But even with the complete build process described in a platform-independent manner in a single Dockerfile, the complete source repository must still be provided.
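To illustrate, a minimal multi-stage Dockerfile for a C program built with a Makefile (the project layout is hypothetical) could look like:

```dockerfile
# Stage 1: build with the full toolchain
FROM gcc:13 AS build
WORKDIR /src
COPY . .
RUN make

# Stage 2: the shipped image contains only the compiled binary,
# not the compiler, headers, or source tree
FROM debian:bookworm-slim
COPY --from=build /src/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The final image stays small, but note the point above: building it still requires the full source repository as the build context.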
TL;DR: Think of a Docker image as a regular binary. It's convenient to download and install without messing around with source files. You could download the source for a C application and build it using the provided Makefile, but why would you if a binary were available for your system?
Instead of transporting the large imagefile to and from the server, doesn't it make sense to, perhaps, compile the image locally to verify it works, but mostly transport the small Dockerfile and build the image on the server?
Absolutely! You can, for example, set up an automated build on Docker Hub which will do just that every time you check in an updated version of your Dockerfile to your GitHub repo.
Or you can set up your own build server / CI pipeline accordingly.
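A minimal manual version of the same idea, assuming SSH access to the test server (the host name, paths, and tag are hypothetical):

```shell
# On the laptop: send only the small build context, not the image
scp -r Dockerfile src/ user@testserver:/tmp/myapp/

# On the server: build the image where it will actually run
ssh user@testserver 'cd /tmp/myapp && docker build -t myapp:test .'
```

The trade-off is that the server now re-downloads the base layers and packages itself, so this only wins when the built image is larger than those downloads.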
IMHO, one of the reasons for the image concept, and for putting images into a repository, is sharing with other people. For example, we use Python's out-of-the-box image in a Dockerfile to handle everything a Python program needs to run. Similarly, we can create custom images: for example, I did an Apache installation with some custom steps (changing ports and doing some additional setup), created an image from it, and finally pushed it to my company's repository.
I learned a few days later that many other teams were using it too, and now when it is shared they need not make any changes; they simply use my image and they are done.
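That sharing pattern can be sketched like this (the registry and image names are hypothetical):

```dockerfile
# Custom Apache image, built once and pushed to the company registry with:
#   docker build -t registry.example.com/infra/apache-custom:1.0 .
#   docker push registry.example.com/infra/apache-custom:1.0
FROM httpd:2.4
# custom steps: change the listening port and do some extra setup
RUN sed -i 's/^Listen 80$/Listen 8080/' /usr/local/apache2/conf/httpd.conf
EXPOSE 8080
```

Other teams then start their own Dockerfiles with FROM registry.example.com/infra/apache-custom:1.0 and inherit all of the custom steps without repeating them.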

Setting up multi-stage Docker build on Heroku

[Edit: It looks like my specific question is how to push a multi-stage Docker build to Heroku]
I'm trying to set up an NLP server using the spacy-api-docker GitHub repository.
The project README lists a base image (jgontrum/spacyapi:base_v2) with no included language models, as well as an English language model image (jgontrum/spacyapi:en_v2), which is what I'm looking for.
When I pull and run the English language image, the localhost API works perfectly. But when I try to build an image from the cloned GitHub repository, the main Dockerfile seems to build only the base model (which is useless), and when I follow the steps listed in this Heroku Docker documentation and this other third-party tutorial to push the container to Heroku, it also seems to use only that base Dockerfile: I can get the API running, but it's useless with no models.
The repository also has a number of shorter language-specific Dockerfiles in a subfolder, which I'm guessing need to be specified in some way? Just appending the English Dockerfile to the main Dockerfile didn't work, at any rate.
My guess is that I might have to:
a. figure out how to push an image from Docker Hub to Heroku without a repository (the only image that's worked completely I pulled directly from Docker Hub)
b. figure out how to make a repository from a pulled image, which I can then make into a Heroku project with heroku create
c. figure out how to specify the :en_v2 part when I build to Heroku from the repository (is that a Docker tag? does it somehow specify which additional Dockerfile to use?)
d. look into multi-stage Docker builds
I'm new to programming and have been banging my head against this for a while now, so would be very grateful for any pointers (and please pardon any terms I've used poorly, my vocabulary is pretty basic for this stuff).
Thanks!
If what you want to know is just how to set up a multi-stage build and how to build it, what I can offer is some sample code.
I'm also using multi-stage builds with Docker, because several containers are required on this system, and the related source code is shown below.
Multi-stage build in a Dockerfile:
https://github.com/hiromaily/go-goa/blob/master/docker/Dockerfile.multistage.heroku
How to deploy to Heroku from Travis CI, in my case:
https://github.com/hiromaily/go-goa/blob/master/.travis.yml
I didn't read carefully, so if I missed the point, please ignore me.
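As a generic sketch (not the exact file from the linked repository), a multi-stage Dockerfile that Heroku's container registry can build and run might look like:

```dockerfile
# Stage 1: compile the Go binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/server .

# Stage 2: minimal runtime image; Heroku injects $PORT at run time,
# so the server is expected to bind to it
FROM alpine:3.19
COPY --from=build /out/server /server
CMD ["/server"]
```

It would then typically be deployed with heroku container:push web -a <app> followed by heroku container:release web -a <app>.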

How to automate Multi-Arch-Docker Image builds

I have dockerized a Node.js app on GitHub. My Dockerfile is based on the official Node.js images. The official node repo supports multiple architectures (x86, amd64, arm) seamlessly. This means I can build the exact same Dockerfile on different machines, resulting in different images for the respective architectures.
So I am trying to offer the same architectures seamlessly for my app too. But how?
My goal is automate it as much as possible.
I know that in theory I need to create a Docker manifest, which acts as a Docker repo and redirects each end user's Docker client to the image suitable for their architecture.
Docker Hub itself can monitor a GitHub repo and kick off an automated build. That would take care of the amd64 image. But what about the remaining architectures?
There is also a service called Travis CI, which I guess could take care of the arm build with the help of QEMU.
Both repos could then be referenced statically by the manifest repo. But this still leaves a couple of architectures uncovered.
Using multiple services/ways of building the same app feels wrong, though. Does anyone know a better and more complete solution to this problem?
It's basically running the same Dockerfile through a couple of machines and recording the results in a manifest.
Starting with the Docker 18.02 CLI, you can create multi-arch manifests and push them to Docker registries if you have enabled client-side experimental features. I was able to use VSTS and create a custom build task for multi-arch tags after the build. I followed this pattern:
docker manifest create --amend {multi-arch-tag} {os-specific-tag-1} {os-specific-tag-2}
docker manifest annotate {multi-arch-tag} {os-specific-tag-1} --os {os-1} --arch {arch-1}
docker manifest annotate {multi-arch-tag} {os-specific-tag-2} --os {os-2} --arch {arch-2}
docker manifest push --purge {multi-arch-tag}
On a side note, I packaged the 18.02 docker CLI for Windows and Linux in my custom VSTS task so no install of docker was required. The manifest command does not appear to need the docker daemon to function correctly.
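Filled in with hypothetical tags, that sequence might look like:

```shell
docker manifest create --amend example/app:1.0 \
    example/app:1.0-linux-amd64 example/app:1.0-linux-arm64
docker manifest annotate example/app:1.0 example/app:1.0-linux-amd64 \
    --os linux --arch amd64
docker manifest annotate example/app:1.0 example/app:1.0-linux-arm64 \
    --os linux --arch arm64
docker manifest push --purge example/app:1.0
```

After the push, docker pull example/app:1.0 resolves to the right per-architecture image automatically.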

Build chain in the cloud?

(I understand this question is somewhat out of scope for Stack Overflow, because it contains multiple problems and is somewhat vague. Suggestions on how to ask it properly are welcome.)
I have some open source projects depending on each other.
The code resides on GitHub, and the builds happen on Shippable, using Docker images which in turn are built on Docker Hub.
I have set up an artifact repo and a Debian repository; Shippable builds put the packages there, and the Docker builds use them.
The build chain looks like this in terms of deliverables:
pre-zenta docker image
zenta docker image (two steps of docker build because it would time out otherwise)
zenta debian package
zenta-tools docker image
zenta-tools debian package
xslt docker image
adadocs artifacts
Currently I am triggering the builds by pushing to github and sometimes rerunning failed builds on shippable after the docker build ran.
I am looking for solutions for the following problems:
Where to put Dockerfiles? Now they are in the repo of the package that needs the resulting Docker image for its build. This way all the information needed to build the package is in one place, but sometimes I have to trigger an extra build to have the package actually built.
How to trigger build automatically?
..., in a way that supports git-flow? For example, if I change the code in the zenta develop branch, I want to make sure that zenta-tools builds and tests against the development version of it before merging to master.
Is there a tool with which I can get an overview of the health of the whole build chain?
Since your question is related to Shippable, I've created a support issue for you here: https://github.com/Shippable/support/issues/2662. If you are interested in discussing the best way to handle your scenario, you can also send me an email at support#shippable.com. You can set up your entire flow, including building the Docker images, using Shippable.

Microservice with long build time in Docker

We have an in-house C++ tool that we are investigating as a Docker microservice, and we're wondering if it's even a good idea.
The issue is that the tool has a lot of dependencies, including GDAL, which can take 30 minutes to download and compile.
Normally my provisioning steps would look like:
git clone gdal
./configure; make; make install;
git clone myTool
make myTool
My question is: how should I approach this problem using Docker? I can just put RUN statements in my Dockerfile, but then it takes a long time to build the image, and each one is 600 MB+. I'd like to know if there's a better way.
Create a separate base image for gdal then base your final images on that. This way you rarely have to rebuild gdal.
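That split could look like this (the image names are hypothetical), mirroring the provisioning steps from the question:

```dockerfile
# Dockerfile.gdal-base -- rebuilt rarely; tagged e.g. mycompany/gdal-base:3
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y build-essential git \
 && git clone https://github.com/OSGeo/gdal.git /src/gdal \
 && cd /src/gdal && ./configure && make && make install \
 && rm -rf /src/gdal
```

```dockerfile
# Dockerfile -- rebuilt on every change to the tool; reuses the base
FROM mycompany/gdal-base:3
COPY . /src/myTool
RUN make -C /src/myTool && make -C /src/myTool install
CMD ["myTool"]
```

Only the second, fast build runs in day-to-day development; the 30-minute GDAL build happens only when the base image itself changes.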
As for image size, there is currently no clean way to distinguish the build-time and runtime dependencies of a Docker image. At work we resorted to some bash glue that essentially enables nested Docker builds; for further details on that, you can check out this repo.
