I'd like some opinions on this workflow. The intention is to semi-automate and revision control the creation/export of docker containers.
I have some Docker directories, each with a Dockerfile etc. inside (enough to build a Docker image from). At the moment, each of these becomes a local git repo, and I set up a corresponding bare repo on a remote server. I then add an 'update' hook to the remote repo that takes the repo's name and calls a script which clones the repo, builds the Docker image, starts a container, exports the container, and deletes the clone. I end up with a .tar of my Docker container every time I push an update to that repo.
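For illustration, such an update hook might look something like this (the paths and export location are placeholders, not my exact script):
#!/usr/bin/env bash
# update hook arguments: <refname> <oldrev> <newrev>
repo=$(basename "$PWD" .git)        # name of the bare repo
workdir=$(mktemp -d)

unset GIT_DIR                       # keep the hook's GIT_DIR from confusing the clone
git clone "$PWD" "$workdir/$repo"

docker build -t "$repo" "$workdir/$repo"
cid=$(docker run -d "$repo")
docker export "$cid" > "/srv/exports/$repo.tar"

docker rm -f "$cid"
rm -rf "$workdir"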
The only issue is that I have to manually copy the hook to each remote repo I set up (since .git/hooks isn't pushed from the local repo).
So I'm looking for feedback on whether this approach makes sense or whether I'm going about it completely the wrong way.
What you are looking for is called "Continuous Integration".
There are multiple ways to achieve it, but here's how I do it:
Set up a Jenkins server
Put all docker files into one git repo, as modules if necessary
Have Jenkins check for changes in the repo every few minutes
Have Jenkins build the docker images after pulling in the changes
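For example, with a freestyle job that polls the repo on a cron-style schedule (e.g. H/5 * * * *), the shell build step can be as simple as this (image names and paths are placeholders):
# build the image(s) that live in this repo; $BUILD_NUMBER is provided by Jenkins
docker build -t my-company/service-a:"$BUILD_NUMBER" services/service-a
# optionally archive an image tarball, similar to the export step in the question
docker save -o service-a-"$BUILD_NUMBER".tar my-company/service-a:"$BUILD_NUMBER"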
Is there an AWS CLI command to simply load my image tar into AWS? (I do not have Docker on my computer at all, nor do I want to increase the image size of our CD pipeline; I prefer to keep it very small, since a larger image adds to the startup time in CircleCI/GitHub CI.)
Is there a way to do this?
If not, what is the way to do this with Docker so that I do not have to load the image into a local Docker registry and can instead push it directly to AWS ECR?
Context:
Our CI job builds on the PR and writes a version/build number onto the image and into a file on the PR as well, but none of this should be deployed until it is on the master branch. A PR cannot merge into master until it is up to date with master, and a merge of master into the PR triggers CI again, so the PR is guaranteed to be up to date when it lands on master. This CI job produces the artifact that can be deployed by CD (i.e. there is no need to rebuild it all over again, which sometimes takes a while).
Our CD job triggers on merge to master, reads the artifacts, and needs to deploy them to AWS ECR. The image for our CD is very small, containing just the AWS tooling right now (no need for Java, no need for Gradle, etc.).
I am sorry because I don't fully understand your requirements, but maybe you could use tools like skopeo in some way.
Among other things, it will allow you to copy your container tar to different registries including AWS ECR without the need to install docker, something like this:
skopeo login -u AWS -p $(aws ecr get-login-password) 111111111.dkr.ecr.us-east-1.amazonaws.com
skopeo copy docker-archive:build/your-jib-image.tar docker://111111111.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
This approach is also documented as a possible solution in a GitHub issue in the GoogleContainerTools repository.
In the issue they recommend another tool named crane, which looks similar in purpose to skopeo, although I have never tested it.
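Untested, but going by crane's documentation the equivalent would be roughly:
# log in to ECR and push the tarball directly with crane (untested)
crane auth login 111111111.dkr.ecr.us-east-1.amazonaws.com -u AWS -p "$(aws ecr get-login-password)"
crane push build/your-jib-image.tar 111111111.dkr.ecr.us-east-1.amazonaws.com/myimage:latest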
I have a single git repository on github with:
a Dockerfile which builds multiple images meant to be used together (a Maven build produces a war file and sql files by downloading them from artifact repositories; a multi-stage build then creates a slim Tomcat image with the war and a slim MySQL image with the sql data preloaded).
a docker-compose.yml file that uses the "target" instruction to build and run containers on the images from the multi-stage build.
This works well during development, but it forces users to build the images on their computers. I want users to be able to just download the images. The setup should also use Docker Hub's automated builds to keep the images up to date.
How can I set it up to achieve this? What command(s) or file(s) do I give the users to allow them to download the images and run the containers? If it is not possible, what can I do to make it possible (split the repo? copy-paste the Dockerfile? publish intermediate images to Docker Hub and ensure a correct build order? not use Docker Hub's automated builds?)
To use dockerhub's automated builds you would need to build one image per Dockerfile and have one Dockerfile per repo. The image name comes from the source repository name (with the github org/user name as the docker image user name and the github repo name as the docker image name). Multistage builds work in automated builds but only one image is published per Dockerfile (the final image of the build).
You could build the images in your CI or even on your local machine and then push them to dockerhub. You'd just need to have an account on dockerhub and be logged in to that account when you use the docker push command. When doing this push there doesn't have to be any mapping to GitHub repos, but your image names should start with <dockerhub_user>/ as a kind of prefix (explained at https://docs.docker.com/docker-hub/repos/). It's OK if they were built under a different name, as you can rename by retagging before pushing. This way you can also build the images however you like.
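For example, a local build, retag, and push could look like this (myapp is a placeholder name):
docker login                                      # log in to your dockerhub account
docker build -t myapp .                           # build under any local name
docker tag myapp <dockerhub_user>/myapp:latest    # retag with the <dockerhub_user>/ prefix
docker push <dockerhub_user>/myapp:latest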
When you have images in dockerhub you can just refer to them in the docker-compose file using the form image: <dockerhub_user>/<dockerhub_image_name>:<tag>. The images will automatically be pulled when the user does docker-compose up.
Here are some tips and links that should help your situation:
Automated builds are a convenient way to deploy your images.
This part is pretty easy. You'll need accounts with Docker Hub and Github. You can register these accounts for free.
When you create a repository on Docker Hub you can link it to your Github repository to automate the build.
Recommendations:
Split your services into separate Dockerfiles. Ideally you should use separate repositories: Docker Compose will pull them together at the end. A division of services will also help if anyone wants to implement e.g. a cloud database backend for their deployment.
Don't store database files inside a container. Containers should be ephemeral.
For a robust design, test your builds.
Docker Hub automated builds are very flexible with the use of build hooks.
This part is a little tricky because I haven't found the best documentation. It also might not be necessary if you split your Dockerfile.
I've successfully created automated builds with multiple tags and targets using a hook at hooks/build but after reading the documentation it looks like you should also be able to use hooks/post_build.
Your hook could simply build the correct layer and push the tag to Docker Hub.
For your repository that should look like:
#!/usr/bin/env bash
# hooks/build: build only the lutece-mysql target and push it to Docker Hub
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
If you end up using hooks/build you might need to build the final target as the last step.
Recommendations:
If you need multiple tags for an image use a hook at hooks/post_push to add additional tags. That way each tag should link users to the same image. e.g.
#!/usr/bin/env bash
# hooks/post_push: add a latest tag that points at the same image and push it
docker tag lutece/tomcat:master lutece/tomcat:latest
docker push lutece/tomcat:latest
Additionally you can use build hooks to label your image with things like build date and git commit.
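For example, a hooks/build script could record that metadata as labels using the environment variables Docker Hub provides during automated builds (SOURCE_COMMIT and IMAGE_NAME here; check the advanced build docs for the exact names):
#!/usr/bin/env bash
# hooks/build: attach build date and git commit as image labels
docker build \
  --label build-date="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --label vcs-ref="$SOURCE_COMMIT" \
  -t "$IMAGE_NAME" .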
Deployment with Docker Compose
Unfortunately I haven't done this part so I can't confirm how to get this to work.
With your repositories in Docker Hub and a working docker-compose.yml your clients may only need to run docker-compose up in the directory with your docker-compose.yml file. Docker Compose should pull in the images from Docker Hub.
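Assuming the images are public (or the user has run docker login), it should just be:
docker-compose pull    # fetch the published images from Docker Hub
docker-compose up -d   # start the containers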
I have multiple projects I need to build as part of the same CI flow - some are in java, some are nodejs, some are c++ etc.
We use Jenkins and slaves are supposed to run as docker containers.
My question is: should I create a Jenkins slave container image per module type, i.e. a dedicated slave image able to build Java, another with Node installed to build Node.js, etc., or a single container which can build anything (Java, Node, etc.)?
If I look at it from a VM perspective, I would most likely use the same VM to build anything, which means a centralized build slave. But I don't like this dependency; also, if tomorrow I need to update the Java version while keeping the old one, I might end up creating huge images with little difference between them.
WDYT?
I personally would go down the route of a container-per-module-type because of the following:
I like to keep my containers as focussed as possible. They should do one thing and do it well e.g. build Java applications, build Node applications
Docker makes it incredibly easy to build Container images
It is incredibly easy to stop and start Containers
I'd probably create myself a separate project in Git that was structured something like this:
- /slaves
- /slaves/java
- /slaves/java/Dockerfile
- /slaves/node
- /slaves/node/Dockerfile
...
Each directory has one Dockerfile that builds the container image of the slave for the given "module type". I would make changes to this project via pull requests, and each time a pull request is merged into master, push the resulting images up to DockerHub as the new versions to be used as my Jenkins slaves.
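For illustration, slaves/java/Dockerfile might look something like this (the base image and the tools installed are placeholders):
# Illustrative only: a Java build agent based on a standard Jenkins agent image
FROM jenkins/inbound-agent:latest
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends maven \
 && rm -rf /var/lib/apt/lists/*
USER jenkins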
I would have the above handled by another project running in my Jenkins instance that simply monitored my Git repository. When changes are made to the Git repository it just runs the build commands in order and then does a push of the new images to DockerHub:
docker build -f slaves/java/Dockerfile -t my-company/java-slave:$BUILD_NUMBER -t my-company/java-slave:latest
docker build -f slaves/node/Dockerfile -t my-company/node-slave:$BUILD_NUMBER -t my-company/node-slave:latest
docker push my-company/java-slave:$BUILD_NUMBER
docker push my-company/java-slave:latest
docker push my-company/node-slave:$BUILD_NUMBER
docker push my-company/node-slave:latest
You can then update your Jenkins configuration to use the new images for the slaves when you're ready.
What is the right workflow for updating and storing images?
For example:
I download source code from GitHub (project with Docker files, docker-compose.yml)
I run "docker build"
And I push new image to Docker Hub (or AWS ECR)
I make some changes in source code
Push changes to GitHub
And what should I do now to update the registry (Docker Hub)?
A) Should I run "docker build" again and then push the new image (with a new tag) to the registry?
B) Should I somehow commit the changes to the existing image and update the existing image on Docker Hub?
This will depend on what you will use your Docker image for and what release policy you adopt.
My recommendation is that you sync the tags you keep on Docker Hub with the releases/tags you have in GitHub, and automate as much of the process as you can with a continuous integration tool like Jenkins and GitHub webhooks.
Then your flow becomes:
You make your code modifications and integrate them in GitHub, ideally using a pull request scheme. This means your code gets merged into your master branch.
Your Jenkins is configured so that when master changes it builds against your Dockerfile and pushes the result to Docker Hub. This overwrites your "latest" tag and makes sure the latest tag in Docker Hub is always in sync with your master branch on GitHub.
If you need to keep additional tags, typically because of different branches or releases of your software, you do the same as above with the tag hooked up through Jenkins and GitHub webhooks to a non-master branch. For this, take a look at how the official libraries are organized on GitHub (for example the Postgres or MySQL images).
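As a minimal sketch, the shell step Jenkins runs on a master build could look like this (the image name is a placeholder; GIT_COMMIT is set by the Jenkins git plugin):
# build once, tag as latest and with the commit hash, then push both tags
docker build -t <dockerhub_user>/myapp:latest -t <dockerhub_user>/myapp:"$GIT_COMMIT" .
docker push <dockerhub_user>/myapp:latest
docker push <dockerhub_user>/myapp:"$GIT_COMMIT"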
I already have a build server that I use to generate a Docker image for an application, which I then put into cloud storage. This is not an image that can be publicly shared on the docker index.
How can I run this application docker image in deis?
Deis is designed to build your Docker image from your git repo via a buildpack or Dockerfile (although I can't find instructions on how to use a Dockerfile instead of a buildpack). This could be considered a legacy integration issue. However, the current setup of running the build service on the application cluster is not good for me, because I want my build server to be a lot more powerful than my application server. Ideally my build server would spin up on demand, although I won't bother with that right now.
We are hoping to resolve this feature request with https://github.com/deis/deis/issues/533.
Ideally we see it as "build your image with - insert CI product here - then run deis push --app=appname to deploy your docker image as an application". After that, it would be treated the same as any other application deployed to deis. Basically, deis push is to pushing docker images as git push is to pushing repositories.
In regards to documentation for deploying an application with a Dockerfile, the docs are at http://docs.deis.io/en/latest/developer/dockerfile/, though this workflow will change back to a saner deployment workflow once https://github.com/deis/deis/pull/967 is merged. There was some technical debt from v0.8.0, and Dockerfile deployments were part of it.
Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile
The quote is not quite right. Deis is actually designed to build the Docker image from its own git repo. When you create a Deis application using deis create, Deis will create a new git remote named deis; that's why you run git push deis master to build your application.
So you don't need to push your image to a public repository in order to deploy to Deis. All you need is a Dockerfile. Just put your Dockerfile in the root directory of your application and make sure to commit that file; Deis will build the application using the Dockerfile instead of a buildpack.
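A minimal sketch of that flow (the app name is a placeholder):
deis create myapp      # creates the app and adds a git remote called "deis"
git add Dockerfile && git commit -m "Add Dockerfile"
git push deis master   # Deis builds from the Dockerfile and deploys the app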
Hope this helps!