Deploying an existing docker image with Deis

I already have a build server that generates a docker image for an application and then puts it into cloud storage. This is not an image that can be publicly shared on the docker index.
How can I run this application docker image in deis?
Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile (although I can't find instructions on how to use a Dockerfile instead of a buildpack). This could be considered a legacy integration issue. However, the current setup of running the build service on the application cluster doesn't work for me, because I want my build server to be much more powerful than my application server. Ideally my build server would spin up on demand, although I'm not bothering with that right now.

We are hoping to resolve this feature request with https://github.com/deis/deis/issues/533.
Ideally we see it as "build your image with - insert CI product here - then run deis push --app=appname to deploy your docker image as an application". After that, it would be treated the same as any other application deployed to deis. Basically, deis push is to pushing docker images as git push is to pushing repositories.
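As a rough sketch of that proposed flow from a CI server (the image name is a placeholder, and deis push is still just the proposal from the issue, not a shipped command):
# build and publish the image from the CI server
docker build -t registry.example.com/myapp:v42 .
docker push registry.example.com/myapp:v42
# then deploy it to Deis (proposed command; presumably it would also take the image reference)
deis push --app=appname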
In regards to documentation for deploying an application with a Dockerfile, the docs are at http://docs.deis.io/en/latest/developer/dockerfile/, though this workflow will change back to a saner deployment flow once https://github.com/deis/deis/pull/967 is merged. There was some technical debt from v0.8.0, and Dockerfile deployments were part of it.

Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile
The quote is not quite right. Deis is actually designed to build the docker image from its own git repo. When you create a Deis application using deis create, Deis creates a new git remote named deis; that's why you run git push deis master to build your application.
So you don't need to push your image to a public repository in order to deploy to Deis. All you need is a Dockerfile. Just put the Dockerfile in the root directory of your application and make sure to commit it, and Deis will build the application using the Dockerfile instead of a buildpack.
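For illustration, the Dockerfile-based flow looks roughly like this (the app name is a placeholder; deis create sets up the deis remote for you):
deis create myapp          # creates the app and adds the 'deis' git remote
git add Dockerfile
git commit -m "Add Dockerfile"
git push deis master       # Deis builds from the Dockerfile and deploys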
Hope this will help!

Related

dockerhub automated build from single repo with single dockerfile building multiple images

I have a single git repository on github with:
a Dockerfile which builds multiple images meant to be used together (a maven build produces a war file and sql files by downloading them from artifact repositories; a multi-stage build then creates a slim tomcat image with the war and a slim mysql image with the sql data preloaded).
a docker-compose.yml file that uses the "target" instruction to build and run containers on the images from the multi-stage build.
This works well during development. But it forces users to build images on their computer. I want the users to be able to only download images. The setup should also be using dockerhub's automated build to keep images up to date.
How can I set it up to achieve this? What command(s) or file(s) do I give the users to allow them to download images and run containers? If it is not possible, what can I do to make it possible (split the repo? copy-paste the dockerfile? publish intermediate images to dockerhub and ensure a correct build order? not use dockerhub's automated build?)
To use dockerhub's automated builds you would need to build one image per Dockerfile and have one Dockerfile per repo. The image name comes from the source repository name (with the github org/user name as the docker image user name and the github repo name as the docker image name). Multistage builds work in automated builds but only one image is published per Dockerfile (the final image of the build).
You could build the images in your CI or even on your local machine and then push to dockerhub. You'd just need to have an account on dockerhub and be logged in to that account when you use the docker push command. When doing this push there doesn't have to be any mapping to GitHub repos but your image names should start with <dockerhub_user>/ as a kind of prefix (explained at https://docs.docker.com/docker-hub/repos/). It's ok if they are built under a different name as you could rename by retagging before pushing. This way you can also build the images however you like.
When you have images in dockerhub you can just refer to them in the docker-compose file using the form image: <dockerhub_user>/<dockerhub_image_name>:<tag>. The images will automatically be pulled when the user does docker-compose up.
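A sketch of that manual build-and-push route (the target, user and image names are placeholders):
docker build --target myservice -t myservice .            # build one stage of the multi-stage Dockerfile
docker tag myservice mydockerhubuser/myservice:latest     # retag with your Docker Hub user as the prefix
docker login
docker push mydockerhubuser/myservice:latest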
Here are some tips and links that should help your situation:
Automated builds are a convenient way to deploy your images.
This part is pretty easy. You'll need accounts with Docker Hub and Github. You can register these accounts for free.
When you create a repository on Docker Hub you can link it to your Github repository to automate the build.
Recommendations:
Split your services into separate Dockerfiles. Ideally you should use separate repositories: Docker Compose will pull them together at the end. A division of services will also help if anyone wants to implement e.g. a cloud database backend for their deployment.
Don't store database files inside a container; containers should be ephemeral.
For a robust design, test your builds.
Docker Hub automated builds are very flexible with the use of build hooks.
This part is a little tricky because I haven't found the best documentation. It also might not be necessary if you split your Dockerfile.
I've successfully created automated builds with multiple tags and targets using a hook at hooks/build but after reading the documentation it looks like you should also be able to use hooks/post_build.
Your hook could simply build the correct layer and push the tag to Docker Hub.
For your repository that should look like:
#!/usr/bin/env bash
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
If you end up using hooks/build you might need to build the final target as the last step.
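For example, a hooks/build along these lines could cover both images (whether Docker Hub exposes the final image name as $IMAGE_NAME in the hook environment is my assumption; the target name comes from the earlier snippet):
#!/usr/bin/env bash
# build and push the secondary target ourselves
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
# build the final stage last, under the name Docker Hub expects to push itself
docker build -t $IMAGE_NAME .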
Recommendations:
If you need multiple tags for an image use a hook at hooks/post_push to add additional tags. That way each tag should link users to the same image. e.g.
#!/usr/bin/env bash
docker tag lutece/tomcat:master lutece/tomcat:latest
docker push lutece/tomcat:latest
Additionally you can use build hooks to label your image with things like build date and git commit.
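For instance, a hooks/build could pass that metadata as image labels (the label keys are arbitrary, and SOURCE_COMMIT being available in the hook environment is an assumption):
#!/usr/bin/env bash
docker build \
  --label build-date="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --label vcs-ref="$SOURCE_COMMIT" \
  -t $IMAGE_NAME .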
Deployment with Docker Compose
Unfortunately I haven't done this part so I can't confirm how to get this to work.
With your repositories in Docker Hub and a working docker-compose.yml your clients may only need to run docker-compose up in the directory with your docker-compose.yml file. Docker Compose should pull in the images from Docker Hub.
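Assuming the images are public (or the user is logged in to Docker Hub), the user-facing part should reduce to something like:
docker-compose pull    # optional: fetch the published images explicitly
docker-compose up -d   # compose pulls any missing images from Docker Hub and starts the containers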

Using docker the way Openshift does?

I read this How does docker compare to openshift?
But I have a question:
This is an extremely simplified description of what devs usually do with Openshift:
Select a "pod" (let's say a JBoss/Wildfly container)
From within Openshift you point to your github repo
Openshift would clone the repo, build it and deploy it
Openshift presents you with a web URL to access this app on port 8080
There's of course a lot more going on but that's as simple as it gets
Is this setup doable on my own linux box, VM or cloud instance (Docker container --> clone, build and deploy from a git repo)? What would I need, without messing too much with networking, domains, etc.?
from my research I see the following tools:
Kubernetes
Dokku: I see it described as "your own Heroku" (a minimal flow is sketched after this list)
I also keep hearing about CaaS (Containers as a Service)
I understand I would need another tool or process for the build (CI/CD) capability, and for triggering builds with git push.
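To make the Dokku option from the list above concrete, its basic flow looks roughly like this (host and app names are placeholders; Dokku builds via a buildpack or a Dockerfile found in the repo and then prints the app's URL):
# on the server, once Dokku is installed
dokku apps:create myapp
# on your machine
git remote add dokku dokku@my-server.example.com:myapp
git push dokku master    # Dokku clones, builds and deploys, then prints the app URL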

How to deploy/run a Docker image from a build server

Having built, run and executed tests against a docker image on a CI build server (TeamCity 2017), how should we deploy it to further machines?
For example, if we push it to a Docker registry, how would our CI server instruct the target machine to pull and run the image? Were it an ordinary application we would use Octopus for this deployment step, but our Octopus server doesn't support Docker deployments as yet.
Any guidance appreciated.
Michael McD.
I would use Octo to deploy your images onto target machines. You'd need to use PowerShell scripts to have your machines run the images. Or you can use something like Rancher, which is a docker swarm manager. There is no feasible way to have TeamCity deploy your images; the software simply isn't built to do deploys.
The Rancher solution would not be automated, at least not to my knowledge. You would have to trigger upgrades when a new image is pushed to the docker registry.
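Whichever tool triggers it, the step that runs on the target machine is essentially a pull-and-restart script along these lines (image and container names are placeholders; in Octopus it would typically be wrapped in a PowerShell step):
docker pull registry.example.com/myapp:latest
docker stop myapp || true      # stop and remove the previous container, if any
docker rm myapp || true
docker run -d --name myapp -p 80:8080 registry.example.com/myapp:latest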

Docker: updating image and registry

What is the right workflow for updating and storing images?
For example:
I download source code from GitHub (project with Docker files, docker-compose.yml)
I run "docker build"
And I push the new image to Docker Hub (or AWS ECR)
I make some changes in the source code
Push changes to GitHub
And what should I do now to update the registry (Docker Hub)?
A) Should I run "docker build" again and then push the new image (with a new tag) to the registry?
B) Should I somehow commit the changes to the existing image and update the existing image on Docker Hub?
This will depend on what you will use your docker image for and what "release" policy you adopt.
My recommendation is that you sync the tags you keep on Docker Hub with the releases/tags you have in GitHub, and automate as much of your production as you can with a continuous integration tool like Jenkins and GitHub webhooks.
Then your flow becomes:
You do your code modifications and integrate them in GitHub, ideally using a pull request scheme. This means your code will be merged into your master branch.
Your Jenkins is configured so that when master changes it will build against your Dockerfile and push the image to Docker Hub. This will overwrite your "latest" tag and make sure the latest tag on Docker Hub is always in sync with your master branch on GitHub.
If you need to keep additional tags (typically for different branches or releases of your software), you'll do the same as above, with the tag hooked up through Jenkins and GitHub webhooks to a non-master branch. For this, take a look at how the official libraries are organized on GitHub (for example the Postgres or MySQL images).
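So in practice it is option A, automated: on each change to master the Jenkins job runs roughly the following (user, image and version are placeholders):
docker build -t mydockerhubuser/myapp:latest .
docker push mydockerhubuser/myapp:latest
# for a release branch or git tag, push an additional versioned tag as well
docker tag mydockerhubuser/myapp:latest mydockerhubuser/myapp:1.2.0
docker push mydockerhubuser/myapp:1.2.0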

Git to docker export

I'd like some opinions on this workflow. The intention is to semi-automate and revision control the creation/export of docker containers.
I have some docker directories with a Dockerfile etc. inside (enough to build a docker image from). At the moment I've set up a process where each of these becomes a local git repo, and I set up a corresponding bare repo on a remote server. Then I add an 'update' hook to the remote repo that takes the name of the repo and calls a script that clones the repo, builds the docker image, starts a container, exports the container, and deletes the repo. I end up with a .tar of my docker container every time I push an update to that repo.
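For what it's worth, the script that hook calls could look roughly like this (paths are placeholders, and it uses docker create rather than actually starting the container, since docker export works on a stopped container too):
#!/usr/bin/env bash
repo_name="$1"
workdir=$(mktemp -d)
git clone "/srv/git/${repo_name}.git" "$workdir"
docker build -t "${repo_name}:latest" "$workdir"
container_id=$(docker create "${repo_name}:latest")
docker export "$container_id" -o "/srv/exports/${repo_name}.tar"
docker rm "$container_id"
rm -rf "$workdir"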
The only issue is that I have to manually copy the hook to each remote repo I set up (considering .git/hooks doesn't get pushed from local).
So I'm looking for some feedback on whether this whole process has any intelligence to it, or if I'm going about it completely the wrong way.
What you are looking for is called "Continuous Integration".
There are multiple ways to achieve it, but here's how I do it:
Set up a Jenkins server
Put all Dockerfiles into one git repo, as modules if necessary
Have Jenkins check for changes in the repo every few minutes
Have Jenkins build the docker images after pulling in the changes
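The Jenkins build step itself can be a simple shell loop over the docker directories (the repo layout is an assumption):
#!/usr/bin/env bash
for dir in docker/*/; do
  name=$(basename "$dir")
  docker build -t "${name}:latest" "$dir"
done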
