How to distribute docker-compose files?

I've managed to create a docker-compose file which runs my application. Now I'm wondering if there's a standard way to distribute this file? With plain docker I would distribute the image built from my Dockerfile by uploading it to Docker Hub; can I also upload docker-compose files to Docker Hub?
What would the deployment flow look like here?

You can push single images to Docker Hub, but you can't push a docker-compose file to Docker Hub. The approach I've seen most often is:
Create a GitHub repository containing your project (including the docker-compose file)
Explain in a README.md how to build the different images
Push each image to Docker Hub and link the Docker Hub images back to your git repository, so people can review the whole stack
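
For example, the docker-compose.yml you ship in the repository can then reference only the published images, so users never have to build anything locally. A minimal sketch (the image names are placeholders):

version: "3"
services:
  web:
    image: myhubuser/myapp-web:latest   # pulled from Docker Hub
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example

Anyone who clones the repository can then bring up the whole stack with a single docker-compose up.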

Related

`gcloud builds submit` for Cloud Run

I ended up in this situation because the documentation was not clear. The gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld command will archive the contents of my source folder and then run the docker build on the Google build server.
It also only looks at the .gitignore file when deciding what to archive; since it is a docker build, it should honor the .dockerignore file.
There is also no word about how to compile the application. If the application is not precompiled, it has to be compiled before it is dockerized. The quick guide simply assumes the application is precompiled and that everything in the folder (minus what .gitignore excludes) is all that is needed to run it. People new to the technology will not be aware of all that; I had to figure it out by myself.
So the alternative is to either include the build steps in the Dockerfile (which will make my image heavy), or create a docker image locally (manually), push it to the repository (manually), and then publish it to Cloud Run (using the second documented command, or manually).
Is there anything I am missing here?
Cloud Build respects .dockerignore. It will upload all files that are not in .gitignore, but once uploaded, it will respect .dockerignore when deciding which files to use for the build.
Compiling your application is usually done at the same time as "containerizing" it. For example, for a Node.js app the Dockerfile runs npm install --production. I recommend looking at the many examples in the quickstart.
I think you've got it; essentially your options are:
Building using Cloud Build
Building locally and pushing using Docker
Generally, if you need additional build steps, I would recommend including them in your Dockerfile, as sketched below. Ideally you should be able to go from source plus Dockerfile to a complete image in either case.
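
If the worry is that in-Dockerfile build steps make the image heavy, a multi-stage build keeps the compile toolchain out of the final image. A minimal sketch for a Node.js app, assuming the project defines an npm run build script that emits to dist/:

# build stage: full toolchain, compiles the app
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# runtime stage: only production dependencies and the build output
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]

With a Dockerfile like that, both paths produce the same image:

$ gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld

or, building locally and pushing yourself:

$ docker build -t gcr.io/[PROJECT-ID]/helloworld .
$ docker push gcr.io/[PROJECT-ID]/helloworld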

What is the recommended way of adding documentation to docker images

It seems like there are two ways to add documentation to a docker image:
You can add a readme.md in the root folder (where your Dockerfile is located), and this is meant to be parsed by the Docker Hub automated build system.
The second way is by using the manifest
https://docs.docker.com/docker-hub/publish/publish/#prepare-your-image-manifest-materials
But the documentation doesn't really explain well how to annotate the manifest file for an image. Also it looks like the manifest command is considered experimental.
What is the recommended way of documenting a docker image?
Personally, I prefer not having to add documentation while the container is being built; I would much rather keep a file in source control. However, the md file method seems to have minimal support.
Most modern container registries (like Docker Hub, Quay, Harbor) have a web interface that can render and display documentation in Markdown format. When you use automated builds on Docker Hub from a GitHub repo, the git repo's README.md can be synced automatically to the repository on Docker Hub. If you build your images locally (or via a CI runner) and push them to Docker Hub, you can also push the README file using the docker-pushrm tool, which supports container registries other than Docker Hub as well.
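
As far as I know, docker-pushrm installs as a Docker CLI plugin and, by default, pushes the README.md from the current directory as the repository description (the repo name below is a placeholder):

$ ls
README.md
$ docker pushrm my-user/my-repo

This fits nicely at the end of a CI job, right after docker push.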

Web development workflow using Github and Docker

I learnt the basics of github and docker and both work well in my environment. On my server, I have project directories, each with a docker-compose.yml to run the necessary containers. These project directories also have the actual source files for that particular app which are mapped to virtual locations inside the containers upon startup.
My question now is: how do I create a professional workflow to encapsulate all of this? Should the whole directory (including the docker-compose files) live on GitHub? Each time changes are made, I would then push the code to my remote, SSH to the server, pull the latest files, and rebuild the containers. This rebuilding of course means pulling the required images from Docker Hub each time.
Should the whole directory (including the docker-compose files) live on github?
It is best practice to keep all source code, including Dockerfiles and configuration, versioned. So you should put the source code, the Dockerfile, and the docker-compose file in a git repository. This is very common for projects on GitHub that ship a docker image.
Thus each time changes are made I push the code to my remote, SSH to the server, pull the latest files and rebuild the container
Ideally this process should be encapsulated in a CI workflow using a tool like Jenkins: you push the code to the git repository, which triggers a Jenkins job that compiles the code, builds the image, and pushes the image to a docker registry.
This rebuilding of course means pulling the required images from dockerhub each time.
Docker is smart enough to cache base images it has already pulled, so it will only pull each base image once, on the first build.
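
The CI job itself typically reduces to a few shell steps; a hedged sketch (image name and compose setup are placeholders):

# run by the CI job on every push
docker build -t myhubuser/myapp:latest .
docker push myhubuser/myapp:latest

# then on the server (or via ssh from the CI job):
docker-compose pull
docker-compose up -d

The pull/up pair only downloads layers that changed, so the base-image caching mentioned above applies on the server as well.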

How do you put your source code into Kubernetes?

I am new to Kubernetes, so I'm wondering what the best practices are for getting your app's source code into a container run in Kubernetes or a similar environment?
My app is a PHP so I have PHP(fpm) and Nginx containers(running from Google Container Engine)
At first I used a git volume, but there was no way to switch app versions that way, so I moved to an emptyDir volume with the source code shipped as a zip archive inside one of the images, which would unzip it into the volume on startup. Now I keep the source code in both images via git, with a separate git directory, so I have /app and /app-git.
This is good because I do not need to share or configure volumes (fewer resources and less configuration), the app's layer is reused in both images so there is no impact on image size, and since it is git, the "base" is built in, so I can simply adjust the command at the end of my Dockerfile to switch to a different branch or tag easily.
I wanted to download an archive with the source code directly from the repository by passing credentials as arguments during the build, but that did not work because my repo host, Bitbucket, creates archives with the last commit id appended to the directory name, so there was no way of knowing what unpacking the archive would produce. So I was stuck with git itself.
What are your ways of handling the source code?
Ideally, you would use continuous delivery patterns, i.e. use Travis CI, Bitbucket Pipelines, or Jenkins to build the image on every code change.
That is, every time your code changes, the automated build gets triggered and builds a new Docker image containing your source code. You can then trigger a Deployment rolling update to move the Pods onto the new image.
If you have dynamic content, you would likely put it on persistent storage, which is re-mounted on Pod update.
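
Once the new image is pushed, the rolling update itself is a one-liner (deployment, container, and image names are hypothetical):

$ kubectl set image deployment/my-php-app app=myhubuser/my-php-app:1.3
$ kubectl rollout status deployment/my-php-app

Kubernetes then replaces the Pods one by one with the new image.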
What we've done traditionally with PHP is an overlay at runtime: the container gets a volume mounted into it with deploy keys for your git repo, which allows you to perform git pull operations.
The more buttoned-up approach is to have custom, tagged images of your code, extended from fpm or whatever base image you're using. That way you would run version 1.3 of YourImage, where YourImage contains code version 1.3 of your application, as sketched below.
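
A minimal sketch of such a code-baked image (base image, paths, and tag are illustrative):

# Dockerfile: bake the application code into an image extended from fpm
FROM php:8.2-fpm
WORKDIR /var/www/html
COPY . .

# build and tag so the image version tracks the code version:
#   docker build -t myorg/myapp:1.3 .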
Try to leverage continuous integration and continuous deployment. You can use Jenkins as the CI/CD server and create jobs for building the image, pushing the image, and deploying it.
I recommend putting your source code into the docker image instead of a git repo, and extracting configuration files out of the docker image. Kubernetes v1.2 introduced the ConfigMap feature, so we can put configuration files in a ConfigMap; when a pod runs, the configuration files are mounted automatically. It's very convenient.
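
A hedged sketch of that pattern (all names are placeholders): the ConfigMap carries the config file, and the Pod mounts it at the path the app reads from.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.ini: |
    debug = false
---
apiVersion: v1
kind: Pod
metadata:
  name: my-php-app
spec:
  containers:
  - name: app
    image: myorg/myapp:1.3
    volumeMounts:
    - name: config
      mountPath: /etc/myapp   # app.ini appears here
  volumes:
  - name: config
    configMap:
      name: app-config

Updating the ConfigMap then changes the configuration without rebuilding the image.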

How can I structure my docker projects for easy deployment?

Right now I have multiple components of my application in the same folder linked together with a docker-compose
This works really well in development, but when I want to push to production it gets fuzzy. If I keep this structure, I cannot use Docker Hub alone to host my images, because the docker-compose file that links them would be missing. And if I use git to pull down my docker-compose file, what would be the point of Docker Hub? Why not just clone my whole repo and run docker-compose up each time?
I could alternatively store each component in its own GitHub repo, pushing it to Docker Hub whenever master is updated, and then combine the components from the Hub with a docker-compose file. This seems less than ideal too, since one would have to clone and push to several different repos to make a change that affects the whole system.
How do you do it?
I split things in two parts: source code, and config files (Dockerfiles, docker-compose files, ...).
I put the Dockerfile and docker-compose file in a folder structured like yours and push it to a git repository. The source code (and other data) I manage by hand, with separate git repositories for the source code to push and pull from each time it needs updating.
Be careful with the production server: update just the small part that changed instead of the whole server.
Check out the new (still experimental) docker-app (June 2018).
It will allow you to push your docker-compose file to Docker Hub, as well as launch your app (through docker-compose) with settings that vary between dev and prod.
See example:
You can create an Application Package based on this Compose file:
$ docker-app init --single-file hello
$ ls
docker-compose.yml
hello.dockerapp
The new file hello.dockerapp contains three YAML documents:
metadata
the Compose file
settings for your application
See "Sharing your application on the Hub"
You can push any application to the Hub using docker-app push:
$ docker-app push --namespace myHubUser --tag latest
This command will create an image named myHubUser/hello.dockerapp:latest on your local Docker daemon, and push it to the Hub.
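
If I remember the experimental CLI correctly (the subcommands changed between releases), the consumer side then looked roughly like this: render the package back into a Compose file and run it.

$ docker-app render myHubUser/hello.dockerapp:latest > docker-compose.yml
$ docker-compose up -d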
