Where do I save docker-compose files of a swarm? - docker

We are running Docker Swarm + Ceph. Everything runs fine. We plan to move more of our internally developed as well as third-party applications to Swarm. Now the issue I have is as follows:
Let’s say I deployed 10 third-party applications as Swarm stacks. I do this with the docker stack deploy command, supplying a docker-compose file to it (often itself composed of multiple docker-compose-*.yml files merged via docker-compose config). Now, if I want to change something in the stack, I often need my initial compose files. Is there any kind of registry for docker-compose deployment descriptors, like a Docker image registry but for compose descriptors?
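For context, the deploy flow I mean looks roughly like this (stack and file names here are just illustrative):
    # merge several compose files into one rendered descriptor...
    docker-compose -f docker-compose.yml -f docker-compose-prod.yml config > rendered.yml
    # ...and create or update the stack from it
    docker stack deploy -c rendered.yml mystack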
One of the ideas I have is to maintain a Git or Mercurial repository with some directory structure to version the descriptors. This idea looks interesting, but it works (semi-)well only for third-party applications. For our own applications it adds a problem, as we often use CI/CD: we would need to check out one more repository during deployment, replace the deployment descriptors for our apps and commit them back. This may be a little tricky, as it may potentially lead to merge conflicts, etc.
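To sketch the extra CI step this would imply (repository, paths and the $BUILD_ID variable are made up):
    # after rendering the descriptor, record it in the descriptors repository
    docker-compose -f docker-compose.yml -f docker-compose-prod.yml config > stacks/our-app/rendered.yml
    git add stacks/our-app/rendered.yml
    git commit -m "our-app: descriptor for build $BUILD_ID" || true   # no-op when nothing changed
    git push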
Ideally, the solution I am looking for should provide an easy way to get the latest versions of the deployment descriptors of a particular (deployed) stack, while also keeping their previous versions.
How do you manage your docker-compose files when there are too many of them?

Related

how to use docker in development

I am new to Docker and I have done some groundwork on it, like how to build an image, how to create a container, what a Dockerfile and a docker-compose.yml file do, etc. I have done my practice on my local machine. I have the following questions when it comes to real-world development using Docker.
1) Does each developer in a team have to develop on Docker, create images on their local machine and push them to a Docker registry? Or do developers work without Docker, with one person creating the final image from the committed code?
2) For each release, do we maintain the container or an image for that release?
3) What is the best practice for the database? Do we incorporate the database into the image and build the container, or do we include only application-related things in the image and have the container communicate with a database in a separate container or on a physical database server?
Thanks in advance.
Questions like these generally do not have a definitive answer. It depends heavily on your company, your team, the tooling that is being used, the software stack, etc. The best thing to do would be to lean on the senior development resources and senior technical leadership in your organization to help shape the answers to questions like these.
Take the following answers with a silo of salt as there is no way to answer these kinds of questions definitively.
1) It depends on what is most convenient for the developers and what language you are using. Some developers find an all-container workflow convenient; some find they can iterate faster with their existing IDE/CLI workflow and test against running container images last.
In most cases you will want to let CI/CD tooling take care of the builds that are intended for production.
2) Yes. You can use container tagging for this purpose.
3) Running databases in containers is possible, but unless your team is experienced with containers and container orchestration I would leave the databases on traditional bare-metal or VMs.
Containers are a fancy wrapper around a single Linux process. Generally the rule of thumb is one container for one process; you should not be combining multiple things in a single container. (This story gets slightly more complicated when you move to something like Red Hat OpenShift or Kubernetes, where the discussion revolves around how many containers per pod.)
The setup I'm used to is that developers mostly ignore Docker until they need it to deploy. Tools like Node's node_modules directory or Python's virtual environments can help isolate subprojects from each other. Any individual developer should be able to run docker build to build an image, but won't usually need to until the final stages of testing a particular change. You should deploy a continuous integration system, and that will take responsibility for testing and building a final Docker image of each commit.
You never "maintain containers". If a container goes wrong, delete it and start a new one. Your CI system should build an image for each release, and you should deploy a registry to hold these.
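For example (image name and registry host are placeholders), a CI job might tag and push one image per release:
    # build and publish an image tagged with the release version
    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2
    # optionally move a floating tag to the same image
    docker tag registry.example.com/myapp:1.4.2 registry.example.com/myapp:latest
    docker push registry.example.com/myapp:latest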
You should never keep the database in the same container as the application. (See the previous point about routinely deleting containers.) My experience has generally been that production databases are important and special enough to merit their own dedicated non-Docker hosts, but there's nothing actually wrong with running a database in Docker; just make sure you know how to do backups and restores and migrations and whatever else on it.
There's no technical reason you can't use Docker Compose for production, but if you wind up needing to deploy your application on more than one physical server you might find it limiting. Kubernetes is more complex but seems to be the current winner in this space; Docker Swarm has some momentum; Hashicorp Nomad is out there; or you can build a manual deployment system by hand. (Note that at least Kubernetes and Nomad are both very big on the "something changed so I'm going to delete and recreate a container" concept, and both make it extremely tricky to do live development in a quasi-production setup.)
Also note that where I say "deploy" there are public-cloud versions of all of these things (Docker Hub, CircleCI, end-to-end solutions including a registry and Kubernetes built on top of AWS or Azure or GCP), and if you're comfortable with the cost-to-effort tradeoff and the implications of using an external service in your build/deploy chain, then these can help you get started faster.

Docker, update image or just use bind-mounts for website code?

I'm using Django but I guess the question is applicable to any web project.
In our case, there are two types of code: Python code (run in Django) and static files (HTML/JS/CSS).
I could publish a new image whenever there is a change in any of the code.
Or I could use bind mounts for the code. (For Django, we could bind-mount the project root and the static directory.)
If I use bind mounts for the code, I could just update the production machine (probably with git pull) whenever there's a code change.
The Docker image would then only handle updates that are not strictly our own code changes (such as a library update or new infrastructure such as Elasticsearch).
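For reference, the bind-mount variant would look roughly like this (service, image and path names are made up, and gunicorn is just an assumed app server):
    # docker-compose.yml (sketch)
    version: "3"
    services:
      web:
        image: myproject-django:latest        # image with Python, Django and libraries only
        command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
        volumes:
          - ./:/app                            # project root bind-mounted from the host
          - ./static:/app/static               # collected static files
        ports:
          - "8000:8000"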
Does this approach imply any obvious drawback?
For security reasons it is advised to keep an operating system up to date with the latest security patches, but Docker images are meant to be released in an immutable fashion so that we can always reproduce production issues outside production; thus the OS inside the image will not update itself as security patches are released. This means we need to rebuild and deploy our Docker image frequently in order to stay on the safe side.
So I would prefer to release a new Docker image with my code and static files: because they are bound to change more often, releases happen frequently, which keeps the OS layers more up to date with security patches without needing to rebuild Docker images just for that purpose.
Note that I assume here you release new code or static files at least on a weekly basis; otherwise I still recommend rebuilding the Docker images at least once a week in order to get the latest security patches for all the software being used.
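As a concrete (made-up) example, a weekly job could force fresh base layers and packages like this:
    # --pull fetches a newer base image, --no-cache avoids stale cached package layers
    docker build --pull --no-cache -t registry.example.com/mysite:$(date +%Y%m%d) .
    docker push registry.example.com/mysite:$(date +%Y%m%d)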
Generally, the more Docker-oriented solutions I've seen to this problem lean towards packaging the entire application in the Docker image. That especially includes application code.
I'd suggest three good reasons to do it this way:
If you have a reproducible path to docker build a self-contained image, anyone can build and reproduce it. That includes your developers, who can test a near-exact copy of the production system before it actually goes to production. If it's a Docker image, plus this code from this place, plus these static files from this other place, it's harder to be sure you've got a perfect setup matching what goes to production.
Some of the more advanced Docker-oriented tools (Kubernetes, Amazon ECS, Docker Swarm, Hashicorp Nomad, ...) make it fairly straightforward to deal with containers and images as first-class objects, but trickier to say "this image plus this glop of additional files".
If you're using a server automation tool (Ansible, Salt Stack, Chef, ...) to push your code out, then it's straightforward to also use those to push out the correct runtime environment. Using Docker to just package the runtime environment doesn't really give you much beyond a layer of complexity and some security risks. (You could use Packer or Vagrant with this tool set to simulate the deploy sequence in a VM for pre-production testing.)
You'll also see a sequence in many SO questions where a Dockerfile COPYs application code to some directory, and then a docker-compose.yml bind-mounts the current host directory over that same directory. In this setup the container environment reflects the developer's desktop environment and doesn't really test what's getting built into the Docker image.
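A typical instance of that pattern looks something like this (paths and names invented for illustration); at runtime the bind mount hides whatever was COPYed into the image:
    # Dockerfile (sketch)
    FROM python:3.9
    WORKDIR /app
    COPY . /app                    # application code baked into the image
    RUN pip install -r requirements.txt
    # docker-compose.yml (sketch)
    version: "3"
    services:
      web:
        build: .
        volumes:
          - ./:/app                # bind mount overrides the code copied above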
("Static files" wind up in a gray zone between "is it the application or is it data?" Within the context of this question I'd lean towards packaging them into the image, especially if they come out of your normal build process. That especially includes the primary UI to the application you're running. If it's things like large image or video assets that you could reasonably host on a totally separate server, it may make more sense to serve those separately.)

How to simply use docker for deployment?

Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want a Kubernetes cluster or Docker Swarm to deploy hundreds of microservices. I just want to replace an existing deployment process with a container for better encapsulation and upgradability.
Then maybe upgrade this in the future, if the need for more containers increases and manual handling no longer makes sense.
Essentially, the direct app dependencies (language, runtime and libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still live on the host system, as well as an entry router/load balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests are already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There's no point in paying hundreds of dollars for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after the image has been pushed to a registry by the CI) could already do the job, though probably less reliably than the current way.
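Something along those lines could be as small as this (registry, image, container name and port are placeholders):
    # run from CI after the image has been pushed; assumes SSH access to the server
    ssh deploy@myserver.example.com "
      docker pull registry.example.com/myapp:latest &&
      docker rm -f myapp 2>/dev/null;
      docker run -d --name myapp --restart unless-stopped \
        -p 127.0.0.1:3000:3000 registry.example.com/myapp:latest
    "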
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (GitLab Container Registry, Harbor, etc.)? A minimal example of the former is sketched below.
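For instance, a bare-bones self-hosted registry (no auth or TLS, so only suitable on a trusted network) can be run and used like this:
    # run the official registry image, then tag and push to it
    docker run -d -p 5000:5000 --restart always --name registry registry:2
    docker tag myapp:latest localhost:5000/myapp:latest
    docker push localhost:5000/myapp:latest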
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
Note: Also, Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying Container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a solve-all. If you go into Docker thinking it will be a solve-all, then you're setting yourself up for failure.

Docker, Jenkins and Rails - Setup for running specs on a typical Rails stack

I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I have a few good stationary machines collecting dust.
Can anybody share an executable Docker/Jenkins/Rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, Docker Compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
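A minimal version of that flow, with a made-up Docker Hub account and tag:
    docker build -t myaccount/myrailsapp:ci-1234 .
    docker login                      # prompts for Docker Hub credentials
    docker push myaccount/myrailsapp:ci-1234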
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As #tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
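A rough shape of such a compose file, with invented service names and credentials; note how the db service name becomes the database hostname inside the app container:
    # docker-compose.yml (sketch)
    version: "3"
    services:
      db:
        image: postgres:11
        environment:
          POSTGRES_PASSWORD: password
      redis:
        image: redis:5
      app:
        image: myaccount/myrailsapp:ci-1234
        environment:
          RAILS_ENV: test
          DATABASE_HOST: db          # hypothetical variable your database.yml would read
          REDIS_URL: redis://redis:6379/0
        depends_on: [db, redis]
Then something like docker-compose run --rm app bundle exec rails db:create db:migrate db:seed (adjust for your app) gets the data stores into shape before the specs run.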
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
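If xvfb is installed in the test image, wrapping the spec run is often all that's needed, roughly:
    # -a picks a free display number automatically
    xvfb-run -a bundle exec rspec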
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide to getting started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Set up a local Docker registry
Set up the Docker environment; the way to bring up multiple containers is to use Docker Compose, which will allow you to bring up the DB, Redis, Rails etc. using the public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the application now running (a rough shell sketch of these last steps follows below).
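A very rough sketch of the build/deploy/test steps in shell form (registry address, image and service names invented; $BUILD_NUMBER is Jenkins' build number):
    # build and push to the local registry
    docker build -t localhost:5000/railsapp:$BUILD_NUMBER .
    docker push localhost:5000/railsapp:$BUILD_NUMBER
    # update the service running in the swarm to the new image
    docker service update --image localhost:5000/railsapp:$BUILD_NUMBER railsapp_web
    # run the specs from a container built from the same image
    docker run --rm localhost:5000/railsapp:$BUILD_NUMBER bundle exec rspec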
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.

Using docker to run a distributed computation

I just came across Docker and was looking through its docs to figure out how to use it to distribute a Java project across multiple nodes, while keeping the distribution platform-independent, i.e. the nodes can be running any platform. Currently I'm sending classes to different nodes and running them there, with the assumption that these nodes have the same environment as the client. I couldn't quite figure out how to do this; any suggestions would be greatly appreciated.
I do something similar. In my humble opinion, whether or not you use Docker is not your biggest problem. However, using Docker images for this purpose can and will save you a lot of headaches.
We have a build pipeline where a very large Java project is built using Maven. The outcome of this is a single large JAR file that contains the software we need to run on our nodes.
But some of our nodes also need to run some 3rd-party software such as Zookeeper and Cassandra. So after the Maven build we use packer.io to create a Docker image that contains all the needed components, which ends up on a web server that can be reached only from within our private cloud infrastructure.
If we want to roll out our system, we use a combination of Python scripts that talk to the OpenStack API and create virtual machines on our cloud, and Puppet, which performs the actual software provisioning inside the VMs. Our VMs are CentOS 7 images, so what Puppet actually does is add the Docker yum repos, install Docker through yum, pull the Docker image from our repository server and finally use a custom bash script to launch our Docker image.
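Such a launch script can be very small; a sketch of the general shape (image name, registry host and container name are placeholders, not our actual script):
    #!/bin/bash
    # pull the image from the internal repository and (re)start the container
    set -e
    IMAGE=registry.internal.example.com/ourapp:latest    # placeholder
    docker pull "$IMAGE"
    docker rm -f ourapp 2>/dev/null || true
    docker run -d --name ourapp --restart always "$IMAGE"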
For each of these steps there are certainly even more elegant ways of doing it.
