What exactly is meant by "build, ship and run any app, anywhere" with Docker?

Docker claims it makes it possible to "build, ship and run any app, anywhere".
Dockerising an app looks like a promising solution for shipping and running it anywhere with less pain.
But how is it going to help us in building an application?

There are a few interesting use cases for Docker at build time.
You could use Docker to bring up a database with a known state inside a container for your integration tests to hit, for example by using the Docker Maven Plugin (a sketch of the idea follows below).
Having a predefined container for your application which will not change during the development cycle is useful, especially as you can use the same container when you go to production deployments. This is different from, say, Vagrant, which you would not use for your production deployment.
Since there are so many Docker images already available, you would not need to spend time figuring out how to deploy and manage all the various tools and services your deployment may need.
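For the integration-test case, a minimal sketch of bringing up a database in a known state could look like this, using the official MySQL image's init-script hook (the image tag, credentials and seed file are illustrative assumptions):
docker run -d --name it-db -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=test \
  -e MYSQL_DATABASE=app_test \
  -v "$PWD/seed.sql:/docker-entrypoint-initdb.d/seed.sql" \
  mysql:8
# the official image runs any *.sql file in /docker-entrypoint-initdb.d on first start,
# so the database comes up already seeded with the state your tests expect
Your integration tests then point at localhost:3306, and the container is simply thrown away after the run.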


Why is it a lot more complex to debug local Kubernetes containers yet local Docker containers are very simple to debug?

Debugging Docker containers is very easy on my local PC. Say I have this scenario:
1) A web application project
2) A Docker-Compose project
I set the Docker-Compose project as the startup project and then debug it. Any breakpoints I add to my web application project work, i.e. the code stops there.
I have now enabled Kubernetes in Docker for Desktop and I have created a very simple app and deployed it. However, it seems to be very complex to set up a debugging environment, for example as described here: https://medium.com/@pavel.agarkov/debugging-asp-net-core-app-running-in-kubernetes-minikube-from-visual-studio-2017-on-windows-6671ddc23d93, which makes me think that I am doing something wrong. Is there a simple way to debug Kubernetes when it is installed locally, like there is when debugging local Docker containers?
I was hoping that I would be able to just launch Visual Studio and it would start debugging Kubernetes containers - like with Docker. Is this possible?
Kubernetes is a tool designed to run multiple copies of a packaged application Somewhere Else. It is not designed as a live-development tool.
Imagine that you built a desktop application, packaged it up somehow, and sent it off to me. I'm running it on my desktop (Somewhere Else) and have a problem with it. I can report that problem to you, but you're not going to be able to attach your IDE to my desktop system. Instead, you need to reproduce my issue on your own development system, write a test case for it, and fix it; once you've done that you can release an update that I can run again.
Kubernetes is much more focused on this "run released software" model than a live-development environment. You can easily roll a Deployment object back to the previous version of the software that has been released, for example, assuming you have a scheme to tag distinct releases. You need to do a lot of hacking to try to get a local development tree to run inside a container, though.
The other important corollary to this is that, when you "attach your IDE to a Docker container", you are not running the code in your image! A typical setup for this starts a Docker container from an image but then overwrites all of the application code (via a bind mount) with whatever content you have on your local system. Aside from perhaps encapsulating some hard-to-install dependencies, this approach on the one hand brings the inconveniences of using Docker at all (you must have root-equivalent permissions, you can't locally run support tools, ...) and on the other hand ignores the code in the image (so you'll need to repeat whatever tests when you actually want to deploy to production).
I'd recommend using a local development environment for local development, and not trying to simulate it using other tools. Kubernetes isn't well-suited to live development at all, and I wouldn't try to incorporate it into your day-to-day workflow other than for pre-deployment testing once other tests have passed.
Telepresence is a useful tool to debug pods in Kubernetes. Telepresence works by running your code locally, as a normal local process, and then forwarding requests to/from the Kubernetes cluster. This means development is fast: you only have to change your code and restart your process. Many web frameworks also do automatic code reloading, in which case you won't even need to restart.
https://www.telepresence.io/tutorials/kubernetes
You're right, it is more complicated than it needs to be. I wrote an open source framework called Robusta to solve this.
I do some tricks with code-injection to inject debuggers into already running pods. This lets you bypass the typically complex work of setting up a debug-friendly environment in advance.
You can debug any python pod in the cluster like this:
robusta playbooks trigger python_debugger name=myapp namespace=default
This will set up the debugger. All that remains is to run kubectl port-forward into the cluster and connect Visual Studio Code.
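A minimal sketch of those remaining steps, assuming the injected debugger listens on port 5678 inside the pod (the pod name and port here are only placeholders):
kubectl port-forward -n default pod/myapp-6d5f7c9b4-abcde 5678:5678
# then attach Visual Studio Code's "remote attach" debug configuration to localhost:5678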
I don't know what language you're using, but if it isn't Python it should still be easy to set up. (Feel free to comment and I'll help you.)

How to use docker in the development phase of a devops life cycle?

I have a couple of questions related to the usage of Docker in a development phase.
I am going to propose three different scenarios of how I think Docker could be used in a development environment. Let's imagine that we are creating a REST API in Java and Spring Boot. For this I will need a MySQL database.
The first scenario is to have a docker-compose for development with the MySQL container and a production docker-compose with MySQL and the Java application (jar) in another container. To develop, I launch docker-compose-dev.yml to start only the database. The application is launched and debugged using the IDE, for example IntelliJ IDEA. The IDE recognizes any changes made to the code and relaunches the application, applying the changes.
The second scenario is to have, for both the development and production environment, a docker-compose with the database and application containers. That way, every time I make a change in the code, I have to rebuild the image so that the changes are loaded into the image and the containers are launched again. This scenario may be the most typical and widely used for development with Docker, but it seems very slow due to the need to rebuild the image every time there is a change.
The third scenario is a mixture of the previous two: two docker-compose files. The development docker-compose contains both containers, but with mechanisms that allow live reload of the application, mapping volumes and using, for example, Spring Dev Tools. In this way, the containers are launched and, in case of any change to the files, the application container detects the change and is relaunched. For production, a docker-compose would be created simply with both containers, but without the live-reload functionality. This would be the ideal scenario, in my opinion, but I think it is very dependent on the technologies used, since not all of them allow live reload.
The questions are as follows.
Which of these scenarios is the most typical when using Docker for development?
Is scenario 1 a good approach? That is, dockerize only external services, such as databases, queues, etc., and perform the development and debugging of the application with the IDE, without using Docker for it.
The doubts and the scenarios that I raise came up after I ran into the problem that scenario 2 has. With each change in the code, having to rebuild the image and start the containers again is a significant waste of time. In short, the question would be: how can I avoid this?
Thanks in advance for your time.
NOTE: It may be a question subject to opinion, but it would be nice to know how developers usually deal with these problems.
Disclaimer: this is my own opinion on the subject as asked by Mr. Mars. Even though I did my best to back my answer with actual sources, it's mostly based on my own experience and a bit of common sense.
Which of these scenarios is the most typical when using Docker for development?
I have seen all 3 scenarios in several projects, each of them with their advantages and drawbacks. However, I think scenario 3, with a Docker Compose allowing for dynamic code reload, is the most advantageous in terms of flexibility and consistency:
Dev and Prod Docker Compose files are close matches, meaning the Dev environment is as close as possible to the Prod environment
You do not have to rebuild the image constantly when developing, but it's easy to do when you need to
Lots of technologies support such a scenario, such as Spring Dev Tools as you mentioned, but also Python Flask, etc.
You can easily leverage Docker Compose extends, a.k.a. the configuration sharing mechanism (also possible with scenario 2)
Is scenario 1 a good approach? That is, dockerize only external services, such as databases, queues, etc., and perform the development and debugging of the application with the IDE, without using Docker for it.
Scenario 1 is quite common, but the IDE environment would probably differ from that of the Docker container (and it would be difficult to keep the versions of every library and dependency in sync between the IDE environment and the Docker environment). It would also probably require an intermediate step between Dev and Production to actually test the Docker image built after Dev, before going to Production.
In my own experience, doing this is great when you do not want to deal too much with Docker while actually developing, and/or when the language or technology you use is not suited to dynamic reload as described in scenario 3. But in the end it only adds drift between your environments and more complexity between the Dev and Prod deployment methods.
having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: How to avoid this?
Beside the scenarios you describe, you have ways to decently (even drastically) reduce image build time by leveraging the Docker build cache and designing your Dockerfile accordingly. For example, a Python application would typically copy its code as the last (or almost last) step of the build to avoid invalidating the cache, and for a Java app it would be possible to split the code so as to avoid compiling the entire application every time a bit of code changes - that would depend on your actual setup.
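As a rough illustration of that cache-friendly ordering, here is a minimal Dockerfile sketch for a Python app (file names follow common conventions and are assumptions):
FROM python:3.11-slim
WORKDIR /app
# dependencies change rarely: install them first so this layer stays cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# application code changes often: copy it last so edits only invalidate this layer
COPY . .
CMD ["python", "app.py"]
With this ordering, a code-only change rebuilds in seconds because the dependency layer is reused from the cache.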
I personally use a workflow roughly matching scenario 3 such as:
a docker-compose.yml file corresponding to my Production environment
a docker-compose.dev.yml which overrides some aspects of my main Docker Compose file, such as mounting code from my machine, adding dev-specific flags to commands, etc. (a sketch of such an override follows below) - it would be run as
docker-compose -f docker-compose.yml -f docker-compose.dev.yml
but it would also be possible to use a docker-compose.override.yml, which Docker Compose picks up by default for overrides
in some situations I have to use other overrides for specific cases, such as docker-compose.ci.yml on my CI, but usually the main Docker Compose file is enough to describe my Prod environment (and if that's not the case, docker-compose.prod.yml does the trick)
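For illustration, a minimal sketch of such a docker-compose.dev.yml override could look like this; the service name, paths and the Spring Dev Tools variable are assumptions to adapt to your project:
# docker-compose.dev.yml
version: "3"
services:
  app:
    volumes:
      - ./src:/app/src                          # mount local code so changes are picked up
    environment:
      SPRING_DEVTOOLS_RESTART_ENABLED: "true"   # assumed flag to enable live reload
    ports:
      - "5005:5005"                             # expose a remote-debug port for the IDE
The base docker-compose.yml stays production-like; only this file adds the development conveniences.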
I've seen them all used in different scenarios. There are some gotchas to avoid:
Applications inside of a container shouldn't depend on something running outside of a container on the host. So all of your dependencies should be containerized first.
File permissions with host volumes can be complicated depending on your version of Docker. Some of the newer Docker Desktop installs automatically handle uid mappings, but if you develop directly on Linux you'll need to ensure the containers run as the same uid as your host user (see the sketch after this list).
Avoid making changes inside the container if they aren't mapped into a host volume, since those changes will be lost when the container is recreated.
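A minimal sketch of that uid-matching approach on Linux (the image name and command are placeholders):
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD:/app" -w /app \
  my-dev-image:latest ./run-tests.sh
# files created inside /app now belong to your host user instead of root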
Looking at each of the options, here's my assessment of each:
Containerizing just the DB: This works well when developers already have a development environment for the language of choice, and there's no risk of external dependencies creeping in, e.g. a developer upgrading their JDK install to a newer version than the image is built with. It follows the idea of containerizing the dependencies first, while also giving developers the familiar IDE integration with their application.
Rebuilding the Image for Every Change: This tends to be the least ideal for developer workflow, but the quickest to implement when you're not familiar with the tooling. I'll give a 4th option that I consider an improvement to this.
Everything in a container, volume mounts, and live reloading: This is the most complicated to implement, and requires the language itself to support things like live reloading. However, when they do, it is nearly seamless for the developers and gets them up to speed on a new project quickly without needing to install any other tooling to get started.
Rebuild the app in the container with volume mounts: This is a halfway point between 2 and 3. When you don't have live reloading, you likely need to recompile or restart the interpreter to see any change. Rather than rebuilding the image, I put the recompile step in the entrypoint of a development image. I'll mount the code into the container, and run a full JDK instead of just a JRE (or whatever compiler is needed). I use named volumes for any dependency caches so they don't need to download on every restart. Then the way to see the changes is to restart that one container. The steps are identical to those for a compiled binary outside of a container (stop the old service, recompile, and restart the service), but now it happens inside of a container that should have the same tools used when building the production image.
For option 4, I tend to use a multi-stage build that has stages for build, develop, and release. The build stage pulls in the code and compiles it, the develop stage is the same base image as build but with an entrypoint that does the compile/run, and the release stage copies the result of the build stage into a minimal runtime. Developers then have a compose file for development that creates the development image and runs that with volume mounts and any debugging ports opened.
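A minimal sketch of such a multi-stage Dockerfile for a Maven-based app; the stage layout follows the description above, while the base images, paths and commands are assumptions:
# build stage: pull in the code and compile it
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# develop stage: same toolchain, but recompile and run on every container start
FROM build AS develop
ENTRYPOINT ["sh", "-c", "mvn -q package -DskipTests && java -jar target/*.jar"]

# release stage: copy only the built artifact into a minimal runtime
FROM eclipse-temurin:17-jre AS release
COPY --from=build /app/target/*.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
The development compose file would build with target: develop and mount the source tree; the release stage is what gets pushed toward production.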
First of all, docker-compose is meant for the development and testing phases, not for production. For example:
With a minimal and basic docker-compose, all your containers will run on the same machine. For development purposes that is OK, but in production, putting all the apps on just one machine is a risk.
Official link https://docs.docker.com/compose/production/
We will assume
01 java api
01 mysql database
01 web application that needs the api
all of these applications are already in production
Quick Answer
If you need to fix or add a new feature to the Java API, I advise you to use an IDE like Eclipse or IntelliJ IDEA. Why?
Because Java needs compilation.
Compiling inside a Docker container will take more time due to Maven dependencies
IDEs have code auto-completion
etc.
In this development phase, Docker helps you with one of its most powerful features: "bring the production containers to your localhost". Yeah, in this case docker-compose.yml is the best option, because with one file you can start everything you need (the MySQL database and the web app) but not your Java API. Open your Java API with your favorite IDE.
Anyway, if you want to use Docker to "develop", you just need the Dockerfile and to perform a docker build ... when you need to run your source code on your localhost.
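A minimal sketch of that loop (the image name and port are placeholders):
docker build -t my-java-api:dev .
docker run --rm -p 8080:8080 my-java-api:dev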
Basic DevOps life cycle with Docker
Developers push source code changes using git
Your continuous integration (CI) platform detects this change and performs:
docker build ... (in this step, unit tests are triggered)
docker push to your private hub. The image is uploaded in this step and will be used for deployments to other servers.
docker run or a container deploy to the next environment: testing
Human testers, Selenium or other automation start their work
If no errors are detected, your CI performs a final deploy of the uploaded container to your production environment. No docker build is required, just deploy or docker run. (A shell sketch of these steps follows below.)
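As a rough shell sketch of that pipeline (the registry address, image name and hosts are placeholders, not a prescribed setup):
# build the image; unit tests run as part of the build
docker build -t registry.example.com/my-java-api:"$GIT_COMMIT" .
# upload the image to the private registry
docker push registry.example.com/my-java-api:"$GIT_COMMIT"
# deploy the same image to the testing environment
ssh deploy@testing-host "docker pull registry.example.com/my-java-api:$GIT_COMMIT && \
  docker run -d -p 8080:8080 registry.example.com/my-java-api:$GIT_COMMIT"
# after tests pass, the identical image is deployed to production; no rebuild needed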
Some Tips
Docker features are awesome but sometimes add too much complexity. So avoid volumes, hard-disk dependencies, local logs, or complex configurations where you can. If you use volumes, what will happen when your containers are on different hosts?
Java and Node.js are stable languages, and your REST API or web apps do not need crazy configurations: just a Maven compilation and java -jar ..., or npm install and npm run start.
For logs you could use https://www.graylog.org/, Google Stackdriver or another log management service.
And like Heroku, stop relying on the local disk as much as possible. On the Heroku platform disks are disposable, meaning they disappear when the app is restarted. So instead of local file storage, you could use an external file storage service with a lot of functionality.
With these approaches, your containers can be deployed anywhere in a simple way.
I'm using something similar to your third scenario for my web development, but it is Node-based. So I have three docker-compose files (actually four; one is a base containing all the common stuff for the others) for the dev, staging and production environments.
The staging docker-compose config is similar to the production config, excluding SSL, ports and other things that would prevent using it locally.
I have a separate container for each service (like DB, queue), and for dev I also have additional dev DB and queue containers, mostly for running auto-tests. In the dev environment, all sources are mounted into the containers, which allows using the IDE/editor of choice outside the container and seeing the changes inside.
I use Supervisor to manage the workers inside the worker container and have some commands to restart the workers manually when I need to. Maybe you can have something similar to recompile/restart your Java app. Or, if you can work out how to detect app source code changes and auto-reload your app, that would be the best variant. By the way, you gave me an idea to research something similar suitable for my case.
For the staging and production environments, my source code is included inside the corresponding container using the production Dockerfile. And I have some commands to restart all the stuff using the environment I need, which typically includes rebuilding containers, but thanks to the Docker cache it doesn't take much time (about 20 seconds). And taking into account that switching between environments is not a very frequent operation, I feel quite comfortable with this.
Production docker-compose config is used only during deployment because it enables SSL, proper ports and has some additional production stuff.
Update with details on restarting the backend app using Supervisor:
This is how I use it in my projects:
A part of my Dockerfile that installs Supervisor:
FROM node:10.15.2-stretch-slim
RUN apt-get update && apt-get install -y \
# Supervisor
supervisor \
...
...
# Configs for services/workers managed by supervisor
COPY some/path/worker-configs/*.conf /etc/supervisor/conf.d/
This is an example of one of Supervisor configs for a worker:
[program:myWorkerName]
command=/usr/local/bin/node /app/workers/my-worker.js
user=root
numprocs=1
stopsignal=INT
autostart=true
autorestart=true
startretries=10
In your case, command should run your Java app.
And this is an example of command aliases for conveniently managing Supervisor from outside the containers. I'm using a Makefile as a universal runner for all commands, but this could be something else.
# Used to run all workers
su-start:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl start all
# Used to stop all workers
su-stop:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl stop all
# Used to restart all workers
su-restart:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl restart all
# Used to check status of all workers
su-status:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl status
As I described above, these Supervisor commands need to be run manually, but I think it is possible to implement another Node-based worker, or some watcher outside of the workers container, that detects file system changes in the sources directory and runs these commands automatically. I think it is possible to implement something like this using Java as well.
On the other hand, it needs to be done carefully to avoid constantly restarting the workers on every little change.
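For example, a very small watcher sketch using inotify-tools on the host (the watched path is an assumption), relying on the Makefile targets above:
# restart the workers whenever something in ./src changes
while inotifywait -r -e modify,create,delete ./src; do
  make su-restart
done
A short debounce (e.g. sleeping for a second after each event) helps avoid restarting on every single keystroke.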

How to simply use docker for deployment?

Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want any kubernetes cluster or docker-swarm to deploy hundreds of microservices. Just simply replace an existing deployment process with a container for better encapsulation and upgradability.
Then maybe upgrade this in the future, if the need for more containers increases and manual handling no longer makes sense.
Essentially the direct app dependencies (language, runtime, libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still live on the host system, as should an entry router/load-balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests are already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There is no point in paying hundreds of $$ for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after it's been pushed to a registry by the CI) could already do the job - though probably pretty unreliably compared to the current way.
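For reference, that minimal approach might look roughly like this (the server name, registry and ports are placeholders):
ssh deploy@myserver <<'EOF'
docker pull registry.example.com/myapp:latest
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp --restart unless-stopped \
  -p 127.0.0.1:3000:3000 registry.example.com/myapp:latest
EOF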
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker Images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (Gitlab Container Registry, Harbor, etc...)
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
Note: Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a cure-all. If you go into Docker thinking it will be a cure-all, then you're setting yourself up for failure.

kubernetes development environment to reduce development time

I'm new to DevOps and Kubernetes and am setting up my local development environment.
For hurdle-free deployment, I want to keep the development environment as similar as possible to the deployment environment. So I'm using minikube for a single-node cluster, and that solves a lot of my problems, but right now, as far as I know, a developer needs to do the following to see their changes:
write the code locally,
create a container image and then push it to the container registry,
apply the Kubernetes configuration with the updated container image.
But the major issue with this approach is the long development cycle. Can you suggest a better approach that lets me see the changes in real time?
The official Kubernetes blog lists a couple of CI/CD dev tools for building Kubernetes based applications: https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/
However, as others have mentioned, dev cycles can become a lot slower with CI/CD approaches for development. Therefore, a colleague and I started the DevSpace CLI. It lets you create a DevSpace inside Kubernetes, which gives you direct terminal access and real-time file synchronization. That means you can use it with any IDE and even use hot-reloading tools such as nodemon for Node.js.
DevSpace CLI on GitHub: https://github.com/covexo/devspace
I am afraid that the first two steps are practically mandatory if you want to have a proper CI/CD environment in Kubernetes. Because of the ephemeral nature of containers, it is strongly discouraged to perform hotfixes in containers, as they could disappear at any moment.
There are tools like helm or kubecfg that can help you with the third step
apply the kubernetes configuration with updated container image
They allow versioning and deployment upgrades. You would still need to learn how to use them, but they have innumerable advantages.
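As a small sketch of that third step, assuming a Deployment named myapp and an image tag produced by your build (both placeholders):
# plain kubectl: roll the Deployment to the new image
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
# or with helm, if the chart exposes an image.tag value
helm upgrade myapp ./chart --set image.tag=v2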
Another option that comes to mind (this one without Kubernetes) would be to use development containers with Docker. In this kind of container your code lives in a volume, so it is easier to test changes. In the worst case you would only have to restart the container.
Examples of development containers (by Bitnami) (https://bitnami.com/containers):
https://github.com/bitnami/bitnami-docker-express
https://github.com/bitnami/bitnami-docker-laravel
https://github.com/bitnami/bitnami-docker-rails
https://github.com/bitnami/bitnami-docker-symfony
https://github.com/bitnami/bitnami-docker-codeigniter
https://github.com/bitnami/bitnami-docker-java-play
https://github.com/bitnami/bitnami-docker-swift
https://github.com/bitnami/bitnami-docker-tomcat
https://github.com/bitnami/bitnami-docker-python
https://github.com/bitnami/bitnami-docker-node
I think using Docker / Kubernetes already during development of a component is the wrong approach, exactly because of these slow development cycles. I would just develop the way I'm used to (e.g. running the component in the IDE, or a local app server), and only build images and start testing in a production-like environment once I have something ready to deploy. I only use local Docker containers, or our Kubernetes development environment, for running the components on which the currently developed component depends: that might be a database, other microservices, or whatever.
On the Jenkins X project we're big fans of using DevPods for fast development - which basically means you compile/test/run your code inside a pod inside the exact same Kubernetes cluster as your CI/CD runs, using the exact same tools (maven, git, kubectl, helm etc).
This lets you use the desktop IDE of your choice while all your developers get to work using the exact same operating system, containers and images for development tools.
I do like minikube, but developers often hit issues trying to get it running (usually related to docker or virtualisation issues). Plus many developers' laptops are not big enough to run lots of services inside minikube, and it's always going to behave differently from your real cluster; on top of that, the developers' tools and operating system are often very different from what's running in your CI/CD and cluster.
Here's a demo of how to automate your CI/CD on Kubernetes with live development using DevPods to show how it all works.
I haven't been involved with Kubernetes and Docker for very long, but to my knowledge, the first step is to learn whether, and how, your application can be dockerized.
Kubernetes is not a tool for creating Docker images; it simply pulls images pre-built by Docker.
There are quite a few useful courses on Udemy, including this one:
https://www.udemy.com/docker-and-kubernetes-the-complete-guide/

Docker, Jenkins and Rails - Setup for running specs on a typical Rails stack

I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I have a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, docker-compose and volumes, and I have a Docker registry of a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As @tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
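A minimal compose sketch along those lines; the image names, credentials and environment variables are assumptions you would adapt to your app:
version: "3"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
  redis:
    image: redis:7
  app:
    image: myorg/myrails:latest          # the image you built and pushed earlier
    environment:
      RAILS_ENV: test
      DATABASE_URL: postgres://postgres:example@db:5432/app_test
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis
Note how db and redis are used as hostnames in the connection URLs, matching the service names.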
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up. Hence there's no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment:
Provision the stationary machines, I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Set up a local Docker registry
Set up the Docker environment; the way to bring up multiple containers is to use docker-compose, which will allow you to bring up the DB, Redis, Rails etc. using the public Docker Hub images.
Create a Jenkins pipeline (its stages could wrap the shell steps sketched after this list)
Build the Rails app Docker image, which will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the now-running application.
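The pipeline stages could wrap shell steps roughly like these (the registry address, image name and compose file are placeholders, and the compose file is assumed to reference the TAG variable):
# build and push the image to the local registry
docker build -t localhost:5000/myrails:"$BUILD_NUMBER" .
docker push localhost:5000/myrails:"$BUILD_NUMBER"
# bring up the test stack (DB, Redis, app) and run the specs inside the app container
TAG="$BUILD_NUMBER" docker-compose -f docker-compose.test.yml up -d
docker-compose -f docker-compose.test.yml exec -T app bundle exec rspec
# tear everything down, including volumes, so each run starts clean
docker-compose -f docker-compose.test.yml down -v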
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
