When trying to move a web container (Tomcat) to the latest technologies for better growth and support, I came across this blog. This part seems ideal for my needs:
... we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.
Now, how do I set up a local test environment to try this out? All these technologies seem interchangeable! I can run Docker on Mesos, Mesos on Docker, and so on. Prepackaged instances would let me run on other clouds, and other videos also make this look great, but running in the cloud is not a viable (allowed) option for me. Unfortunately, I cannot find 'instructions' on how to set up the configuration described/marketed/advertised.
I am new to these technologies and know there will be a learning curve, but is there a way to get started on such a "simple" task: running a Tomcat container on a Docker machine that is running Mesos/Kubernetes? That is, without spending days trying to learn and figure out each individual part? The referenced blog post includes a diagram of this setup.
Assuming that I "only" know how to create Docker containers (say, for CentOS 7): what commands, in what order (i.e. the secret 'code'), do I need to configure a small (2- or 3-node) local environment and try out running Tomcat?
Although I searched quite a bit, apparently not enough! Someone pointed me to this:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/mesos-docker.md
which is pretty close to exactly what I was looking for.
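For anyone landing here later: once a Kubernetes cluster is up (on Mesos as in that guide, or any plain local setup), running Tomcat itself is only a couple of kubectl commands. A minimal sketch, assuming kubectl already points at the cluster and using the public tomcat image from Docker Hub (exact flags vary a bit between kubectl versions):

```bash
# Run the official Tomcat image from Docker Hub as a deployment
kubectl create deployment tomcat --image=tomcat:9.0
kubectl scale deployment tomcat --replicas=2

# Expose it; NodePort makes it reachable from outside the cluster nodes
kubectl expose deployment tomcat --port=8080 --type=NodePort

# Check that the pods came up and find the assigned port
kubectl get pods -l app=tomcat
kubectl get service tomcat
```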
Related
Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want a Kubernetes cluster or Docker Swarm to deploy hundreds of microservices. I just want to replace an existing deployment process with a container for better encapsulation and upgradability.
I might upgrade this in the future if the number of containers grows to the point where handling them manually no longer makes sense.
Essentially, the direct app dependencies (language, runtime, libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still live on the host system, as well as an entry router/load balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests are already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There is no point in paying hundreds of dollars for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after it's been pushed to a registry by the CI) could already do the job - though probably pretty unreliably compared to the current way.
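A rough sketch of what I have in mind, assuming SSH access to the server and an image already pushed by the CI (registry address, image and container names are placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

IMAGE="registry.example.com/myapp:latest"   # hypothetical image pushed by CI
NAME="myapp"                                # hypothetical container name

ssh deploy@server.example.com bash -s <<EOF
  set -e
  docker pull ${IMAGE}          # fetch the newly pushed image
  docker stop ${NAME} || true   # stop the old container if it is running
  docker rm ${NAME}   || true   # remove it so the name can be reused
  docker run -d --name ${NAME} --restart unless-stopped -p 127.0.0.1:3000:3000 ${IMAGE}
EOF
```

It is not zero-downtime and has no health checks, which is roughly the reliability trade-off I mentioned.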
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (GitLab Container Registry, Harbor, etc...)?
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
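To make question 1 a bit more concrete: whichever CI tool you pick, the packaging job usually boils down to a few commands like the following (the registry and image names are placeholders; most CI systems inject the commit SHA and registry credentials as environment variables):

```bash
# Build the image, tagged with the commit SHA so every build is traceable
docker build -t registry.example.com/mygroup/myapp:"$COMMIT_SHA" .

# Authenticate against the registry (credentials should come from CI secrets)
echo "$REGISTRY_PASSWORD" | docker login -u "$REGISTRY_USER" --password-stdin registry.example.com

# Push the image so the deployment step (or another machine) can pull it
docker push registry.example.com/mygroup/myapp:"$COMMIT_SHA"
```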
Note: Also, Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying Container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a solve-all. If you go into Docker thinking it will be a solve-all, then you're setting yourself up for failure.
I'm new to DevOps and Kubernetes and am setting up a local development environment.
To keep deployment hurdle-free, I want the development environment to be as similar as possible to the deployment environment. For that I'm using minikube as a single-node cluster, and it solves a lot of my problems, but as far as I know a developer needs to do the following to see their changes:
write code locally,
create a container image and then push it to a container registry,
apply the Kubernetes configuration with the updated container image.
The major issue with this approach is the long development cycle. Can you suggest a better approach that lets me see changes in (near) real time?
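To make the pain point concrete, that inner loop looks roughly like this on every code change (image and deployment names are placeholders):

```bash
# 1) build a new image from the local source
docker build -t registry.example.com/myapp:dev-42 .

# 2) push it so the cluster can pull it
docker push registry.example.com/myapp:dev-42

# 3) point the existing deployment at the new image and wait for the rollout
kubectl set image deployment/myapp myapp=registry.example.com/myapp:dev-42
kubectl rollout status deployment/myapp
```

With minikube the push can often be skipped by building against minikube's Docker daemon (eval $(minikube docker-env)), but the rebuild/redeploy cycle remains.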
The official Kubernetes blog lists a couple of CI/CD dev tools for building Kubernetes-based applications: https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/
However, as others have mentioned, dev cycles can become a lot slower with CI/CD approaches for development. That is why a colleague and I started the DevSpace CLI. It lets you create a DevSpace inside Kubernetes, which gives you direct terminal access and real-time file synchronization. That means you can use it with any IDE and even use hot-reloading tools such as nodemon for Node.js.
DevSpace CLI on GitHub: https://github.com/covexo/devspace
I am afraid that the first two steps are practically mandatory if you want to have a proper CI/CD environment in Kubernetes. Because of the ephemeral nature of containers, it is strongly discouraged to perform hotfixes in containers, as they could disappear at any moment.
There are tools like Helm or kubecfg that can help you with the third step:
apply the Kubernetes configuration with the updated container image
They allow versioning and deployment upgrades. You would still need to learn how to use them, but they have many advantages.
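As a sketch of what that looks like with Helm (Helm 3 syntax; the chart path and the image.tag value key are hypothetical and depend on how your chart is written):

```bash
# Install the release the first time, or upgrade it if it already exists,
# overriding the image tag defined in the chart's values.yaml
helm upgrade --install myapp ./charts/myapp --set image.tag=dev-42

# Roll back to the previous revision if the new image misbehaves
helm rollback myapp
```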
Another option that comes to mind (this one without Kubernetes) would be to use development containers with Docker. In this kind of container your code lives in a volume, so it is easier to test changes. In the worst case you would only have to restart the container.
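The core idea is just a bind mount. A minimal sketch, assuming a Node.js app and nodemon for hot reload (image, port and entry file are placeholders):

```bash
# Mount the current source tree into the container so edits on the host are
# visible inside it immediately; nodemon restarts the app on every change.
docker run --rm -it \
  -v "$(pwd)":/app \
  -w /app \
  -p 3000:3000 \
  node:18 \
  npx nodemon server.js
```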
Examples of development containers (by Bitnami) (https://bitnami.com/containers):
https://github.com/bitnami/bitnami-docker-express
https://github.com/bitnami/bitnami-docker-laravel
https://github.com/bitnami/bitnami-docker-rails
https://github.com/bitnami/bitnami-docker-symfony
https://github.com/bitnami/bitnami-docker-codeigniter
https://github.com/bitnami/bitnami-docker-java-play
https://github.com/bitnami/bitnami-docker-swift
https://github.com/bitnami/bitnami-docker-tomcat
https://github.com/bitnami/bitnami-docker-python
https://github.com/bitnami/bitnami-docker-node
I think using Docker / Kubernetes already during development of a component is the wrong approach, exactly because of these slow development cycles. I would just develop as I'm used to doing (e.g. running the component in the IDE, or a local app server), and only build images and start testing in a production-like environment once I have something ready to deploy. I only use local Docker containers, or our Kubernetes development environment, for running components on which the currently developed component depends: that might be a database, or other microservices, or whatever.
On the Jenkins X project we're big fans of using DevPods for fast development - which basically means you compile/test/run your code inside a pod inside the exact same Kubernetes cluster as your CI/CD runs, using the exact same tools (maven, git, kubectl, helm etc.).
This lets you use the desktop IDE of your choice while all your developers work with the exact same operating system, containers and images for their development tools.
I do like minikube, but developers often hit issues trying to get it running (usually related to docker or virtualisation issues). Plus, many developers' laptops are not big enough to run lots of services inside minikube, and it's always going to behave differently to your real cluster - plus the developers' tools and operating system are often very different to what's running in your CI/CD and cluster.
Here's a demo of how to automate your CI/CD on Kubernetes with live development using DevPods, to show how it all works.
I haven't been involved with Kubernetes and Docker for long, but to my knowledge the first step is to learn whether it is possible, and how, to dockerize your application.
Kubernetes is not a tool for creating Docker images; it simply pulls pre-built Docker images and runs them.
There are quite a few useful courses on Udemy, including this one:
https://www.udemy.com/docker-and-kubernetes-the-complete-guide/
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit - a common Rails stack), using Docker so it can be put on other machines as well. I have a few good stationary machines collecting dust.
Can anybody share an executable Docker/Jenkins/Rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, Docker Compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
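In terms of commands, that step is roughly the following (the image name is a placeholder; the Dockerfile is assumed to sit in the root of the Rails app):

```bash
# Build the application image from the Dockerfile in the repo root
docker build -t myuser/myrailsapp:latest .

# Log in to Docker Hub and push, so other machines (and CI agents) can pull it
docker login
docker push myuser/myrailsapp:latest
```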
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As @tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
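Assuming the compose file defines services named web, db and redis (as in the guide's example) and Rails 5+ task names, a test run might boil down to something like this:

```bash
# Start the backing services in the background
docker-compose up -d db redis

# Prepare the database inside the application container
docker-compose run --rm web bundle exec rails db:create db:migrate
docker-compose run --rm web bundle exec rails db:seed

# Run the spec suite against the freshly prepared stores
docker-compose run --rm web bundle exec rspec

# Tear everything down, including volumes, so the next run starts clean
docker-compose down -v
```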
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
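If you go the xvfb route, the usual trick (assuming the xvfb package is installed in the image) is to wrap the test command so a virtual display exists for the duration of the run:

```bash
# xvfb-run starts an X virtual framebuffer, sets DISPLAY, runs the command,
# then tears the framebuffer down; -a picks a free server number automatically.
xvfb-run -a bundle exec rspec spec/features
```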
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up - hence no off-the-shelf solution.
I'm assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment:
Provision the stationary machines; I'd suggest using Ansible Galaxy roles to:
Install Jenkins
Install Docker
Set up a local Docker registry
Set up the Docker environment. The way to bring up multiple containers is to use Docker Compose; this will let you bring up the DB, Redis, Rails etc. using the public Docker Hub images.
Create a Jenkins pipeline (a rough sketch of its shell steps follows this list) that will:
Build the Rails app Docker image - this will contain the Rails app.
Deploy the application - this updates the application in the Docker swarm, pulling from the local Docker registry.
Test - run the tests against the application now running.
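A rough sketch of the shell steps behind those pipeline stages (registry address, image and service names are placeholders, and this assumes the app runs as a Docker Swarm service):

```bash
REGISTRY=localhost:5000
TAG=${BUILD_NUMBER:-manual}   # Jenkins exposes BUILD_NUMBER to shell steps

# Build: create the Rails app image and push it to the local registry
docker build -t ${REGISTRY}/railsapp:${TAG} .
docker push ${REGISTRY}/railsapp:${TAG}

# Deploy: point the running swarm service at the new image
docker service update --image ${REGISTRY}/railsapp:${TAG} railsapp

# Test: run the spec suite against the freshly deployed stack
docker run --rm ${REGISTRY}/railsapp:${TAG} bundle exec rspec
```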
I've left out the Jenkins master/slave config because, if you're only running on one machine, you can just increase the number of executors, i.e. the master can execute more jobs at the expense of speed.
I've found quite a few blogs on how to run your Jenkins in Docker but none really explain the advantages of doing it.
These are the only reasons I found to use Docker:
1) I want most of the configuration for the server to be under version control.
2) I want the ability to run the build server locally on my machine when I’m experimenting with new features or configurations
3) I want to easily be able to set up a build server in a new environment (e.g. on a local server, or in a cloud environment such as AWS)
Luckily I have people who take care of my Jenkins server for me so these points don't matter as much.
Are these the only reasons or are there better arguments I'm overlooking, like automated scaling and load balancing when many builds are triggered at once (I assume this would be possible with Docker)?
This answer to "Docker, what is it and what is the purpose?" covered "What is Docker?" and "Why Docker?".
Docker's official site also provides an explanation. The short version is:
Faster delivery of your applications
Deploy and scale more easily
Get higher density and run more workloads
Faster deployment makes for easier management
For Jenkins usage, it's faster and easier to deploy/install the Docker way.
Maybe you don't need the "scale more easily" feature right now, and since Docker is quite lightweight, you can run more workloads.
However
The Docker way also brings some other problems - generally speaking, around access privileges.
For example, when you need to run Docker inside Jenkins (which itself runs in Docker), things get complicated. This blog post provides some background on that situation.
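One common (if debated) way around that, as a sketch: run the official Jenkins image and hand it the host's Docker socket, so builds inside Jenkins talk to the host daemon instead of running a nested Docker:

```bash
# Run the official Jenkins LTS image, persisting its home in a named volume
# and mounting the host's Docker socket so pipeline steps can call docker.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
```

Note that the official image does not ship the docker CLI itself, so you still have to add it (or use an agent that has it), and socket mounting comes with its own security trade-offs.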
So, as always, there is no silver bullet: there is no single development, in either technology or management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity.
The choice should be made based on the specific scenario.
Jenkins as Code
You mainly list the advantages of having "Jenkins as Code", which is a very powerful setup indeed, but does not necessarily require Docker.
So why is Docker the best choice for a Jenkins as Code setup?
Docker
The main reason is that Jenkins pipelines work really well with Docker. Without Docker you need to install additional tools and add different agents to Jenkins. With Docker,
there is no need to install additional tools - you just use images of those tools, and Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. Hence you no longer need fixed agents. This makes your Jenkins setup much cleaner.
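Conceptually, each such stage is little more than a throwaway tool container. Stripped of the pipeline syntax, it boils down to something like this (tool images chosen purely as examples):

```bash
# "Micro agent" for a build stage: mount the workspace, run Maven, discard the container
docker run --rm -v "$PWD":/workspace -w /workspace maven:3 mvn -B package

# A different stage can use a completely different tool image
docker run --rm -v "$PWD":/workspace -w /workspace node:18 npm test
```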
Getting started
A while ago I wrote a small blog post on how to get started with Jenkins and Docker, i.e. how to create a Jenkins image for development that you can launch and destroy in seconds.
I'm not au fait with any of these technologies (embarrassing really), but at my present gig, the company badly needs to automate.
So as I begin to read-up on Puppet and Chef and PowerShell DSC, I then remember that Docker and containerisation is coming to Windows.
Does Docker do away with the need for these tools, or do they work together?
I understand that Docker uses virtualisation technology in the OS, so I get the feeling that Docker solves a different problem, and a configuration tool is still needed but I've no certain, practical knowledge.
Does Docker do away with the need for these tools, or do they work together?
They work together: provisioning and containerization solve different issues, and you actually can provision docker containers themselves with a provisioning tool.
See for instance "Docker: Using Puppet"
Tools like Chef & Puppet are important for configuration, but they do have one weakness that Docker helps to shore up. They are not always fully idempotent (hype notwithstanding). In other words, running Chef twice on the same virtual machine may cause unexpected and hard-to-find changes on that machine, and you'd be restoring a backup to get to a known good state.
By contrast, a Docker deployment involves building an entirely new image and swapping it out with your old image. Rollback involves simply unswapping them and comparing them to diagnose the problems in the new image.
Note that you still might very well use Chef to build your Docker container. But you might very well not. Since containers are supposed to run just one process in a particular way, I've found that a series of simple shell commands is way preferable to the overhead entailed by Chef.
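As a sketch of that swap-based deploy and rollback flow (image name, tags and port are placeholders):

```bash
# Deploy: replace the running container with one built from the new image
docker pull registry.example.com/myapp:v2
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:v2

# Rollback: the old image is still on disk, so swapping back is the same operation
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:v1
```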
In short, no, you don't need anything like Chef or Puppet. Of course you can use them if you like, but they're not required.
If you build your system in such a way that everything is containerized, then all you need is a tiny OS like CoreOS or Atomic.
So you just configure your VM via cloud-config if needed, and deploy your containers either with cloud-config or with the Docker CLI itself. The idea is that your machines should have a static state: they can be created whenever you want a new one and destroyed when you don't need them.
There are other tools that can help with Docker orchestration, which is another story in itself:
tools like Swarm, Kubernetes and Mesosphere.
docker-machine is also very helpful for development purposes (maybe deployment too).
Here is CoreOS example:
https://coreos.com/os/docs/latest/cloud-config.html
Source: I do this in production for different apps.
UPDATE:
BTW, Docker is not only a virtualization technology. It does some sort of containerization (you can call it virtualization too), and that's only a small part of what Docker can do. Docker can configure, build, ship and run applications while eliminating their dependencies on the host machine. And that's why you don't need those classic configuration tools.
Puppet and Chef are configuration management tools, whereas Docker is a containerization tool, similar to LXC.
Usually you'd use Chef or Puppet to manage Docker containers. For example, take a look at the Chef docs.
EDIT as per @ptierno's comment.
Docker is three things: a cool way to run a process, a decent image-based deploy system, and a mediocre system image builder.
The first is not related to config management, as those tools aren't involved in running a process, at least not directly. The second takes the place of some amount of config management in production by doing it ahead of time when you build the image. There is still often some need for last-mile config for stuff like service discovery and secrets, but this can be handled by lighter tools like consul-templates or confd.
The last is where the rub lies. docker build is simple, easy to get started with, and mostly unhelpful for complex situations. You get, at most, a single inheritance tree between Dockerfiles, which makes stuff like multi-axis matrix builds ({app1 app2 app3} x {prod qa dev}) more difficult than it could be. Also, building composable abstractions for other groups to use is difficult, though again it isn't impossible. Using something like Packer to drive image builds can produce simpler code sometimes, and supports the full suite of CAPS (Chef, Ansible, Puppet, Salt) tools. This is mostly aimed at the use case where you are treating Docker images like tiny VMs, which I wish fewer people would do, but it's a thing, so here we are.
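For the multi-axis matrix case, the usual workaround with plain docker build is build arguments plus a loop - workable, but it shows why this is clumsier than it could be (a sketch; the per-app Dockerfiles and the ENVIRONMENT build argument are hypothetical):

```bash
# Build every {app} x {environment} combination from per-app Dockerfiles,
# passing the environment as a build argument each Dockerfile must declare (ARG ENVIRONMENT).
for app in app1 app2 app3; do
  for env in prod qa dev; do
    docker build \
      --build-arg ENVIRONMENT="${env}" \
      -t "registry.example.com/${app}:${env}" \
      -f "Dockerfile.${app}" .
  done
done
```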
The first is not related to config management as those tools aren't involved in running a process, at least not directly. The second takes the place of some amount of config management in production by doing it ahead of time when you build the image. There is still often some need for last-mile config for stuff like service discovery and secrets but this can be handled by lighter tools like consul-templates or confd. The last is where the rub lies. docker build is simple, easy to get started with, and mostly unhelpful for complex situations. You get, at most, a single inheritance tree between dockerfiles which makes stuff like multi-axis matrix builds ({app1 app2 app3} x {prod qa dev}) more difficult than it could be. Also building composable abstraction for other groups to use is difficult, though again it isn't impossible. Using something like Packer to drive image builds can produce simpler code sometimes, and supports the full suite of CAPS (Chef, Ansible, Puppet, Salt) tools. This is mostly aimed at the use case where you are treating Docker images like tiny VMs, which I wish fewer people would do, but it's a thing so here we are.