kubernetes development environment to reduce development time - docker

I'm new to DevOps and Kubernetes and was setting up a local development environment.
To keep deployment hurdle-free, I wanted the development environment to be as similar as possible to the deployment environment. So I'm using minikube for a single-node cluster, and that solves a lot of my problems. But right now, as far as I know, a developer needs to do the following to see a change:
write the code locally,
create a container image and push it to the container registry,
apply the Kubernetes configuration with the updated container image.
The major issue with this approach is the high development time. Can you suggest a better approach that lets me see changes in real time?
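To make the loop concrete, this is roughly what the cycle looks like in commands (the image, registry and deployment names are just placeholders):
docker build -t registry.example.com/my-app:dev-42 .    # build the image locally
docker push registry.example.com/my-app:dev-42          # push it to the container registry
kubectl set image deployment/my-app my-app=registry.example.com/my-app:dev-42    # roll out the new image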

The official Kubernetes blog lists a couple of CI/CD dev tools for building Kubernetes based applications: https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/
However, as others have mentioned, dev cycles can become a lot slower with CI/CD approaches for development. Therefore, a colleague and I started the DevSpace CLI. It lets you create a DevSpace inside Kubernetes, which gives you direct terminal access and real-time file synchronization. That means you can use it with any IDE and even use hot-reloading tools such as nodemon for Node.js.
DevSpace CLI on GitHub: https://github.com/covexo/devspace

I am afraid that the first two steps are practically mandatory if you want to have a proper CI/CD environment in Kubernetes. Because of the ephemeral nature of containers, it is strongly discouraged to perform hotfixes in containers, as they could disappear at any moment.
There are tools like helm or kubecfg that can help you with the third step
apply the kubernetes configuration with updated container image
They allow versioning and deployment upgrades. You would still need to learn how to use them, but they have many advantages.
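For example, with helm a new image version can be rolled out with a single upgrade command (the release, chart path and tag below are placeholders):
# upgrade the running release to a chart revision that references the new image tag
helm upgrade my-release ./my-chart --set image.tag=1.2.3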
Another option that comes to mind (this one without Kubernetes) would be to use development containers with Docker. In this kind of container your code lives in a volume, so it is easier to test changes. In the worst case you would only have to restart the container. A minimal sketch follows the list of examples below.
Examples of development containers (by Bitnami) (https://bitnami.com/containers):
https://github.com/bitnami/bitnami-docker-express
https://github.com/bitnami/bitnami-docker-laravel
https://github.com/bitnami/bitnami-docker-rails
https://github.com/bitnami/bitnami-docker-symfony
https://github.com/bitnami/bitnami-docker-codeigniter
https://github.com/bitnami/bitnami-docker-java-play
https://github.com/bitnami/bitnami-docker-swift
https://github.com/bitnami/bitnami-docker-tomcat
https://github.com/bitnami/bitnami-docker-python
https://github.com/bitnami/bitnami-docker-node
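As a minimal sketch of the development-container idea (image, port and the npm script are hypothetical), you mount your source code as a volume so changes are picked up without rebuilding the image:
# run a Node.js dev container with the local source mounted as a volume
# assumes package.json defines a "dev" script (e.g. nodemon)
docker run --rm -it -v "$PWD":/app -w /app -p 3000:3000 node:10 npm run dev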

I think using Docker / Kubernetes during development of a component is the wrong approach, exactly because of these slow development cycles. I would just develop as I'm used to (e.g. running the component in the IDE, or a local app server), and only build images and start testing in a production-like environment once I have something ready to deploy. I only use local Docker containers, or our Kubernetes development environment, for running components that the currently developed component depends on: that might be a database, other microservices, or whatever.
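For example, a dependency like a database can run as a throwaway local container while the component itself runs in the IDE (image tag and credentials are just placeholders):
# throwaway PostgreSQL instance for local development
docker run -d --name dev-postgres -e POSTGRES_PASSWORD=devpass -p 5432:5432 postgres:11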

On the Jenkins X project we're big fans of using DevPods for fast development - which basically means you compile/test/run your code inside a pod in the exact same Kubernetes cluster your CI/CD runs in, using the exact same tools (maven, git, kubectl, helm etc.).
This lets you use the desktop IDE of your choice while all your developers work with the exact same operating system, containers and images for their development tools.
I do like minikube, but developers often hit issues trying to get it running (usually related to docker or virtualisation issues). Plus many developers' laptops are not big enough to run lots of services inside minikube, it's always going to behave differently from your real cluster, and the developers' tools and operating system are often very different from what's running in your CI/CD and cluster.
Here's a demo of how to automate your CI/CD on Kubernetes with live development with DevPods to show how it all works
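If I recall the jx CLI correctly, creating a DevPod is a single command (flags for your language/stack vary, so check the Jenkins X docs):
# create a DevPod in the same cluster your CI/CD pipelines run in
jx create devpod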

I haven't been involved with Kubernetes and Docker for very long, but to my knowledge, the first step is to learn whether and how to dockerize your application.
Kubernetes is not a tool for creating Docker images; it simply pulls pre-built images that were created with Docker.
There are quite a few useful courses on Udemy, including this one:
https://www.udemy.com/docker-and-kubernetes-the-complete-guide/

Related

how to use docker in development

I am new to Docker and I have done some groundwork, like how to build an image, how to create a container, what a Dockerfile is, what a docker-compose.yml file does, etc. I have practised on my local machine. I have the following questions when it comes to real-world development using Docker.
1) Does each developer in a team have to develop on Docker, create images on their local machine and push them to the Docker registry? Or do developers work without Docker and one person creates the final image from the committed code?
2) For each release, do we maintain the container or an image for that release?
3) What is the best practice for maintaining the database? Do we incorporate the database in the image and build the container, or do we include only application-related things in the image, so that the container communicates with a database that lives in another container or on a physical database server?
Thanks in advance.
Questions like these generally do not have a definitive answer. It depends heavily on your company, your team, the tooling that is being used, the software stack, etc. The best thing to do would be to lean on the senior development resources and senior technical leadership in your organization to help shape the answers to questions like these.
Take the following answers with a silo of salt as there is no way to answer these kinds of questions definitively.
1) It depends on what is most convenient for the developers and what language you are using. Some developers find an all-container workflow convenient; others find they can iterate faster with their existing IDE/CLI workflow and test against running container images last.
In most cases you will want to let CI/CD tooling take care of the builds that are intended for production.
2) Yes. You can use image tagging for this purpose (a short tagging sketch follows at the end of this answer).
3) Running databases in containers is possible, but unless your team is experienced with containers and container orchestration I would leave the databases on traditional bare-metal or VMs.
Containers are a fancy wrapper around a single linux process. Generally the rule of thumb is one container for one process. You should not be combining multiple things in a single container. (This story gets slightly more complicated when you go to something like Red Hat OpenShift or Kubernetes as the discussion revolves around how many containers per pod).
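A minimal sketch of tagging an image per release (the registry name and version are placeholders):
# build and tag the image for a specific release, then push it
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2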
The setup I'm used to is that developers mostly ignore Docker until they need it to deploy. Tools like Node's node_modules directory or Python's virtual environments can help isolate subprojects from each other. Any individual developer should be able to run docker build to build an image, but won't usually need to until the final stages of testing a particular change. You should deploy a continuous integration system and it will take responsibility for testing and building a final Docker image of each commit.
You never "maintain containers". If a container goes wrong, delete it and start a new one. Your CI system should build an image for each release, and you should deploy a registry to hold these.
You should never keep the database in the same container as the application. (See the previous point about routinely deleting containers.) My experience has generally been that production databases are important and special enough to merit their own dedicated non-Docker hosts, but there's nothing actually wrong with running a database in Docker; just make sure you know how to do backups and restores and migrations and whatever else on it.
There's no technical reason you can't use Docker Compose for production, but if you wind up needing to deploy your application on more than one physical server you might find it limiting. Kubernetes is more complex but seems to be the current winner in this space; Docker Swarm has some momentum; Hashicorp Nomad is out there; or you can build a manual deployment system by hand. (Note that at least Kubernetes and Nomad are both very big on the "something changed so I'm going to delete and recreate a container" concept, and both make it extremely tricky to do live development in a quasi-production setup.)
Also note that where I say "deploy" there are public-cloud versions of all of these things (Docker Hub, CircleCI, end-to-end solutions including a registry and Kubernetes built on top of AWS or Azure or GCP), and if you're comfortable with the cost-to-effort tradeoff and the implications of using an external service in your build/deploy chain, these can help you get started faster.

What are the advantages of running Jenkins in a docker container

I've found quite a few blogs on how to run your Jenkins in Docker but none really explain the advantages of doing it.
These are the only reasons I found (from a post on reasons to use Docker):
1) I want most of the configuration for the server to be under version control.
2) I want the ability to run the build server locally on my machine when I’m experimenting with new features or configurations
3) I want to easily be able to set up a build server in a new environment (e.g. on a local server, or in a cloud environment such as AWS)
Luckily I have people who take care of my Jenkins server for me so these points don't matter as much.
Are these the only reasons or are there better arguments I'm overlooking, like automated scaling and load balancing when many builds are triggered at once (I assume this would be possible with Docker)?
This answer to "Docker, what is it and what is the purpose" covers "What is Docker?" and "Why Docker?".
Docker official site also provides an explanation.
The simple guide here is:
Faster delivery of your applications
Deploy and scale more easily
Get higher density and run more workloads
Faster deployment makes for easier management
For Jenkins, it's faster and easier to deploy/install the Docker way. Maybe you don't need the "scale more easily" feature right now, and since Docker is quite lightweight, you can run more workloads.
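As an illustration (container and volume names are up to you), getting a Jenkins controller running is a one-liner with the official image:
# run the official Jenkins LTS image with a persistent home volume
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts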
However, the Docker way also brings some other problems; generally speaking, they are about access privileges.
For example, when you need to run Docker inside Jenkins (which is itself in Docker), things become somewhat complicated. This blog would provide you with some knowledge of that situation.
So, as always, there is no silver bullet. There is no single development, in either technology or management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity.
The choice should be made based on the specific scenario.
Jenkins as Code
You list mainly the advantages of having "Jenkins as Code". That is a very powerful setup indeed, but it does not necessarily require Docker.
So why is Docker the best choice for a Jenkins as Code setup?
Docker
The main reason is that Jenkins pipelines work really well with Docker. Without Docker you need to install additional tools and add different agents to Jenkins. With Docker,
there is no need to install additional tools; you just use images of those tools, and Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily, so you do not need fixed agents anymore. This makes your Jenkins setup much cleaner.
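The same idea can be shown outside Jenkins: instead of installing Maven on the agent, you run the build from the official Maven image (paths and image tag below are just an example). Inside a Jenkins declarative pipeline, the docker agent directive does roughly the same thing for each stage.
# run a Maven build using the tool image instead of a locally installed Maven
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app maven:3-jdk-8 mvn -B package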
Getting started
A while ago I wrote a small blog post on how to get started with Jenkins and Docker, i.e. how to create a Jenkins image for development which you can launch and destroy in seconds.

Docker for non-code deployments?

I am trying to help a sysadmin group reduce server and service downtime on the projects they manage. Their biggest issue is that they have to take down a service, install upgrades or configuration changes, and then restart it and hope it works.
I have heard that docker is a solution to this problem, but usually from developer circles in the context of deploying their node/python/ruby/c#/java, etc. applications to production.
The group I am trying to help is using vendor software that requires a lot of configuration and management. Can docker still be used in this case? Can we install any random software on a container? Then keep that in a private repository, upgrade versions, etc.?
This is a windows environment if that makes any difference.
Docker excels at stateless applications. You can use it for applications with persistent data, but that requires the use of volumes.
Can docker still be used in this case?
Yes, but it depends on the application. It should be able to be installed headless, plus a couple of other things that are pretty specific (e.g. talking to third-party servers to get a license can create issues).
Can we install any random software on a container?
Yes... but remember that when the container restarts, that software will be gone. It's better to create it as an image and then deploy it. See my example below.
Then keep that in a private repository, upgrade versions, etc.?
Yes.
Here is an example pipeline:
Create a Dockerfile for the OS and what steps it takes to install the application. (Should be headless)
Build the image (at this point, it's called an image, not a container)
Test the image locally by creating a local container. This container is what has the configuration data such as environment variables, the volumes for persistent data it needs, etc.
If it satisfies the local developer's wants, then you can either:
let your build servers create the image and publish it to an internal Docker registry (best practice), or
let your local developer publish it to an internal Docker registry.
At that point, your next level environments can then pull down the image from the docker registry, configure them and create the container.
In short, it will require a lot of elbow grease but is possible.
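Putting the pipeline above into commands might look roughly like this (all names, the environment variable and the registry are hypothetical):
# build an image from the Dockerfile that installs the vendor software headlessly
docker build -t internal-registry.example.com/vendor-app:2.3.0 .
# test it locally: configuration via environment variables, persistent data via a volume
docker run -d --name vendor-app-test -e LICENSE_SERVER=license.example.com -v vendor-data:/var/lib/vendor-app internal-registry.example.com/vendor-app:2.3.0
# once it looks good, publish it to the internal registry
docker push internal-registry.example.com/vendor-app:2.3.0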
Can we install any random software on a container?
Generally yes, but you can have many problems with legacy software which was developed to work on bare metal.
First, there can be a persistence problem, but it can be solved using volumes.
Second, a program that works well on a full OS may not work as well in a container. Containers have some differences from VMs or bare metal; for example, due to the missing init process, some containers have a zombie-process issue. You can read about other differences here.
Docker is a big win for stateless apps, but some heavy legacy apps may not work so well inside containers and should be tested thoroughly before using them in production.

Kubernetes for a Development Environment

Good day
We have a development environment that consists of 6 virtual machines. Currently we are using Vagrant and Ansible with VirtualBox.
As you can imagine, hosting this environment is a maintenance nightmare, particularly as versions of software/OS change. Not to mention the resource load on developer machines.
We have started migrating some virtual machines to docker. But this itself poses problems around orchestration, correct configurations, communication etc. This led me to Kubernetes.
Would someone be so kind as to provide some reasoning as to whether Kubernetes would or wouldn't be the right tool for the job? That is managing and orchestrating 'development' docker containers.
Thanks
This is quite a complex topic and many things have to be considered when deciding whether it's worth using k8s as a local dev environment. I used it especially when I wanted my local developer environment to be very close to the production one, which was running on Kubernetes. This helped avoid many configuration bugs.
In my opinion Kubernetes (k8s) will provide everything you need for a development environment.
It gives you a lot of flexibility and handles much of the configuration itself. A few examples:
An easy way to deploy a new version into the local Kubernetes stack
You prepare k8s replication controller files for each of your application modules (keep in mind that they need to be stateless modules).
In the replication controller you specify the Docker image, and that's it.
Using this approach you can push new Docker images to the local docker registry and then control the lifecycle of your application using kubectl.
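A rough sketch of that flow, assuming a registry reachable at localhost:5000 and an existing replication controller (names and tags are placeholders; note that kubectl rolling-update works on replication controllers and is deprecated in newer Kubernetes versions):
# push the new image to the local registry
docker tag your-image:latest localhost:5000/your-image:v2
docker push localhost:5000/your-image:v2
# roll the replication controller over to the new image
kubectl rolling-update your_application_service --image=localhost:5000/your-image:v2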
Easy way to scale your application modules
For example:
kubectl scale rc your_application_service --replicas=3
This way k8s will check how many pods you have running for your service, and if it recognises that the number is smaller than the replicas value, it will create new ones to satisfy the replicas count.
It's an endless topic and many other things come to mind, but I would suggest you try it out.
There is a https://github.com/kubernetes/kubernetes/blob/master/docs/devel/developer-guides/vagrant.md project for running the k8s cluster in vagrant.
Of course you have to remember that if you have many services, all of them have to be pushed to the local repository and run by k8s. This will require some time, but if you automate the local deployment with some custom scripts you won't regret it.
As wsl mentioned before, it is quite a complex topic. But I'm doing this as well at the moment, so let me summarize some things for you:
With Kubernetes (k8s) you're going to orchestrate your SaaS application. In the best case, it is a cloud-native application. The properties/requirements for a cloud-native application are formulated by the Cloud Native Computing Foundation (CNCF), which was basically formed around k8s after Google donated it to the Linux Foundation.
So the properties/requirements for a cloud-native application are: container packaged, dynamically managed and micro-services oriented (cncf.io/about/charter). You will benefit most from k8s if your applications are micro-service based and every service has a separate container.
With micro-service based applications, every service can be developed independently. The developer only needs to follow the 12-Factor methodology (12factor.net), for example using env vars instead of hard-coded IP addresses, etc.
In the next step the developer builds the container for a service and pushes it to a container registry. For a local development environment, you may need to run a container registry inside the cluster as well, so the developer can push and test their code locally.
Then you're able to define your k8s replication controllers, services, PetSets, etc. with ports, port mappings, env vars, container images... and create and run them inside the cluster.
The k8s documentation recommends Minikube for running k8s locally (kubernetes.io/docs/getting-started-guides/minikube/). With Minikube you get features like DNS, NodePorts, ConfigMaps and Secrets, and Dashboards.
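For example (the service and ConfigMap names are placeholders):
minikube start                                                   # boot the local single-node cluster
kubectl create configmap app-config --from-literal=DB_HOST=db    # ConfigMaps work as usual
minikube service your-service --url                              # get a reachable URL for a NodePort service
minikube dashboard                                               # open the Kubernetes dashboard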
But I chose the multi-node CoreOS Kubernetes with Vagrant cluster for my development environment, as Puja Abbassi mentioned in the blog post "Finding The Right Local Kubernetes Development Environment" (https://deis.com/blog/2016/local-kubernetes-development-environment/), because it is closer to my production environment (12-Factor: 10 - Dev/prod parity).
With the Vagrant Environment you got features like:
Networking with flannel
Service Discovery with etcd
DNS names for a set of containers with SkyDNS
internal load balancing
If you want to know how everything works, look inside this GitHub repo: github.com/coreos/coreos-kubernetes/tree/master/multi-node (vagrant and generic folders).
So you have to ask yourself whether you or your developers really need to run a complete "cloud environment" locally. In many cases a developer can develop a service (based on micro-services and containers) independently.
But sometimes it is necessary to have multiple or all services running on your local machine as a dev environment.

Mesosphere local development

I'm currently investigating using Mesosphere in production to run a couple of micro-services as Docker containers.
I got the DCOS deployment done and was able to successfully run one of the services. Before continuing with this approach I however also need to capture the development side (not of Mesos or Mesosphere itself but the development of the micro-services).
Are there any best practices for running a local deployment of Mesosphere in a Vagrant box, or something similar, that would enable our developers to run all the services in our ecosystem from existing Docker images and run the one service they are currently working on from a local code folder?
I already know how to link the dev's code folder into a Vagrant machine and should also get the Docker part running, but I'm still kind of lost on the whole Mesosphere integration part.
Is there anyone who could point me to some resource on the Internet describing a possible solution for this? Did any of you do something similar and would care to share some insights?
Sneak Peek
Mesosphere is actively working on improving the developer experience surrounding DCOS. Part of that effort includes work on a local development cluster to aid application, service, and DCOS package developers. However, the solution is not quite ready for prime time yet. We have begun giving early access to select DCOS Enterprise Edition customers, though. If you'd like to hear more about that, please talk to your sales representative or contact sales through our web site: https://mesosphere.com/contact/
Public Tools
That said, there are many different tools already available that can help when developing Mesos frameworks or Marathon applications.
mesos-compose-dind
playa-mesos
mini mesos
coreos-mesos-cluster
vagrant-mesos
vagrant-puppet-mesosphere
Disambiguation
Mesosphere, Inc. is the company developing the Datacenter Operating System (DCOS).
The "mesosphere stack" historically refers to Mesos + Marathon (sometimes Chronos too, depending who you ask).
DCOS builds upon those open source tools and adds more (web gui, package manager, cli, centralized control plane, dns, etc.).
Update 2017-08-03
The two currently recommended local development options for DC/OS are:
dcos-vagrant
dcos-docker
I don't think there's "the" solution... I guess every company will try to work out the best fit for their development processes.
My company for example is not using DCOS, but a normal Mesos cluster with clustered Marathon and Chronos schedulers. We have three environments, each running CoreOS and Mesos/Marathon (in different versions, to be able to test against version upgrades etc.):
Local Vagrant clusters for our developers for local development/testing (can be configured to use different CoreOS/Mesos/Marathon versions based on the user_data files)
A test cluster (virtualized, latest CoreOS beta, latest Mesos/Marathon/Chronos)
A production cluster (bare metal, latest CoreOS stable, currently Mesos 0.25.0 and Marathon 0.14.1)
Our build cycle uses a build server (TeamCity in our case; Jenkins etc. should also work fine) which builds the Docker images and pushes them to our private Docker repository. The images are tagged automatically in this process.
We also have the possibility to launch them automatically via Marathon API calls to the cluster defined in the build itself, or they can be deployed manually by the developers. The updated Docker images are then pulled from our private Docker repository (make sure to use "forcePullImage": true to get the latest version if you don't use specific image tags).
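For illustration, an update via the Marathon REST API can be as simple as a PUT against the app definition (the host, app id and image below are placeholders):
# update the app definition; forcePullImage makes Marathon pull the image again on deploy
curl -X PUT -H "Content-Type: application/json" http://marathon.example.com:8080/v2/apps/my-service -d '{"id": "/my-service", "container": {"type": "DOCKER", "docker": {"image": "our-registry.example.com/my-service:latest", "forcePullImage": true}}}'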
See
https://mesosphere.github.io/marathon/docs/native-docker.html
https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
https://mesosphere.github.io/marathon/docs/rest-api.html#post-v2-apps
https://github.com/tobilg/coreos-mesos-cluster
