I am interested in installing OpenStack on a couple of physical servers we have lying around, and then, somehow, deploying Cloud Foundry on top of it as the PaaS.
I am also interested in playing around with Docker and CoreOS, and see that an integration between OpenStack and CoreOS already exists.
My question: if I have OpenStack/Nova spinning up VMs running CoreOS, and hence a Docker/container-based environment, will this be compatible with Cloud Foundry, or is Cloud Foundry somehow incompatible with Docker containers?
Cloud Foundry is installed using a specialised tool called BOSH. It has support for OpenStack and, I think, would require deployment using Ubuntu VMs (open to correction on this point). Cloud Foundry has not integrated Docker yet; that is coming in the next version (google "Cloud Foundry" and "Diego").
maybe I'm not fully understanding here, but I was under the impression
that containers can't just stand on their own. They would require
living inside a VM. So my thinking/hope was that I could use
CloudFoundry to spin up VM instances, and inside those instances,
deploy containers. Thoughts?
Containers are completely standalone; they are a form of lightweight virtualization. Cloud Foundry is a platform for deploying your application. It runs on virtual machines (or physical servers), and instances of your application are compiled and run on the CF hosts within containers. Currently the container tech used by CF is something called Warden. Diego is a new CF component, coming in 2015, that will offer Docker support.
then what is the difference between CF Diego and Kubernetes, which
also seems to be about deploying/distributing your container across
pools of nodes? Do they serve different, similar or identical
purposes? In other words, would there be a use case for having both CF
Diego and Kubernetes managing your app deployments, if so, what?
Kubernetes is a Google-sponsored project for orchestrating containers across multiple hosts. Cloud Foundry goes further because it also contains features for building and versioning the applications that are deployed. It's worth noting that Red Hat has a competing PaaS solution called OpenShift. The next version (already available on GitHub) has integrated Kubernetes and added all the missing application-build support, making it comparable to what Cloud Foundry offers. Both CF Diego and OpenShift V3 are due for delivery sometime in 2015.
Update
I see from your other questions that you're familiar with Camel. You'd be interested in the fabric8 framework, which has recently integrated OpenShift V3. (fabric8 is the upstream project for the JBoss Fuse product.)
Related
I am new to DevOps and want to install Jenkins. Out of all the installation options provided in the official documentation, which one should I use? I have narrowed it down to Docker or Kubernetes. The parameters I am basing the decision on are below.
Portability: can be installed on any major OS or cloud provider.
Minimal changes to move to production.
Kubernetes is a container orchestrator that may use Docker as its container runtime. So, they are quite different things—essentially, different levels of abstraction.
You could theoretically run an application at both of these abstraction levels. Here's a comparison:
Docker
You can run an application as a Docker container on any machine that has Docker installed (i.e. any OS or cloud provider instance that supports Docker). However, you would need to implement yourself any operations-related features that are relevant for production, such as health checks, replication, and load balancing.
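For example, installing Jenkins on a plain Docker host is a one-liner (a minimal sketch; the volume name is an assumption):

    # Run the official Jenkins LTS image; the named volume keeps
    # Jenkins data across container restarts.
    docker run -d --name jenkins \
      -p 8080:8080 -p 50000:50000 \
      -v jenkins_home:/var/jenkins_home \
      jenkins/jenkins:lts

Everything beyond that (restarting it when it dies, moving it to another host, etc.) is up to you.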
Kubernetes
Running an application on Kubernetes requires a Kubernetes cluster. You can run a Kubernetes cluster on-premises or in the cloud, or use a managed Kubernetes service (such as Amazon EKS, Google GKE, or Azure AKS). The big advantage of Kubernetes is that it provides all the production-relevant features mentioned above (health checks, replication, load balancing, etc.) as part of the platform. So, you don't need to implement them yourself but just use the primitives that Kubernetes provides to you.
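As a rough sketch (the probe path and the single-replica choice are assumptions), the same Jenkins instance on Kubernetes is described declaratively, and the cluster keeps it running for you:

    # jenkins.yaml -- a minimal Deployment; Kubernetes restarts the
    # pod automatically if the liveness probe fails.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jenkins
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jenkins
      template:
        metadata:
          labels:
            app: jenkins
        spec:
          containers:
          - name: jenkins
            image: jenkins/jenkins:lts
            ports:
            - containerPort: 8080
            livenessProbe:
              httpGet:
                path: /login
                port: 8080

Deploy it with kubectl apply -f jenkins.yaml; the same manifest works on any conformant cluster, which also covers the portability requirement. (A real setup would additionally need persistent storage for /var/jenkins_home.)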
Regarding your two requirements, Kubernetes provides both of them, while using Docker alone does not provide easy production-readiness (requirement 2). So, if you're opting for production stability, setting up a Kubernetes cluster is certainly worth the effort.
Istio ("an open platform to connect, manage, and secure micro-services") looks very interesting, but supports only Kubernetes. I couldn't find a roadmap or any mention of future support for other container management platforms, specifically Docker Swarm.
The project's GitHub site does state the following explicitly:
Istio currently only supports the Kubernetes platform, although we
plan support for additional platforms such as Cloud Foundry, and Mesos
in the near future.
I don't know about the plans for Docker Swarm; however, I believe it would probably figure in them.
The roadmap at https://istio.io/docs/reference/release-roadmap.html shows that VM support is planned for 0.2.
Work is happening in the Cloud Foundry world too, as you can see from issues such as this.
The Docker team recently indicated they are very interested in looking at Istio and Docker Swarm integration, so stay tuned: this may happen in the next few quarters, before you know it. :)
Good day
We have a development environment that consists of 6 virtual machines. Currently we are using Vagrant and Ansible with VirtualBox.
As you can imagine, hosting this environment is a maintenance nightmare, particularly as versions of software/OS change. Not to mention the resource load on developer machines.
We have started migrating some virtual machines to Docker, but this itself poses problems around orchestration, correct configuration, communication, etc. This led me to Kubernetes.
Would someone be so kind as to provide some reasoning as to whether Kubernetes would or wouldn't be the right tool for the job? That is, managing and orchestrating 'development' Docker containers.
Thanks
This is quite a complex topic, and many things have to be considered when deciding whether it's worth using k8s as a local dev environment. I used it in particular when I wanted my local developer environment to be very close to the production one, which was running on Kubernetes. This helped avoid many configuration bugs.
In my opinion, Kubernetes (k8s) will provide everything you need for a development environment.
It gives you a lot of flexibility and handles much of the configuration itself. A few examples:
An easy way to deploy a new version into the local Kubernetes stack
You prepare k8s replication controller files for each of your application modules (keep in mind that they need to be stateless modules).
In the replication controller you specify the Docker image, and that's it (see the sketch after these examples).
Using this approach you can push new Docker images to the local Docker registry and then control the lifecycle of your application using kubectl.
An easy way to scale your application modules
For example:
kubectl scale rc your_application_service --replicas=3
This way k8s will check how many pods you have running for your service, and if it recognises that the number is smaller than the replicas value, it will create new ones to satisfy the replica count.
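To illustrate the first example above, such a replication controller file might look like this (all names and the registry address are placeholders):

    # your-application-service-rc.yaml -- a minimal replication
    # controller pulling the image from the local registry.
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: your_application_service
    spec:
      replicas: 1
      selector:
        app: your_application_service
      template:
        metadata:
          labels:
            app: your_application_service
        spec:
          containers:
          - name: your_application_service
            image: localhost:5000/your_application_service:latest
            ports:
            - containerPort: 8080

You create it with kubectl create -f your-application-service-rc.yaml and can then scale it as shown above.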
It's an endless topic and many other things come to mind, but I would suggest you try it out.
There is a project for running the k8s cluster in Vagrant: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/developer-guides/vagrant.md
Of course you have to remember that if you have many services, all of them have to be pushed to the local repository and run by k8s. This will take some time, but if you automate local deploys with some custom scripts, you won't regret it.
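A sketch of such a custom deploy script (the registry address, service name, and use of kubectl rolling-update are assumptions to adapt to your setup):

    #!/bin/sh
    # Build the image, push it to the local registry, and roll the
    # replication controller over to the new tag.
    SERVICE=your_application_service
    REGISTRY=localhost:5000
    TAG=$(date +%Y%m%d%H%M%S)

    docker build -t "$REGISTRY/$SERVICE:$TAG" .
    docker push "$REGISTRY/$SERVICE:$TAG"
    kubectl rolling-update "$SERVICE" --image="$REGISTRY/$SERVICE:$TAG"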
As wsl mentioned before, it is quite a complex topic, but I'm doing this as well at the moment, so let me summarize some things for you:
With Kubernetes (k8s) you're going to orchestrate your SaaS application; in the best case, it is a cloud-native application. The properties/requirements for a cloud-native application were formulated by the Cloud Native Computing Foundation (CNCF), which basically formed around k8s after Google donated it to the Linux Foundation.
So the properties/requirements for a cloud-native application are: container packaged, dynamically managed, and micro-services oriented (cncf.io/about/charter). You will benefit most from k8s if your applications are micro-service based and every service has a separate container.
With micro-service based applications, every service can be developed independently. The developer only needs to follow the 12-factor methodology (12factor.net), for example using env vars instead of hard-coded IP addresses, etc.
In the next step the developer builds the container for a service and pushes it to a container registry. For a local development environment, you may need to run a container registry inside the cluster as well, so the developer can push and test code locally.
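A sketch of that local-registry part (the port and image names are assumptions):

    # Run a private registry next to the cluster, then tag and push
    # a service image so the cluster nodes can pull it.
    docker run -d -p 5000:5000 --name registry registry:2
    docker tag myservice:latest localhost:5000/myservice:latest
    docker push localhost:5000/myservice:latest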
Then you're able to define your k8s replication controllers, services, PetSets, etc., with ports, port mappings, env vars, container images, and so on, and create and run them inside the cluster.
The k8s documentation recommends Minikube for running k8s locally (kubernetes.io/docs/getting-started-guides/minikube/). With Minikube you get features like DNS, NodePorts, ConfigMaps, Secrets, and Dashboards.
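Getting started with Minikube only takes a couple of commands (a sketch; the VM driver to use depends on your machine):

    # Start a single-node cluster in a local VM; minikube also
    # configures kubectl to talk to it.
    minikube start
    kubectl get nodes
    # Open the Kubernetes dashboard in a browser.
    minikube dashboard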
But I chose the multi-node CoreOS Kubernetes with Vagrant cluster for my development environment, as Puja Abbassi mentioned in the blog post "Finding The Right Local Kubernetes Development Environment" (https://deis.com/blog/2016/local-kubernetes-development-environment/); it is closer to my production environment (12-factor rule 10: dev/prod parity).
With the Vagrant environment you get features like:
Networking with flannel
Service Discovery with etcd
DNS names for a set of containers with SkyDNS
Internal load balancing
If you want to know how everything works, look inside this GitHub repo: github.com/coreos/coreos-kubernetes/tree/master/multi-node (vagrant and generic folders).
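Bringing that cluster up follows the usual Vagrant flow (a sketch based on the repo's README at the time; the kubeconfig steps may differ between versions):

    git clone https://github.com/coreos/coreos-kubernetes
    cd coreos-kubernetes/multi-node/vagrant
    vagrant up
    # Point kubectl at the new cluster once the nodes are up.
    export KUBECONFIG="$(pwd)/kubeconfig"
    kubectl config use-context vagrant-multi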
So you have to ask yourself whether you or your developers really need to run a complete "cloud environment" locally. In many cases a developer can develop a service (based on micro-services and containers) independently.
But sometimes it is necessary to have several or all services running on your local machine as a dev environment.
I'm currently investigating using Mesosphere in production to run a couple of micro-services as Docker containers.
I got the DCOS deployment done and was able to successfully run one of the services. However, before continuing with this approach, I also need to sort out the development side (not of Mesos or Mesosphere itself, but the development of the micro-services).
Are there any best practices for running a local deployment of Mesosphere in a Vagrant box or something similar that would enable our developers to run all the services in our ecosystem from existing Docker images, while running the one service they are currently working on from a local code folder?
I already know how to link the dev's code folder into a Vagrant machine and should also get the Docker part running, but I'm still kind of lost on the whole Mesosphere integration part.
Could anyone point me to some resource on the Internet describing a possible solution for this? Has anyone done something similar and would care to share some insights?
Sneak Peek
Mesosphere is actively working on improving the developer experience surrounding DCOS. Part of that effort includes work on a local development cluster to aid application, service, and DCOS package developers. However, the solution is not quite ready for prime time yet. We have begun giving early access to select DCOS Enterprise Edition customers, though. If you'd like to hear more about that, please talk to your sales representative or contact sales through our web site: https://mesosphere.com/contact/
Public Tools
That said, there are many different tools already available that can help when developing Mesos frameworks or Marathon applications.
mesos-compose-dind
playa-mesos
minimesos
coreos-mesos-cluster
vagrant-mesos
vagrant-puppet-mesosphere
Disambiguation
Mesosphere, Inc. is the company developing the Datacenter Operating System (DCOS).
The "mesosphere stack" historically refers to Mesos + Marathon (sometimes Chronos too, depending who you ask).
DCOS builds upon those open source tools and adds more (web gui, package manager, cli, centralized control plane, dns, etc.).
Update 2017-08-03
The two currently recommended local development options for DC/OS are:
dcos-vagrant
dcos-docker
I think there's not "the" solution... I guess every company will try to work out the best way to find a fit with their development processes.
My company for example is not using DCOS, but a normal Mesos cluster with clustered Marathon and Chronos schedulers. We have three environments, each running CoreOS and Mesos/Marathon (in different versions, to be able to test against version upgrades etc.):
Local Vagrant clusters for our developers for local development/testing (can be configured to use different CoreOS/Mesos/Marathon versions based on the user_data files)
A test cluster (virtualized, latest CoreOS beta, latest Mesos/Marathon/Chronos)
A production cluster (bare metal, latest CoreOS stable, currently Mesos 0.25.0 and Marathon 0.14.1)
Our build cycle uses a build server (TeamCity in our case; Jenkins etc. should also work fine) which builds the Docker images and pushes them to our private Docker repository. The images are tagged automatically in this process.
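The build step itself boils down to something like this (the registry host and tagging scheme are placeholders for ours):

    # Run by the build server for each service; the build number
    # becomes the image tag.
    docker build -t "registry.example.com/myservice:${BUILD_NUMBER}" .
    docker push "registry.example.com/myservice:${BUILD_NUMBER}"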
We also have the possibility of launching them automatically via Marathon API calls to the cluster, defined in the build itself, or they can be deployed manually by the developers. The updated Docker images are thereby pulled from our private Docker repository (make sure to use "forcePullImage": true to get the latest version if you don't use specific image tags).
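A sketch of such a Marathon API call (the app id, image, and hostname are placeholders):

    # Create the app definition via Marathon's REST API.
    curl -X POST http://marathon.example.com:8080/v2/apps \
      -H "Content-Type: application/json" \
      -d '{
        "id": "/myservice",
        "cpus": 0.5,
        "mem": 256,
        "instances": 2,
        "container": {
          "type": "DOCKER",
          "docker": {
            "image": "registry.example.com/myservice:latest",
            "network": "BRIDGE",
            "forcePullImage": true
          }
        }
      }'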
See
https://mesosphere.github.io/marathon/docs/native-docker.html
https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
https://mesosphere.github.io/marathon/docs/rest-api.html#post-v2-apps
https://github.com/tobilg/coreos-mesos-cluster
I want to migrate my current deployment to Docker. It relies on a MongoDB service, a Redis service, a Postgres server, and a Rails app. I have already created a Docker container for each, but I have doubts when it comes to starting and linking them. In development I'm using fig, but I think it was not meant to be used in production. To take my deployment to production level, what mechanism should I use to auto-start and link containers together? My deployment uses a single Docker host that already runs Ubuntu, so I can't use CoreOS.
Linking containers in production is a tricky thing. It hardwires the IP addresses of the dependent containers, so if you ever need to restart a container or launch a replacement (like when upgrading the version of MongoDB), your Rails app will not work out of the box with the new container and its new IP address.
This other answer explains some available alternatives to linking.
Regarding starting the containers, you can use any deployment tool to run the required Docker commands (Capistrano can easily do that). After that, Docker will restart the containers that were running after a reboot.
You might need a watcher process to restart containers if they die, just as you would have one for a normal Rails app.
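Note that with Docker restart policies (available since Docker 1.2) the daemon itself can play the watcher role; a sketch with placeholder names:

    # --restart=always makes Docker restart a container when it dies
    # and bring it back up after a daemon or host reboot.
    docker run -d --restart=always --name mongodb mongo
    docker run -d --restart=always --name redis redis
    docker run -d --restart=always --name app \
      --link mongodb:mongodb --link redis:redis my-rails-app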
Services like Tutum and Dockerize.it can make this simpler. As far as I know, Tutum will not deploy to your servers. Dockerize.it will, but is very rough (disclaimer: I'm part of the team building it).
You can convert your fig configuration to CoreOS formatted systemd configuration files with fig2coreos. Google App Engine supports CoreOS, or you can run CoreOS on AWS or your cloud provider of choice. fig2coreos also supports deploying to CoreOS in Vagrant for local development.
CenturyLink (fig2coreos authors) have an example blog post here:
This blog post will show you how to bridge the gap between building
complex multi-container apps using Fig and deploying those
applications into a production CoreOS system.
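For reference, a fig.yml for the stack described in the question might look like this (the ports and build context are assumptions); fig2coreos turns each of these services into a systemd unit:

    # fig.yml -- the four containers from the question.
    web:
      build: .
      links:
        - db
        - mongo
        - redis
      ports:
        - "3000:3000"
    db:
      image: postgres
    mongo:
      image: mongo
    redis:
      image: redis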
EDIT: If you are constrained to an existing host OS you can use QEMU ("a generic and open source machine emulator and virtualizer") to host a CoreOS instance. Instructions are available from the CoreOS team.