On-premise central Docker server

We have a team of J2EE (Spring) and Angular developers. We develop small applications in short spans. As of now we don't have the luxury of a DevOps team to maintain staging and QA environments.
I am checking the feasibility of letting developers who want their application tested build a Docker image and run it on an on-premise central Docker server (at times they work from remote locations as well). We are in the process of setting up CI, but it may take some time.
Due to cost pressure we cannot use AWS for anything except production.
Any pointers will be helpful.
Thanks in advance.

Since you plan on using Docker, you can in fact set up a simple build flow which makes life easier in the long run.
Use Docker Hub for building and storing Docker images (this saves build time and also provides an easy way of rolling back releases). It takes a few minutes to connect your GitHub/Bitbucket repository to Docker Hub and configure it to build an image for each branch/tag upon PR merge or push. The cost of the service is also minimal.
Use these images for your local environment as well as your production environment (guaranteeing that you refer to the correct versions).
For production, use AWS Elastic Beanstalk or AWS ECS (I prefer ECS due to its container orchestration capabilities) to simplify deployments and DevOps; most of the configuration can be done from the AWS web console. The cost is only for the underlying EC2 instances.
For Dockerizing your Java application, this video might be helpful to get insights into the JVM.
Note: later on you can connect these dots using your CI environment, reducing further effort.
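To give an idea of the Dockerizing step for a Spring application, a minimal Dockerfile might look like the sketch below. The base image, JAR path, and port are assumptions; adjust them to your actual build output.

```dockerfile
# Sketch: containerizing a Spring Boot fat JAR (paths/versions are assumptions)
FROM eclipse-temurin:17-jre

# Copy the jar produced by `mvn package` (adjust to your build output path)
COPY target/app.jar /app/app.jar

# Cap the heap relative to the container's memory limit; recent JVMs
# are container-aware, but being explicit avoids surprises
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "/app/app.jar"]
EXPOSE 8080
```

A Docker Hub automated build would then produce an image per branch/tag from this file on every push.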

Related

How to design & implement a containerized architecture for Nginx with React and Laravel with MongoDB?

I'm using React for the frontend with an Nginx load balancer, and Laravel for the backend with MongoDB.
In the old architecture, the code was uploaded to GitHub in separate frontend and backend repos.
I have not yet used Docker and Kubernetes, and I want to introduce them in the new architecture. I am using a private cloud server, so I am restricted from deploying on AWS/Azure/GCP/etc.
Please share your architecture plan and implementation for a better approach to microservices!
My thinking so far:
first, write a Dockerfile for the React and Laravel projects
then push the images to a private Docker registry (Docker Hub)
install Docker and k8s on the VM
deploy container 1 (React) and container 2 (Laravel) from those images
also deploy container 3 (Nginx) and container 4 (MongoDB) from default registry images
Some of my questions:
How do I make the connections between the containers?
How do I pull a new image into a container when a new release version comes out?
How do I create replicas for a disaster recovery plan?
How do I monitor errors and performance?
How do I build the pipeline (the most important one)?
How do I set up dev, staging, and production environments?
This is more of a planning question. Most of the tasks can be automated by the developer/DevOps, except a few administrative tasks like monitoring and environment creation.
Still, this can be a shared responsibility, or handled by a dedicated team if one is available to manage the product/services.
You can use GitLab, which can attach directly to a Kubernetes provider and can reduce multiple build steps.
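As a sketch of the connection part of the plan above: in Kubernetes, containers typically reach each other through Services, which give a Deployment a stable DNS name inside the cluster. All names below are assumptions:

```yaml
# Sketch: a Service gives the MongoDB pods a stable DNS name ("mongo")
# that the Laravel pods can use in their connection string.
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo          # must match the labels on the mongo Deployment's pods
  ports:
    - port: 27017
      targetPort: 27017
```

The Laravel container could then use something like `mongodb://mongo:27017` as its connection string, and the same pattern (one Service per Deployment) connects Nginx to the React and Laravel containers.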

Automated deployment of a dockerized application on a single machine

I have a web application consisting of a few services - web, DB and a job queue/worker. I host everything on a single Google VM and my deployment process is very simple and naive:
I manually install all services, like the database, on the VM
a bash script scheduled by crontab polls a remote Git repository for changes every N minutes
if there were changes, it simply restarts all services using supervisord (job queue, web, etc.)
Now, I am starting a new web project where I enjoy using docker-compose for local development. However, I seem to be stuck in analysis paralysis deciding between the available options for production deployment: I looked at Kubernetes, Swarm, docker-compose, container registries, etc.
I am looking for a recipe that will keep me productive with a single machine deployment. Ideally, I should be able to scale it to multiple machines when the time comes, but simplicity and staying frugal (one machine) is more important for now. I want to consider 2 options - when the VM already exists and when a new bare VM can be allocated specifically for this application.
I wonder if docker-compose is a reasonable choice for a simple web application. Do people use it in production, and if so, what does the entire process look like from bare VM to rolling out an updated application? Do people use Kubernetes or Swarm for a simple single-machine deployment, or is it overkill?
I wonder if docker-compose is a reasonable choice for a simple web application.
It can be, sure, if the development time is best spent focused on the web application and less on the non-web stuff such as the job queue and database. The other asterisk is whether the development environment works OK with hot reloads or port forwarding and that kind of jazz. I say it's a reasonable choice because 99% of the work of creating an application suitable for use in a clustered environment is the work of containerizing the application. So if the app already works under docker-compose, there is a high likelihood that you can take the Docker image that is constructed on behalf of docker-compose and roll it out to the cluster.
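For reference, the single-machine setup from the question (web, DB, job worker) maps onto a compose file roughly like this; service names, images, and ports are placeholders:

```yaml
# docker-compose.yml sketch; images and credentials are placeholders
version: "3.8"
services:
  web:
    image: registry.example.com/myapp-web:latest   # hypothetical image
    ports:
      - "80:8000"
    depends_on:
      - db
    restart: unless-stopped      # survive crashes/reboots on a single VM
  worker:
    image: registry.example.com/myapp-worker:latest
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data   # keep data across container restarts
volumes:
  dbdata:
```

Rolling out an update is then roughly `docker compose pull && docker compose up -d`, which recreates only the services whose images changed.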
Do people use it in production
I hope not; I am sure there are people who use docker-compose to run in production, just like there are people who use Windows batch files to deploy, but don't be that person.
Do people use Kubernetes or Swarm for a simple single-machine deployment or is it an overkill?
Similarly, don't be a person that deploys the entire application on a single virtual machine or be mentally prepared for one failure to wipe out everything that you value. That's part of what clustering technologies are designed to protect against: one mistake taking down the entirety of the application, web, queuing, and persistence all in one fell swoop.
Now whether deploying kubernetes for your situation is "overkill" or not depends on whether you get benefit from the other things that kubernetes brings aside from mere scaling. We get benefit from developer empowerment, log aggregation, CPU and resource limits, the ability to take down one Node without introducing any drama, secrets management, configuration management, using a small number of Nodes for a large number of hosted applications (unlike creating a single virtual machine per deployed application because the deployments have no discipline over the placement of config file or ports or whatever). I can keep going, because kubernetes is truly magical; but, as many people will point out, it is not zero human cost to successfully run a cluster.
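To illustrate the "CPU and resource limits" point above, in Kubernetes these are declared per container in the pod spec; the values here are made up:

```yaml
# Fragment of a container spec: requests are what the scheduler reserves,
# limits are hard caps enforced at runtime
resources:
  requests:
    cpu: "250m"        # a quarter of a core
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"    # exceeding this gets the container OOM-killed
```

This is one of the disciplines that lets many applications share a small number of nodes safely.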
Many companies I have worked with are shifting their entire production environment towards Kubernetes. That makes sense because all cloud providers are currently pushing Kubernetes, and we can be quite confident that Kubernetes is the future of cloud-based deployment. If your application is meant to run in any private or public cloud, I would personally choose Kubernetes as the operating platform for it. If you plan to add additional services, you will easily be able to connect them and scale your infrastructure with a growing number of requests to your application. However, if you already know that you do not expect to scale your application, a Kubernetes cluster may be overkill for running it, although Google Cloud etc. make it fairly easy to set up such a cluster with a few clicks.
Regarding an automated development workflow for Kubernetes, you can take a look at my answer to this question: How to best utilize Kubernetes/minikube DNS for local development

Kubernetes development environment to reduce development time

I'm new to DevOps and Kubernetes and was setting up a local development environment.
To have hurdle-free deployments, I wanted to keep the development environment as similar as possible to the deployment environment. So I'm using minikube for a single-node cluster, and that solves a lot of my problems, but right now, to my knowledge, a developer needs to do the following to see their changes:
write the code locally,
create a container image and then push it to the container registry,
apply the Kubernetes configuration with the updated container image.
But the major issue with this approach is the high development time. Can you suggest a better approach by which I can see changes in real time?
The official Kubernetes blog lists a couple of CI/CD dev tools for building Kubernetes based applications: https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/
However, as others have mentioned, dev cycles can become a lot slower with CI/CD approaches for development. Therefore, a colleague and I started the DevSpace CLI. It lets you create a DevSpace inside Kubernetes, which gives you direct terminal access and real-time file synchronization. That means you can use it with any IDE and even use hot-reloading tools such as nodemon for Node.js.
DevSpace CLI on GitHub: https://github.com/covexo/devspace
I am afraid that the first two steps are practically mandatory if you want a proper CI/CD environment in Kubernetes. Because of the ephemeral nature of containers, it is strongly discouraged to perform hotfixes inside them, as they could disappear at any moment.
There are tools like Helm or kubecfg that can help you with the third step:
apply the kubernetes configuration with updated container image
They allow versioning and deployment upgrades. You would still need to learn how to use them, but they have innumerable advantages.
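Even without those tools, that third step boils down to bumping the image tag in the Deployment manifest and re-applying it; Kubernetes then performs a rolling update. A minimal sketch, with all names and the registry assumed:

```yaml
# Change the tag (e.g. v1 -> v2) and run `kubectl apply -f` on this file;
# Kubernetes gradually replaces the pods with ones running the new image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v2   # hypothetical registry/tag
```

Helm and kubecfg essentially template and version this kind of manifest for you.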
Another option that comes to mind (this one without Kubernetes) would be to use development containers with Docker. In this kind of container your code lives in a volume, so it is easier to test changes. In the worst case you would only have to restart the container.
Examples of development containers (by Bitnami) (https://bitnami.com/containers):
https://github.com/bitnami/bitnami-docker-express
https://github.com/bitnami/bitnami-docker-laravel
https://github.com/bitnami/bitnami-docker-rails
https://github.com/bitnami/bitnami-docker-symfony
https://github.com/bitnami/bitnami-docker-codeigniter
https://github.com/bitnami/bitnami-docker-java-play
https://github.com/bitnami/bitnami-docker-swift
https://github.com/bitnami/bitnami-docker-tomcat
https://github.com/bitnami/bitnami-docker-python
https://github.com/bitnami/bitnami-docker-node
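The volume-based approach these development images use can be sketched in a compose file like this; the image, command, and paths are placeholders:

```yaml
# Development container sketch: source is bind-mounted, so edits on the
# host are visible inside the container without rebuilding the image.
version: "3.8"
services:
  app:
    image: bitnami/node:latest     # any dev-oriented image works
    command: npm run dev           # assumed hot-reload entrypoint
    volumes:
      - ./src:/app/src             # code stays on the host
    ports:
      - "3000:3000"
```

With this setup a hot-reload tool inside the container picks up changes immediately, and only dependency changes require rebuilding the image.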
I think using Docker / Kubernetes already during development of a component is the wrong approach, exactly because of these slow development cycles. I would just develop as I'm used to (e.g. running the component in the IDE, or a local app server), and only build images and start testing in a production-like environment once I have something ready to deploy. I only use local Docker containers, or our Kubernetes development environment, for running components that the currently developed component depends on: that might be a database, other microservices, or whatever.
On the Jenkins X project we're big fans of using DevPods for fast development, which basically means you compile/test/run your code inside a pod inside the exact same Kubernetes cluster as your CI/CD runs on, using the exact same tools (Maven, Git, kubectl, Helm, etc.).
This lets you use the desktop IDE of your choice while all your developers get to work using the exact same operating system, containers, and images for development tools.
I do like minikube, but developers often hit issues trying to get it running (usually related to Docker or virtualisation issues). Plus many developers' laptops are not big enough to run lots of services inside minikube, and it's always going to behave differently from your real cluster; on top of that, the developers' tools and operating system are often very different from what's running in your CI/CD and cluster.
Here's a demo of how to automate your CI/CD on Kubernetes with live development using DevPods to show how it all works.
I haven't been involved with Kubernetes and Docker for very long, but to my knowledge, the first step is to learn whether and how your application can be dockerized.
Kubernetes is not a tool for creating Docker images; it simply pulls images that were pre-built by Docker.
There are quite a few useful courses on Udemy, including this one:
https://www.udemy.com/docker-and-kubernetes-the-complete-guide/

Elixir/Erlang Applications with Docker in production?

I would like to know what the strong reasons are to go or not to go with Docker for an Elixir/Erlang application in production. This is the first time I have been asked to start with Docker in production; I have worked in production without it. I am an Erlang/Elixir developer and have worked on high-traffic production servers with millions of transactions per second that run without Docker. I spent one day creating and running an Elixir application image, with lots of issues with the network; I had to do lots of configuration for DNS setup, etc. After that I started thinking: what are the strong reasons for proceeding further? Are there any strong reasons to go or not to go with Docker for Elixir/Erlang applications in production?
I went through some of the reasons in the forums, but I am still not convinced. All the advantages that Docker provides are already there in the Erlang VM. Could any Erlang expert on the forum please help me?
I deploy Elixir packaged in Docker on AWS in production.
This used to be my preferred way of doing things but now I am more inclined to create my own AMI using Packer with everything preinstalled.
The central matter in deployments is that of control, which to a certain extent I feel is relinquished when leveraging Docker.
The main disadvantage of Docker is that it limits the capabilities of Erlang/Elixir, such as inter-node connection over epmd. This also means that remsh is practically out of the question and the cool :observer.start is a no-no. If you ever need to interact with a production node for whatever reason, there is an extra barrier of entry: first ssh-ing into the server, then going inside Docker, etc. Fine when it is just about checking something, frustrating when production is burning down in agony. Launching multiple containers on one node is kinda useless, as the BEAM makes efficient use of all your cores. Hot upgrades are practically out of the question, but that is not really a feature we personally have an intrinsic business need for.
Effort has been made to have epmd working within container setup, such as: https://github.com/Random-Liu/Erlang-In-Docker but that will require you to rebuild Erlang for custom net_kernel modifications.
Amazon has recently released a new feature to AWS ECS, AWS VPC Networking Mode, which perhaps may facilitate inter-container epmd communication and thus connecting to your node directly. I haven't validated that as yet.
Besides the issue of epmd communication, there is the matter of deployment time. Creating your image with Docker, even though base images boast only 5MB, will quickly end up taking 300MB, with 200MB of that just for the various dependencies needed to create your release. There may be ways to reduce that, but it requires specialized knowledge and dedicated effort. I would classify this extra space more as an annoyance than a deal breaker, but believe me, if you have to wait 25 minutes for your immutable deployments to complete, any minute you can shave off is worthwhile.
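One common way to shrink images like this is a multi-stage build, where the compiler toolchain stays in the first stage and only the self-contained release is copied into the final image. A sketch, with versions and the app name ("myapp") as assumptions:

```dockerfile
# Stage 1: build the release with the full Elixir toolchain
FROM elixir:1.14-alpine AS build
WORKDIR /app
ENV MIX_ENV=prod
COPY . .
RUN mix local.hex --force && mix local.rebar --force \
    && mix deps.get --only prod \
    && mix release

# Stage 2: runtime image contains only the compiled release
FROM alpine:3.18
RUN apk add --no-cache libstdc++ ncurses-libs openssl
COPY --from=build /app/_build/prod/rel/myapp /app   # "myapp" is a placeholder
CMD ["/app/bin/myapp", "start"]
```

The 200MB of build dependencies never reach the final image, which typically lands well under 100MB for a BEAM release.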
Performance-wise, I did not notice a significant difference between bare-metal deployments and Docker deployments. AWS EB Docker nicely expands the container resources to those of the EC2 instance.
The advantage of course is that of portability. If you have a front end engineer that needs to hit a JSON API then in terms of local development it is a huge win that with some careful setup they can just spawn up the latest api running on their local without having to know about Erlang/Elixir/Rserve/Postgres.
Also, vendor lock-in is greatly reduced, especially since AWS launched their support for Kubernetes.
This is a question of tradeoffs: if you are a developer who needs to get to production and has very little DevOps knowledge, then perhaps a Docker deployment may be warranted. If you are more familiar with infrastructure, deployments, etc., then as a developer I believe that creating your own AMI gives you more control over your environment.
All in all, I would encourage you to at least play around with Docker and experiment with it; it may open up a new realm of possibilities.
Maybe it depends on the server you want to use. From what I know, for example, Docker facilitates the deployment of a Phoenix application on AWS Elastic Beanstalk a lot, but I'm not competent enough to give you very specific reasons at the moment.
Maybe someone can elaborate more.
Docker is primarily a deployment and distribution tool. From the Docker docs:
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide your applications and services. Containers are great for continuous integration and continuous development (CI/CD) workflows.
If your application has external dependencies (for example, a crypto library), interacts with another application written in another language (for example, a database running as a separate process), or if it relies on certain operating system / environment configuration (you mentioned you had to do some DNS configuration), then packaging your application in a Docker container helps you avoid doing duplicate work installing dependencies and configuring the environment. It helps you avoid extra work keeping your testing and production environments in sync in terms of dependencies, or investigating why an application works on one machine in one environment but not another.
The above is not specific to an Erlang application, though I can agree that Erlang helps eliminate some of the problems being cross-platform and abstracting away some of the dependencies, and OTP release handling helps you package your application.
Since you mentioned you are a developer, it is worth mentioning that Docker offers more advantages for an administrator or a team running the infrastructure rather than it does for a developer.

Mesosphere local development

I'm currently investigating using Mesosphere in production to run a couple of micro-services as Docker containers.
I got the DCOS deployment done and was able to successfully run one of the services. Before continuing with this approach I however also need to capture the development side (not of Mesos or Mesosphere itself but the development of the micro-services).
Are there any best practices for running a local deployment of Mesosphere in a Vagrant box or something similar that would enable our developers to run all the services in our ecosystem from existing Docker images, while running the one service they are currently working on from a local code folder?
I already know how to link the dev's code folder into a Vagrant machine and should also get the Docker part running, but I'm still kind of lost on the whole Mesosphere integration part.
Is there anyone who could point me to some resources on the Internet describing a possible solution for this? Has anyone done something similar and would care to share some insights?
Sneak Peek
Mesosphere is actively working on improving the developer experience surrounding DCOS. Part of that effort includes work on a local development cluster to aid application, service, and DCOS package developers. However, the solution is not quite ready for prime time yet. We have begun giving early access to select DCOS Enterprise Edition customers, though. If you'd like to hear more about that, please talk to your sales representative or contact sales through our web site: https://mesosphere.com/contact/
Public Tools
That said, there are many different tools already available that can help when developing Mesos frameworks or Marathon applications.
mesos-compose-dind
playa-mesos
mini mesos
coreos-mesos-cluster
vagrant-mesos
vagrant-puppet-mesosphere
Disambiguation
Mesosphere, Inc. is the company developing the Datacenter Operating System (DCOS).
The "mesosphere stack" historically refers to Mesos + Marathon (sometimes Chronos too, depending who you ask).
DCOS builds upon those open source tools and adds more (web gui, package manager, cli, centralized control plane, dns, etc.).
Update 2017-08-03
The two currently recommended local development options for DC/OS are:
dcos-vagrant
dcos-docker
I think there isn't "the" one solution... I guess every company will try to work out the best way to find a fit with its development processes.
My company for example is not using DCOS, but a normal Mesos cluster with clustered Marathon and Chronos schedulers. We have three environments, each running CoreOS and Mesos/Marathon (in different versions, to be able to test against version upgrades etc.):
Local Vagrant clusters for our developers for local development/testing (can be configured to use different CoreOS/Mesos/Marathon versions based on the user_data files)
A test cluster (virtualized, latest CoreOS beta, latest Mesos/Marathon/Chronos)
A production cluster (bare metal, latest CoreOS stable, currently Mesos 0.25.0 and Marathon 0.14.1)
Our build cycle uses a build server (TeamCity in our case; Jenkins etc. should also work fine) which builds the Docker images and pushes them to our private Docker repository. The images are tagged automatically in this process.
We also have the possibility to launch them automatically via Marathon API calls to the cluster defined in the build itself, or they can be deployed manually by the developers. The updated Docker images are thereby pulled from our private Docker repository (make sure to use "forcePullImage": true to get the latest version if you don't use specific image tags).
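For reference, the "forcePullImage" flag sits in the Docker section of the Marathon app definition. A minimal sketch, with the app ID, image, and resource values as placeholders:

```json
{
  "id": "/myservice",
  "instances": 2,
  "cpus": 0.5,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/myservice:latest",
      "forcePullImage": true,
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  }
}
```

Posting this to Marathon's /v2/apps endpoint (or PUT-ing it to an existing app) triggers the deployment, and with forcePullImage set, each launch re-pulls the :latest image from the registry.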
See
https://mesosphere.github.io/marathon/docs/native-docker.html
https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
https://mesosphere.github.io/marathon/docs/rest-api.html#post-v2-apps
https://github.com/tobilg/coreos-mesos-cluster
