We are currently moving from a monolith application running in JBoss towards microservices with Docker. I want to know which platforms/tools/frameworks should be used to test these Docker containers in a developer environment, and also which tools should be used to deploy the containers to that developer test environment.
Is it a good option to use something like Kubernetes with Chef/Puppet/Vagrant?
I think so. Make sure to get service discovery, logging and virtual networking right. For service discovery you can check out SkyDNS. Docker now has a few logging plugins you can use for log management. For virtual networking you can look at Flannel and Weave.
You want service discovery because Kubernetes will schedule containers as it sees fit, so you need some way of telling which IP/port your microservice will end up on. Virtual networking gives each container its own subnet, which prevents port clashes when two containers on the same host expose the same ports (Kubernetes won't let them clash: it schedules containers onto hosts that still have the ports available, and if you try to create more than can fit, the extra ones simply won't run).
Also, you can try the cluster tools built into Docker itself, such as the docker service and docker network commands and Docker Swarm.
Docker-machine helps in case you already have a VM infrastructure in place.
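As a rough sketch of those built-in tools (the service and image names here are just examples), a single host can be turned into a one-node swarm with an overlay network whose embedded DNS gives you basic service discovery:

```
# Turn this host into a one-node swarm and create a virtual (overlay) network.
docker swarm init
docker network create --driver overlay app-net

# Services attached to the same overlay network can reach each other by name
# via Docker's built-in DNS, so you don't have to track IPs and ports by hand.
docker service create --name api --network app-net my-api:latest
docker service create --name web --network app-net -p 80:80 my-web:latest
```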
We have created and open-sourced a platform to develop and deploy Docker-based microservices.
It supports service discovery, clustering, load balancing, health checks, configuration management, diagnostics and a mini-DNS.
We are using it in our local development environment and in our production environment on AWS. We have a Vagrant box with everything prepared, so you can give it a try:
http://armada.sh
https://github.com/armadaplatform/armada
I have a somewhat complex situation and am probably out of luck here, but here's hoping. This is part of a large development project, so my options for what changes I can make are somewhat limited.
I have a virtual machine running a k8s cluster. That cluster has an HTTP service that is exposed via ingress and is available, on my local machine, at develop.com, via an /etc/hosts entry on the host Mac.
I have a container, necessarily (see above) separate from the cluster, which needs access to this service. This container uses an env var, SERVICE_HOST to configure its requests.
What is the simplest way to provide a value for SERVICE_HOST that the standalone container can resolve to reach my cluster? Ideally something other than ngrok, which is simple, but is complicated by the fact that it's already in use in this setup to allow the cluster to reach the standalone container! I'd much prefer to make this work without premium features...
I'm aware of the --net=host option, but it doesn't work on a macOS host.
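One possible workaround, assuming the ingress is reachable from the Mac itself (e.g. develop.com points at 127.0.0.1 or a routable VM IP): Docker Desktop exposes the host to containers as host.docker.internal, and --add-host can alias develop.com to it so the Host header still matches the ingress rule. A rough sketch (the image name is hypothetical):

```
# Map develop.com inside the container to the Docker Desktop host
# ("host-gateway" needs Docker 20.10+), then point SERVICE_HOST at it.
docker run -d \
  --add-host develop.com:host-gateway \
  -e SERVICE_HOST=develop.com \
  my-standalone-container
```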
I am new to the topic of containers and would appreciate knowing whether this forum is the right place to ask this question.
I am learning Docker and containers, and I now have some skills using the docker commands and dealing with containers. I understand that Docker has two main parts: the Docker client (docker.exe) and the Docker server (dockerd.exe). In development, both are installed on my local machine (I installed them manually on Windows Server 2016), following Nigel Poulton's tutorial here: https://app.pluralsight.com/course-player?clipId=f1f27565-e2bf-4e58-96f3-bc2c3b160ec9. When it comes to real production use, how would I configure my Docker client to communicate with a remote Docker server? I tried to research this on the internet but honestly could not find a simple answer. I installed Docker Desktop on my Windows 10 machine and noticed that it created a Hyper-V machine, which might be a Linux machine; my understanding is that this machine runs the Docker server that my Docker client interacts with, but I do not understand how this interaction is done.
I would appreciate some guidance or a clear answer to my questions.
In production environments you never have a remote Docker daemon. Generally you interact with Docker either through a dedicated orchestrator (Kubernetes, Docker Swarm, Nomad, AWS ECS), through a general-purpose system automation tool (Chef, Ansible, SaltStack), or, if you must, by directly SSHing into the system and running docker commands there.
Remote access to the Docker daemon is something of a security disaster. If you can access the Docker daemon at all, you can edit any file on the host system as root, and pretty trivially take over the whole thing. (Google "Docker cryptojacking" for some real-world examples.) In principle you can secure it with mutual TLS, but this is a tricky setup.
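If you do need the client to talk to a remote daemon anyway, a hedged sketch (hostnames and paths are examples) would look like this, preferring SSH or mutual TLS over a plain tcp:// socket:

```
# Option 1: point the client at a remote daemon over SSH (needs a recent Docker CLI).
docker context create my-remote --docker "host=ssh://deploy@docker-host.example.com"
docker --context my-remote ps

# Option 2: TCP with mutual TLS, assuming certificates are already set up on both sides.
export DOCKER_HOST=tcp://docker-host.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/certs
docker ps
```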
The other important best practice is that Docker images should be self-contained. Don't try to deploy a Docker image to production and also separately copy your application code onto it: the same Ansible setup that can deploy a Docker container could just as easily install Node directly on the target system, avoiding a layer, and it's tricky to copy application code into a Kubernetes volume, especially when Kubernetes pods can restart outside your direct control. Deploy (and test!) your images with all of the code COPYed in via the Dockerfile, minimizing the use of bind mounts.
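For example, a minimal self-contained image for a hypothetical Node application might be built like this, with all code COPYed in at build time so the image you test is exactly the image you deploy:

```
# Write a Dockerfile that copies the application into the image...
cat > Dockerfile <<'EOF'
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
EOF

# ...then build and tag it; this single artifact is what gets tested and shipped.
docker build -t my-app:1.0 .
```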
Pretty basic question: we have an existing swarm and I want to start migrating to Kubernetes. Can I run both on the same Docker hosts?
See the official documentation for Docker for Mac at https://docs.docker.com/docker-for-mac/kubernetes/, which states:
When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.
So: yes, both should be able to run in parallel.
If you're using Docker on Linux you won't have the convenient tooling of Docker for Mac/Windows, but both orchestrators should still be able to run in parallel without further issues. At the system level, details such as ports on a network interface are still shared resources, so the same port cannot be bound by both orchestrators at once.
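As a rough sketch of what this looks like in practice on one Docker Desktop machine (the stack and deployment names are examples), both orchestrators plus standalone containers can be driven side by side; just remember that host ports are shared, so a given port can only be published once:

```
docker swarm init                                  # one-node swarm, if not already a manager
docker stack deploy -c docker-compose.yml mystack  # Swarm workload
kubectl create deployment web --image=nginx        # Kubernetes workload on the same machine
docker run -d --name standalone nginx              # plain standalone container
```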
I have a couple of Docker Swarm questions (sorry for not splitting them up, but they are all closely related):
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of containers the regular way?
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a microservice mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker, and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
You can have only one machine in a swarm and run multiple tasks of the same service; in other words, the scale of a service can be greater than the number of actual machines. I have a testing swarm with a single machine and one with three, and they work the same way.
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link.
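A minimal sketch covering both points above (the service name and image are just examples): one host can be its own swarm, and a service can run more tasks than there are machines:

```
docker swarm init                                               # this one host is now manager and worker
docker service create --name web --replicas 3 -p 8080:80 nginx
docker service ps web                                           # all three tasks run on the same node
```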
What is the key difference between swarm mode and just running a number of containers the regular way?
The key difference, as far as I know, is that when a task goes down, Docker automatically brings up another one. And you can easily scale your services, which means you can have multiple tasks just by scaling your service up or down. With a regular container, when it goes down you have to start another one manually.
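Sketched with the example service from above: Swarm keeps the declared number of tasks running and replaces failed ones, whereas a plain container only comes back if you opt into a restart policy yourself:

```
docker service scale web=5                 # declare 5 tasks; Swarm converges to that state
docker run -d --restart=on-failure nginx   # plain container: restarts are opt-in, per container
```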
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of WildFly servers in a swarm, which are on the same network. I'm not sure about other options, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link at the moment.
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; for a list of them I found this link.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
#namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP etc. You could force two containers to run only on a single machine, then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed and TCP/UDP.
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that produce an HTTP response... I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :).
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes, a container running something like Portainer/SwarmPit, or anything controlling the Docker socket through a bind-mount or TCP, can control the whole Swarm. This is how all Docker management works :)
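For example, this is roughly how Portainer is usually started (ports and flags vary by version); the socket bind-mount is exactly the powerful, and risky, access described above:

```
# Giving a container the Docker socket lets it list, stop and start other
# containers and manage the swarm - so grant it deliberately.
docker run -d --name portainer -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce
```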
For a reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy, which sets up HAProxy for Docker and Swarm.
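A rough sketch of Traefik as a Swarm-wide reverse proxy (flags differ between Traefik versions; the network name and placement constraint are illustrative):

```
docker network create --driver overlay proxy
docker service create --name traefik \
  --network proxy \
  -p 80:80 \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  traefik:v2.10 \
  --providers.docker.swarmMode=true \
  --entrypoints.web.address=:80
# Backend services then attach to the "proxy" network and declare routing
# rules via service labels (traefik.http.routers.*) instead of publishing ports.
```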
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/
I know that Docker and Kubernetes aren't direct competitors. Docker is the container platform, and containers are coordinated and scheduled by Kubernetes, which is an orchestration tool.
What does it really mean, and how can I deploy my app on Docker for Azure?
Short answer:
Docker (and containers in general) solve the problem of packaging an application and its dependencies. This makes it easy to ship and run everywhere.
Kubernetes is one layer of abstraction above containers. It is a distributed system that controls/manages containers.
My advice: because the landscape is huge... start learning and putting the pieces of the puzzle together by following a course. Below I have added some information from the following course:
Introduction to Kubernetes, free online course from The Linux Foundation.
Why do we need Kubernetes (and other orchestrators) above containers?
In quality assurance (QA) environments, we can get away with running containers on a single host to develop and test applications. However, when we go to production, we do not have the same liberty, as we need to ensure that our applications:
Are fault-tolerant
Can scale, and do this on-demand
Use resources optimally
Can discover other applications automatically, and communicate with each other
Are accessible from the external world
Can update/rollback without any downtime.
Container orchestrators are the tools which group hosts together to form a cluster, and help us fulfill the requirements mentioned above.
Nowadays, there are many container orchestrators available, such as:
Docker Swarm: Docker Swarm is a container orchestrator provided by Docker, Inc. It is part of Docker Engine.
Kubernetes: Kubernetes was started by Google, but now, it is a part of the Cloud Native Computing Foundation project.
Mesos Marathon: Marathon is one of the frameworks to run containers at scale on Apache Mesos.
Amazon ECS: Amazon EC2 Container Service (ECS) is a hosted service provided by AWS to run Docker containers at scale on its infrastructure.
Hashicorp Nomad: Nomad is the container orchestrator provided by HashiCorp.
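Taking Kubernetes as one concrete example, the requirements listed above map onto a handful of commands (the deployment and image names here are illustrative):

```
kubectl create deployment api --image=my-api:1.0              # failed pods are replaced automatically
kubectl scale deployment api --replicas=5                     # scale on demand
kubectl expose deployment api --port=80 --target-port=8080    # discoverable in-cluster by name
kubectl set image deployment/api my-api=my-api:1.1            # rolling update without downtime
kubectl rollout undo deployment/api                           # rollback if the update misbehaves
```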
Kubernetes builds on top of Docker technology: it is an orchestration tool for Docker containers, whereas Docker is a technology for creating and deploying containers.
Docker started out at a platform-as-a-service (PaaS) provider named dotCloud.
All in all, Kubernetes works with Docker containers, allowing you to achieve application portability and extensibility through container orchestration.
Docker
Easy and fast to install and configure
Functionality is provided and limited by the Docker API
Quick container deployment and scaling even in very large clusters
Automated internal load balancing through any node in the cluster
Simple shared local volumes
Kubernetes
Requires some work to get up and running
Client, API and YAML definitions are unique to Kubernetes
Provides strong guarantees to cluster states at the expense of speed
Enabling load balancing requires manual service configuration
Volumes shared within pods
This is just a basic idea which at least explains the difference. If you want to go in depth, see my posts:
http://www.thecreativedev.com/an-introduction-to-kubernetes/
http://www.thecreativedev.com/learn-docker-works/
Docker and Kubernetes are complementary. Docker provides an open standard for packaging and distributing containerized applications, while Kubernetes provides for the orchestration and management of distributed, containerized applications created with Docker. In other words, Kubernetes provides the infrastructure needed to deploy and run applications built with Docker.
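A hedged end-to-end sketch of that division of labour (the image and registry names are examples): Docker packages the application, Kubernetes runs and manages it:

```
docker build -t registry.example.com/shop/web:1.0 .   # Docker: package the app and its dependencies
docker push registry.example.com/shop/web:1.0         # ship the image to a registry
kubectl create deployment web \
  --image=registry.example.com/shop/web:1.0           # Kubernetes: run and manage it
kubectl expose deployment web --port=80               # make it reachable inside the cluster
```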