ESXi: merge two servers to share their resources? - esxi

Is there any way to combine two ESXi hosts with each other so that they can share their resources? Or is there a way to run a VM on two ESXi hosts so that it uses both hosts' resources, or distributes them between the hosts? Is this possible on the ESXi hypervisor, or is there any hypervisor available that can run a VM across multiple nodes? Is there any solution?

Is there any way to combine two ESXi hosts with each other so that they can share their resources?
You can merge datastores using vSAN technology.
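For a rough idea of what that looks like at the command line, a hedged sketch (in practice vSAN is configured through vCenter, and a supported cluster needs at least three hosts, or two plus a witness; the UUID below is a placeholder):

# On the first ESXi host: create a vSAN cluster and note its sub-cluster UUID
esxcli vsan cluster new
esxcli vsan cluster get
# On each additional host: join that cluster using the UUID from the first host
esxcli vsan cluster join -u 52xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx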
Is there a way to run a VM on two ESXi hosts so that it uses both hosts' resources, or distributes them between the hosts? Is this possible on the ESXi hypervisor?
No, not for now.
Is there any hypervisor available to run a VM on multiple nodes?
nope :(
Is there any solution?
If you specify exactly what you need, you will find a lot of answers that may help you.

Related

Can we run a single container over multiple machines (hosts)?

I just want to know: is there any kind of facility available now in Docker for this? I have already gone through some of the Docker documentation regarding multi-host facilities such as:
Docker swarm
Docker service (with replicas)
I am also aware of the volume problems in swarm mode, and that the maximum resource (RAM and CPU) limit of a container varies depending on which machine the swarm manager assigns it to. So my question is:
How can I run a single container instance over multiple machines (not as a service)? (Meaning a single container could acquire all the resources [RAM1 + RAM2 + ... + RAMn] of these connected machines.)
Is there any way to achieve this?
My question may sound naive, but I am curious to know how this could be achieved.
The answer is no. Containerization technologies cannot handle compute, network and storage resources across a cluster as one unit; they only orchestrate them.
Docker and co. are based on cgroups, namespaces, layered filesystems, virtual networks, etc. All of these are wired to a specific machine and its running processes, and additional services are required to manage containers not only on one concrete machine but across a cluster (for example, Mesos, Kubernetes or Swarm).
You can look at products such as Hadoop, Spark, Cassandra, the Akka framework and other distributed computation implementations for examples of how cluster resources can be managed as one unit.
PS: Always keep in mind that system complexity grows as you distribute components further.
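To make the distinction concrete, here is a hedged sketch of what swarm mode does give you: several replicas spread across nodes, each capped individually, rather than one container that pools the memory of all nodes (the service name and limit are placeholders):

# Run 3 nginx replicas across the swarm; each task gets its own 512 MB cap,
# the caps are not added together into one big container
docker service create --name web --replicas 3 --limit-memory 512m nginx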

Why is Docker virtualization faster vs a VM? [duplicate]

This question already has answers here:
How is Docker different from a virtual machine?
From what I understand, VMs use hardware virtualization, whereas Docker uses software virtualization and therefore has better performance (say, in a case where I am running Dockerized Linux on a Windows machine). But what exactly is the reason that OS-level virtualization is faster than hardware virtualization?
Docker doesn't do virtualization. It uses kernel namespaces to achieve a chroot-like effect not just for the root filesystem but also for process information (PID namespace), mount points, networking, IPC (shared memory), UTS information (hostname) and user IDs.
The containers share the kernel with the host. For security, Docker uses AppArmor/SELinux, Linux capabilities and seccomp to filter system calls. Control groups (known as cgroups) are used for process accounting and for imposing limits on resources.
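As a small illustration of those cgroup-backed limits, a sketch (the exact cgroup file path depends on whether the host uses cgroup v1 or v2):

# Cap a container at 256 MB of RAM and one CPU; the limit is visible inside the container's cgroup
docker run --rm -m 256m --cpus 1 alpine sh -c \
  'cat /sys/fs/cgroup/memory.max 2>/dev/null || cat /sys/fs/cgroup/memory/memory.limit_in_bytes'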
Docker is not about virtualization. It's about containerization (how to run a process in an isolated environment).
This means that you can't run a Linux container on Windows or a Windows container on Linux without using some kind of virtualization (VirtualBox, Hyper-V, ...). It's OK to do this on your laptop while developing, but in production you would choose the appropriate architecture for your containers.
What is a container?
from A sysadmin's guide to containers:
Traditional Linux containers are really just ordinary processes on a Linux system. These groups of processes are isolated from other groups of processes using:
resource constraints (control groups [cgroups]),
Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and
namespaces (PID, network, mount, etc.).
Setting all of this up manually (network namespaces, iptables rules, etc.) with Linux commands would be tricky, so it's the Docker daemon's job to do it: you type docker ... commands and things happen under the hood.
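To get a feel for what the daemon automates, a rough sketch of doing a small part of it by hand with util-linux's unshare (the rootfs path is a placeholder, and this covers only namespaces, not cgroups, layered filesystems or networking):

# A shell in its own PID/mount/UTS/IPC/network namespaces, set up by hand
sudo unshare --pid --fork --mount --uts --ipc --net chroot /path/to/rootfs /bin/sh
# Roughly the same isolation (plus cgroups, an image filesystem and networking) in one command
docker run -it --rm alpine /bin/sh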
About speed...
First of all, containers can be slower than running a process directly on the host networking stack, because of the complexity that is introduced. See for example: Performance issues running nginx in a docker container
But they do offer you speed. How?
containers are not full OSes (base images are small)
containers follow the concepts of microservices and "do one thing and do it well". This means you don't put everything into one container the way you would with a VM. This is called separation of concerns and results in more lightweight app components. It also gives speed to developers, because different teams can work on their components separately (others refer to this as developer velocity), using different programming languages and frameworks.
image layers: Docker has an internal way of splitting an image into layers, and when you build a new image, layers can be reused. This gives you good deployment speeds (consider how useful this is in the case of a rollback). See the sketch right below.
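You can see those layers yourself; a quick sketch (any small public image works here):

# Each line of the history output is one layer; unchanged layers are reused from cache on rebuilds and pulls
docker pull nginx:alpine
docker history nginx:alpine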
About Windows Containers
Containers were originally a "Linux" thing, but this wave of containerization has also had an effect on the Windows world. In the beginning, Docker Toolbox used VirtualBox to run containers in a Linux VM. Later, Docker for Windows was introduced, which gives the option to run containers directly on the host or on Hyper-V. If you visit Windows Container Types you can find out more.

How to collect metrics from services running in Docker containers using collectd, Telegraf or similar tools

What is the common practice for getting metrics from services running inside Docker containers, using tools like collectd or InfluxData's Telegraf?
These tools are normally configured to run as agents in the system and get metrics from localhost.
I have read the collectd docs, and some plugins allow getting metrics from remote systems, so I could have, for example, an NGINX container and then a collectd container to collect the metrics, but isn't there a simpler way?
Also, I don't want to use Supervisor or similar tools to run more than "one process per container".
I am thinking about this in conjunction with a system like DC/OS or Kubernetes.
What do you think?
Thank you for your help.
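For reference, a minimal sketch of the two-container layout described in the question, with Telegraf scraping NGINX over a shared Docker network. It assumes an NGINX config that exposes stub_status and a telegraf.conf whose nginx input plugin points at http://web/status; both of those, and the names used, are assumptions:

# Put both containers on the same network so telegraf can reach nginx by name
docker network create metrics
docker run -d --name web --network metrics nginx:alpine
docker run -d --name telegraf --network metrics \
  -v "$PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro" telegraf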

How can you run multiple docker containers from your computer, and have each use a different IP (using a VPN)?

I'm trying to understand how this would conceptually work, and also put it into use.
I'm on a Mac running OS X 10.11.x, using a Private Internet Access VPN. I'm trying to run multiple (ideally 3-4) Docker containers and tell them all to use separate IPs for their connections. I'm not sure how I could even get separate IPs out of the PIA VPN to begin with, let alone figure out the Docker side.
Or do I have the picture wrong?
Thanks in advance.

How can I run multiple docker nodes on my laptop to simulate a cluster?

My goal is to simulate a cluster environment where I can test my applications and tools.
I need to have a minimum of 3 Docker nodes (not containers) running, and to have access to them over SSH.
I have tried the following:
1 - Installing multiple VMs from the Ubuntu minimal CD.
Result: I ended up with huge files to maintain, and repeating the process is really painful and unpleasant.
2 - Downloading a Vagrant box that has Docker inside (there are some here).
Result: I can't access them over SSH, and can't really fire up more than one box (OK, I can, but it is still not optimal).
3 - Tried running "Kitematic" multiple times, but had no success with it.
What do you do to test your clustering tools for Docker?
My only "easy" solution is to run multiple instances from some provider and pay for per-hour usage, but that is really not that easy when I am offline, or when I just don't want to pay.
I don't need to run multiple "containers", but multiple "hosts", which I can then join together into a single cluster to simulate a distributed data center.
You could use docker-machine to create a few VMs locally. You can connect to all of them by changing the environment variables.
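A short sketch of that, assuming the VirtualBox driver is available (node names are placeholders):

# Create two local VMs, each running its own Docker daemon
docker-machine create -d virtualbox node1
docker-machine create -d virtualbox node2
# Point the local docker CLI at one of them by switching environment variables
eval "$(docker-machine env node1)"
docker info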
You might also be interested in something like https://github.com/dnephin/compose-swarm-sandbox/. It creates multiple docker hosts inside containers using https://github.com/dnephin/docker-swarm-slave.
If you are using something other than swarm, you would just remove that service from /srv/.
I would recommend using docker-machine for this purpose, as the machines it creates are very lightweight and very easy to install, run and manage.
Try creating 3-4 docker machines, pull the swarm image onto them to make a cluster, and use Docker Compose to manage the cluster in one go.
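A hedged sketch of wiring the machines into a cluster; this uses the built-in swarm mode rather than the classic standalone swarm image mentioned above, and assumes the VirtualBox driver:

docker-machine create -d virtualbox manager
docker-machine create -d virtualbox worker1
# Initialise the swarm on the manager, then join the worker with the generated token
eval "$(docker-machine env manager)"
docker swarm init --advertise-addr "$(docker-machine ip manager)"
TOKEN=$(docker swarm join-token -q worker)
docker-machine ssh worker1 "docker swarm join --token $TOKEN $(docker-machine ip manager):2377"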
Option 2 should be valid, but what you looked at was a VM box using the Docker provisioner. I would recommend looking at the Vagrant Docker provider instead: you do not need a Vagrant box in this scenario, only Docker images. The Vagrantfile is still there, though, and you can easily set up your multiple machines from a single Vagrantfile.
Here is a nice blog post, but I am sure there are plenty of other good articles that explain it in detail.
I recommend running CoreOS on Vagrant; it has been designed for exactly this kind of request, with clustering enabled, and 3 instances are started by default.
With etcd and fleet, you should be able to get the cluster working properly.
