I am pretty new to containerization, and after looking into the k8s and Docker documentation I am not sure if I can achieve my goal.
In real life I have 6 PCs connected over a LAN, exchanging some data and showing it on monitors. I want to mimic this setup without the hardware, using Kubernetes. Note that the setup runs GUI applications: 6 monitors in total, with a different app on each of them. (In the accompanying diagram, red lines indicate Ethernet connections, and IP addresses are given for all PCs.)
So the expected result would be a k8s cluster with 6 Docker containers configured to talk to each other, and some of those containers able to make use of the displays.
Is it possible to recreate this setup, provided the OS is the same as on the hardware, or should I make some changes?
As I want to run it all on 1 PC with 6 monitors to show the GUIs, would the PC need multiple video cards as well?
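For reference, this is roughly the kind of per-container display access I have in mind (just a sketch: the image name is a placeholder, and mounting the X socket is one technique I've seen for GUI containers):

    # One GUI app per container, each pointed at an X display on the host
    # (how displays map to monitors depends on the X server setup).
    # The host may also need to allow local X connections: xhost +local:
    docker run -d \
      -e DISPLAY=:0 \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      --name pc1-app \
      my-gui-app:latest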
The payload per machine is not that big, so I think a single 8-core processor might be enough?
Thanks for any insights and help.
I'm trying to measure the power consumption of Jitsi servers with varying numbers of clients.
Current set-up:
several different devices
all connected locally with a switch to the server
Problem: it is inefficient and infeasible to scale up.
Is there a better way?
Is there a way to have Docker clients running with a virtual camera/audio?
I've considered trying to have the clients run in Docker.
I'm also aware that you can connect multiple instances from the same device, but I'm trying to avoid this, as I don't know whether it will affect the results (since the server may then only receive one video stream to decode, changing the power consumption recorded).
Thanks
Maybe it could be a good idea to create:
a Dockerfile for each "type" of device.
a Dockerfile for your Jitsi server.
After that you can install minikube (a local standalone k8s cluster) to create:
a deployment for each device type (using the Docker image created just before): you will be able to update the number of pods (instances of your devices) as you wish.
a deployment for your Jitsi server (using the Docker image created just before).
E.g.: you can set 100 pods of 'devices' if you want.
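A minimal sketch of that flow (image names, directory layout, and the replica count are placeholders):

    # Build one image per Dockerfile (assumed to live in ./device and ./jitsi)
    docker build -t jitsi-client:latest ./device
    docker build -t jitsi-server:latest ./jitsi

    # Start a local single-node cluster
    minikube start
    # locally built images may need to be loaded into the cluster:
    minikube image load jitsi-client:latest
    minikube image load jitsi-server:latest

    # One deployment per image
    kubectl create deployment jitsi-server --image=jitsi-server:latest
    kubectl create deployment jitsi-client --image=jitsi-client:latest

    # Scale the simulated devices up or down as you wish
    kubectl scale deployment jitsi-client --replicas=100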
There are many articles and videos comparing containers, virtual machines, and physical machines. However, almost all of the information is theoretical: containers are fast, VMs are secure, etc. I could not find descriptions of specific use cases or guidance on when to choose virtual machines or physical machines but not containers. So currently I cannot imagine a situation in which somebody would recommend not using containers.
Question:
Could you please list specific applications or solutions for which you would recommend using VMs, but not containers?
Could you please list specific applications or solutions for which you would recommend running the OS on bare metal, but not containers or VMs?
Here is an example of the kind of answer I would appreciate (note that I am not sure whether this information is correct):
Use case 1: Edge Router
An edge router is a router that connects an organizational network to the Internet. In this case it is also assumed that the vendor provides the router not as a device but as a software package (a virtualized router).
An edge router will most probably be a target of attacks, so security requirements come first.
Containers are not recommended in this case. By default, containers provide a mediocre level of security. Strong security can be achieved with complex configuration (what configuration?), but this is more difficult than in the case of a VM or bare metal. In addition, a high security level may require a specially hardened Linux kernel, and container technology does not allow adjusting the kernel configuration, since containers share the host's kernel.
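(For illustration, this is the sort of lock-down configuration I have seen mentioned, though I am not sure it is sufficient; the image name and seccomp profile path are placeholders:)

    # Drop all capabilities except what a router needs, make the filesystem
    # read-only, block privilege escalation, and apply a custom seccomp profile.
    docker run -d \
      --cap-drop=ALL \
      --cap-add=NET_ADMIN \
      --cap-add=NET_RAW \
      --read-only \
      --security-opt no-new-privileges \
      --security-opt seccomp=/path/to/profile.json \
      vendor/edge-router:latest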
Virtual machines would be a good choice if the vendor provides the router software as a VM image, or when the organization has many edge routers (for example, many offices with Internet access points) and has, or is ready to create, a well-established process for preparing VM images. In this case, using VMs will simplify rollout, updating, and healing of the virtualized edge router. A VM also provides a high level of security; nevertheless, it is still recommended to place such a VM on a dedicated server and not share that server with other applications/VMs, to avoid cross-VM attacks.
A physical machine would be a good choice if the router vendor provides the router's software as an application package (not as a VM), such as an .rpm, and the rollout, update, and healing processes are not expected to take much effort. This might be the case when the company has few routers (so updates can be performed manually or automated with tools like Ansible) and a couple of hours of planned and unplanned downtime is acceptable.
Use case 2: ...
Thank you in advance.
The question is a bit vague, so I'll try my best:
You'd usually allocate work to containers when you have a few separate applications, limited physical resources, and you'd like to run each of them with its own environment (different runtime version, architecture, and dependencies), which would be cumbersome to manage on one machine (physical or virtual).
You'd use a VM when you specifically want a feature that containers can't satisfy, or that would be a headache to set up with containers but that a quick and simple VM could solve (and again, you have limited resources you'd like to share between use cases).
And finally, a physical machine when performance is of the essence, e.g. I/O throughput and the latency around it.
You can also mix and match to fit each tier's needs:
Say we need to run many applications for which VMs would be too much overhead, and containers would make their handling more automated and streamlined, so we use containers with k8s; but on the other hand, we want the local storage offered to those containers to be very fast, so we run the k8s cluster on physical machines.
If recoverability is essential, we would use VMs, thanks to the option of snapshotting VM state over time (see the quick example below).
It's all a big LEGO set you can mix and match depending on your use case and needs.
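For instance, with VirtualBox a snapshot and a rollback are a one-liner each (assuming a VM named app-vm; the name is a placeholder):

    # Take a named snapshot of the VM, then roll back to it later
    # (restoring requires the VM to be powered off first)
    VBoxManage snapshot "app-vm" take "before-upgrade"
    VBoxManage snapshot "app-vm" restore "before-upgrade"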
I've never done anything with Docker Swarm or Kubernetes, so I'm trying to learn what does what, and which is best for my purpose, before tackling it.
My scenario:
I have a Desktop PC running Docker Desktop, and
I have a Raspberry Pi running Docker on Raspbian.
This is all on a home LAN, so I don't really want to get crazy with complicated things.
I want to run Pi-hole and DNSCrypt Proxy containers on both 'machines' (for redundancy, mostly because Docker Desktop seems to crash a lot, taking my entire DNS system down with it when I use just that machine for Pi-hole).
My main thing is that I want all the data/configuration between them to stay in sync (i.e. Pi-hole's container data stays in sync on both devices, etc.), and I want the manager to make sure it's always up in case of crashes, and so on.
My questions:
Being completely new to this area, and just doing a bit of poking around:
it seems that Kubernetes might be a bit much, and more complicated than I need for this?
That's why I was thinking Swarm instead, but I'm also not sure whether either of them will keep data synced?
And say I create 2 Pi-hole containers from the manager machine, does it put 1 on the manager machine and 1 on the worker machine?
Any info is appreciated!
Docker doesn't quite have anything that directly meets your need, but if you've got a reliable file server on your home LAN, you could do it really easily.
Broadly speaking, you want to look at Docker Volume Plugins. Most of them ultimately work via an external storage provider and so won't be that helpful for you. There are a couple of more exotic ones, like Portworx or StorageOS, that can do portable/replicated storage purely in Docker, but I think most of them require a paid license.
But if you have a file server that you trust to stay up and running, you can mount an NFS/CIFS share as a volume, as mentioned in the Docker docs, and Docker can handle re-connecting it when a container moves from one node to another due to a failure.
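For example, a named volume backed by an NFS export could look like this (the server address and export path are placeholders):

    # Create a volume backed by an NFS share on the file server
    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.1.10,rw \
      --opt device=:/exports/pihole \
      pihole-data

    # Use it like any other volume
    docker run -d --name pihole -v pihole-data:/etc/pihole pihole/pihole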
One other note: you'll want both of your nodes to be managers, and one container per service in your swarm. Be aware that a swarm needs a majority (quorum) of its manager nodes up to keep managing services, so with two managers the loss of either one halts rescheduling (an odd number of managers is what's normally recommended). Multiple separate instances of a service would generally only be helpful if it was designed as a distributed/fault-tolerant application.
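As a rough illustration of how replicas get spread (volumes omitted; the NFS volume from above would have to be created on every node):

    # With 2 replicas and 2 nodes, the scheduler will normally place
    # one task (container) on each node.
    docker service create --name pihole --replicas 2 pihole/pihole

    # Check which node each task landed on
    docker service ps pihole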
I am pretty new to Docker. At the moment I want to maintain a network of Raspberry Pis. Each Pi should have the same OS with exactly the same system running. I want to handle deployment and software updates with Docker.
Currently I am using HypriotOS, which ships with Docker on its images.
My main goal is to run an application in the Docker containers which needs to access the WiFi interface directly. Plain network access won't be enough; there needs to be deeper access, like changing the WiFi mode (monitor mode).
Long story short: is it possible to pass a USB WiFi card through directly to the Docker container so that it appears as the wlan0 interface? Or are there other ways that you can think of?
Thanks for your answers in advance!
Take a look at the privileged flag for your container; it will give you full access to the devices on the system. See the Docker run documentation for more information.
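A sketch of what that looks like; the image name is a placeholder, and --net=host is added so the container sees the host's network interfaces (wlan0 included) instead of its own isolated network namespace:

    # --privileged grants access to all host devices; --net=host shares the
    # host's network stack, so wlan0 is visible and can be switched to
    # monitor mode (given the right userspace tools inside the image).
    docker run -it --privileged --net=host my-wifi-app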
I'm using Delphi to develop real-time control software, and over the last couple of years I have done some work running older Windows installations under Microsoft's VirtualPC, which works fine for 'pure software' development (i.e. no or limited access to the outside world). Such tools seem able to work with network connections, but I have to maintain software which performs I/O via the parallel port (through a device driver). We also use USB I/O. In the past I've liked Microsoft's virtual tools because it takes time to install a new operating system and then (in my case) install Delphi and a load of libraries and components to provide development support. In those circumstances I've not been too bothered by my lack of access to the low-level I/O ports.
I want to up my game, and I'm happy to pay for a good virtualisation tool IF I can have access from it to the outside world, i.e. I want to be able to configure it to allow access to my machine's parallel port and COM ports in the same way as if it were running natively. This access has to expose the parallel port in register terms, i.e. to 'see' the port at an address such as $0378, and to support I/O operations on those registers (via the appropriate kernel access), as my Windows 7 64-bit installation is able to do.
I see that there are a number of virtualisation solutions out there now, but it's quite hard to ascertain the capability of each at such a low level. Does anyone have any experience or knowledge in this area?
The VMware products would be suited best for this. You can add virtual serial and parallel ports and forward them to a physical port on the host, or even to a file or a named pipe.
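For instance, the relevant entries in the VM's .vmx configuration file might look something like this (a sketch; the host-side device names depend on your machine):

    # Map the guest's first parallel and serial ports to physical host ports
    parallel0.present = "TRUE"
    parallel0.fileType = "device"
    parallel0.fileName = "LPT1"
    serial0.present = "TRUE"
    serial0.fileType = "device"
    serial0.fileName = "COM1"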
You can also connect any USB device that is connected to the host machine.
This works with VMware Workstation, and it might even work with the free VMware Player too.