I am pretty new to Docker. At the moment I want to maintain a network of different Raspberry Pis. Each Pi should run the same OS with exactly the same system on it. I want to handle deployment and software updates through Docker.
Currently I am using HypriotOS, which ships with Docker on its images.
My main goal is to run an application in the Docker containers which needs to access the WiFi interface directly. Plain network access won't be enough; it needs deeper access, such as changing the WiFi mode (monitor mode).
Long story short: is it possible to pass a USB WiFi card through directly to the Docker container so that it appears as the wlan0 interface? Or are there other ways that you can think of?
Thanks for your answers in advance!
Take a look at the --privileged flag for your container; it will give you full access to the devices on the system. See the Docker run documentation for more information.
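A minimal sketch of what that could look like, assuming a placeholder image name my-wifi-app; combining --privileged with the host's network namespace makes the host's wlan0 visible inside the container:

    # Full device access plus the host's network namespace, so the
    # container sees the host's wlan0 directly ("my-wifi-app" is a
    # placeholder image name).
    docker run --privileged --net=host my-wifi-app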
I am pretty new to containerization, and after looking into the k8s and Docker documentation, I am not sure if I can achieve my goal.
In real life I have 6 PCs connected over a LAN, exchanging some data and showing it on monitors. I want to mimic that setup without the hardware, using Kubernetes. Note that the setup runs GUI applications: 6 monitors in total, with a different app on each of them. In my diagram, red lines indicate connections through Ethernet cables, and I have added IP addresses for all the PCs.
So the expected result would be a k8s cluster with 6 Docker containers configured to talk to each other, with some of those containers able to make use of displays (see the sketch below for one way displays are commonly handled).
Is it possible to recreate this setup, provided the OS is the same as on the hardware, or should I make some changes?
Since I want to run it all on one PC and show the GUIs on 6 monitors, would the PC need multiple video cards as well?
The workload per machine is not that big, so I think a single 8-core processor might be enough?
Thanks for any insights and help.
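For the display part, a hedged sketch of one common approach on Linux hosts: share the host's X11 socket with a container and point DISPLAY at the screen you want (the image name gui-app is a placeholder):

    # Run a GUI container against the host's X server. ":0" selects the
    # host display; a multi-monitor setup could use a different DISPLAY
    # value or X screen per container. The host may first need to allow
    # local X connections, e.g. with "xhost +local:".
    docker run -e DISPLAY=:0 \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        gui-app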
I need to develop one-to-one network communication between two or more applications. According to the network prototype, both nodes of the communication need to be on the same port. In a local (Linux) development environment, I can't map both applications to the same port. To solve the problem, I am thinking of using Docker, but I don't know how to go about it. To get started, I tried to build Docker images of the two applications; after building one of them, the second one can't be built due to the occupied port. Some online searching didn't turn up any helpful information. I am using Spring Boot to build these applications, and I know I will need to use Kubernetes for service discovery at some point later.
How shall I go about this?
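A minimal sketch of the Docker mechanism in play, assuming two hypothetical images app-a and app-b that both listen on port 8080: each container gets its own network namespace, so identical ports don't collide, and a user-defined bridge network lets the containers reach each other by name on that shared port:

    # Create an isolated bridge network and start both apps on it.
    # Ports only collide when published on the host; inside the network
    # each container can bind 8080 independently.
    docker network create appnet
    docker run -d --name app-a --network appnet app-a
    docker run -d --name app-b --network appnet app-b
    # app-b can now reach app-a at app-a:8080, and vice versa.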
I've never done anything with Docker Swarm or Kubernetes, so I'm trying to learn what does what, and which is best for my purpose, before tackling it.
My scenario:
I have a Desktop PC running Docker Desktop, and ..
I have a Raspberry Pi running Docker on Raspbian
This is all on a home LAN, so I don't really want to get crazy with complicated things.
I want to run Pi-hole and DNSCrypt Proxy containers on both 'machines' (as redundancy, mostly because Docker Desktop seems to crash a lot, taking down my entire DNS system with it when I use only that machine for Pi-hole).
My main thing is, I want all the data/configuration, etc. between them to stay in sync (i.e. Pi-hole's container data stays in sync on both devices), and I want the manager to make sure everything is always up, in case of crashes and so on.
My questions:
Being completely new to this area, and just doing a bit of poking around:
it seems that Kubernetes might be a bit much, and more complicated than I need for this?
That's why I was thinking Swarm instead, but I'm also not sure whether either of them will keep data synced?
And, say I create 2 Pi-hole containers on the Manager machine, does it create 1 on the manager machine, and 1 on the worker machine?
Any info is appreciated!
Docker doesn't quite have anything that directly meets your need, but if you've got a reliable file server on your home LAN, you could do it really easily.
Broadly speaking, you want to look at Docker volume plugins. Most of them ultimately work via an external storage provider and so won't be that helpful for you. There are a couple of more exotic ones, like Portworx or StorageOS, that can do portable/replicated storage purely in Docker, but I think most of them require a paid license.
But if you have a file server that you trust to stay up and running, you can mount an NFS/CIFS share as a volume, as described in the Docker docs, and Docker can handle re-connecting it when a container moves from one node to another due to a failure.
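A minimal sketch of that, assuming a hypothetical NFS server at 192.168.1.20 exporting /srv/pihole; Docker's built-in "local" driver can mount the share as a named volume:

    # Create a named volume backed by the NFS export. The address and
    # export path are placeholders for your file server's details.
    docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.20,rw \
        --opt device=:/srv/pihole \
        pihole-data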
One other note: you might think two manager nodes gives you redundancy, but Swarm managers use Raft consensus and need a majority of managers up, so a two-manager swarm can't tolerate losing either one. With two machines, a single manager plus a worker is the usual layout, and you need that manager working for the swarm to be managed (this is important if the manager crashes). Also, run one container per service; multiple separate instances would generally only be helpful if the service were designed as a distributed/fault-tolerant application.
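A rough sketch of how that two-node setup might be bootstrapped, assuming the desktop's LAN address is 192.168.1.10 (hypothetical) and the pihole-data volume from the sketch above exists on both nodes:

    # On the desktop (the manager), initialise the swarm:
    docker swarm init --advertise-addr 192.168.1.10

    # On the Pi, join as a worker using the token that init prints:
    docker swarm join --token <token> 192.168.1.10:2377

    # Create the Pi-hole service with one replica; Swarm restarts or
    # reschedules the task if it crashes or its node goes down.
    docker service create --name pihole --replicas 1 \
        --mount type=volume,source=pihole-data,target=/etc/pihole \
        pihole/pihole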
I need two things:
1. Disabled internet access on my VM.
2. Enabled local network access from my VM.
I'm currently trying to replicate a bug on my CentOS 7 VM, which requires that I have no direct internet access and can only connect to the web through a proxy on my local network. I've taken two paths to this so far:
1. Disable the internet on my Windows host machine. Why this didn't work: my VM just...froze until the internet was turned back on. I'm currently considering looking into whether a daemon is responsible and disabling it.
2. Disable internet access only on my VM. This hasn't worked yet. It's the path I'm taking right now, but everything I've tried has done the same as the above: frozen my VM, only this time, to get it back, I need to shut it down completely and restart it. Given that I have to mount drives on it to do what I need to do, this is understandably a less than ideal approach. Below are images of my NAT settings and the in-VM network UI.
I've also gone in and turned on Airplane Mode, disabled IPv4 and IPv6 manually, and gone through all the network settings to see what was there. A Google search turned up nothing except an OS X-specific workaround which I couldn't replicate on my system.
Does anybody have any suggestions?
EDIT:
The above still applies, but I'm trying to take another route to #2. What I'd like to do is shut down all traffic to my VM except that from the proxy and the local network. However, my network is accessible only through my host machine, so I don't want to shut out my host machine entirely, just the internet traffic coming through it. Any thoughts?
You could achieve the desired effect by disabling the nameserver configuration.
Just empty the /etc/resolv.conf file (of course after making a backup for later).
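A minimal sketch of that, with the caveat that it only disables DNS name resolution (traffic to raw IP addresses would still get through):

    # Keep a backup so the configuration can be restored later.
    sudo cp /etc/resolv.conf /etc/resolv.conf.bak
    # Truncate the file so no nameservers are configured.
    sudo sh -c '> /etc/resolv.conf'
    # Restore when finished:
    # sudo cp /etc/resolv.conf.bak /etc/resolv.conf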
I was wondering if it is possible to offer Docker images, but not allow any access to the internals of the built containers. Basically, the user of the container images can use the services they provide, but can't dig into any of the code within the containers.
Call it a way to obfuscate the source code, but also a way to offer a service (the software) to someone on the basis of the container, instead of offering the software itself. Something like "Container as a Service", with the main advantage that the developer can use these container(s) for local development too, yet with no access to the underlying code within the containers.
My first thought is that whoever controls the Docker instances controls everything, down to root access. So no, it isn't possible. But I am new to Docker and am not aware of all its possibilities.
Is this idea in any way possible?
An obfuscation-only solution would not be enough, as the question "Encrypted and secure docker containers" details.
You would need full control of the host your containers run on in order to prevent any "poking". And that is not the case in your scenario, where the developer does have access to the host (i.e. his/her local development machine) where said container would run.
What is sometimes done is to have some piece of "core" code run in a remote location (a remote server, or a USB device), in such a way that this external piece of code can do some client authentication, but also, and more importantly, run some core business logic, guaranteeing that the externally located code has to execute for the job to get done. If it were only a check rather than actual core code, a cracker could simply override it and avoid calling it on the client side. But if the code is genuinely required to run, and it doesn't, the software won't be able to finish its processing. Of course there is an overhead to all of this, both in complexity and probably in computation time, but that's one way you could deploy something that will unfailingly be required to contact your server or external device.
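A hypothetical sketch of the shape of such a client, where the endpoint, token, and file names are all invented for illustration:

    # The container ships everything except the core step; that step only
    # exists behind the vendor's API, so the client must call out for it.
    # core.example.com, LICENSE_TOKEN and input.json are placeholders.
    RESULT=$(curl -s \
        -H "Authorization: Bearer $LICENSE_TOKEN" \
        --data @input.json \
        https://core.example.com/v1/process)
    echo "$RESULT"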
Regards,
Eduardo