KVM monitoring without agents

I have a host with KVM guests and I want to monitor these guests (e.g. with Nagios).
However, I don't want to install an agent on the virtual machines.
In fact, I want to monitor the VMs from the host.
Is it possible to get the same information?
Thanks in advance for your help.
John S

On the host side you can monitor the presence of the hypervisor process, which may provide some information about the guest's state/health. It cannot provide information about the guest OS or the applications running inside the guest, so you will likely need some agent inside the guest to get meaningful information. For Nagios, an SNMP server inside the guest is perhaps the easiest approach.
So the answer depends on what exactly you mean by "monitoring the guest".
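As a minimal sketch of the host-side approach, a Nagios-style check could ask libvirt for the guest's state and map it to Nagios exit codes. This assumes libvirt's `virsh` CLI is available on the host; the domain name `myguest` is a placeholder.

```python
#!/usr/bin/env python3
"""Sketch of an agentless Nagios-style check for a KVM guest, run on the host."""
import shutil
import subprocess
import sys

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def nagios_status(domstate: str):
    """Map `virsh domstate` output to a Nagios (exit_code, message) pair."""
    state = domstate.strip()
    if state == "running":
        return OK, "OK - guest is running"
    if state in ("paused", "pmsuspended"):
        return WARNING, f"WARNING - guest is {state}"
    if state in ("shut off", "crashed"):
        return CRITICAL, f"CRITICAL - guest is {state}"
    return UNKNOWN, f"UNKNOWN - unexpected state: {state!r}"

if __name__ == "__main__" and shutil.which("virsh"):
    domain = sys.argv[1] if len(sys.argv) > 1 else "myguest"
    out = subprocess.run(["virsh", "domstate", domain],
                         capture_output=True, text=True).stdout
    code, message = nagios_status(out)
    print(message)
    sys.exit(code)
```

Note that this only tells you whether the guest is up from the hypervisor's point of view; as the answer says, it cannot see inside the guest OS.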

Related

Can I use Google Container Optimised OS as a secure container sandbox?

I have a VM running Google's Container Optimised OS, and I want to allow running code that users provide; each user has their own container.
This code can be malicious, so I want to limit its scope to just its own container.
https://cloud.google.com/container-optimized-os/docs/concepts/security
Questions
A. Does the OS add enough protections for containers to be used as a sandbox? The documentation mentions that there is added security, but it does not mention anything about how effective it is at containing malicious code within a container.
B. Can Docker volumes be used to limit the file system scope of the code running in a container? I want to use the `docker volume` CLI to give each user a folder on disk they can write to, but I want to prevent users from reading each other's data.
Any help much appreciated, Thanks.

Data exchange through FIROS fails

Two Ubuntu virtual machines are installed on one computer. One of them also hosts another virtual machine running the Fiware Orion Context Broker. Both VMs have ROS.
I am trying to make a simple publisher-subscriber ROS program that sends a message from one VM to the other through FIROS (FIROS is installed and configured). The problem is that the message from the publishing VM is being sent to FIROS (or rather, the topic is shared through FIROS), but somehow it never reaches the subscribing VM, and therefore I cannot see the message being sent.
We are using the local network, so there shouldn't be an issue with port forwarding. Moreover, rostopic list shows the FIROS topics on both VMs.
Could the issue lie in using virtual machines rather than two separate PCs?
Thank you in advance.
I solved this.
There were two problems. First, the IP address of the server in config.json must be that of the machine where FIROS is running, not of the machine I wanted to send to.
Second, FIROS has to be launched last, after all the other nodes are running, so that it can subscribe to their topics and forward the data. I was starting FIROS first, and it failed to subscribe because there was nothing to subscribe to at that moment.
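To illustrate the first fix, the relevant part of config.json would look roughly like this. The field names, addresses, and ports below are illustrative assumptions; check your FIROS version's documentation for the exact schema. The key point is that the server address is the machine FIROS itself runs on:

```json
{
    "environment": "production",
    "production": {
        "server": {
            "address": "192.168.1.10",
            "port": 10100
        },
        "contextbroker": {
            "address": "192.168.1.20",
            "port": 1026
        }
    }
}
```

Here 192.168.1.10 stands for the VM running FIROS and 192.168.1.20 for the host of the Orion Context Broker.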

How can I somewhat securely run distccd on a docker image in the cloud?

I'm compiling things on a Raspberry Pi and it's not going fast enough, even when I use my desktop's CPU to help.
I could just install distcc the old-fashioned way on a cloud server, but what if someday I wanted to quickly spin up a bunch of servers for a few minutes with docker-machine?
distccd can use SSH auth, but I don't see a good way to run both SSH and distccd, and it seems there would be hassle in managing SSH keys.
What if I configured distcc to accept only the WAN IP of my house (and then turned the image off as soon as it was done)?
But it would be great to make something other Raspberry Pi users could easily spin up.
You seem to already know the answer to this: set up distcc to use SSH. This ensures encrypted communication between your distcc client and the distcc servers you have deployed as Docker images in the cloud. You have highlighted that the cost of doing this would be the time spent setting up an SSH key that would be accepted by all of your Docker images. From memory, this key could be the same for all the Docker nodes, as long as they all use the same user name with the same key. Is that really such a complex task?
You ask for a slightly less secure option for building your compile farm. Limiting access to the internet-facing IP address of your house would reduce the exposure and make it harder for others to use your build cluster. Someone might spoof that IP address and gain access to your distcc servers, but that would only cost you the runtime they consume. The larger concern is that your code would be transmitted in plain text over the internet to these distcc servers. If that is not a big concern for you, this option could be considered low risk.
An alternative might be to set up a secure remote network of Docker nodes with VPN access to them. This binds your local machine to the remote network, and you can treat the whole thing as a secured LAN. If it is considered safe for the Docker nodes to talk among themselves unencrypted within the cloud, it should be just as secure to reach them the same way over a VPN link.
The best option might be to dig out some old PCs and set those up as local distcc servers. Within a LAN there is no need for this security.
You mention a wish to share this with other Raspberry Pi users. There have been public compile farms in the past, but many of them have fallen out of favour. Distributing such things publicly, as computational projects such as BOINC do, works poorly because network latency and transfer rates can slow builds significantly.
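For the IP-allowlist option discussed above, a minimal distccd image might look like the following sketch. The base image and job count are assumptions, and 203.0.113.5 is a placeholder for the home WAN address; distccd's real `--allow` option takes an address or CIDR mask, and 3632 is its default TCP port:

```dockerfile
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends distcc gcc g++ \
    && rm -rf /var/lib/apt/lists/*
EXPOSE 3632
# Accept jobs only from the home WAN address; adjust the CIDR as needed.
CMD ["distccd", "--daemon", "--no-detach", "--log-stderr", "--allow", "203.0.113.5/32", "--jobs", "4"]
```

Remember that jobs still travel unencrypted, so this only limits who can submit them, not who can observe the traffic.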

Docker: get access to wifi interface

I am pretty new to Docker. At the moment I want to maintain a network of different Raspberry Pis. Each Pi should have the same OS with exactly the same system running. I want to handle deployment and software updates through Docker.
Currently I am using HypriotOS, which offers Docker on its images.
My main goal is to run an application in the Docker containers which needs to access the Wi-Fi interface directly. Pure network access won't be enough; it needs deeper access, such as changing the Wi-Fi mode (monitor mode).
Long story short: is it possible to pass a USB Wi-Fi card directly through to the Docker container so that it appears as the wlan0 interface? Or are there other ways you can think of?
Thanks for your answers in advance!
Take a look at the --privileged flag for your container; it will give you full access to the devices on the system. See the Docker run documentation for more information.

What is the worst case when the epmd port is open?

When using Erlang programs like ejabberd, the Erlang port mapper daemon (epmd) is started and opens port 4369.
By default this port is accessible over the internet (only the most recent ejabberd versions allow configuring epmd to bind to localhost).
The ejabberd documentation recommends blocking this port via packet filter rules and a comment in the Debian bug tracker calls this default behavior 'a nightmare from a security point of view'.
What is the worst case scenario when ejabberd is running and port 4369 is not blocked?
Say the firewall is misconfigured by accident, or something like that.
What would be the most evil thing an Erlang-fluent attacker could do over this port?
And under what user/with what privileges does epmd run on a Linux distribution (e.g. Debian/Ubuntu)?
Great question.
Besides port 4369 you also have to take into account the ports epmd will hand out for the actual inter-node communication (5001-6024 by default). Like all TCP services it is vulnerable to evil-doers, as software is never bug-free and thus hackable; think of SSH and its buffer-overflow vulnerabilities. Given that epmd doesn't provide many services, that inter-node communication is authenticated with a secret cookie, and Erlang's relatively old age, you would not expect many bugs in that area. But a good pedigree alone doesn't count for much in security. ;-)
As you wrote, you need a properly configured firewall to make sure the server is not exposed like that, and your maintenance process needs to verify that the firewall keeps functioning properly.
Oh, and I run my Erlang node as a non-root user with limited file permissions.
You might also find out the source/destination addresses and port pairs of active connections between BEAM nodes. This could enable DoS attacks against the inter-BEAM connections.
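To make the exposure concrete: epmd answers an unauthenticated NAMES request (opcode 110 in the Erlang distribution protocol) with every registered node name and its distribution port, which is exactly the reconnaissance described above. A sketch, with host and port as placeholders:

```python
#!/usr/bin/env python3
"""Sketch: what an unauthenticated client can learn from an open epmd port."""
import socket
import struct

NAMES_REQ = 110  # 'n', the NAMES request opcode

def parse_names_response(data: bytes):
    """Parse a NAMES reply: 4-byte epmd port, then ASCII lines of the form
    'name <node> at port <port>'. Returns (epmd_port, {node_name: port})."""
    epmd_port = struct.unpack(">I", data[:4])[0]
    nodes = {}
    for line in data[4:].decode("ascii", "replace").splitlines():
        parts = line.split()  # ['name', <node>, 'at', 'port', <port>]
        if len(parts) == 5 and parts[0] == "name":
            nodes[parts[1]] = int(parts[4])
    return epmd_port, nodes

def epmd_names(host: str, port: int = 4369):
    """Query a remote epmd for its registered nodes."""
    with socket.create_connection((host, port), timeout=5) as sock:
        req = bytes([NAMES_REQ])
        # Requests are framed with a 2-byte big-endian length prefix.
        sock.sendall(struct.pack(">H", len(req)) + req)
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    return parse_names_response(data)
```

Knowing a node's name and distribution port, an attacker who also guesses or steals the cookie can join the cluster outright, which is why firewalling these ports matters.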
