Restrict service detection in OpenNMS based on "hostname"

I am able to restrict service detection based on the IP address, but if I want to use another parameter, like hostname or node_label, for service detection, how do I configure that?
I need to know the exact config snippet for hostname in default-foreign-source.xml.
P.S.: I am using the Discovery daemon, i.e., auto-discovery of nodes.
Any help would be appreciated.

The OpenNMS model is as follows:
node --> interface --> service
So OpenNMS has no way of associating a node label with a service. There is a BusinessServiceMonitor in development that will help deal with more complicated models, but it isn't in release code at the moment.
This is why you can't make the association you want.
You might get around this by labeling interfaces with tags (via ifAlias) and matching categories against those tags to exclude the service, as sketched below.
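To make that concrete, here is a rough, untested sketch of how a pollerd package in poller-configuration.xml could use such a tag; the snmpifalias filter field and the %NOHTTP% tag are assumptions to illustrate the idea, so verify them against your OpenNMS version:

    <!-- Poll HTTP only on interfaces whose ifAlias does NOT carry the NOHTTP tag -->
    <package name="http-except-tagged">
      <filter>IPADDR != '0.0.0.0' &amp; (snmpifalias NOT LIKE '%NOHTTP%')</filter>
      <rrd step="300">
        <rra>RRA:AVERAGE:0.5:1:2016</rra>
      </rrd>
      <service name="HTTP" interval="300000" user-defined="false" status="on">
        <parameter key="retry" value="1"/>
        <parameter key="timeout" value="3000"/>
      </service>
      <downtime interval="30000" begin="0" end="300000"/>
    </package>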
Also, you should never edit provisioning XML configuration files directly. OpenNMS caches those configs for performance, and hand-editing them can break your system (unlikely, but possible).
I would also move away from using discovery. It limits your ability to separate groups of nodes into distinct requisitions, which let you apply different sets of provisioning policies (filters, the ability to monitor or not monitor services or data collections) to different groups of nodes. Discovery operates only against the default foreign-source policy, so you lose that kind of flexibility.

Related

Unique agents across application instances and BEAM

My requirement is to use named agents, basically one agent per record with a custom ID. Can we query an Agent's name across application instances and BEAM machines? I mean that if we have two instances of an app on two different BEAM machines, we need to make sure that we have only one agent per record, not more. How can I achieve this?
An Agent is basically a GenServer, and a GenServer has three options for registering its name: a plain atom (local to the node), {:global, term}, and {:via, module, term}. {:global, term} registers the name globally across connected nodes, and {:via, module, term} can do the same when backed by a global registry.
Of course, all the nodes should be connected for this to work.
To make it easier to address globally registered processes, one might use Registry, although in this particular case {:global, name} should be good enough.
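As a minimal sketch (in Elixir, with illustrative module and registration names), uniqueness per record across connected nodes falls out of {:global, ...} registration:

    defmodule RecordAgent do
      use GenServer

      # Registering under {:global, {:record_agent, id}} guarantees at most
      # one such process per record id across all connected BEAM nodes.
      def start_link(record_id) do
        GenServer.start_link(__MODULE__, record_id,
          name: {:global, {:record_agent, record_id}})
      end

      # The first caller cluster-wide gets {:ok, pid}; every later caller,
      # on any node, gets the already-running pid back instead.
      def ensure_started(record_id) do
        case start_link(record_id) do
          {:ok, pid} -> {:ok, pid}
          {:error, {:already_started, pid}} -> {:ok, pid}
        end
      end

      @impl true
      def init(record_id), do: {:ok, %{record_id: record_id}}
    end

Any node can then address the agent with GenServer.call({:global, {:record_agent, id}}, msg).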

Sharing a graph database between Microservices

Is there any way to share a Neo4j / AWS Neptune graph database between microservices while restricting each microservice's access to specific parts of the graph? And by doing so, will there be any performance impact?
In Amazon Neptune, there is currently no way to have ACLs for a portion of a graph. You can have IAM users who have either full access to a cluster or no access at all (Allow All or Deny All), so you would need to handle this at the application layer. Fine-grained access control would be a good feature to have, so you may want to file a feature request for it (via the AWS Forums, for example).
If you rule out access control, and the only thing you need is for the microservices not to impact each other, then you can create read replicas and use them in your microservices (whether sharing a database across microservices is a good choice or not is a separate discussion). There are two approaches:
1. Add enough replicas to your cluster and use the cluster-ro (reader) endpoint in your read-only microservices. All microservices would share the read replicas, but with DNS round robin.
2. Add replicas for specific use cases, and then use specific instance endpoints with specific microservices. The microservices would not impact each other; however, a drawback with this approach is that an instance can get promoted to master in the event of a crash, which is something you'd need to handle or be ready for.
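As a sketch of the first approach (assuming gremlinpython as the client; the endpoint below is a made-up example), a read-only microservice would simply point at the reader endpoint:

    # Read-only microservice connecting to Neptune's cluster-ro endpoint,
    # which round-robins across the read replicas via DNS.
    from gremlin_python.driver import client

    READER_ENDPOINT = "wss://my-cluster.cluster-ro-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin"

    gremlin = client.Client(READER_ENDPOINT, "g")
    result = gremlin.submit("g.V().limit(5).valueMap()").all().result()
    print(result)
    gremlin.close()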

Can I have some keyspaces replicated to some nodes?

I am trying to build multiple APIs whose data I want to store in Cassandra. I am designing it to run on multiple hosts, but the hosts I envision would be of two types: trusted and non-trusted.
Because of that, there is certain data which I don't want to end up replicated on one group of the hosts, while the rest of the data should be replicated everywhere.
I considered simply running one node for public data and one for protected data, but that would require the trusted hosts to run two nodes, and it would also complicate the way the APIs interact with the data.
I am also building it in Docker containers, and I expect that there will be frequent node creation/destruction, both trusted and not.
I want to know if it is possible to use keyspaces in order to achieve my required replication strategy.
You could have two datacenters, one holding your public data and the other the private data, and configure keyspace replication to replicate each keyspace to only one (or both) DCs. See the docs on replication for NetworkTopologyStrategy; a sketch follows below.
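As a sketch (the DC names and replication factors here are illustrative), the keyspace definitions could look like this:

    -- Replicated everywhere: both trusted and untrusted DCs hold copies
    CREATE KEYSPACE public_data
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'trusted_dc': 3, 'untrusted_dc': 3};

    -- Never leaves the trusted DC
    CREATE KEYSPACE private_data
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'trusted_dc': 3};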
However, there are security concerns here, since all the nodes need to be able to reach one another via the gossip protocol, and your client applications might also need to contact both DCs for different reads and writes.
I would suggest you look into configuring security: SSL for starters, and then perhaps internal authentication. Note that Kerberos is also supported, but this might be too complex for what you need, at least for now.
You may also want to take a look at the firewall docs to see which ports are used between nodes and from clients, so you know which ones to lock down.
Finally, as the other answer here mentions, destroying and creating nodes too often is not good practice. Cassandra is designed to let you grow and shrink your cluster while running, but it can be a costly operation: it involves not only streaming data from or to the node being removed or added, but also other nodes shuffling token ranges around to rebalance.
You can run nodes in Docker containers, but note that you need to take care not to have several containers all competing for the same physical resources. Cassandra is quite sensitive to I/O latency, for example, and several containers sharing the same physical disk can cause performance problems.
In short: no, you can't.
All nodes in a Cassandra cluster form a complete ring across which your data is distributed by your selected partitioner.
You can have multiple keyspaces, plus authentication and authorization within Cassandra, and split your trusted and untrusted data into different keyspaces (see the sketch below). Or you can go with two clusters to split your data.
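For the keyspace route, a minimal sketch of what that authorization could look like (role and keyspace names are illustrative, and authentication/authorization must be enabled in cassandra.yaml first; note this restricts client access, not where the data physically lives):

    -- A role for the untrusted API hosts, limited to the public keyspace
    CREATE ROLE untrusted_api WITH PASSWORD = 'changeme' AND LOGIN = true;
    GRANT SELECT ON KEYSPACE public_data TO untrusted_api;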
From my experience, you also should not try to create and destroy Cassandra nodes as part of your usual daily business. Adding and removing nodes is costly and needs to be monitored, as your cluster has to maintain replication and so on. So it might be good to keep your Cassandra cluster separate from your API nodes.

How to deploy ContextBroker in production?

We are analysing the FIWARE NGSI architecture in order to provide easy-to-scale and fault-tolerant recipes for deploying the related enablers. Of course, we plan to start with the ContextBroker case.
Our idea (though we would appreciate feedback, since we may not be aware of the full internal details of ContextBroker and the implications of the way we use it) is the following:
Define a Docker Compose recipe that supports federation of ContextBroker instances (as described in the documentation here: https://fiware-orion.readthedocs.io/en/develop/user/federation/index.html).
Include in the recipe the configuration of a load balancer with a virtual IP that balances requests across the different private IPs of the ContextBroker instances.
Explore additional configuration options on top; these could include, for example, geographical "sharding" depending on the client IP.
Of course, each instance of ContextBroker would have its own "database" instance. An alternative could be positioning the "sync" layer for high availability at the database level, leveraging the "replication" functionality of MongoDB. But I am not sure this is a good idea.
Any feedback is appreciated :)
Not sure about the deployment (editing the question to add a diagram would help), but if each CB instance plays the role of an independent logical node with its own context data (I guess so, given that you mention federation between different CB nodes), my recommendation for a production deployment is to set up each node in a High Availability (HA) way.
I mean, instead of having just one CB on each node, use an active-active CB-CB configuration with a load balancer in front of them. Both CBs would use the same MongoDB database. In order to get HA in the DB layer as well, you would need to use MongoDB replica sets.
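As a very rough sketch of that layout (image tags, flags, and ports are illustrative, and the nginx configuration is omitted; this is not a tested recipe):

    version: "3"
    services:
      mongo:
        image: mongo:4.4        # for real HA this would be a replica set
      orion1:
        image: fiware/orion
        command: -dbhost mongo  # both brokers share the same MongoDB
        depends_on: [mongo]
      orion2:
        image: fiware/orion
        command: -dbhost mongo
        depends_on: [mongo]
      lb:
        image: nginx            # round-robins port 1026 to orion1/orion2
        ports: ["1026:1026"]
        depends_on: [orion1, orion2]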

How to reconnect partitioned nodes in erlang cluster

Looking for some solutions to handle Erlang cluster partitions. Basically, whenever a cluster participant is reachable again, it should be added back to the cluster. The easiest solution is probably to use Erlang node monitoring.
Are there any other / better solutions, maybe more dynamic ones that do not require a fixed node list?
There are a few third-party libraries that don't have to be configured with a fixed node list. The two that I am familiar with are redgrid and erlang-redis_sd_epmd; there are probably others, but I'm just not familiar with them.
Both of these have an external dependency on Redis, which may or may not be desirable, depending on what you decide is needed.
redgrid is the simpler implementation, but doesn't have a ton of features. Basically, the Erlang nodes connect to Redis, and all Erlang nodes connected to Redis then establish connections to each other. You can associate metadata with a node and retrieve it from another node.
erlang-redis_sd_epmd is a bit more complex, but allows a lot more configuration. For example, instead of just automatically connecting all nodes, a node can publish services that it provides, and a connecting node can look up nodes based on those services.
Not an off-the-shelf solution, but if you're already making custom mods to ejabberd, you can try integrating this code, which resolves Mnesia conflicts after cluster partitions:
https://github.com/uwiger/unsplit
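For completeness, here is a minimal sketch (module name and timings are illustrative) of the plain node-monitoring approach the question mentions: subscribe to nodeup/nodedown events and keep pinging lost peers until they come back, so no fixed node list is needed beyond the peers seen so far:

    -module(reconnector).
    -behaviour(gen_server).
    -export([start_link/0, init/1, handle_info/2, handle_call/3, handle_cast/2]).

    start_link() ->
        gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

    init([]) ->
        %% Ask the kernel for {nodeup, N} / {nodedown, N} messages
        ok = net_kernel:monitor_nodes(true),
        {ok, #{down => []}}.

    handle_info({nodedown, Node}, State = #{down := Down}) ->
        erlang:send_after(5000, self(), {retry, Node}),
        {noreply, State#{down := [Node | Down]}};
    handle_info({nodeup, Node}, State = #{down := Down}) ->
        {noreply, State#{down := lists:delete(Node, Down)}};
    handle_info({retry, Node}, State = #{down := Down}) ->
        case lists:member(Node, Down) of
            true ->
                %% pong re-establishes the connection (triggering nodeup);
                %% pang means still unreachable, so try again later
                case net_adm:ping(Node) of
                    pong -> ok;
                    pang -> erlang:send_after(5000, self(), {retry, Node})
                end;
            false -> ok
        end,
        {noreply, State}.

    handle_call(_Req, _From, State) -> {reply, ok, State}.
    handle_cast(_Msg, State) -> {noreply, State}.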
