How to deploy ContextBroker in production? - scalability

We are analysing the FIWARE NGSI architecture in order to provide easy-to-scale and fault-tolerant deployment recipes for the related enablers. Of course we plan to start with the ContextBroker case.
Our idea (feedback is very welcome, since we may not be aware of all the internal details of the ContextBroker and the implications of the way we use it) is the following:
Define a Docker Compose recipe that supports federation of ContextBroker instances (as described in the documentation here: https://fiware-orion.readthedocs.io/en/develop/user/federation/index.html).
Include in the recipe the configuration of a load balancer with a virtual IP that balances requests across the different private IPs of the ContextBroker instances.
Explore additional configuration options on top of that, for example geographical "sharding" depending on the client IP.
Of course each ContextBroker instance would have its own database instance. An alternative could be to place the high-availability "sync" layer at the database level, leveraging the replication functionality of MongoDB, but I am not sure this is a good idea.
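To make the first option concrete, here is a minimal Docker Compose sketch of what we have in mind (untested; images, service names and the load-balancer configuration are illustrative only, and the federation itself would still be set up at runtime via NGSI registrations/subscriptions between the brokers, as per the linked documentation):

```yaml
version: "3"
services:
  mongo-a:
    image: mongo:4.4          # each broker gets its own database instance
  orion-a:
    image: fiware/orion
    command: -dbhost mongo-a
    depends_on: [mongo-a]

  mongo-b:
    image: mongo:4.4
  orion-b:
    image: fiware/orion
    command: -dbhost mongo-b
    depends_on: [mongo-b]

  lb:
    image: nginx
    ports:
      - "1026:1026"           # single entry point / virtual IP
    volumes:
      # nginx.conf (not shown) would define an upstream balancing
      # across orion-a:1026 and orion-b:1026
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```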
Any feedback is appreciated :)

I'm not sure about the deployment (editing the question post to add a diagram would help), but if each CB instance plays the role of an independent logical node with its own context data (I guess so, given that you mention federation between different CB nodes), my recommendation for a production deployment is to set up each node in a High Availability (HA) way.
I mean, instead of having just one CB in each node, use an active-active CB-CB configuration with a load balancer in front of them. Both CBs would use the same MongoDB database. In order to also get HA at the DB layer, you would need to use MongoDB replica sets.
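A rough sketch of one such node (untested and illustrative only; the replica set still has to be initiated with rs.initiate(), and the images/flags should be checked against the Orion administration documentation):

```yaml
services:
  orion-1:
    image: fiware/orion
    # both brokers connect to the same MongoDB replica set
    command: -dbhost mongo-1,mongo-2,mongo-3 -rplSet rs0
  orion-2:
    image: fiware/orion
    command: -dbhost mongo-1,mongo-2,mongo-3 -rplSet rs0

  mongo-1:
    image: mongo:4.4
    command: --replSet rs0
  mongo-2:
    image: mongo:4.4
    command: --replSet rs0
  mongo-3:
    image: mongo:4.4
    command: --replSet rs0

  # a load balancer (e.g. HAProxy or nginx, configuration not shown)
  # would sit in front of orion-1 and orion-2 on port 1026
```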

Related

Can I have some keyspaces replicated to some nodes?

I am trying to build multiple APIs whose data I want to store in Cassandra. I am designing it as if I had multiple hosts, but the hosts I envision are of two types: trusted and non-trusted.
Because of that, there is certain data which I don't want to end up replicated on one group of hosts, while the rest of the data should be replicated everywhere.
I considered simply making a node for public data and one for protected data, but that would require the trusted hosts to run two nodes and it would also complicate the way the API interacts with the data.
I am also building it in Docker containers, and I expect frequent node creation/destruction, both trusted and non-trusted.
I want to know if it is possible to use keyspaces in order to achieve my required replication strategy.
You could have two datacenters, one holding your public data and the other the private data. You can configure keyspace replication to replicate data to only one (or both) DCs; see the docs on replication for NetworkTopologyStrategy.
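For illustration, something along these lines (the datacenter names and replication factors are placeholders and must match the datacenters your snitch reports):

```cql
-- public data: replicated to both datacenters
CREATE KEYSPACE public_data
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'dc_trusted': 3, 'dc_untrusted': 3};

-- protected data: kept only in the trusted datacenter
CREATE KEYSPACE protected_data
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'dc_trusted': 3};
```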
However, there are security concerns here, since all the nodes need to be able to reach one another via the gossip protocol, and your client applications might also need to contact both DCs for different reads and writes.
I would suggest you look into configuring security, perhaps SSL for starters and then internal authentication. Note that Kerberos is also supported, but this might be more complex than what you need, at least for now.
You may also consider taking a look at the firewall docs to see which ports are used between nodes and from clients, so you know which ones to lock down.
Finally, as the other answer mentions, destroying/creating nodes too often is not good practice. Cassandra is designed to grow/shrink your cluster while running, but it can be a costly operation, as it involves not only streaming data from/to the node being removed/added, but also other nodes shuffling token ranges around to rebalance.
You can run nodes in Docker containers; however, take care not to do things like having several containers all contend for the same physical resources. Cassandra is quite sensitive to I/O latency, for example, and several containers sharing the same physical disk might cause performance problems.
In short: no, you can't.
All nodes in a Cassandra cluster form a complete ring over which your data is distributed by your selected partitioner.
You can have multiple keyspaces, with authentication and authorization within Cassandra, and split your trusted and untrusted data into different keyspaces. Or you can go with two clusters to split your data.
From my experience, you also should not make creating and destroying Cassandra nodes your usual daily business. Adding and removing nodes is costly and needs to be monitored, as your cluster needs to maintain replication and so on. So it might be good to separate the Cassandra cluster from your API nodes.

Kubernetes With DPDK

I'm trying to figure out if Kubernetes will work for a certain use case. I understand the networking/clustering concept, and even the load balancing and how that can be used with things like nginx. However, assuming this is not deployed on a public cloud and things like ELB won't be available, could it still be used for a high-speed networking application using DPDK? For example, if we assume the cluster networking provided by k8s is only used for the control/management path, and the containers themselves handle the NIC directly with DPDK, is this something it's commonly used for?
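Roughly, the data-path part I'm picturing would look like this (purely a sketch of the idea, with placeholder names and images; the DPDK application inside the container binds the NIC itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
spec:
  hostNetwork: true            # data path bypasses the k8s pod network
  containers:
    - name: dpdk-app
      image: example/dpdk-app  # hypothetical image
      securityContext:
        privileged: true       # direct device and hugepage access in this sketch
      volumeMounts:
        - name: hugepages
          mountPath: /dev/hugepages
  volumes:
    - name: hugepages
      hostPath:
        path: /dev/hugepages   # hugetlbfs mounted on the host
```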
Secondly, I think I understand the replication controller and pet set features, but I'm not really clear on whether the intent of those features is high availability or not. It seems that "the pod fails and the RC replaces it on a different node" isn't necessarily HA, and there aren't really any guarantees on how fast a new pod is built. Am I incorrect?
For the second question: if the replication controller has a size larger than 1, it is highly available.
For example, if you have a service "web-svc" in front of the replication controller "web-app" with size 3, then your requests will be load balanced across the 3 pods:
web-svc ----> {web-app-pod1, web-app-pod2, web-app-pod3}
If some of the 3 pods fail, Kubernetes will replace them with new ones.
And a pet set is similar to a replication controller, but it is used for stateful applications like databases.
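A minimal manifest matching the example above might look like this (the container image and ports are placeholders):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-app
spec:
  replicas: 3                  # size 3: requests are spread over 3 pods
  selector:
    app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example/web-app   # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web-app               # load balances across the RC's pods
  ports:
    - port: 80
      targetPort: 80
```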

Restrict service detection in OpenNMS based on "hostname"

I am able to restrict service detection based on the IP address, but suppose I want to use another parameter like hostname or node_label for service detection; how do I configure that?
I need to know the exact config snippet for hostname in default-foreign-source.xml.
P.S.: I am using the discovery daemon, i.e. auto-discovery of nodes.
Any help would be appreciated.
The OpenNMS model is as follows:
node --> interface --> service
So OpenNMS has no way of associating a node label with a service. There is a BusinessServiceMonitor in development that will help deal with more complicated models, but it isn't in release code at the moment.
This is why you aren't able to make the association you want.
You might get around this by labeling (ifAlias) interfaces with tags and matching categories to tags to exclude the service.
Also, you should never edit provisioning XML configuration files directly. OpenNMS utilizes caching for those configs for performance purposes and you can break your system (unlikely but possible).
I would also move away from using discovery. It limits your ability to separate groups of nodes out as distinct requisitions, which give you the ability to apply different sets of provisioning policies (filters, the ability to monitor or not monitor services or data collections) to different groups of nodes. Discovery operates only against the default foreign-source policy, so you lose that kind of flexibility.

Valid CoreOS multi tenancy scenario?

I'm currently tinkering with a scenario for using CoreOS. It's probably not the first-class use case, but I'd like to get a pointer on whether it's valid. As I'm really just at the beginning of getting a grip on CoreOS, I hope that my "use case" is not totally off.
Imagine a multi-tenant application where every tenant gets its own runtime environment. Let's take as given a web app running on Node.js with PostgreSQL for data storage. Each tenant environment would be running on CoreOS in its respective containers. Data persistence is left out for now; for me it's currently more about the general feasibility.
So why CoreOS?
Currently I am trying to stick with the idea of separate environments per tenant. To optimise the density of DB and web server instances per hardware host, I thought CoreOS might be the right choice instead of "classic" virtualisation.
Another reason is that a lot of tenants might not need more than a single, smallish DB instance and a single, smallish web server. But there might be other tenants that need constantly scaled-out deployments, and others might need a temporary scale-out during burst times. CoreOS sounds like a good fit here as well.
On the other side, there must be a scalable messaging infrastructure (RabbitMQ) behind it that will handle a lot of messages. This infrastructure will be used by all tenants and ideally needs to scale dynamically. There will probably be a "to be scaled" Elasticsearch infrastructure as well. Viewed through my current "CoreOS for everything" goggles, this seems like a good fit too.
In case this whole scenario is generally valid, I currently cannot see how it would be possible to route the traffic for a generally available web site to the different tenant containers.
Imagine the app is running at app.greatthing.tld. A user can log in and should be presented with the app served for their tenant. Is this something socketplane and/or flannel are there to solve? Or what would a solution look like to get each tenant served by the right containers? I think it's a fairly general issue, but at least in the context of a CoreOS containerized environment I cannot see how to deal with it at all.
CoreOS takes care of scheduling your containers in the cluster with its own tools such as fleet (fleetctl), etcd and systemd, and can also take care of persistent storage when a container is rescheduled to a different machine, using Flocker (experimental). They have their own load balancers.
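As a purely illustrative example of the fleet side (a hypothetical unit file; names, images, ports and the etcd key layout are made up), one tenant's web container could be scheduled like this, publishing its location in etcd so that a front-end reverse proxy (for example nginx or HAProxy driven by confd watching etcd) can route app.greatthing.tld requests for that tenant to the right host and port:

```ini
# tenant-acme-web.service -- submitted and started with fleetctl
[Unit]
Description=Web container for tenant "acme" (illustrative names)
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker rm -f acme-web
ExecStart=/usr/bin/docker run --name acme-web -p 10080:3000 example/node-webapp
# publish where this tenant's container runs so a proxy can discover it
ExecStartPost=/usr/bin/etcdctl set /tenants/acme/web "%H:10080"
ExecStop=/usr/bin/docker stop acme-web
```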

Flume automatic scalability and failover

My company is considering using flume for some fairly high volume log processing. We believe that the log processing needs to be distributed, both for volume (scalability) and failover (reliability) reasons, and Flume seems the obvious choice.
However, we think we must be missing something obvious, because we don't see how Flume provides automatic scalability and failover.
I want to define a flow that says: for each log line, do thing A, then pass it along and do thing B, then pass it along and do thing C, and so on, which seems to match well with Flume. However, I want to be able to define this flow in purely logical terms, and then basically say, "Hey Flume, here are the servers, here is the flow definition, go to work!". Servers will die (and ops will restart them), we will add servers to the cluster and retire others, and Flume will just direct the work to whatever nodes have available capacity.
This description is how Hadoop MapReduce implements scalability and failover, and I assumed that Flume would be the same. However, the documentation seems to imply that I need to manually configure which physical servers each logical node runs on, and configure specific failover scenarios for each node.
Am I right, and Flume does not serve our purpose, or did I miss something?
Thanks for your help.
Depending on whether you are using multiple masters, you can code your configuration to follow a failover pattern.
This is fairly detailed in the guide: http://archive.cloudera.com/cdh/3/flume/UserGuide/index.html#_automatic_failover_chains
To answer your question bluntly: Flume does not yet have the ability to figure out a failover scheme automatically.
