We have a star schema with a fact table and a few dimensions. The fact table tracks quarterly premium by account. There is a dimension table that stores the account information like accountnum, AccountName, and Broker.
There can be multiple brokers per account. For example, Account 123 can have Broker A and Broker B. The dimension is set up as a type 2 SCD, so if anything changes on the account, it creates a new row with the respective Begin and End Dates.
What I've been asked to do is come up with a design so that we can report changes in the Account Brokers from quarter to quarter and show what the prior Broker was and what the new Broker is. So if Account 123 had Broker A in Q1 and now has Broker B in Q2, it would show that in Q2 Broker A went to Broker B and Broker B came from Broker A. So I need to be able to show where it came from and where it went to.
I wasn't sure whether to implement this in the dimensional model or maybe create a one-off flat table to store the changes, to make it easy to report by quarter.
thanks
Scott
I'm looking for a way to get a list of all topics known to a broker. There are some quite similar questions, but they didn't help me figure it out for my use case.
I've got 3 Raspberry Pis with multiple sensors (temperature, humidity) which are connected over an MQTT network. Every Pi has its own database containing time series of measurements and other system variables (like CPU).
Now I'm looking for a way to handle the following scenario:
I want to monitor my system and detect anomalies. For that I want to get all sensor time series from the last x seconds and process them in a Python script. The monitoring calculations could run on any of the Pis.
Example: I'm on RPI2 and want to monitor the whole distributed network. There's no prior knowledge about which sensors are attached to which Pi. From my Python script running on RPI2 I would initialise an MQTT client and subscribe to all sensor data on the broker. I know about the wildcard #, but I'm not sure how to use it in this case. My magic command would look like the following pseudo code:
1) client subscribe to all sensor data - #/sensor/#
2) get list with all topics
3) client subscribe to all topics from given list list/#
4) analyse data for anomalies every x seconds
First, your wildcard topic patterns are not valid. A topic pattern can only contain a single '#' character and it can only appear at the end of the topic, e.g. foo/bar/# is valid, #/foo is not. You can also use the + character, which is a single-level wildcard.
This means a topic pattern of +/sensor/# will match each of the following:
rpi1/sensor/foo
rpi1/sensor/bar/temp
but not
rpi1/foo/sensor/bar
Next, brokers do not keep a list of topics that exist. A topic only really exists at the instant a message is published to it; the broker then checks that topic against the patterns that subscribing clients have requested and delivers the message to the clients that match.
Thirdly, when bridging brokers in loops like that you have to be very careful with the bridge filters to make sure that messages don't end up in a constant loop.
The solution is probably to designate a "master" broker and bridge all the others one way to that broker and then have the client subscribe to either '#' to get everything or something more like '+/sensor/#' to just see the sensor readings.
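For illustration, here is a minimal sketch of what the monitoring client subscribing on that master broker could look like, assuming the paho-mqtt 1.x API; the master hostname, the <pi-id>/sensor/... topic layout and the 60-second window are assumptions, not part of the question:

import time
import paho.mqtt.client as mqtt

MASTER_HOST = "rpi2.local"   # hypothetical hostname of the designated master broker
WINDOW_SECONDS = 60          # the "last x seconds" kept for anomaly detection

readings = []                # list of (timestamp, topic, payload) tuples

def on_connect(client, userdata, flags, rc):
    # '+' matches the single <pi-id> level, '#' matches everything below sensor/
    client.subscribe("+/sensor/#", qos=1)

def on_message(client, userdata, msg):
    now = time.time()
    readings.append((now, msg.topic, msg.payload.decode()))
    # drop anything older than the analysis window
    while readings and readings[0][0] < now - WINDOW_SECONDS:
        readings.pop(0)

client = mqtt.Client(client_id="anomaly-monitor")
client.on_connect = on_connect
client.on_message = on_message
client.connect(MASTER_HOST, 1883, keepalive=60)
client.loop_forever()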
One of the new features of MQTT 5 is the shared subscriptions feature, which allows client-side load balancing between multiple workers, so that multiple workers can be responsible for handling messages, but every message is only ever delivered to a single one of them.
By default, this works with a round-robin approach, but I am in the need of a slightly more advanced scenario:
What I want is some kind of routing, so that one of the messages' properties gets used as some kind of routing key. I.e., I want multiple workers to be responsible for the messages, but all messages with value X in their routing key property should always go to the same worker, and all messages with Y should do as well. The workers for X and Y may be different, but all messages with X should always go to the same one.
Question 1: Is this even possible with MQTT 5? If so, what is the term I need to look for? I tried googling for this, but wasn't really successful (mainly, I guess, because I don't know exactly what to look for).
Now, suppose this is possible: How can I then handle cases where nodes join or leave? I still want only a single node to be responsible, so it would be great if the assignment was not static, but could be adjusted dynamically (or, even better, would adjust itself automatically). However, what I strictly need to avoid is that two messages with X ever go to different servers at the same time.
Question 2: Suppose this is not possible: what alternatives to MQTT 5 do I have?
You don't, at a protocol level. That is the whole point of a shared subscription: to distribute the incoming messages evenly across all the subscribers.
This also goes against the pub/sub paradigm, in which messages are published to a topic, not to an individual subscriber.
If you want to route messages differently, publish them to different topics. There is nothing to stop you republishing a message on a separate topic based on its metadata once it has been received by a client, if needed.
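As a rough illustration of that republish idea, here is a sketch assuming the paho-mqtt 1.x API; the topics jobs/incoming and jobs/by-key/<key> and the JSON routing_key field are invented for the example:

import json
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("jobs/incoming", qos=1)

def on_message(client, userdata, msg):
    key = json.loads(msg.payload).get("routing_key", "default")
    # every message with the same key lands on the same topic, so the single
    # worker subscribed to that topic handles all of them
    client.publish("jobs/by-key/{}".format(key), msg.payload, qos=1)

router = mqtt.Client(client_id="router")
router.on_connect = on_connect
router.on_message = on_message
router.connect("localhost", 1883)
router.loop_forever()

A worker responsible for key X would then subscribe to jobs/by-key/X directly (no shared subscription on that topic), which is what keeps every X message on a single node.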
Suppose we have 3 Lightning nodes and an underlying Bitcoin network. We need a way to create a 3-of-3 multisig address, where one node will send some satoshis to the multisig and the 2nd node will be able to withdraw the satoshis after some time lock is over, with the approval of all 3 nodes.
This won't be possible without patching a Lightning node implementation and breaking the protocol. Currently, payment channels are 2-of-2 multisig wallets. Also, with the payment channel construction that we currently use, such an escrow service would create a huge implementation overhead. With eltoo, multiparty channels might become a thing, but eltoo requires a Bitcoin soft fork.
I would like some feedback on this problem and my proposed solution for catching up after missed MQTT messages, please:
[Update 1] Simplified problem diagram and added solution diagram. Added mention of QoS
Scenario:
Client A publishes messages that we wish Client B to receive, even if connections are temporarily dropped then restored.
Config
Client A: connect with clean=false. Publish stateful messages with retain = true, non-stateful messages published with retain = false
Client B: connect with clean=false
What will happen
Each time Client A publishes to topic "foo" with retain = true, the previous retained message is replaced on the broker. Ex. Client A publishes 111, 222, 333. Client B connects after the messages are published. Client B will receive only 333. Thus, messages 111 and 222 were missed because each message replaced the previous one on that same topic (different topics do not replace each other).
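A small sketch of that replacement behaviour, assuming the paho-mqtt 1.x API (the broker host and the topic "foo" are placeholders):

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

# Client A publishes three retained messages; each overwrites the previous one
publish.multiple(
    [("foo", "111", 1, True),
     ("foo", "222", 1, True),
     ("foo", "333", 1, True)],
    hostname="localhost",
)

# Client B connects afterwards and only gets the last retained message
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())   # prints: foo 333

client_b = mqtt.Client(client_id="client-b", clean_session=False)
client_b.on_message = on_message
client_b.connect("localhost", 1883)
client_b.subscribe("foo", qos=1)
client_b.loop_forever()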
Proposed solution
I envision two types of messages: stateful and non-stateful. Stateful messages would be things like voltage, temperature, GPS location, pressure. Non-stateful messages would be things like a chat message, where history is more likely to be important for context. Missed stateful messages are more likely to be tolerable, while missed non-stateful messages might not be.
All messages are published with QoS 1 in my case.
For the stateful messages I am thinking Client A will publish with retain = true.
For the non-stateful messages, I am thinking Client A will publish with retain = false (because what good is the last message if we don't have the full historical context of previous messages). When Client B connects/reconnects, it will publish a catch-up (arbitrary name) message containing the ids of all the messages it has received; when Client A receives it, it will respond by publishing the whole history of messages minus those in the id list (ids maintained in Client A's db). This might work for me if the total aggregate message history isn't too big.
The alternative might be for Client B to send read receipts for each message received.
For me, these two solutions will require a database of messages and some custom logic.
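A rough sketch of the catch-up exchange described above, assuming the paho-mqtt 1.x API; the topics chat/history and chat/catchup/request and the in-memory history dict are stand-ins for the real topics and database:

import json
import paho.mqtt.client as mqtt

# --- Client A side: full message history keyed by id (really a DB table) ---
history = {1: "hello", 2: "how are you?", 3: "still there?"}

def a_on_connect(client, userdata, flags, rc):
    client.subscribe("chat/catchup/request", qos=1)

def a_on_message(client, userdata, msg):
    already_received = set(json.loads(msg.payload))      # ids Client B already has
    for msg_id, body in history.items():
        if msg_id not in already_received:
            client.publish("chat/history",
                           json.dumps({"id": msg_id, "body": body}), qos=1)

client_a = mqtt.Client(client_id="client-a", clean_session=False)
client_a.on_connect = a_on_connect
client_a.on_message = a_on_message
client_a.connect("localhost", 1883)
client_a.loop_start()

# --- Client B side: on (re)connect, announce which ids it already holds ---
def b_on_connect(client, userdata, flags, rc):
    client.subscribe("chat/history", qos=1)
    client.publish("chat/catchup/request", json.dumps([1]), qos=1)

def b_on_message(client, userdata, msg):
    print("caught up:", msg.payload.decode())

client_b = mqtt.Client(client_id="client-b", clean_session=False)
client_b.on_connect = b_on_connect
client_b.on_message = b_on_message
client_b.connect("localhost", 1883)
client_b.loop_forever()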
This is a follow-up question to this one which I tried answering but was asked to instead form it as an independent, follow-up question.
I want to display delivered and read receipts to users in my messaging platform. I am using Eclipse's Paho library with Mosquitto as the broker. Since Mosquitto does not store messages, which is the best way/plugin to
Display delivered receipts - how to use QoS2 acknowledgement receipts to do this?
Display read receipts - suggest me way to do this
How to store messages so that users can view their chat history? Any architectural insights in MySQL will be very helpful.
The quick answers to your questions:
High QoS (1/2) is not end-to-end delivery confirmation; it is only confirmation between the broker and a client. E.g. for a publisher publishing at QoS 2, the confirmation is only between the publisher and the broker, not onward to the subscriber (who may be subscribed at a different QoS anyway). The only way to do this is to send a separate message from the receiving end back to the sender. Also, there may be more than one subscriber to any given topic, so you have to think about how this would work.
Again, the only way to do this is with a separate message sent when the message is read.
You will have to implement this yourself. The only thing that may help is something like the built-in support for storing messages in a database present in some brokers (this is not part of the spec, so totally proprietary to the implementation), e.g. HiveMQ.
Hardlib's answers are 100% on target, but I'll add some thoughts on implementation.
I think a common misunderstanding with MQTT is that it is really an M2M (machine-to-machine) protocol, not a system for exchanging messages between users. That isn't to say you can't use it for messaging (Facebook did), but that exists in a layer on top of MQTT. Put another way, MQTT is designed to route messages between machines with little care about the content of those messages. What this means is that user-level niceties (delivery confirmations etc.) aren't really part of it but instead something that you implement on top of MQTT.
So here are some thoughts about how to implement what you propose on top of MQTT:
Consider a situation in which you have two clients (X & Z) which both have access to the same broker (Y). To have client X confirm it has received a message from client Z, simply have client X send a message to a topic (let's say confirmations/z) that client Z is subscribed to. This is trivial to implement in Python or whatever you are writing your application in. (For example, I use that basic procedure to measure round-trip time on my broker.)
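A minimal sketch of that confirmation flow, assuming the paho-mqtt 1.x API (the topics chat/x and confirmations/z are just example names):

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("chat/x", qos=1)

def on_message(client, userdata, msg):
    # client X handles the chat message, then tells Z it arrived
    print("received:", msg.payload.decode())
    client.publish("confirmations/z", msg.payload, qos=1)

client_x = mqtt.Client(client_id="client-x")
client_x.on_connect = on_connect
client_x.on_message = on_message
client_x.connect("localhost", 1883)
client_x.loop_forever()

Client Z then only needs to subscribe to confirmations/z to know when X has seen each message.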
However, given that QoS can guarantee that a broker has received the message (and it could be retained or otherwise held for other clients), I would question whether this is really necessary unless it is critical that client Z know exactly when client X receives the message.
Depending on your needs there are any number of ways of providing a history for a topic. See the answers here and here for details on MySQL. But if all you need is a local history of a chat or a record of the activity on a few topics, consider simply outputting payloads with timestamps to a text file or JSON. MySQL feels like overkill unless you are dealing with a high volume of messages or need to compose complex queries.
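For the simple file-based history, something like this sketch is usually enough, assuming the paho-mqtt 1.x API (the chat/# filter and the file name are arbitrary):

import json
import time
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("chat/#", qos=1)

def on_message(client, userdata, msg):
    record = {"ts": time.time(), "topic": msg.topic,
              "payload": msg.payload.decode()}
    with open("chat_history.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

client = mqtt.Client(client_id="history-logger")
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()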