Axis id and Feedback id in CANopen - can-bus

The error message in the CANopen protocol is as follows:
Based on the error code and the error register we can understand the problem message in the network, but I have two questions:
1- What are the two terms Axis id and Feedback id?
2- How can we adjust these items?

Related

Unique identifier for all fix messages related to a single request in quickfix

We need to get a unique identifier for all related fix messages in quickfixj.
Scenario: B sits between A and C and forwards FIX messages from A to C and vice versa; we need a unique id for all related messages so we can cache them in B.
Is there a unique id for all FIX messages as described above? If yes, is it retrieved the same way (e.g. message.getString(int field)) for all message types, or does the retrieval depend on the message type?
Do you mean a unique identifier per Order, for example? If yes, then that would be 11/ClOrdID for a NewOrderSingle (and some other message types). But you'll have other identifiers for other message types, e.g. quotes, market data snapshots, ...
There is no global unique identifier per se, so you would need to make one up. For example a concatenation of SenderCompID and MsgSeqNum and SendingTime should be unique. If you are sure that you will not reset the sequence number intra-day you could probably even leave out the SendingTime.
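As a minimal sketch of that composite key, written in Python for brevity (in QuickFIX/J the same header fields would be read with message.getHeader().getString(tag)); the field values and the cache are only illustrative:

```python
# Sketch: build a pseudo-unique key for a FIX message from header fields.
# Assumes the caller has already read SenderCompID (tag 49), MsgSeqNum (tag 34)
# and SendingTime (tag 52) from the message header, e.g. via
# message.getHeader().getString(49) in QuickFIX/J. Drop sending_time only if
# you are sure sequence numbers are never reset intra-day.
def composite_message_key(sender_comp_id: str, msg_seq_num: int,
                          sending_time: str) -> str:
    return f"{sender_comp_id}|{msg_seq_num}|{sending_time}"

# Example: B caches a message it forwards between A and C under this key.
cache: dict[str, bytes] = {}
key = composite_message_key("BANK_A", 215, "20240105-13:22:11.123")
cache[key] = b"8=FIX.4.4|9=...|35=D|..."
```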

Device Delete event Handling in Rule chain being able to reduce the total device count at Customer Level

I am using the total count of devices as a "server attribute" at the customer entity level, which is in turn used by dashboard widgets such as doughnut charts. To keep that count up to date, I have put a rule chain in place that handles the "Device" addition/assignment event and increments the "totalDeviceCount" attribute at customer level. But when a device is deleted/unassigned, I am unable to reach the customer entity using an "Enrichment" node, because the relation has already been removed by the time this event is triggered. This leaves me with the challenge of maintaining the right count information for the widgets.
Has anyone come across similar requirement? How to handle this scenario?
What you could do is count your devices periodically, instead of tracking each individual addition/removal.
You can achieve this with the Aggregate Latest Node, where you indicate a period (say, every minute), the entity or devices you want to count, and the variable name you want the result saved under.
This node outputs a POST_TELEMETRY_REQUEST. If you are OK with that, just route that node to Save Timeseries. If you want an attribute instead, route it to a Script Transformation Node and change the msgType to POST_ATTRIBUTE_REQUEST.
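If you would rather do the periodic count outside the rule engine, a rough equivalent is a small script against the ThingsBoard REST API. This is only a sketch of that alternative: the endpoint paths follow the documented ThingsBoard REST API but should be verified against your installation, and the host, token and customer id are placeholders.

```python
import time
import requests

TB_URL = "https://thingsboard.example.com"            # placeholder host
HEADERS = {"X-Authorization": "Bearer <JWT_TOKEN>"}    # placeholder token

def count_customer_devices(customer_id: str) -> int:
    """Page through the customer's devices and count them."""
    total, page = 0, 0
    while True:
        resp = requests.get(
            f"{TB_URL}/api/customer/{customer_id}/devices",
            params={"pageSize": 100, "page": page},
            headers=HEADERS,
        )
        resp.raise_for_status()
        body = resp.json()
        total += len(body["data"])
        if not body.get("hasNext"):
            break
        page += 1
    return total

def save_total_device_count(customer_id: str, count: int) -> None:
    """Store the count as a SERVER_SCOPE attribute on the customer entity."""
    requests.post(
        f"{TB_URL}/api/plugins/telemetry/CUSTOMER/{customer_id}/attributes/SERVER_SCOPE",
        json={"totalDeviceCount": count},
        headers=HEADERS,
    ).raise_for_status()

# Recount once a minute instead of reacting to every add/delete event.
if __name__ == "__main__":
    customer_id = "<CUSTOMER_UUID>"
    while True:
        save_total_device_count(customer_id, count_customer_devices(customer_id))
        time.sleep(60)
```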

Informatica Mapping: Joiner must have exactly two inputs

I get the following message when I try to validate the mapping (see Warning attached):
...Joiner jnr_Normal_jnr_Master_ZC_OR_Delay_Reason must have exactly two inputs.
WARNING: Joiner transformation jnr_Normal_jnr_Master_ZC_OR_Delay_Reason Condition field OR_CASE_ID1 is unconnected.
I have a joiner (jnr_Master_ZC_OR_Delay_Reason) and expression (exp_Text) that I would like to join. I tried to do this with a normal joiner (jnr_Normal_jnr_Master_ZC_OR_Delay_Reason). However, the data from the jnr_Master_ZC_OR_Delay_Reason does not connect to this jnr_Normal_jnr_Master_ZC_OR_Delay_Reason. See Joiners-Two Inputs attached.
Should I be using a different transformation to join the joiner and expression?
I tried to use sorting but I still get the same error message. Am I using it correctly? Please see the attached images.
If you want to join flows that originate from the same source (let's call that a self-join), you need to have the data sorted on both branches of the flow and check the Sorted Input property on the Joiner Transformation (jnr_Normal_jnr_Master_ZC_OR_Delay_Reason in this case).
A self-join is only allowed if both flows are sorted. Depending on your flow, it may be enough to sort data only once, before the flow gets split.
Note that if you enable the Sorted Input property but the data is not actually sorted, you will get an error during session execution.

Twilio as a proxy for many-to-many SMS conversations

What is the best way to proxy marketplace messaging using SMS?
User Model:
Each conversation has an owner_id and a renter_id; if a message is received from one party, it should be proxied to the other.
If the owner is connected to many conversations, what is the best way to make sure the messages are directed to the proper recipient?
Update:
It looks like Twilio recommends purchasing a phone number for each conversation.
This would require owning N phone numbers, where N is greater than the number of conversations grouped by unique user/recipient.
For example, with an Airbnb-style data model, we would need to know the owner with the largest number of unique renters... This seems like a lot of potential overhead; please correct me if I'm wrong.
This concept will definitely require multiple Twilio numbers if you want to give a frictionless experience (no PINs to enter), but you will only ever need as many numbers as the number of people a single user can contact.
This is explained in more detail here. You only need to work out a starting number of lines; the rest can be dynamic.
Say the maximum number of properties any owner has is N and they rent out on all 365 days of the year to different renters; that means the owner has N*365 renters in their "address book", so you would only ever need N*365 numbers, even if you had 100,000 users. If, based on historical data, you can work out the maximum N and the maximum number of rental days (say M), then the required number of phone numbers is N*M. That is just a starting point and doesn't have to be a fixed constant value.
As a fail-safe, add a handler for when you cross a threshold, say 90% of your pool of N*M numbers, and then use the Twilio REST API to add numbers dynamically to the pool.
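To make the pool-size reasoning concrete, here is a rough Python sketch of the allocation rule: a number can be shared across conversations as long as neither participant already uses it for a different counterparty, so the pool only needs to cover the largest "address book" of any single user (the N*M bound above). The names are illustrative, and the top-up call mirrors the twilio Python helper library, so check it against the current Twilio docs before relying on it.

```python
from collections import defaultdict

class NumberPool:
    """Hand out a masking (proxy) number per conversation so that no user
    ever sees the same number for two different counterparties."""

    def __init__(self, numbers):
        self.numbers = list(numbers)        # Twilio numbers already purchased
        self.seen_by = defaultdict(set)     # user_id -> numbers that user already uses
        self.by_conversation = {}           # (owner_id, renter_id) -> proxy number

    def number_for(self, owner_id, renter_id):
        key = (owner_id, renter_id)
        if key in self.by_conversation:
            return self.by_conversation[key]
        # Reuse any number that neither side already associates with someone else.
        for num in self.numbers:
            if num not in self.seen_by[owner_id] and num not in self.seen_by[renter_id]:
                self.seen_by[owner_id].add(num)
                self.seen_by[renter_id].add(num)
                self.by_conversation[key] = num
                return num
        raise RuntimeError("Pool exhausted - trigger the top-up handler below")

def top_up(client, pool, country="US"):
    """Fail-safe: buy one more number when the pool runs low.
    Mirrors the twilio Python helper library; verify against current docs."""
    candidate = client.available_phone_numbers(country).local.list(limit=1)[0]
    bought = client.incoming_phone_numbers.create(phone_number=candidate.phone_number)
    pool.numbers.append(bought.phone_number)
```

At delivery time, the pair (proxy number, sender's real number) is enough to look up the conversation and forward the message to the other party, which is what lets one owner run many conversations over a handful of numbers.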

Synchronizing a current message id in a conversation between Alice and Bob

I'm faced with this situation:
Hosts A and B are exchanging messages in a conversation through a broker.
When host B receives a message, it sends back a delivery token to host A so that A can show the user that B has received the message. This may also happen the other way around.
At any point A or B may be offline and the broker will hold on to the messages until they come online and then deliver them.
Each host stores its own and the other host's messages in a database table:
ID | From | To | Msg | Type | Uid
I figured that using the naive table primary-key id would be a bad choice to identify the messages (as it depends on insertion order), so I defined a custom unique id field (uid).
My question is:
How can I make sure that the current message id stays synchronized between hosts A and B so that only one message has that id? I want to use the delivery token's id to identify which message was received, and that wouldn't be possible if more than one message had the same id.
If I do this naively, incrementing it every time we send or receive a message, at first it looks OK:
Host A sends a message with ID 1 and increases its current ID to 2
Host B receives a message and increases its current ID to 2
Host B sends a message with ID 2 and increases its current ID to 3
Host A receives a message and increases its current ID to 3
...
But it may very easily break:
Host A sends a message with ID 1 and increases its current ID to 2
Host B sends a message (before receiving the previous one) with ID 1
Clash: two messages with ID 1 received by both hosts
I thought of generating a large UUID every time (with an extremely low chance of collision), but it introduces a large overhead, as every message would need both to carry and to store one.
Unfortunately any solution regarding the broker is not viable because I can't touch the code of the broker.
This is a typical problem in distributed systems (class exercise?). I suppose you are trying to keep the same ID in order to determine an absolute order among all messages exchanged between Alice and Bob. If that is not the case, the solution provided in the comment by john1020 should be enough. Another possibility is to store the ID in one node that both A and B can access, with a distributed locking mechanism synchronizing access. That way you always define an order, even in the face of collisions. But this is not always possible and is sometimes not efficient.
Unfortunately, there is no way of keeping an absolute order (except having that unique counter with distributed locks). If you have one ID that can be modified by both A and B, you will have a problem of eventual consistency and risk of collisions. A collision is basically the problem you described.
Now imagine both Bob and Alice send a message at the same time, and both set the ID to 2. In what order would you store the messages? Actually it doesn't matter; it's like the situation when two people speak on the phone at the same time. There is a collision.
However, what is interesting is to identify messages that actually have a sequence or cause-effect relationship, so you can keep an order between messages that are caused by other messages: Bob invites Alice to dance and Alice says yes, two messages with an order.
For keeping such an order you can apply techniques like vector clocks (based on Leslie Lamport's logical timestamps): https://en.wikipedia.org/wiki/Vector_clock . You can also read about AWS' DynamoDB: http://the-paper-trail.org/blog/consistency-and-availability-in-amazons-dynamo/
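As a minimal illustration of the vector-clock idea (a sketch, not production code): each host keeps a counter per participant, ticks its own entry when it sends, merges on receive, and two stamps that are not ordered either way correspond exactly to the "clash" case above.

```python
class VectorClock:
    def __init__(self, host_id, participants):
        self.host_id = host_id
        self.clock = {p: 0 for p in participants}

    def on_send(self):
        """Tick our own entry and return a copy to stamp the outgoing message."""
        self.clock[self.host_id] += 1
        return dict(self.clock)

    def on_receive(self, remote_stamp):
        """Merge the sender's stamp element-wise, then tick our own entry."""
        for p, c in remote_stamp.items():
            self.clock[p] = max(self.clock.get(p, 0), c)
        self.clock[self.host_id] += 1

def happened_before(a, b):
    """True if stamp a causally precedes stamp b."""
    keys = set(a) | set(b)
    return all(a.get(p, 0) <= b.get(p, 0) for p in keys) and a != b

# A and B send concurrently: neither stamp precedes the other, i.e. a "clash",
# which the application can detect and order by any tie-break it likes.
alice = VectorClock("A", ["A", "B"])
bob = VectorClock("B", ["A", "B"])
stamp_a = alice.on_send()   # {'A': 1, 'B': 0}
stamp_b = bob.on_send()     # {'A': 0, 'B': 1}
assert not happened_before(stamp_a, stamp_b)
assert not happened_before(stamp_b, stamp_a)
```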
You can also use the same mechanism Cassandra uses for distributed counters. This is a nice description: http://www.datastax.com/wp-content/uploads/2011/07/cassandra_sf_counters.pdf
