What is the best way to proxy marketplace messaging using SMS?
User Model:
Each conversation has an owner_id and a renter_id; if a message is received from one, it should be proxied to the other.
If the owner is connected to many conversations, what is the best way to make sure the messages are directed to the proper recipient?
Update:
It looks like Twilio recommends purchasing a phone number for each conversation.
This would require owning N phone numbers, where N is at least the number of conversations grouped by unique user/recipient.
For example, with an Airbnb-style data model, you would need to know the owner with the largest number of unique renters... This seems like a lot of potential overhead. Please correct me if I'm wrong.
This concept will definitely require multiple Twilio numbers if you want to give a frictionless experience (no PINs to enter), but you will only ever need as many numbers as the maximum number of people a single user can contact.
This is explained in more detail here. You only need to work out a starting pool size; the rest can be dynamic.
Say the maximum number of properties any owner owns is N, and an owner rents out on all 365 days of the year to different renters. That owner has N*365 renters in their "address book", so you would only ever need N*365 numbers, even if you had 100,000 users. If, based on historical data, you can work out the maximum N and the maximum number of rental days (say M), the required number of phone numbers is N*M. This is just a starting point and doesn't have to be a fixed constant.
As a fail-safe, add a handler for when you cross a threshold, say 90% of your N*M number pool, and use the Twilio REST API to purchase numbers dynamically and grow the pool.
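To make the routing concrete, here is a minimal Ruby sketch of that pool idea (the class and method names are mine, not Twilio's). The invariant is that a participant never shares one pool number across two different conversation partners, so an inbound message can be routed purely by its (from, to) pair:

# A minimal sketch, not production code. Invariant: a participant never
# shares one pool number across two different partners, so the pair
# (participant's phone, pool number) uniquely identifies a conversation.
class NumberPool
  def initialize(numbers)
    @numbers = numbers   # Twilio numbers you already own
    @routes  = {}        # [participant, pool_number] => partner
  end

  # Pick a proxy number for a new owner/renter conversation.
  def assign(owner, renter)
    number = @numbers.find do |n|
      !@routes.key?([owner, n]) && !@routes.key?([renter, n])
    end
    raise "pool exhausted - buy another number via the REST API" unless number

    @routes[[owner, number]]  = renter
    @routes[[renter, number]] = owner
    number
  end

  # Inbound webhook: who should this message be forwarded to?
  def route(from:, to:)
    @routes.fetch([from, to])
  end
end

When a renter texts the pool number, route(from: renter_phone, to: number) returns the owner's real number to forward the body to; the exhaustion check is roughly where the 90% threshold handler and dynamic purchasing would hook in.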
I've set up receiving of Teams CallRecords into Splunk and am now stuck trying to understand them. I thought that one CallRecord represents one unique Teams call (say, Mr. A dialed Mr. B, Mr. B answered, they talked and eventually hung up: that's one CallRecord), and the documentation suggests as much: "callRecord resource type represents a single peer-to-peer call or a group call between multiple participants", "id - String - Unique identifier for the call record. Read-only."
But what I see is many CallRecords with the same id but different versions ("version" field). These records may have different start and end DateTimes, different lastModifiedDateTime values, and some versions have null values in the organizer* and participant* fields. I've seen version counts ranging from 1 to 66.
So here are my questions:
Does one CallRecord represent one unique conversation? If so, what would be its unique identifier - id+version? Then why are there records with the same id, different versions, and otherwise identical data except lastModifiedDateTime (such records are roughly the same, which would result in double counting in the final report)? And why are there records with null organizer* fields?
Does the set of all CallRecords with the same id and different versions represent one call? If I merge all such records into one (sketched below), I get multivalued startDateTime, endDateTime, and other DateTime fields - which values should I use for accounting: min(startDateTime) and max(endDateTime), or something else?
Maybe there is some deep-dive Microsoft documentation on this versioning? Frankly, I'm completely lost here.
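This doesn't resolve the versioning semantics, but here is a minimal Ruby sketch (outside Splunk) of the merge described in the second question, under the unverified assumption that the highest version is the most complete and that min/max of the DateTimes bound the call:

require 'time'

# Collapse all versions sharing a CallRecord id into one row.
# Assumptions: records is an array of hashes with symbol keys; scalar
# fields are taken from the highest version; the call is bounded by
# min(startDateTime) / max(endDateTime) across versions.
def collapse_call_records(records)
  records.group_by { |r| r[:id] }.map do |id, versions|
    latest = versions.max_by { |r| r[:version] }
    starts = versions.map { |r| r[:startDateTime] }.compact.map { |s| Time.parse(s) }
    ends   = versions.map { |r| r[:endDateTime] }.compact.map { |s| Time.parse(s) }
    latest.merge(id: id, startDateTime: starts.min, endDateTime: ends.max)
  end
end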
While I'm using Ruby/Rails to solve this particular problem, the specific issue is not unique to Ruby.
I'm building an app that can send group/MMS messages to multiple people and then process those texts when the others reply.
The app will have a different number for each record, and each record can be involved in multiple group conversations.
For example, record_1 can be involved in a conversation with user_1, user_2, but can also be involved in a separate conversation with user_2, user_3, and record_2 can have a separate conversation with user_1, user_2.
When I send a message the fields resemble:
{
  from: "1234567890",
  to: [
    "1111111111",
    "2222222222",
    ...
  ],
  body: "..."
}
Where the from is my app number, and the to [] is an array of phone numbers for everyone else involved in the conversation.
When one of the other participants replies to the group message, I'll get a webhook from my text messaging provider that has the from as that person's phone number and the to [] would include my app number and everyone else's numbers.
The identifier for a conversation is the unique combination of the phone numbers involved.
However, an array like ["1234567890", "1111111111", "2222222222"] is difficult to work with, and I would like a string representation that I can index in my database and look up quickly.
If I have a to: ["1234567890", "1111111111", "2222222222"] array of the phone numbers, I'm thinking of using Digest::MD5.hexdigest(to.sort.to_s).
This would give me a unique identifier such as 49a5a960c5714c2e29dd1a7e7b950741, that I can index in my DB and use to uniquely reference conversations.
Are there any concerns with using the MD5 hash to solve my specific problem? Anytime I have the same numbers involved in a conversation, I want it to produce the same hash. Does MD5 guarantee the same result given the same ordered input?
Is there another approach to uniquely identify conversations by the participants?
Yes, MD5 does give you that guarantee, unless someone is trying to attack your system: it is possible to deliberately create colliding MD5 hashes, but they will never happen by accident.
So if in your situation the hashes will only ever be benign (i.e. created by your code, not by someone trying to mount an attack of some kind), then using MD5 is fine.
Or you could switch to SHA-256 instead of MD5, which doesn't have this collision risk.
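A minimal sketch of that, using SHA-256 via Ruby's Digest and a delimiter-joined sorted list instead of Array#to_s (so the key doesn't depend on Ruby's array-formatting quirks):

require 'digest'

# Canonical conversation key: sorting makes the key order-independent,
# and a join delimiter that can't appear in a phone number avoids ambiguity.
def conversation_key(numbers)
  Digest::SHA256.hexdigest(numbers.sort.join(","))
end

conversation_key(%w[1234567890 1111111111 2222222222]) ==
  conversation_key(%w[2222222222 1234567890 1111111111]) # => true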
I have two microservices, one for Orders and one for Customers, exactly like the example below:
http://microservices.io/patterns/data/database-per-service.html
This works without any problem: I can list Customer data and Order data for a given input CustomerId.
But now there is a new requirement to develop a screen that shows the Orders for an input Date, with the CustomerName beside each Order's information.
When implementing it, I can fetch the list of Orders for the input Date, but to show the corresponding CustomerNames for the list of CustomerIds, I make multiple API calls to the Customer microservice, each call sending one CustomerId to get one CustomerName. This leads to high latency.
I know the above is a bad solution, so any ideas, please?
The point of a microservices architecture is to split your problem domain into (technically, organizationally and semantically) independent parts. Making the "microservices" glorified (apified) tables actually creates more problems than it solves, if it solves any problem at all.
Here are a few things to do first:
List your architectural constraints (i.e. the reasons for doing microservices): is it separate scalability, organizational problems, team independence, etc.?
List business-relevant boundaries in the problem domain (i.e. parts that theoretically don't need each other to work, or don't require synchronous communication).
With that information, here are a few ways to fix the problem:
Restructure the services based on business boundaries instead of technical ones. This means not using tables or layers or other technical stuff to split functions. Services should be a complete vertical slice of the problem domain.
Or, as a workaround, create a third system that aggregates data and can generate reports (a sketch of this composition idea follows this list).
Or, if you find there is actually no reason to keep the microservices approach, just build it the way you are used to.
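As a hypothetical illustration of that composition idea (the endpoint paths, the ids batch parameter, and the field names are all assumptions, not an existing API), the screen drops from N+1 calls to 2:

require 'net/http'
require 'json'

# Sketch: compose the "orders for a date" screen with one batch call to
# the Customer service instead of one call per CustomerId.
def orders_with_customer_names(date)
  orders = JSON.parse(Net::HTTP.get(URI("http://orders/orders?date=#{date}")))
  ids    = orders.map { |o| o["customerId"] }.uniq

  customers = JSON.parse(
    Net::HTTP.get(URI("http://customers/customers?ids=#{ids.join(',')}"))
  )
  names_by_id = customers.to_h { |c| [c["id"], c["name"]] }

  orders.map { |o| o.merge("customerName" => names_by_id[o["customerId"]]) }
end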
The new requirement needs data from across domains.
Below are the options:
Fetch the customer Id and Name on every call. The issue is latency, as there would be multiple round trips.
Keep a cache of all CustomerNames by Id in the Order service (I am assuming there is a finite number of customers). The issue is when to refresh or invalidate the cache; for that you may need to expose a REST call that invalidates entries. For new customers that are not yet in the cache, fetch from the Customer service and update the cache for future lookups. (A sketch follows below.)
Use the CQRS approach, in which all the needed data (Orders, Customers, etc.) goes into a separate read table. With that schema you can write a single composite SQL query, which removes the round trips.
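A minimal sketch of the cache option (class, endpoint, and field names are assumptions): the Order service lazily loads names it hasn't seen, and the Customer service calls invalidate whenever a name changes:

require 'net/http'
require 'json'

# Sketch of option 2: a customer-name cache inside the Order service,
# with an invalidation hook for the Customer service to call.
class CustomerNameCache
  def initialize(customer_service_url)
    @url   = customer_service_url
    @names = {}
  end

  # Lazily load and memoize a customer's name.
  def name_for(customer_id)
    @names[customer_id] ||= fetch_name(customer_id)
  end

  # Called from a REST endpoint that the Customer service hits on updates.
  def invalidate(customer_id)
    @names.delete(customer_id)
  end

  private

  def fetch_name(customer_id)
    res = Net::HTTP.get(URI("#{@url}/customers/#{customer_id}"))
    JSON.parse(res)["name"]
  end
end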
I'm building a social network with Neo4j. It includes:
Node labels: User, Post, Comment, Page, Group
Relationships: LIKE, WRITE, HAS, JOIN, FOLLOW,...
It is like Facebook.
Example: user A follows user B. When B performs an action, such as liking a post, commenting, following another user, following a page, or joining a group, that action should be sent to A as a notification. Similarly, users C, D, and E who follow B will receive the same notification.
I don't know how to design the data model for this, and I have some candidate solutions:
Create Notification nodes for every user. When an action is executed, create n notifications for the n followers. Benefit: we can track whether each user has seen the notification. But the number of nodes increases quickly, n new nodes per action.
Run a query on every notification API call (from the client application) that just gets the list of actions by followed users within a given time window (24 hours, or 2-3 days). But then followers can't mark a notification as seen or unseen, and this query may slow the server down.
Create a limited number of notification nodes, such as 20-30 per user.
Create unlimited notification nodes (including the time of the action), and delete nodes whose action-time property is older than 24 hours (the expiry time might be 2 or 3 days).
Can anyone help me solve this problem? Which solution should I choose, or is there a better way?
I believe the best approach is option 1. As you said, you will be able to know whether the follower has read the notification. About the number of notification nodes per follower: this problem is called "supernodes" or "dense nodes" - nodes that have too many connections.
The book Learning Neo4j (by Rik Van Bruggen, available for download on Neo4j's web site) talks about dense nodes or supernodes and says:
"[supernodes] becomes a real problem for graph traversals because the graph
database management system will have to evaluate all of the connected
relationships to that node in order to determine what the next step
will be in the graph traversal."
The book proposes a solution that consists of adding meta nodes between the follower and the notifications (in your case). Each meta node should have at most about a hundred connections; if the current meta node reaches 100 connections, a new meta node must be created and added to the hierarchy. The book illustrates this with a figure showing popular artists and their fans.
That said, I don't think you need to worry about this right now. If your follower nodes become a problem in the future, you will be able to refactor your database schema. For now, keep things simple!
In the series of posts called "Building a Twitter clone with Neo4j", Max de Marzi describes the process of building such a model. Maybe it can help you make better decisions about your model!
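For concreteness, here is a minimal sketch of the fan-out in option 1 as a Cypher statement held in a Ruby constant. The Notification label, the HAS_NOTIFICATION relationship, and the property names are assumptions; only User and FOLLOW come from the question:

# One Notification node per follower of the acting user, each with its
# own seen flag. FOLLOW points from follower to followee, per "A follows B".
FAN_OUT_NOTIFICATIONS = <<~CYPHER
  MATCH (follower:User)-[:FOLLOW]->(actor:User {id: $actor_id})
  CREATE (follower)-[:HAS_NOTIFICATION]->(:Notification {
    action:    $action,     // e.g. "LIKE", "JOIN", "FOLLOW"
    target_id: $target_id,  // the post/page/group/user acted on
    at:        timestamp(),
    seen:      false
  })
CYPHER

# Run it with whichever Neo4j client you use, e.g.:
# session.run(FAN_OUT_NOTIFICATIONS, actor_id: b_id, action: "LIKE", target_id: post_id)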
I'm faced with this situation:
Host A and B are exchanging messages in a conversation through a broker.
When host B receives a message, it sends back a delivery token to host A so that it can show the user that B has received his message. This may also happen the other way around.
At any point A or B may be offline and the broker will hold on to the messages until they come online and then deliver them.
Each host stores its own and the other host's messages in a database table:
ID | From | To | Msg | Type | Uid
I figured that using the naive table primary key id would be a bad choice to identify the messages (as it depends on insertion order), so I defined a custom unique id field (uid).
My question is:
How can I make sure that the current message id stays synchronized between hosts A and B so that only one message has a given id? I want to use the id in the delivery token to identify which message was received, which wouldn't be possible if more than one message had the same id.
If I do this naively, incrementing the id every time we send/receive a message, at first it looks OK:
Host A sends a message with ID 1 and increases its current ID to 2
Host B receives the message and increases its current ID to 2
Host B sends a message with ID 2 and increases its current ID to 3
Host A receives the message and increases its current ID to 3
...
But it may very easily break:
Host A sends a message with ID 1 and increases its current ID to 2
Host B sends a message (before receiving the previous one), also with ID 1
Clash: two messages with ID 1 are received by both hosts
I thought of generating a large UUID for every message (with an extremely low chance of collision), but it introduces significant overhead, as every message would need to both carry and store one.
Unfortunately any solution regarding the broker is not viable because I can't touch the code of the broker.
This is a typical Distributed Systems problem (class exercise?). I suppose you are trying to keep the same ID in order to determine an absolute order among all messages exchanged between Alice and Bob. If this is not the case, the solution provided in the comment by john1020 should be enough. Another possibility is to store the ID in one node that both A and B can access, with a distributed lock mechanism synchronizing access. That way you always define an order, even in the face of collisions. But this is not always possible, and it is sometimes not efficient.
Unfortunately, there is no way of keeping an absolute order (except having that unique counter with distributed locks). If you have one ID that can be modified by both A and B, you will have a problem of eventual consistency and risk of collisions. A collision is basically the problem you described.
Now, imagine both Bob and Alice send a message at the same time, and both set the ID to 2. In which order would you store the messages? Actually, it doesn't matter; it's like two people speaking on the phone at the same time. There is a collision.
However, what is interesting is to identify messages that actually have a sequence or a cause-effect relationship, so you can keep an order between messages that are caused by other messages: Bob invites Alice to dance and Alice says yes; two messages with an order.
For keeping such an order you can apply techniques like vector clocks (based on Leslie Lamport's logical-timestamp algorithm): https://en.wikipedia.org/wiki/Vector_clock . You can also read about AWS's DynamoDB: http://the-paper-trail.org/blog/consistency-and-availability-in-amazons-dynamo/
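To make the vector-clock idea concrete, here is a minimal Ruby sketch (illustrative only, not tied to any broker): each host increments its own slot on send and merges element-wise maxima on receive; two stamps where neither is before? the other are concurrent, i.e. exactly the "collision" described above:

# Minimal vector-clock sketch.
class VectorClock
  attr_reader :clock

  def initialize(hosts)
    @clock = hosts.to_h { |h| [h, 0] }   # one counter per participant
  end

  # Call before sending; attach the returned stamp to the message.
  def tick(host)
    @clock[host] += 1
    @clock.dup
  end

  # Call on receive, with the stamp the message carried.
  def merge(stamp)
    stamp.each { |h, c| @clock[h] = [@clock.fetch(h, 0), c].max }
  end

  # True if this clock happened strictly before the given stamp.
  def before?(stamp)
    @clock.all? { |h, c| c <= stamp.fetch(h, 0) } && @clock != stamp
  end
end

a = VectorClock.new(%w[A B])
b = VectorClock.new(%w[A B])
s1 = a.tick("A")   # A sends: {"A"=>1, "B"=>0}
s2 = b.tick("B")   # B sends concurrently: {"A"=>0, "B"=>1}
# Neither stamp is before the other => the messages are concurrent.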
You can also use the same mechanism Cassandra uses for distributed counters. Here is a nice description: http://www.datastax.com/wp-content/uploads/2011/07/cassandra_sf_counters.pdf