So this is more of a methodology question than a coding question. I want to ask this before I actually start coding in order to choose the best route. I have a messaging app. When the app launches, I query all the messages from the backend in the background, where current_user_id equals recipient_id. Now I have all of the messages the user needs to see, so I store them locally in an SQLite database.
Great, but what about when the user gets new messages? How can I structure a query to receive only those, without having to query the entire table again? Also, how do I set this up as a continual process? Is the phone always requesting updates from the backend while it's in the foreground?
Thanks. I really appreciate your help. I'm currently using iOS and, as stated, SQLite. My backend is Node.js on AWS.
It looks like your goal is ultimately to synchronize data between two sources over a network, with the constraint that the client is updated in a reasonable amount of time. You have a design choice to make between a push and a pull architecture.
Push architectures have the servers push data to clients when an event occurs.
Pull architectures have the device periodically poll the server for changes. This can be achieved through timed events.
There are hybrid approaches too.
Each has its advantages and disadvantages: pull requires constant polling, while push requires persistent connection-based protocols, which present more scaling challenges. A sketch of the pull side's incremental query follows.
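To make the pull side concrete, here is a minimal sketch of a delta-sync endpoint. Express, better-sqlite3, and the column names (`id`, `recipient_id`) are illustrative assumptions, not your actual AWS/Node stack; the key idea is a monotonically increasing cursor the client sends back.

```typescript
// Sketch of a delta-sync "pull" endpoint. Assumes a numeric, monotonically
// increasing `id` column on the messages table; Express and SQLite here are
// hypothetical stand-ins for your real backend.
import express from "express";
import Database from "better-sqlite3";

const app = express();
const db = new Database("messages.db");

// The client sends the highest message id it has already stored; the server
// returns only rows created after that point, so the full table is never
// re-sent.
app.get("/messages", (req, res) => {
  const recipientId = Number(req.query.recipient_id);
  const after = Number(req.query.after ?? 0); // 0 on first launch = full sync

  const rows = db
    .prepare(
      "SELECT id, sender_id, recipient_id, body, created_at " +
        "FROM messages WHERE recipient_id = ? AND id > ? ORDER BY id"
    )
    .all(recipientId, after);

  res.json(rows);
});

app.listen(3000);
```

The client persists the highest `id` it has received in its local SQLite store and sends it back as `after` on the next poll, so only new rows ever cross the network.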
Related
I am trying to create automated edits to the database in Firebase. Is there a way to do that on the server side? I am new to iOS development and Swift, so any help would be greatly appreciated.
Also, I've tried Zapier, but the service is not specific enough for my needs.
Yes: Firebase has quite a flexible set of options for server-side updates, and it is simple enough to schedule a cron job that connects to Firebase and performs scheduled updates or edits.
The most generic approach is to use the REST API to perform your updates, although there are specific libraries supporting Node and other platforms.
It is worth being aware of the recent major upgrade to version 3 of Firebase, which introduced quite a few significant changes. It is easy to confuse the older examples floating around with the new API, so keep the differences in mind as you put together your first proof-of-concept examples.
I assume that you are looking to run on your own server, although another alternative is to use a container hosting environment (Google App Engine, etc.).
If you have your own server and are looking to integrate I would suggest starting with:
https://firebase.google.com/docs/server/setup#prerequisites
Then perhaps a quick look at:
https://firebase.googleblog.com/docs/web/quickstart.html
and
https://www.firebase.com/docs/rest/
If you are just getting started I would suggest a first task being to authenticate, retrieve and update a Firebase record.
You can configure server auth keys through the Firebase console and use these as part of your authentication process.
If you are unfamiliar with JWT, it is worth spending a little time getting up to speed on it and working through the examples at https://www.firebase.com/docs/rest/guide/user-auth.html
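If it helps, here is a minimal, generic JWT round trip using the `jsonwebtoken` npm package, purely to illustrate the sign/verify cycle the guide builds on. The secret and payload are placeholders; Firebase's own token format has additional requirements covered in the link above.

```typescript
// Minimal, generic JWT example with the `jsonwebtoken` package.
import jwt from "jsonwebtoken";

const secret = "your-firebase-secret"; // placeholder; never hard-code in production

// Sign a token asserting an identity...
const token = jwt.sign({ uid: "user-123" }, secret, { expiresIn: "1h" });

// ...and verify it on the receiving side.
const decoded = jwt.verify(token, secret);
console.log(decoded); // { uid: 'user-123', iat: ..., exp: ... }
```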
Further to your comment:
So the first approach that comes to mind is to run some kind of scheduled job via cron which connects using the REST API, performs a query on the existing data to identify the records that require an update, and removes or modifies them.
Giving it a little more thought, you could extend this approach: rather than running at a recurring period shorter than the minimal anticipated deletion time, you could run the scheduler at some longer period just to clean up, and filter the results you send to the client so that you are not including stale data. This approach is discussed a little at Firebase chat - removing old messages
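As a rough illustration of that scheduled job, here is a hedged sketch using the Firebase REST API from Node. The database URL, the `timestamp` field, and the auth token are all assumptions, and `orderBy` queries also require a matching `.indexOn` rule for that field.

```typescript
// Cron-driven cleanup sketch: query the Firebase REST API for messages older
// than a cutoff, then delete them one by one. Uses Node 18+'s global fetch.
const BASE = "https://your-app.firebaseio.com"; // placeholder URL
const AUTH = process.env.FIREBASE_TOKEN;        // placeholder credential

async function removeStaleMessages(maxAgeMs: number): Promise<void> {
  const cutoff = Date.now() - maxAgeMs;

  // Fetch only the stale records, keyed by their Firebase push ids.
  const res = await fetch(
    `${BASE}/messages.json?orderBy="timestamp"&endAt=${cutoff}&auth=${AUTH}`
  );
  const stale: Record<string, unknown> = (await res.json()) ?? {};

  // Delete each stale record at its own path.
  for (const key of Object.keys(stale)) {
    await fetch(`${BASE}/messages/${key}.json?auth=${AUTH}`, {
      method: "DELETE",
    });
  }
}

// Run from a cron entry such as: 0 * * * * node cleanup.js
removeStaleMessages(30 * 24 * 60 * 60 * 1000).catch(console.error);
```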
Getting the right solution for your particular scenario will depend a lot on how well you structure your data, which can be counter-intuitive, particularly for users coming from an RDBMS background.
There may be an inclination to keep the data slim and unpolluted with old, irrelevant records; however, Firebase is quite good at managing large, minimally structured data sets, and the overhead of this bloat may not be as bad as you might think.
If the filtering itself isn't sufficient and you don't have a server from which you can cron a cleanup process, then you can implement a Firebase worker process in Node or similar and run it on a container service such as Heroku or Google App Engine. See Firebase push notifications - node worker for some ideas on how to approach this.
When asked, Google wouldn't advise on where best to host worker services, but they did mention both Google App Engine and Heroku.
Another approach, if you don't want to implement and host a watcher/worker process, is to simply include some code in the client that periodically checks for and removes stale data.
Firebase Queue is very cool but may be overkill for simply expiring stale data.
I'm building a simple chat app on iOS for fun (and to have projects to gain experience from), using socket.io and a Node backend. I am trying to figure out the best design for messages. I was planning to use a MongoDB database where each conversation would have its message data stored. Whenever the client sends a new message to the server, the server adds it to the appropriate conversation in the database.
I was also hoping to create a user Sign Up/Log In system which would add you to the database.
However, I've googled around quite a bit, and I am really not sure whether creating a database made up of conversations (that get updated whenever a sentMessage event is triggered) and user data is the right way to go.
Additionally, I've seen some people talk about saving the chats on the actual devices themselves, not in a database? What is the common design pattern for a chat app like this?
For the design, I would use socket.io for emitting messages as well. It has a great community behind it. I would also use MongoDB, because everything is in JSON format and it integrates well with Node, since both use JavaScript.
Now, the part you are interested in is Redis. Redis is an in-memory database, and it should be used alongside MongoDB if you're going to have higher traffic and need quick responses with less hanging and waiting.
Redis would be your temporary store for the chat session, because disk writes/reads/queries are heavy on the machine (looking at you, MongoDB) if you plan on saving the chat with every message. MongoDB alone would not scale all that well in the long run and is not as fast as Redis. Mind you, the Redis database will only hold a temporary chat log of, say, the last 1 million chat sessions or some similar limit (it's all in RAM, so the size is limited; you can't have terabytes or hundreds of gigabytes of RAM on one server).
So the data flow would look something like this:
The user sends a message.
The server receives the message via HTTP(S) POST/PUT (Ajax/Observable).
The server uses socket.io to emit the message to the designated user while saving the message to Redis under a specific key/session/message.
The designated user gets the update on their screen via an io event.
-- In between, there should be a check on whether the Redis DB is getting full; if it is, remove the oldest 10,000 inactive messages (they could be from a year ago if the server hasn't filled up yet) to make space. A sketch of this flow follows.
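Here is a minimal sketch of those steps with Express, socket.io, and ioredis. The route, event names, Redis key scheme, and the cap constant are illustrative assumptions, not fixed APIs.

```typescript
// Sketch of the message flow: HTTP(S) POST in, socket.io emit out, Redis as
// the capped in-RAM chat log.
import express from "express";
import { createServer } from "http";
import { Server } from "socket.io";
import Redis from "ioredis";

const app = express();
app.use(express.json());
const httpServer = createServer(app);
const io = new Server(httpServer);
const redis = new Redis(); // defaults to localhost:6379
const MAX_LOG = 1_000_000; // cap on the in-RAM chat log (illustrative)

// Each user joins a room named after their user id so messages can be targeted.
io.on("connection", (socket) => {
  socket.on("register", (userId: string) => socket.join(userId));
});

// Message arrives over HTTP(S) POST, is emitted to the designated user, and
// is saved to Redis under a per-conversation key.
app.post("/messages", async (req, res) => {
  const msg = req.body as { from: string; to: string; body: string };

  io.to(msg.to).emit("chat message", msg); // recipient's io event

  const key = `chat:${msg.from}:${msg.to}`;
  await redis.lpush(key, JSON.stringify({ ...msg, at: Date.now() }));

  // The "is Redis getting full?" check: LTRIM keeps only the newest MAX_LOG
  // entries and silently drops the oldest overflow.
  await redis.ltrim(key, 0, MAX_LOG - 1);

  res.sendStatus(201);
});

httpServer.listen(3000);
```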
Saving the chat on the phone is an okay idea, as it would save the user's data/bandwidth, and they could potentially look at their messages while offline.
One solution is SQLite, a lightweight library that sits inside your app acting as a database you can run queries against; if you're familiar with RDBMSs, you will have no problem implementing it. But then you need to find a good way to manage saving data across Redis, SQLite, and MongoDB.
We've been tasked with implementing push notifications in our iOS and Android app. One of the features of the app is chat messaging, so we would like to push notify our users when they receive a message. The messages can be generated from the web app, so regardless of the origin, the chat messages get inserted into a Chat SQL Table via C# Web Services.
In my research I found that PushSharp would be a good fit for our C# backend; we're trying to avoid paying for a push notification service if we can. What I'm having a difficult time visualizing is how to trigger the push notification when a new message gets inserted into the DB table.
What's the best practice? I assume manually polling for new records is not it.
Any advice would be appreciated.
M.
It's probably too late, but for anyone who lands here later: I suggest trying Debezium, which captures events for each row-level change made to the database. Only committed changes are visible, so your application doesn't have to worry about transactions or changes that are rolled back.
There are a couple of solutions available to you. Some depend on the level of control you have over the table. Here are a couple of ideas:
Use a daemon to run a script that periodically checks for new entries and sends pushes when necessary. The script can rely on a tuple id field (probably the primary key) to record the last row it processed and pick up from there on each run (see the sketch after this list). You can use supervise or monit to set that up, but there are many other solutions out there that might be better suited to your server.
A simpler option is to create a cron job entry that triggers the script mentioned above periodically.
If you don't control the original table, you can create a TRIGGER in MySQL that inserts a record into a separate table that you control entirely and can poll.
If you don't want to poll (which is indeed not preferable if you have a lot of data to go through at a high rate), you'll have to look into message queue systems (like RabbitMQ) or pub/sub (I personally like Redis PUB/SUB).
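To make the first two ideas concrete, here is a hedged sketch of the polling script in Node/TypeScript. Your actual sender would be PushSharp on the C# side; `sendPush`, the `Chat` table, and the column names are hypothetical stand-ins.

```typescript
// Poll the Chat table for rows past the last processed primary key and push
// a notification for each new one.
import mysql from "mysql2/promise";

const pool = mysql.createPool({ host: "localhost", user: "app", database: "chat" });

// Hypothetical push sender -- replace with your real delivery code (PushSharp etc.).
async function sendPush(userId: number, text: string): Promise<void> {
  console.log(`push -> user ${userId}: ${text}`);
}

let lastSeenId = 0; // persist this cursor (file, table, etc.) across restarts

async function pollOnce(): Promise<void> {
  const [rows] = await pool.query(
    "SELECT id, recipient_id, message FROM Chat WHERE id > ? ORDER BY id",
    [lastSeenId]
  );
  for (const row of rows as any[]) {
    await sendPush(row.recipient_id, row.message);
    lastSeenId = row.id; // advance the cursor only after a successful push
  }
}

// Run from cron, or loop on a timer when supervised as a daemon.
setInterval(() => pollOnce().catch(console.error), 10_000);
```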
Without more information about what your current architecture is, it's difficult to give you more details or point you to a better solution.
I have two Rails apps (one for web, and one for backend) accessing to the same PGSQL database. I would like to notify the other app if one app changes a table in the database.
How should I go about it?
I think this depends on:
How reliable you need it to be.
How fast you need the notifications to be delivered.
The Faye solution suggested by @techvineet provides a good, fast, but unreliable option. (N.B. I don't mean it'll fail often, but it likely will occasionally, maybe 1 in 1000; if that causes you a problem, avoid it.)
If you need something 100% reliable and speed isn't important, you could write audit events to a database table and then poll that table from each app. If these events are committed in the same transaction as the actual work, you should be safe, but it'll be as slow as your polling cycle. A sketch of this pattern follows.
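For illustration, here is the audit-table pattern sketched in TypeScript with `pg`; in your case the equivalent logic would live in each Rails app, and the `audit_events` table and its columns are assumptions.

```typescript
// Poll an audit_events table that each app writes to in the same transaction
// as its real change; anything visible here is guaranteed to be committed.
import { Pool } from "pg";

const pool = new Pool(); // reads PG* environment variables

let lastSeenId = 0; // persist per consumer across restarts

// Hypothetical handler for whatever each app does with a change notification.
function handleChange(event: { table_name: string; action: string }): void {
  console.log(`saw ${event.action} on ${event.table_name}`);
}

async function pollAuditEvents(): Promise<void> {
  const { rows } = await pool.query(
    "SELECT id, table_name, action, payload FROM audit_events WHERE id > $1 ORDER BY id",
    [lastSeenId]
  );
  for (const event of rows) {
    handleChange(event);
    lastSeenId = event.id;
  }
}

// The notification latency is bounded by this polling interval.
setInterval(() => pollAuditEvents().catch(console.error), 5_000);
```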
Lastly, if you want something fast AND reliable, you could look at using something like ActiveMQ or RabbitMQ to give you reliable messaging between the applications for notifying changes. You'll need a worker process in each app to listen for changes and deal with them appropriately.
My last comment would be that this 'smells' a little. The fact that you're trying to do this makes me think the architecture of your app might need looking at in the longer term. An obvious alternative would be to encapsulate all the business logic in one app that exposes an API, and then call that API from both the front-end and back-end applications.
You can try using Faye (http://faye.jcoglan.com/), which is a publish/subscribe messaging server. It can be integrated with Rails via https://github.com/jamesotron/faye-rails.git. Messages can be transferred from one app to another by publishing and subscribing.
Hope this will help.
I am new to DynamoDB, and I am very confused about provisioned throughput. I am creating an iPhone game in which users can chat within the game. I have a Chat table, which contains GameID, UserID, and Message. How do I find the size of an item to calculate throughput? The size of an item depends entirely on the Message, right? How do I calculate the size of an item?
Amazon says we can modify the throughput either by using the UpdateTable API or manually from the console. If I want to change it from code, how will I know that the provisioned throughput has been exceeded for a certain table? How do I check that from code?
I am also confused about CloudWatch. How should I understand it?
Could anyone please help me? Please don't point me to the documentation.
Thanks.
I will do my best to help with the confusion.
DynamoDB is a key:value database
CloudWatch is Amazon's products monitoring tool
Provisioned throughput is roughly the number of KB of items you plan to read/write per second. An item's size is the sum of the byte lengths of its attribute names and values, so yes, it is dominated by the Message; for example, a Chat item with GameID "g1", UserID "u42", and Message "hello" is roughly (6 + 2) + (6 + 3) + (7 + 5) = 29 bytes.
Whenever you exceed your provisioned throughput:
DynamoDB answers with a ProvisionedThroughputExceededException, which is exactly what you can catch from code (see the sketch below)
DynamoDB notifies CloudWatch
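To show how you would see and react to that exception from code, here is a sketch with the AWS SDK v3 for JavaScript (your app is iOS, but the pattern is the same in any SDK). The table name and capacity figures are placeholders.

```typescript
// Catch ProvisionedThroughputExceededException on a write and raise capacity
// via UpdateTable.
import {
  DynamoDBClient,
  PutItemCommand,
  UpdateTableCommand,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-east-1" });

async function saveChatMessage(gameId: string, userId: string, message: string) {
  try {
    await client.send(
      new PutItemCommand({
        TableName: "Chat",
        Item: {
          GameID: { S: gameId },
          UserID: { S: userId },
          Message: { S: message },
        },
      })
    );
  } catch (err: any) {
    // This is how the exceeded limit shows up "from code".
    if (err.name === "ProvisionedThroughputExceededException") {
      await client.send(
        new UpdateTableCommand({
          TableName: "Chat",
          ProvisionedThroughput: { ReadCapacityUnits: 10, WriteCapacityUnits: 10 },
        })
      );
      // In practice you would retry with backoff rather than raise capacity
      // on every throttle.
    } else {
      throw err;
    }
  }
}
```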
What CloudWatch does is basically record and aggregate data points. For most applications, it will only keep track of data aggregated over consecutive 5-minute periods.
You can then access this data for "manual" monitoring or set up "alarms".
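If you want those data points programmatically rather than through the console, a sketch with the CloudWatch API (AWS SDK v3) might look like this; the region, time window, and table name are placeholders.

```typescript
// Read DynamoDB's consumed read capacity from CloudWatch over the last hour,
// at the 5-minute granularity mentioned above.
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({ region: "us-east-1" });

async function consumedReads(tableName: string) {
  const now = new Date();
  const res = await cw.send(
    new GetMetricStatisticsCommand({
      Namespace: "AWS/DynamoDB",
      MetricName: "ConsumedReadCapacityUnits",
      Dimensions: [{ Name: "TableName", Value: tableName }],
      StartTime: new Date(now.getTime() - 60 * 60 * 1000), // last hour
      EndTime: now,
      Period: 300, // 5-minute aggregation buckets
      Statistics: ["Sum"],
    })
  );
  return res.Datapoints;
}

consumedReads("Chat").then(console.log).catch(console.error);
```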
There was a really interesting question on SO a couple of weeks ago on DynamoDB auto-scaling using alarms. You might be interested in reading it: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ErrorHandling.html
Knowing this, you can start building your application.
As with every AWS service, one needs credentials to access DynamoDB. Even though they can be restricted to a specific table or set of actions, it is very dangerous to bundle them in an application. Would you give MySQL or MongoDB credentials, even read-only ones, to untrusted people?
May I suggest you build your application to rely on a server of your own? Since that server is trusted and built by you, you can safely perform any authorization checks there and grant it full access to your table.
I hope this helps. Feel free to ask if you need more details.