I want the user to send some data to the database, but the amount of downloaded data seems unusually high.
These are my rules:
{
"rules": {
".read": false,
".write": true
}
}
In one day, the downloaded data reached 7.2 MB, while my database is only around 320 KB in size. This is the code where I write to it:
String id = mDatabase.child(username).push().getKey();
Record r = new Record(calendar.getTimeInMillis(),sensorValue);
mDatabase.child(username).child(id).setValue(r);
No, Firebase doesn't download all of your data every time you start your application; it depends on what your code queries.
So the correct answer is that it depends on your app's code. For example, if you have code like
db.ref('users/user_id').once('value', (snap) => { ... })
only the data under the user_id node is downloaded.
But if, for example, you write:
db.ref('users').once('value', (snap) => { ... })
the whole users object will be downloaded.
Hopefully it's now clear that bandwidth usage depends on how much data your app's code queries.
A common source of unexpected bandwidth usage in the Firebase Realtime Database (and in Cloud Firestore, Firebase's other database) is the fact that any data shown in the Firebase console also counts towards your quota.
So in your case, if you've kept the database open in the Firebase console and refreshed it 10 times over the day, or opened the Firebase Database console 10 times in a day, that alone goes a long way toward explaining the bandwidth usage you're seeing.
Related
My app has bits of information that exist for 24 hrs. This information has the potential to be voted on by other users. The number of votes is recorded in the database. If someone votes on a piece of information, I want it to be updated only if the path currently exists, as the data may have reached the 24 hour limit and been deleted since the display time and the vote time.
The problem with using something like DataSnapshot.hasChild is that I would need two separate read and write instructions. The data may exist at the time of the read but reach the 24-hour mark and be deleted before the write instruction runs.
This is the structure of my database. The status node is duplicated in another part of the database to reduce the number of reads: if this node becomes nil, the other one still exists, but this node is here so that a single read can fetch all the newest statuses that are less than 24 hours old.
I would like a rule that does not allow the value of votes to change if the key of status has changed, or if status is no longer a node.
There are a couple of ways to approach this.
One way is to leverage Firebase Rules. Before doing that, let me first say this to keep the Firebasers happy:
Rules are not filters
However, you can craft a rule that will prevent a write if a certain node does not exist. I would need to know your specific structure and use case to suggest a solution, but rules are pretty well covered in the Firebase Rules Docs, and Referencing Data In Other Paths is a place to start.
But a super clean and easy option is the following code, which will only write to a node if it exists. One read and one write.
func onlyWriteIfNodeExists() {
    let ref = your_firebase.ref.child("may_not_exist")
    // Read the node once; only write if it currently exists.
    ref.observeSingleEvent(of: .value, with: { snapshot in
        if snapshot.exists() {
            snapshot.ref.setValue("updated value")
        } else {
            print("node didn't exist")
        }
    })
}
and the structure would be
firebase_root
may_not_exist: "some value"
so if the node may_not_exist exists, "some value" will be replaced with "updated value"; otherwise it will print "node didn't exist" to the console.
That being said, if the intention is to not allow users to vote on items that don't exist, the UI should reflect that. In other words, if the app presents topics to vote on and a topic goes out of scope, the app should receive an event for that and perhaps remove it from the UI, or draw a line through the topic heading to indicate it's no longer available.
Currently I am retrieving daily subscriber information with the following request:
var videoOptions = {
'part': 'snippet,contentDetails,statistics',
'id': videoIds
};
// Send request
youtube.videos.list(videoOptions, (err, videoDetails) => {});
My question: is there a way to get historical subscriber information, either through the Data API or the Analytics API?
I see there is a way to see subs gained or lost over a period of time but I don't know what the base is to compare against:
https://www.googleapis.com/youtube/analytics/v1/reports?ids=channel%3D%3D{channelID}&start-date=2017-07-31&end-date=2017-08-01&&metrics=subscribersLost%2CsubscribersGained
There is currently no way to retrieve the historical number of subscribers. The only way you can track your subscriber change is to perform channels.list with the mySubscribers property set to true, then do it again the next day and compare the counts yourself. There is no method to check the history. This is also confirmed in this SO post.
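If you do record the count yourself each day, computing the day-over-day change is straightforward. A minimal JavaScript sketch, assuming you store one {date, subscriberCount} snapshot per day (the function name and snapshot shape are made up for illustration, not part of the YouTube API):

```javascript
// Given daily snapshots of subscriberCount (e.g. collected by calling
// channels.list once per day and storing the result), compute the
// day-over-day change yourself.
function subscriberDeltas(snapshots) {
  // snapshots: [{date: 'YYYY-MM-DD', subscriberCount: number}, ...]
  const sorted = [...snapshots].sort((a, b) => a.date.localeCompare(b.date));
  const deltas = [];
  for (let i = 1; i < sorted.length; i++) {
    deltas.push({
      date: sorted[i].date,
      change: sorted[i].subscriberCount - sorted[i - 1].subscriberCount,
    });
  }
  return deltas;
}
```

The computed change per day corresponds to subscribersGained minus subscribersLost for that day, which is what the Analytics API report above gives you without the absolute base.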
Depending on how large your subscriber base is, you could try checking the list of when users started subscribing at https://www.youtube.com/subscribers
Perhaps this can offer some insight, and the number of rows/users returned can be counted to give an indication of historical activity.
I have a problem: I fetch data from a URL and populate a table with it. If the response contains only 10 to 15 values, the table populates quickly.
But if the response contains 500 to 600 values, the user has to wait until all the data has arrived, since I show a ProgressView until the full response comes back.
Is there a way to resolve this, so that some data is displayed early and the rest is appended as it arrives?
Any help will be appreciated.
You should use pagination support in your tableView and in your backend as well; please see this example:
https://easyiostechno.wordpress.com/2014/11/14/pagination-of-table-views/
Basically, it's bad practice to fetch a large amount of data at once and keep the user waiting. Ideally you should fetch data only when it's necessary. For your case I would suggest using a paging mechanism.
Below is a rough idea of paging which you can use:
When you load your data from the web service, send two parameters named PAGE_COUNT and PREVIOUS_PAGE_COUNT.
For the first request, send PAGE_COUNT = number_of_values_you_want_to_fetch_initially and PREVIOUS_PAGE_COUNT = 0.
When the user scrolls down, show a loader at the bottom of the table and hit the web service again, but with PREVIOUS_PAGE_COUNT = number_of_values_you_want_to_fetch_initially + PAGE_COUNT.
This approach will also need some modification on the back end, such as checking the incoming page counts and fetching the next batch of records from the database.
I hope this will help you.
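The back-end side of the scheme above can be sketched in JavaScript as follows (fetchPage is a hypothetical helper, and allRecords stands in for your database query; in a real backend you would use an OFFSET/LIMIT query instead of slicing an array):

```javascript
// Rough server-side sketch of the paging scheme: return the window of
// records starting at PREVIOUS_PAGE_COUNT, PAGE_COUNT items long, plus
// the offset the client should send back on its next request.
function fetchPage(allRecords, previousPageCount, pageCount) {
  const page = allRecords.slice(previousPageCount, previousPageCount + pageCount);
  return {
    records: page,
    nextPreviousPageCount: previousPageCount + pageCount,
  };
}
```

The client shows the first page immediately, then requests the next window when the user scrolls near the bottom, so nobody waits for all 600 rows up front.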
My iOS application fetches some photos, tags and comments from a web server. I want it to fetch only changed or newly added data, not the same data again and again.
I use SDWebImage for pictures, but the text comes from SQL queries.
How can I tell whether the result of a SQL query has changed or not? What kind of technique should I use?
Is there a third-party library for client-side SQL caching?
I think this is not technically an iOS question. You should always query with the timestamp of your last query.
Like:
SELECT * FROM comments WHERE last_modified > last_queried_timestamp;
The last_modified field should store the timestamp of the last modification (or, failing that, of creation), and the last_queried_timestamp parameter is the time you last queried the server.
This way you will not get twice the same changes.
(Unless you want it)
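The same filter can be expressed in code instead of SQL. A minimal JavaScript sketch (fetchChangedComments is a made-up name; it just applies the last_modified comparison from the query above):

```javascript
// Return only the comments modified after the client's last query time,
// mirroring: SELECT * FROM comments WHERE last_modified > last_queried_timestamp
function fetchChangedComments(comments, lastQueriedTimestamp) {
  return comments.filter((c) => c.last_modified > lastQueriedTimestamp);
}
```

After applying the result, the client stores the current time as its new last_queried_timestamp so the next request only picks up changes made since then.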
I'm creating an app that uses core data to store information from a web server. When there's an internet connection, the app will check if there are any changes in the entries and update them. Now, I'm wondering which is the best way to go about it. Each entry in my database has a last updated timestamp. Which of these 2 will be more efficient:
Go through all entries and check the timestamp to see which entry needs to be updated.
Delete the whole entity and re-download everything again.
Sorry if this seems like an obvious question and thanks!
I'd say option 1 would be most efficient, as there is rarely a case where downloading everything (especially in a large database with large amounts of data) is more efficient than only downloading the parts that you need.
I recently did something similar.
I solved the problem by assigning a unique ID and a global 'updated timestamp', and thinking in terms of 'delta' changes.
Let me explain better: I have a global 'latest update' variable stored in user preferences, with a default value of 01/01/2010.
This is roughly my JSON service:
response: {
metadata: {latestUpdate: 2013...ecc}
entities: {....}
}
Then, this is what's going on:
pass the 'latest update' to the web service and retrieve a list of entities
update the core data store
if everything went fine with Core Data, the 'latestUpdate' from the service metadata becomes my new 'latest update' variable stored in user preferences
That's it. I only retrieve the needed changes, and of course the web service is structured to deliver the proper list. In other words: a web service backed by a database can deal with this quite well, leaving the iPhone to be a 'simple client' only.
But I have to say that for small amounts of data, it is still quite performant (and less bug-prone) to download the whole list on each request.
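The loop above can be sketched compactly in JavaScript, with service, store and prefs as stand-ins for the web service, Core Data and user preferences (all names are hypothetical):

```javascript
// Delta-sync sketch: ask the service for everything changed since our
// stored watermark, apply it locally, then advance the watermark.
function sync(service, store, prefs) {
  const since = prefs.latestUpdate || '2010-01-01';
  const response = service(since); // {metadata: {latestUpdate}, entities}
  // Apply only the delta to the local store.
  for (const e of response.entities) store.save(e);
  // Advance the watermark only after the store succeeded, so a failed
  // sync is retried from the same point next time.
  prefs.latestUpdate = response.metadata.latestUpdate;
  return response.entities.length;
}
```

Note that the new watermark comes from the service metadata, not from the client's clock, which avoids problems with clock skew between device and server.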
As per our discussion in the comments above, you can model your core data object entries with version control like this
CoreDataEntityPerson:
name : String
name_version : int
image : BinaryData
image_version : int
You can now model the server XML in the following way:
<person>
<name>michael</name>
<name_version>1</name_version>
<image>string_converted_imageData</image>
<image_version>1</image_version>
</person>
Now, you can follow these steps:
When the response arrives and you parse it, you initially create a new object from the entity and fill in the data directly.
The next time you perform an update on the server, increase the version count of the changed entry by 1 and store it.
E.g. let's say the name michael is now changed to abraham; the version count of name_version on the server will then be 2.
This updated version count will come in the response data.
Now, while storing the data in the same object, if the version count is unchanged, the update of that entry can be skipped; if the version count has changed, that entry needs to be updated.
This way you can efficiently check each entry and perform updates only on the changed entries.
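The per-field version check in the steps above can be sketched in JavaScript like this (mergePerson and the hard-coded field list are made up for illustration):

```javascript
// Update each field of the local object only when the server's version
// counter for that field is newer; report whether anything changed.
function mergePerson(local, remote) {
  let changed = false;
  for (const field of ['name', 'image']) {
    const versionKey = field + '_version';
    if (remote[versionKey] > local[versionKey]) {
      local[field] = remote[field];
      local[versionKey] = remote[versionKey];
      changed = true;
    }
  }
  return changed;
}
```

Skipping unchanged fields matters most for heavy fields like the image binary data: a bumped name_version alone means only the name is rewritten, and the image is left untouched.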
Advice:
The above approach works best when you're dealing with a large amount of data to update.
In the case of simple text entries for an object, simply overwriting the data on all entries is efficient enough, and it also keeps the response model simple.