How to decrease the number of requests when inserting data into multiple tables? (Back4App)

I was inserting data (Back4app) into two tables at the same time, but it consumed a lot of requests. I ran it for 3 hours and inserted 3.28k records (there is no other action in that program); in the end it used 6.88k requests. Is that normal? And how can I decrease the number of requests?
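Back4App runs on the Parse platform, so one common way to cut the request count is to batch saves with the SDK's saveAll call instead of saving each object individually. A minimal sketch, assuming the Parse iOS SDK and hypothetical class names TableA and TableB:

```swift
import Parse

// A sketch, not the definitive fix: "TableA"/"TableB" and their fields
// are hypothetical; adjust them to your own schema.
func insertRows(_ rows: [(name: String, score: Int)]) {
    var objects: [PFObject] = []
    for row in rows {
        let a = PFObject(className: "TableA")   // hypothetical class name
        a["name"] = row.name
        let b = PFObject(className: "TableB")   // hypothetical class name
        b["score"] = row.score
        objects.append(a)
        objects.append(b)
    }
    // One batched call instead of one request per object; the SDK groups
    // the saves into batch requests under the hood.
    PFObject.saveAll(inBackground: objects) { succeeded, error in
        if succeeded {
            print("Saved \(objects.count) objects")
        } else {
            print("Batch save failed: \(error?.localizedDescription ?? "unknown")")
        }
    }
}
```

The same idea applies in the JavaScript SDK via Parse.Object.saveAll.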

Related

ActiveRecord query for all of last day's data and every 100th record prior

I have a process that generates a new record every 10 minutes. That was fine for a while, but Datum.all now returns 30k+ records, which is unnecessary since the purpose is simply to display them on a chart.
As a simple solution, I'd like to provide all data generated in the past 24 hours, but low-res data (every 100th record) prior to the last 24 hours (right back to the beginning of the dataset).
I suspect the solution is some combination of this answer, which selects every nth record (but was provided in 2010), and this answer, which combines two ActiveRecord relations.
But I cannot work out how to get a working implementation that obtains all the required data in one instance variable.
You can use an OR query (ActiveRecord relations support .or since Rails 5), assigned to a single instance variable:
@data = Datum.where("created_at > ?", 1.day.ago).or(Datum.where("id % 100 = 0"))

Optimize array data size while paginating

I know how to implement pagination with UITableView, but my question is that we always append the next page's data to the existing array, so the array grows with every page.
For example: we get 50 records on the first page, then request the next page, get 50 more records, and append them to the existing array, so the array now holds 100 records. I am requesting around 100 pages, so my array will end up with 5000 records. Holding the data from the earliest pages is not a good idea, since we rarely scroll back to the first pages after visiting 100 of them.
Is there any way to optimize the array size? Please help me with this; I have searched a lot but didn't find a good answer.
I would be very grateful for help, and sorry for my bad English.
I think you can achieve that by writing the "old" data to local storage, then retrieving it and inserting it back into your array when needed.
So, imagine that you've already fetched, let's say, 200 items. When the user scrolls down and you fetch the next page (the next 20 items), you "cut" items 0 to 99 from your array and write them to a file. Now your array has 120 items. Then, when the user continues scrolling and the count again reaches 220 (array.count >= 220), repeat the same logic, and so on.
Now the most interesting part: if the user scrolls back and the index of the top visible cell is < 100, you read the previously written data from the file (and remove it from the file) and insert it into your array at position 0.
And of course it would be better to clear all files of that kind on app launch.
Of course, the numbers I wrote above are magic numbers, and you should play with them to find the ones that best fit your needs.
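A minimal sketch of this spill-to-disk scheme, assuming a Codable Item model and the 100/200 thresholds from the description above (both are the "magic numbers" to tune):

```swift
import Foundation

struct Item: Codable { let id: Int }

final class SpillingBuffer {
    private(set) var items: [Item] = []   // what the table view reads
    private var chunkFiles: [URL] = []    // spilled chunks, oldest first
    private let dir = FileManager.default.temporaryDirectory

    // After appending a page, move the oldest 100 items to a file
    // once more than 200 items are held in memory.
    func append(page: [Item]) throws {
        items.append(contentsOf: page)
        while items.count > 200 {
            let chunk = Array(items.prefix(100))
            items.removeFirst(100)
            let url = dir.appendingPathComponent("chunk-\(chunkFiles.count).json")
            try JSONEncoder().encode(chunk).write(to: url)
            chunkFiles.append(url)
        }
    }

    // When the top visible row's index drops below 100, pull the most
    // recently spilled chunk back and insert it at position 0.
    func restoreChunkIfNeeded(topVisibleIndex: Int) throws {
        guard topVisibleIndex < 100, let url = chunkFiles.popLast() else { return }
        let chunk = try JSONDecoder().decode([Item].self, from: Data(contentsOf: url))
        try? FileManager.default.removeItem(at: url)
        items.insert(contentsOf: chunk, at: 0)
    }
}
```

The table view keeps reading from items; the only extra bookkeeping is calling restoreChunkIfNeeded from scrollViewDidScroll with the index of the top visible row.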

Performance impact on SELECT queries when a ClickHouse table is continuously populated with INSERT INTO

The ClickHouse table, with the MergeTree engine, is continuously populated with "INSERT INTO … FORMAT CSV" queries, starting empty. The average input rate is 7000 rows per second, and the insertion happens in batches of a few thousand rows. This has a severe performance impact when SELECT queries are executed concurrently. As described in the ClickHouse documentation, the system needs at most 10 minutes to merge the data of a specific table (re-index). But this is not happening, as the table is continuously populated.
This is also evident in the file system. The table folder has thousands of sub-folders and the index is over-segmented. If the data ingestion stops, after a few minutes the table is fully merged, and the number of sub-folders becomes a dozen.
To counter the above weakness, the Buffer engine was used to buffer the table data ingestion for 10 minutes. Consequently, the buffer's maximum number of rows is on average 4,200,000.
The initial table remains at most 10 minutes behind, as the buffer keeps the most recently ingested rows. The table finally merges, and the behaviour is the same as when the table stops being populated for a few minutes.
But the Buffer table, which corresponds to the combination of the buffer and the initial table, is getting severely slower.
From the above it appears that if the table is continuously populated, it does not merge, and indexing suffers. Is there a way to avoid this weakness?
The number of sub-folders in the table data directory is not such a representative value.
Indeed, each sub-folder contains a data part consisting of sorted (indexed) rows. If several data parts are merged into a new, bigger one, a new sub-folder appears.
However, source data parts are not removed instantly after the merge. There is a <merge_tree> setting, old_parts_lifetime, defining a delay after which the parts are removed; by default it is set to 8 minutes. There is also a cleanup_delay_period setting defining how often a background cleaner checks for and removes outdated parts; it is 30 seconds by default.
So, it is normal to have such a number of sub-folders for about 8 minutes and 30 seconds after ingestion starts. If that is unacceptable to you, you can change these settings.
It makes sense to check only the number of active parts in a table (i.e. parts which have not been merged into a bigger one). To do so, you could run the following query:
SELECT count() FROM system.parts WHERE database = 'db' AND table = 'table' AND active
Moreover, ClickHouse does such checks internally: if the number of active parts in a partition is greater than parts_to_delay_insert = 150, it will slow down INSERTs, and if it is greater than parts_to_throw_insert = 300, it will abort insertions.
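For reference, a sketch of how to inspect and override the settings mentioned above (the db/table names are placeholders and the override values are only examples):

```sql
-- Inspect the current MergeTree thresholds discussed above.
SELECT name, value
FROM system.merge_tree_settings
WHERE name IN ('old_parts_lifetime', 'cleanup_delay_period',
               'parts_to_delay_insert', 'parts_to_throw_insert');

-- Per-table overrides can be supplied at creation time, e.g.:
-- CREATE TABLE db.table (...) ENGINE = MergeTree() ORDER BY (...)
-- SETTINGS old_parts_lifetime = 120, parts_to_delay_insert = 300;
```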

How to display a bunch of data in a tableview

In my app I have 800,000 records on the server which I have to display to the user. The user can also search through those records. I am really confused about what to do here and how to achieve this functionality. I am trying to load the first 50 records into the table, and at the top there is a search bar from which the user can search the data; the user can also search by typing an approximate word (i.e. if the user types "bcd", it should return all records containing the combination "bcd"). Can anyone suggest something that will help me get out of this situation?
You have to do pagination here; without it you can't load that much data, and if you try, your application will crash. Fetch some data from the server, like 30 or 40 records, and when you reach record 30, request the next 30. That way you can meet the application's needs.
You need to use pagination in your application. Without pagination, if you fetch 8 lakh (800,000) records in one shot, your application might crash.
Every time, send a request to the server (for example, for "abc"); the server gets the first 10 records from the result and returns them. For the second request, the server returns records 11 to 20 from the result set.
I am a developer with SIMpalm. I would like to suggest the following answer.
Why not keep two arrays: one for displaying in the table view and another containing all results? When you search, search in the array which contains all the results and add the matches to the array shown in the table.
You will have to use pagination; I don't see any other way you can do this without eating up a lot of memory or, in the worst case, a sporadic crash. Pagination is the elegant way.
You can paginate both browsing and search. To avoid delays for the user you can preload data, e.g. for pages of 200 records, when the user reaches record 150 you start fetching the data for the next page.
Also, if your local/web server is taking more than a minute to load, you have a serious problem on the server that needs to be fixed; no user will wait a minute to reload or get new data.
I am no expert on servers/networking, but it should not take more than 10-15 seconds.
Think about the search logic as very similar to browsing all the data (a sketch follows this list):
Search/browse both need paging
Browse returns all the data in pages
Search returns specific data in pages
Search/browse preloads data after the user reaches a certain point
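A minimal sketch of paged browse and search in one table view controller, assuming a hypothetical backend that accepts query, page, and pageSize parameters (the fetch function below is a placeholder for your real API client):

```swift
import UIKit

final class PagedListViewController: UITableViewController, UISearchBarDelegate {
    private var rows: [String] = []
    private var page = 0
    private let pageSize = 50
    private var query = ""          // empty query means "browse everything"
    private var isLoading = false
    private var reachedEnd = false

    private func loadNextPage() {
        guard !isLoading, !reachedEnd else { return }
        isLoading = true
        fetch(query: query, page: page, pageSize: pageSize) { [weak self] newRows in
            guard let self = self else { return }
            self.rows.append(contentsOf: newRows)
            self.reachedEnd = newRows.count < self.pageSize   // short page = last page
            self.page += 1
            self.isLoading = false
            self.tableView.reloadData()
        }
    }

    // Preload the next page shortly before the user reaches the bottom.
    override func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell,
                            forRowAt indexPath: IndexPath) {
        if indexPath.row >= rows.count - 10 { loadNextPage() }
    }

    // A new search term resets paging; the server does the substring match.
    func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
        query = searchText
        rows = []
        page = 0
        reachedEnd = false
        loadNextPage()
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        rows.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "cell")
            ?? UITableViewCell(style: .default, reuseIdentifier: "cell")
        cell.textLabel?.text = rows[indexPath.row]
        return cell
    }

    // Placeholder network call: replace with a request to an endpoint that
    // filters server-side (e.g. a LIKE '%query%' match) and pages results.
    private func fetch(query: String, page: Int, pageSize: Int,
                       completion: @escaping ([String]) -> Void) {
        DispatchQueue.global().async {
            let result: [String] = []
            DispatchQueue.main.async { completion(result) }
        }
    }
}
```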

Load Large Data from multiple tables in parallel using multithreading

I'm trying to load about 10K records from 6 different tables in my UltraLite DB.
I have created different functions for the 6 different tables.
I have tried to load them in parallel using NSInvocationOperation, NSOperation, GCD, and subclassing NSOperation, but nothing is working out.
Loading 10K rows from one table takes 4 seconds and from another 5 seconds; if I put the two in a queue it takes 9 seconds, which means my code is not running in parallel.
How do I fix this performance problem?
There may be multiple ways of doing it. What I suggest is:
Set the number of rows for the table view to the exact count (10k in your case).
The table view is optimised to create only a few cells at the start (it follows a pull model), so cellForRowAtIndexPath will be called only a few times initially.
Keep an array and fetch only 50 entries at the start, along with a counter variable.
When the user scrolls the table view and the count passes 50, fetch the next 50 items (it takes very little time) and populate the cells with the next 50 records.
Keep doing the same thing.
Hope it works.
You should fetch records in chunks (i.e. 50-60 records at a time) and then, when the user reaches the end of the table, load another 50-60 records. Try your hand at this library: Bottom Pull to refresh more data in a UITableView.
Regarding parallelism, go with GCD, and reload the respective table when GCD's completion block is called; a sketch follows.
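A minimal sketch of the GCD approach, under stated assumptions: loadTable(named:) stands in for your real blocking per-table query, and the table names are hypothetical:

```swift
import Foundation

struct Row { let columns: [String] }

// Placeholder for the real (blocking) UltraLite query for one table.
func loadTable(named name: String) -> [Row] {
    return []
}

func loadAllTables(_ names: [String], completion: @escaping ([String: [Row]]) -> Void) {
    let group = DispatchGroup()
    let lock = NSLock()
    var results: [String: [Row]] = [:]

    for name in names {
        group.enter()
        DispatchQueue.global(qos: .userInitiated).async {
            let rows = loadTable(named: name)   // each fetch runs concurrently
            lock.lock(); results[name] = rows; lock.unlock()
            group.leave()
        }
    }
    // Fires once on the main queue after every fetch has finished;
    // reload the respective table views here.
    group.notify(queue: .main) {
        completion(results)
    }
}

// Usage: loadAllTables(["orders", "customers"]) { results in ... }
```

One caveat worth checking: if the database driver serializes every query through a single shared connection (common in embedded databases), the fetches will still run back-to-back even when dispatched concurrently, which would explain the 4 + 5 = 9 second timing; opening one connection per thread is usually needed for true parallelism.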
OK, you have to use Para and Time functions; look them up online for more info.
