ExpandableListView update every 2 seconds - task

I know this problem is caused by updating the data every 2 seconds.
The point is that I have an adapter backing an ExpandableListView while data is downloaded from the site in parallel. I have a timer function that refreshes my data every 2 seconds (notifyDataSetChanged).
Because of this, if I am changing the value of a SeekBar, the refresh cuts my slider off.
Does anybody have an idea how to update the data? Idea 1 -> update only some of the data. Idea 2 -> some other way of refreshing.
Please help :)
I don't know how to update only some rows of the ExpandableListView rather than all of them.

Related

How to Add Value to Log Before Resetting (CoreData & Swift)

Background info: I have a simple tally counter/habit-tracking app that populates a tableview with custom cells - the counters. Tapping on a cell brings up a detailed view of a specific counter that has the name, value, and the time period for that counter (daily/weekly/monthly/yearly/total). I have also stored in CoreData the startDate and endDate for each counter, so each counter resets after a certain time.
What I would like to do: Each time a counter resets (e.g. after a day or a week), I would like its current total value to be added to a log (preferably some sort of array) specific to that counter. This will then populate another tableview so that, after a few weeks, I can look back and see the previous weekly totals and compare them to the current week.
My data structure:
Entity: Counter
Attributes: Value (Int), startDate (NSDate), endDate (NSDate), timePeriod (Int). (Note: for timePeriod, each integer from 0 to 4 represents daily/weekly/monthly/yearly/no reset.)
My question: How can I implement this? Do I create another entity with a date and value attribute that is created each time a counter resets? I'm having trouble visualizing how to do this with CoreData.
P.S. I don't think using tableview and fetch requests is what I'm looking for.
Thanks so much for your help, and ask me if you need any clarification!
The simplest way is to not "reset" the counter but create a new one. You can then easily display the past counters. That would avoid creating a new entity.
To make it even simpler to distinguish them from active counters you could add a flag like active or archived. You could use that flag for a convenient predicate in your fetched results controller.
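A rough Swift sketch of that idea (this assumes a generated Counter NSManagedObject subclass with optional Date attributes, a name attribute, and a new Bool attribute named archived; the names here are illustrative, not from the question):

import CoreData

// When a counter's period has ended, archive it and create a fresh one
// instead of resetting its value - the archived counters become the "log".
func rolloverIfNeeded(_ counter: Counter, in context: NSManagedObjectContext) {
    guard let end = counter.endDate, end <= Date() else { return }

    counter.archived = true                       // assumed Bool attribute on Counter

    let next = Counter(context: context)
    next.name = counter.name
    next.value = 0
    next.timePeriod = counter.timePeriod
    next.startDate = end
    // Compute next.endDate from `end` and timePeriod, e.g. with Calendar.date(byAdding:).
    next.archived = false

    try? context.save()
}

// The history table view then fetches only the archived counters for a given name:
func archivedTotals(named name: String, in context: NSManagedObjectContext) throws -> [Counter] {
    let request = NSFetchRequest<Counter>(entityName: "Counter")
    request.predicate = NSPredicate(format: "archived == YES AND name == %@", name)
    request.sortDescriptors = [NSSortDescriptor(key: "endDate", ascending: false)]
    return try context.fetch(request)
}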

zabbix trigger based on one week old data

I am very new to Zabbix. I have tried my hand at triggers. What I was able to make out is that they can be set against some constant threshold. What I need is for the trigger to compare the current data with the data from exactly one week ago at that exact time, and fire an alert if the change is above some particular % threshold.
I tried some steps like keeping the current data and the one-week-old data in an external database and then querying that data with the Zabbix ODBC driver, but I got stuck when I was not able to compare two items.
If my description of the issue is confusing, let me know and I will clarify.
You can use the last() function for this.
For example, if we sample our data every 5 minutes and we want to compare the last value with the value from 10 minutes ago, we can use
(item.last(#1)/item.last(#3)) > 1.2
This will trigger an alert if the latest value is more than 20% greater than the value from 10 minutes ago.
From the documentation it is not very clear to me whether you can use seconds or whether they will be ignored (for example item.last(60) to get the value from 1 minute ago), but you can read more about the last() function here:
https://www.zabbix.com/documentation/2.4/manual/appendix/triggers/functions
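Applied to the original one-week comparison, a sketch along the same lines (assuming the item is sampled every 5 minutes, so the value from one week ago is the 2017th most recent sample, i.e. 7 × 24 × 12 = 2016 intervals back from #1, and assuming enough history is kept; the item key and the 20% threshold are placeholders):
(item.last(#1)/item.last(#2017)) > 1.2
Adjust the #N offset to match your actual sampling interval.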

Load Large Data from multiple tables in parallel using multithreading

I'm trying to load about 10K records from 6 different tables in my UltraLite DB.
I have created a different function for each of the 6 tables.
I have tried to load these in parallel using NSInvocationOperation, NSOperation, GCD, and subclassing NSOperation, but nothing is working out.
Loading 10K records from one table takes 4 seconds and from another takes 5 seconds; if I put these two in a queue it takes 9 seconds, which means my code is not running in parallel.
How can I improve the performance?
There may be multiple ways of doing it.
What I suggest would be:
Set the number of rows for the table view to the exact count (10K in your case).
The table view is optimised to create only a few cells at the start (it follows a pull model), so cellForRowAtIndexPath will be called only a few times initially.
Keep an array and fetch only 50 entries at the start, and keep a counter variable.
When the user scrolls the table view and the count goes past 50, fetch the next 50 items (it will take very little time) and populate the cells with that data.
Keep doing the same thing; a sketch of this paging approach is below.
Hope it works.
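A minimal Swift (UIKit) sketch of that paging approach; the Record type, the loadRecords function, and the hard-coded 10K count are placeholders standing in for your UltraLite query, not real UltraLite API:

import UIKit

struct Record { let title: String }            // hypothetical model type

// Placeholder for the UltraLite query - returns one page of rows.
func loadRecords(offset: Int, limit: Int) -> [Record] {
    return (offset..<min(offset + limit, 10_000)).map { Record(title: "Row \($0)") }
}

class PagedTableViewController: UITableViewController {
    private var records: [Record] = []
    private let pageSize = 50
    private let totalCount = 10_000            // the exact row count, as suggested above

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")
        records = loadRecords(offset: 0, limit: pageSize)        // first page only
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return totalCount                      // report the full count up front
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Fetch the next page just before the user scrolls past what is already loaded.
        if indexPath.row >= records.count - 1 && records.count < totalCount {
            records += loadRecords(offset: records.count, limit: pageSize)
        }
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = indexPath.row < records.count ? records[indexPath.row].title : ""
        return cell
    }
}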
You should fetch records in chunks (i.e. fetch 50-60 records at a time per table), and then, when the user reaches the end of the table, load another 50-60 records. Try this library: Bottom Pull to refresh more data in a UITableView
Regarding parallelism, go with GCD, and reload the respective table when GCD's completion block is called.
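A rough sketch of the GCD part (just a sketch; the loader closures below are placeholders for your six table-loading functions, and the reload would go inside the notify block):

import Foundation

// Placeholder loaders - replace each body with the query for one UltraLite table.
let loaders: [() -> Void] = (1...6).map { table in
    return { print("loaded table \(table)") }
}

let queue = DispatchQueue(label: "db.load", attributes: .concurrent)
let group = DispatchGroup()

for load in loaders {
    // Each block runs concurrently; if the database library is not thread-safe,
    // give each loader its own connection rather than sharing one.
    queue.async(group: group) {
        load()
    }
}

// Runs once on the main queue after all six loads have finished -
// this is where tableView.reloadData() (or per-table reloads) would go.
group.notify(queue: .main) {
    print("all six tables loaded")
}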
OK, you have to use Para and Time functions; look them up online for more info.

Data sorting and update of UICollectionViewCells. Is this a lost cause?

I have core data entries displayed in a collectionView, sorted from 1 2 3 ... n. New batches of entries are added as the user flips through the first n. Data is built from a JSON response obtained from a web server.
Because the first entry of the fetch request is associated with cell 0 - via the datasource delegate - it's not possible to add a new batch at the bottom of the collection view. If it's added from cell 0, the old cell contents are replaced by new ones; in short, the whole page seems to be replaced by new stuff, and the data the user was looking at is offset by the number of new entries. If the batch is large, it's simply buried. Furthermore, if the update is done from cell 0, all entries are made visible, which takes time and memory.
There are several options that I considered:
1) Data re-ordering, meaning instead of getting the fetch result as 1 2 3 4 ... n, I need the opposite, n ... 3 2 1 (nothing to do with a fetch using reverse-order sorting) straight from the fetch request. I'm not sure it's possible? Is there a Core Data trick that allows re-ordering the fetch result before it is presented to the UICollectionViewDataSource delegate?
2) Change the index path/cell association in collectionView:cellForItemAtIndexPath:, using (numberOfItemsInSection - 1 - indexPath.item); see the sketch after option 4. It creates several edge cases, as entries can be removed/updated in the view (hence numberOfItemsInSection changes), so I'd rather avoid it if I can...
3) adding new data from cell 0, ruled out for the reason I explained. There may be a solution: has anyone achieved a satisfactory result by setting a view offset? For example, if 20 new entries are added, then the content of cell 0 is moved to cell 20. So, we just need to tell the view controller to display from cell 20 onwards. Any image flipping or side effects I might expect?
4) Download a big chunk of the data and simply use the built-in Core Data faulting mechanism. But that's suboptimal, because I'm not sure exactly how much I should download - it's user dependent - and the initial request (JSON + Core Data) might take too long. That's what lazy fetching is there for anyway.
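For reference, a minimal sketch of the index reversal mentioned in option 2 (names are illustrative, not from the question):

import Foundation

// Map a collection-view item index onto a reversed fetch-result index,
// so that item 0 displays the last (newest) object returned by the fetch.
func reversedFetchIndex(forItem item: Int, totalCount: Int) -> Int {
    return totalCount - 1 - item
}

// With 100 fetched objects, cell 0 shows object 99 and cell 99 shows object 0.
print(reversedFetchIndex(forItem: 0, totalCount: 100))     // 99
print(reversedFetchIndex(forItem: 99, totalCount: 100))    // 0

// Inside collectionView(_:cellForItemAt:) this would look roughly like:
//   let index = reversedFetchIndex(forItem: indexPath.item, totalCount: fetchedObjects.count)
//   let entry = fetchedObjects[index]
// The edge cases mentioned above (inserts/deletes changing the count mid-update) still apply.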
Any advice someone facing the same problem could share?
Thanks!

using triggers to update Values

I'm trying to enhance the performance of a SQL Server 2000 job. Here's the scenario:
Table A has a maximum of 300,000 rows. If I update/delete the 100th row (based on insertion time), all the rows added after that row should update their values. Row no. 101 should update its value based on row no. 100, and row no. 102 should update its value based on row no. 101's updated value. For example:
Old Table:
ID        Value
100       220
101       (220/2) = 110
102       (110/2) = 55
...
Row no. 100 is updated with a new value: 300.
New Table:
ID        Value
100       300
101       (300/2) = 150
102       (150/2) = 75
...
The actual value calculation is more complex; this formula is just for simplicity.
Right now, a trigger is defined for update/delete statements. When a row is updated or deleted, the trigger adds the row's data to a log table. Also, a SQL job is created in the code-behind after the update/delete, which fires a stored procedure that iterates through all the following rows of table A and updates their values. The process takes ~10 days to complete for 300,000 rows.
When the SP gets fired, it updates the next rows' values. I think this causes the trigger to run again for each SP update and adds those rows to the log table too. Also, the task should be done on the DB side, as requested by the customer.
To solve the problem:
Modify the stored procedure and call it directly from the trigger. The stored procedure then drops the trigger and updates the next rows' values and then creates the trigger again.
There will be multiple instances of the program running simultaneously. If another user modifies a row while the SP is being executed, the system will not fire the trigger and I'll be in trouble! Is there any workaround for this?
What's your opinion about this solution? Is there any better way to achieve this?
Thank you.
First, about the update process. As I understand it, your procedure simply calls itself when it comes to updating the next row. With 300K rows this is certainly not going to be very fast, even without logging (though it would most probably take far fewer days to accomplish). But what is absolutely beyond me is how it is possible to update more than 32 rows that way without reaching the maximum nesting level. Maybe I've got the sequence of actions wrong.
Anyway, I would probably do that differently, with just one instruction:
UPDATE yourtable
SET @value = Value = CASE ID
        WHEN @id THEN @value
        ELSE @value / 2 /* basically, your formula */
    END
WHERE ID >= @id
OPTION (MAXDOP 1);
The OPTION (MAXDOP 1) bit of the statement limits the degree of parallelism for the statement to 1, thus making sure the rows are updated sequentially and every value is based on the previous one, i.e. on the value from the row with the preceding ID value. Also, the ID column should be made a clustered index, which it typically is by default, when it's made the primary key.
The other functionality of the update procedure, i.e. dropping and recreating the trigger, should probably be replaced by disabling and re-enabling it:
ALTER TABLE yourtable DISABLE TRIGGER yourtabletrigger
/* the update part */
ALTER TABLE yourtable ENABLE TRIGGER yourtabletrigger
But then, you are saying the trigger shouldn't actually be dropped/disabled, because several users might update the table at the same time.
All right then, we are not touching the trigger.
Instead I would suggest adding a special column to the table, the one the users shouldn't be aware of, or at least shouldn't care much of and should somehow be made sure never to touch. That column should only be updated by your 'cascading update' process. By checking whether that column was being updated or not you would know whether you should call the update procedure and the logging.
So, in your trigger there could be something like this:
IF NOT UPDATE(SpecialColumn)
BEGIN
    /* assuming that without SpecialColumn only one row can be updated */
    SELECT TOP 1 @id = ID, @value = Value FROM inserted;
    EXEC UpdateProc @id, @value;
    EXEC LogProc ...;
END
In UpdateProc:
UPDATE yourtable
SET @value = Value = @value / 2,
    SpecialColumn = SpecialColumn /* basically, just anything, since it can
                                     only be updated by this procedure */
WHERE ID > @id
OPTION (MAXDOP 1);
You may have noticed that the UPDATE statement is slightly different this time. As I understand it, your trigger is FOR UPDATE (= AFTER UPDATE), which means that the @id row is already going to be updated by the user. So the procedure should skip it and start from the very next row, and the update expression can now be just the formula.
In conclusion, I'd like to say that my test update involved 299,995 of my table's 300,000 rows and took approximately 3 seconds on my not-so-fast system. No logging, of course, but I think that should give you a basic picture of how fast it can be.
Big theoretical problem here. It is always extremely suspicious when updating one row REQUIRES updating 299,900 other rows. It suggests a deep flaw in the data model. Not that it is never appropriate, just that it is required far far less often than people think. When things like this are absolutely necessary, they are usually done as a batch operation.
The best you can hope for, in some miraculous situation, is to turn that 10 days into 10 minutes, but never even 10 seconds. I would suggest explaining thoroughly WHY this seems necessary, so that another approach can be explored.
