swap first node with last node using queue and deque only - linked-list

What I am trying to do: say I have 1,2,3,4,5. I dequeue, which removes the first element so it's 2,3,4,5, and put the removed 1 in a temp linked list; then I enqueue the 1, which gives 2,3,4,5,1. But I also want the 5 to be first.
ex:
5,2,3,4,1
I am new to this, so I apologize if it is hard to explain.
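A minimal sketch in Python, assuming `collections.deque` is allowed to stand in for both structures (its `popleft`/`append` are the queue operations, `pop`/`appendleft` the deque-only ones):

```python
from collections import deque

def swap_first_last(items):
    """Swap the first and last elements using only queue/deque operations."""
    q = deque(items)      # 1,2,3,4,5
    first = q.popleft()   # dequeue the 1 -> 2,3,4,5
    last = q.pop()        # remove the 5 from the back -> 2,3,4
    q.appendleft(last)    # put the 5 at the front -> 5,2,3,4
    q.append(first)       # enqueue the 1 at the back -> 5,2,3,4,1
    return list(q)

print(swap_first_last([1, 2, 3, 4, 5]))  # [5, 2, 3, 4, 1]
```

If only strict front-removal is permitted, the same effect needs the temp list from the question to cycle elements through, but the end state is identical.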


Google Sheets: Using "match" with a variable starting point

I'm new to using the "match" function. I have a list of URLs that I'm importing using the "importxml" function. In doing so, some of my cells are blank or don't contain data that I need. The data that I need seems to shift around on occasion. For instance, it may start in cell A135 one day and then shift to cell A139 another day. Once I find the first cell then I know that I need the next 100 cells. I've got all of that figured out.
=match("/football/players",A1:A,0)+1
This gets me the row where my data starts. For example, it returns a value of "135" to cell B1.
Here's where I'm stuck: Once I get the first 100 entries then there's another gap of variable length and then another list of 100 entries. I need something like:
=match("/football/players",A&(B1):A,0)+B1
This, obviously, doesn't work. I'm stuck on how to handle the "A&(B1)" part. I may need to use something completely different.
I think I need to somehow use the "INDIRECT" function, but I'm not sure how to write the syntax.
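One hedged sketch using INDIRECT to build the range string from the row number in B1 (the +B1 at the end converts MATCH's position within the shifted range back into an absolute row, mirroring the attempt above):

=MATCH("/football/players",INDIRECT("A"&B1&":A"),0)+B1

Because the range now starts at row B1, the earlier "/football/players" match is excluded, so MATCH finds the next occurrence after the gap.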

How do you pull data from one sheet to another, automatically pulling only the last row of data when it is entered? I need columns A-AO

I'm looking to pull data from a master copy, and only the most current row needs to go onto another spreadsheet. I need the whole row, and it contains words and numbers. The columns needed are A through AO. How can this be done?
=ARRAYFORMULA(INDEX(A1:AO500, MATCH(1,--ISBLANK(A1:A500),0)-1))
Up to the first 500 rows, the last non-blank row will be chosen and returned.
=ARRAYFORMULA(INDEX('Sheet1'!A1:AO500, MATCH(1,--ISBLANK('Sheet1'!A1:A500),0)-1))
EDIT: fixed bugs in the MATCH formula

Run workflow several times to load data from source to single target

I have a relational source with 60 records in it.
I want to run a workflow for this mapping so that the first run loads only 20 records (1-20), the second run the next 20 (21-40), and the third run the remaining 20 (41-60), all into a single target.
How can this be done?
One solution would be to:
use a mapping variable, e.g. $$rowAmount, with an initial value of 20
add a Sequence transformation to generate row numbers
use a Filter with the condition RowId > $$rowAmount - 20 AND RowId <= $$rowAmount
use the SETVARIABLE function to increase $$rowAmount by 20 and store it in the repository
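The steps above can be simulated outside Informatica; a minimal Python sketch of the filter window and the SETVARIABLE increment (the 60-row source and 20-row batches are from the question):

```python
def batch_window(row_amount, total=60, batch=20):
    """Rows passing the filter: RowId > $$rowAmount - batch AND RowId <= $$rowAmount."""
    return [row_id for row_id in range(1, total + 1)
            if row_amount - batch < row_id <= row_amount]

row_amount = 20                      # initial value of $$rowAmount
for run in range(1, 4):              # three workflow runs
    rows = batch_window(row_amount)
    print(f"run {run}: rows {rows[0]}-{rows[-1]}")
    row_amount += 20                 # SETVARIABLE($$rowAmount, $$rowAmount + 20)
```

This prints rows 1-20, 21-40, and 41-60 across the three runs, which is the behavior the answer's filter condition produces.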

Load Large Data from multiple tables in parallel using multithreading

I'm trying to load about 10K records from each of 6 different tables in my Ultralite DB.
I have created different functions for 6 different tables.
I have tried to load these in parallel using NSInvocationOperation, NSOperation, GCD, and subclassing NSOperation, but nothing is working out.
Loading 10K records from one table takes 4 seconds, and from another 5 seconds; if I put these two in a queue it takes 9 seconds, which means my code is not running in parallel.
How can I improve the performance?
There may be multiple ways of doing it.
What I suggest:
Set the number of rows for the table view to the exact count (10K in your case).
The table view is optimized to create only a few cells at the start (it follows a pull model), so cellForRowAtIndexPath will be called only a few times initially.
Keep an array and fetch only 50 entries at the start; keep a counter variable.
When the user scrolls the table view and the counter passes 50, fetch the next 50 items (it will take very little time) and populate the cells with them.
Keep doing the same thing.
Hope it works.
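A minimal sketch of that pull-model paging in Python; the page size of 50 is from the answer, and `fetch_page` is a placeholder for whatever query loads a slice of the table:

```python
class PagedDataSource:
    """Fetch rows from a backing store in fixed-size pages as the user scrolls."""
    PAGE_SIZE = 50

    def __init__(self, fetch_page):
        self.fetch_page = fetch_page      # callable(offset, limit) -> list of rows
        self.rows = []

    def row_at(self, index):
        # Load pages lazily until the requested index is available.
        while index >= len(self.rows):
            page = self.fetch_page(len(self.rows), self.PAGE_SIZE)
            if not page:
                raise IndexError(index)
            self.rows.extend(page)
        return self.rows[index]

# Usage with a fake 10K-row table:
table = [f"row {i}" for i in range(10_000)]
source = PagedDataSource(lambda off, lim: table[off:off + lim])
print(source.row_at(0))      # only the first page of 50 is loaded
print(source.row_at(120))    # pages are loaded up to index 120
```

In the real app, `row_at` corresponds to what cellForRowAtIndexPath asks for, so only the pages the user actually scrolls to are ever fetched.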
You should fetch records in chunks (i.e. fetch 50-60 records at a time). Then, when the user reaches the end of the table, load another 50-60 records. Try this library: Bottom Pull to refresh more data in a UITableView
Regarding parallelism, go with GCD, and reload the respective table when GCD's completion block is called.
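In Python terms (a thread pool playing the role of GCD's concurrent queue), a hedged sketch of submitting the six per-table fetches concurrently; `load_table` is a placeholder for the real Ultralite query. One caveat: if the database driver serializes access on a single connection, the fetches still run one after another regardless of threading, which would explain 4 s + 5 s adding up to 9 s:

```python
from concurrent.futures import ThreadPoolExecutor

def load_table(name):
    # Placeholder: the real code would run the per-table query here.
    return [f"{name}:{i}" for i in range(3)]

tables = ["t1", "t2", "t3", "t4", "t5", "t6"]

# Submit all six fetches at once so they can run concurrently,
# then collect each table's rows as the workers finish.
with ThreadPoolExecutor(max_workers=len(tables)) as pool:
    results = dict(zip(tables, pool.map(load_table, tables)))

print(sum(len(rows) for rows in results.values()))  # 18
```

The GCD equivalent would be one dispatch_async per table onto a concurrent queue, with a dispatch_group to know when all six have finished.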
OK, you have to use the Para and Time functions; look them up online for more info.

Data sorting and update of UIcollectionViewCells. Is this a lost cause?

I have core data entries displayed in a collectionView, sorted from 1 2 3 ... n. New batches of entries are added as the user flips through the first n. Data is built from a JSON response obtained from a web server.
Because the first entry of the fetch request is associated with cell 0 (via the data source delegate), it's not possible to add a new batch at the bottom of the collection view. If it's added from cell 0, old cell contents are replaced by new ones; in short, the whole page seems to be replaced by new stuff, and the data the user was looking at is offset by the number of new entries. If the batch is large, it's simply buried. Furthermore, if the update is done from cell 0, all entries are made visible, which takes time and memory.
There are several options that I considered:
1) Data re-ordering, meaning instead of getting the fetch result as 1 2 3 4 ... n, I need the opposite, n ... 3 2 1 (nothing to do with a fetch using reverse-order sorting), straight from the fetch request. I'm not sure it's possible: is there a Core Data trick allowing the fetch result to be re-ordered before it is presented to the UICollectionViewDataSource delegate?
2) Change the index path/cell association in collectionView:cellForItemAtIndexPath:, using (numberOfItemsInSection - 1 - indexPath.item). It creates several edge cases, as entries can be removed/updated in the view (hence numberOfItemsInSection changes), so I'd rather avoid it if I can...
3) Adding new data from cell 0 is ruled out for the reason I explained. There may be a solution: has anyone achieved a satisfactory result by setting a view offset? For example, if 20 new entries are added, the content of cell 0 moves to cell 20, so we just need to tell the view controller to display from cell 20 onwards. Are there any image-flipping or other side effects I might expect?
4) Download a big chunk of the data and simply use the built-in Core Data faulting mechanism. But that's suboptimal, because I'm not sure exactly how much to download (it's user-dependent), and the initial request (JSON + Core Data) might take too long. That's what lazy fetching is for anyway.
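Option 2's mapping can be sketched; assuming `count` items, display index `i` maps to data index `count - 1 - i`. Note the `- 1`: without it, display index 0 would index one past the end of the data:

```python
def data_index(display_index, count):
    """Map a collection-view item index to its newest-first data index."""
    if not 0 <= display_index < count:
        raise IndexError(display_index)
    return count - 1 - display_index

entries = ["oldest", "middle", "newest"]     # fetch result in ascending order
display = [entries[data_index(i, len(entries))] for i in range(len(entries))]
print(display)  # ['newest', 'middle', 'oldest']
```

The edge cases mentioned above come from `count` changing between the delegate callbacks; recomputing the mapping from the current item count on every call keeps it consistent.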
Any advice from someone who has faced the same problem?
Thanks !
