I am trying to migrate from RxJava1 to RxJava2. I am replacing all the places where I previously had Observable<Void> with Completable. However, I ran into one problem with the order of calls in the stream. When I was dealing with Observables and using map and flatMap, the code worked 'as expected'. The andThen() operator, however, seems to work a little differently. Here is some sample code that simplifies the problem:
public Single<String> getString() {
    Log.d("Starting flow..");
    return getCompletable().andThen(getSingle());
}

public Completable getCompletable() {
    Log.d("calling getCompletable");
    return Completable.create(e -> {
        Log.d("doing actual completable work");
        e.onComplete();
    });
}

public Single<String> getSingle() {
    Log.d("calling getSingle");
    if (conditionBasedOnActualCompletableWork) {
        return getSingleA();
    } else {
        return getSingleB();
    }
}
What I see in the logs in the end is:
1-> Log.d("Starting flow..")
2-> Log.d("calling getCompletable");
3-> Log.d("calling getSingle");
4-> Log.d("doing actual completable work");
As you can probably figure out, I would expect log 4 to appear before log 3 (after all, the name of the andThen() operator suggests that the code runs 'after' the Completable finishes its job). Previously I was creating the Observable<Void> using the Async.toAsync() operator, and the method that is now called getSingle() sat in a flatMap; that worked as I expected, so log 4 appeared before log 3.

I have tried changing the way the Completable is created, e.g. using fromAction or fromCallable, but it behaves the same, and I couldn't find any other operator to replace andThen(). To underline: the method must stay a Completable since it has nothing meaningful to return; it changes app preferences and other settings (and is used like that globally, mostly working 'as expected'), and those changes are needed later in the stream. I also tried wrapping getSingle() to create the Single myself and move the if statement inside the create block, but I don't know how to use the getSingleA/B() methods in there, and I need them because they have their own complexity and duplicating the code makes no sense.

Does anyone have an idea how to modify this in RxJava2 so it behaves the same? There are multiple places where I rely on a Completable job finishing before moving forward with the stream (refreshing a session token, updating the db, preferences, etc.), which was no problem in RxJava1 using flatMap.
You can use defer:
getCompletable().andThen(Single.defer(() -> getSingle()))
That way, you don't execute the contents of getSingle() immediately but only when the Completable completes and andThen switches to the Single.
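For completeness, a minimal sketch of the fix applied to the question's getString() (same method names as above; the logging is left out):

public Single<String> getString() {
    // assembly time: getCompletable() is called here, but the defer
    // supplier is not invoked yet
    return getCompletable()
            .andThen(Single.defer(() -> getSingle()));
}

With this, subscribing first runs the Completable's create block ("doing actual completable work"), and only after onComplete() does defer invoke getSingle(), so log 4 now shows up before log 3.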
I am interested in why the set method defined on Cell explicitly drops the old value on its last line. Shouldn't it be dropped implicitly (and its memory freed) anyway when the function returns?
use std::mem;
use std::cell::UnsafeCell;
pub struct Cell<T> {
    value: UnsafeCell<T>
}

impl<T> Cell<T> {
    pub fn set(&self, val: T) {
        let old = self.replace(val);
        drop(old); // Is this needed?
    } // old would drop here anyways?

    pub fn replace(&self, val: T) -> T {
        mem::replace(unsafe { &mut *self.value.get() }, val)
    }
}
So why not have set do this only:
pub fn set(&self, val: T) {
    self.replace(val);
}
Or does std::ptr::read do something I don't understand?
It is not needed, but calling drop explicitly can make the code easier to read in some cases. If set were written as just a call to replace, it would look like a simple wrapper around replace, and a reader might miss that it performs an additional action on top of calling the replace method (dropping the previous value). At the end of the day, though, it is somewhat subjective which version to use, and it makes no functional difference.
That being said, the real reason is that set did not always drop the previous value. Cell<T> previously implemented set by overwriting the existing value via unsafe pointer operations. It was later modified in rust-lang/rust#39264 (Extend Cell to non-Copy types) so that the previous value would always be dropped. The author (wesleywiser) likely wanted to show explicitly that the previous value is dropped when a new value is written to the cell, so the pull request would be easier to review.
Personally, I think this is a good usage of drop since it helps to convey what we intend to do with the result of the replace method.
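If you want to convince yourself that the explicit drop makes no functional difference, here is a small sketch (assuming the Cell type from the question is defined in the same module, so the private value field is accessible) using a type that prints when it is dropped; the old value is reported as dropped inside set whether or not the explicit drop(old) is there:

struct Noisy(u32);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping Noisy({})", self.0);
    }
}

fn main() {
    let cell = Cell { value: std::cell::UnsafeCell::new(Noisy(1)) };
    cell.set(Noisy(2));           // prints "dropping Noisy(1)" either way
    println!("set has returned"); // Noisy(2) is dropped when cell goes out of scope
}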
I have a situation where, once I get PagingData<T: UIModel>, I need to get additional data from a different API. The second API requires arguments that are present in the first API's response. Currently I am collecting in the UI layer in lifecycleScope like this:
lifecycleScope.launch {
    loadResults().collectLatest {
        PagingResultAdapter.submitData(lifecycle, it)

        // Extracting the data inside PagingData and setting it in the viewmodel.
        it.map { uiModel ->
            Timber.e("Getting data inside map function..")
            viewModel.setFinalResults(uiModel)
        }
    }
}
But the problem is that the map {} function on the PagingData doesn't run while the data is being fetched. The list is populated and the UI shows the items in the RecyclerView, but the map function never runs (I cannot see the log).
The UI layer's loadResults() function in turn calls viewModel.loadResults() with UI-level variables. In terms of paging everything is working fine, but I cannot transform the PagingData into UIModel in any layer.
The official site suggests using the map {} function:
https://developer.android.com/topic/libraries/architecture/paging/v3-transform#basic-transformations
But I don't get at which layer I should apply map {}, nor whether it should go before or after collecting. Any help is appreciated.
PagingData.map is a lazy transformation that runs during collection, when you call .submitData(pagingData). Since you are only submitting the original, un-transformed PagingData, your .map transform will never run.
You should apply the .map to the PagingData you will actually end up submitting in order to have it run. Usually this is done in the ViewModel, so that the results are also cached in case you end up in a config change or a cached scenario like navigating between fragments.
You didn't share your ViewModel or the place you are creating your Pager, but assuming this happens in a different layer, you would have something like:
MyViewModel.kt
fun loadResults() = Pager(...) { ... }
    .flow
    .map { pagingData ->
        pagingData.map { uiModel ->
            Timber.e("Getting data inside map function..")
            setFinalResults(uiModel)
            uiModel
        }
    }
    .cachedIn(viewModelScope)
MyUi.kt
viewModel.loadResults().collectLatest {
    pagingDataAdapter.submitData(it)
}
NOTE: You should use the suspending version of .submitData since you are using Flow / coroutines, because it can propagate cancellation directly instead of relying on a launched job + eager cancellation the way the non-suspending version does. There shouldn't be any visible impact, but it is more performant.
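For example, a sketch of the collection side (assuming a Fragment with viewLifecycleOwner and the pagingDataAdapter from above):

import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.flow.collectLatest
import kotlinx.coroutines.launch

// e.g. in onViewCreated()
viewLifecycleOwner.lifecycleScope.launch {
    viewModel.loadResults().collectLatest { pagingData ->
        // suspending overload: suspends until the next PagingData arrives,
        // so cancellation propagates through the coroutine
        pagingDataAdapter.submitData(pagingData)
    }
}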
Try with:
import androidx.paging.map
.flow.map { item ->
    item.map { it.yourTransformation() }
}
I suppose that even after reading the javadocs multiple times I don't get the difference between map and flatMap, apart from the synchronous vs. asynchronous transform. In the following code I don't get any events (it behaves as if the subscribe() was not there):
final Flux<GroupedFlux<String, TData>> groupedFlux =
        flux.groupBy(Event::getPartitionKey);

groupedFlux.subscribe(g -> g.delayElements(Duration.ofMillis(100))
        .map(this::doWork)
        .doOnError(throwable -> log.error("error: ", throwable))
        .onErrorResume(e -> Mono.empty())
        .subscribe());
However, flatMap() works. This works fine:
final Flux<GroupedFlux<String, TData>> groupedFlux =
        flux.groupBy(Event::getPartitionKey);

groupedFlux.subscribe(g -> g.delayElements(Duration.ofMillis(100))
        .flatMap(this::doWork)
        .doOnError(throwable -> log.error("error: ", throwable))
        .onErrorResume(e -> Mono.empty())
        .subscribe());
Why is that?
EDIT:
Added sample code for the doWork method as suggested in a comment.
private Mono<Object> doWork(Object event) {
    // do some work and possibly return Mono<Object> or
    return Mono.empty();
}
I think it's because your doWork method returns a Mono. The map operation simply passes along whatever you return as the next element, so the stream you subscribe to emits Mono<Object> elements. Your original flow only subscribes to the outer stream; nothing ever subscribes to those inner Monos, so they never produce anything. flatMap, in contrast, expects you to return a publisher explicitly, subscribes to it, and flattens its results into the stream.
Try modifying your doWork method so it does not return a Mono, and do the explicit wrapping with Mono.just in the flatMap operation.
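To make the difference concrete, here is a small standalone sketch (not the poster's code; just reactor-core, Flux.range as a stand-in source and a dummy doWork) showing that map leaves the inner Mono unsubscribed while flatMap subscribes to it:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class MapVsFlatMap {

    static Mono<String> doWork(Integer event) {
        return Mono.fromCallable(() -> {
            System.out.println("doing work for " + event);
            return "result " + event;
        });
    }

    public static void main(String[] args) {
        // map: the outer Flux emits Mono objects that nobody subscribes to,
        // so "doing work" is never printed
        Flux.range(1, 3)
            .map(MapVsFlatMap::doWork)
            .subscribe(inner -> System.out.println("got an unsubscribed " + inner));

        // flatMap: each inner Mono is subscribed and its result flattened,
        // so the work actually runs
        Flux.range(1, 3)
            .flatMap(MapVsFlatMap::doWork)
            .subscribe(result -> System.out.println("got " + result));
    }
}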
I have an application which is written entirely in the FRP paradigm, and I think I am having performance issues due to the way I am creating the streams. It is written in Haxe, but the problem is not language specific.
For example, I have this function which returns a stream that resolves every time the config file is updated for the given section:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, promhx, each step of the chain should remember its last resolved value, but I think every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using it rather than with the library.
Since this function is called everywhere, parsing the YAML every time it is needed is killing the performance; it takes up over 50% of the CPU time according to profiling.
As a fix I have done something like the following, using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem, but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution, maybe using a more functional approach (closures etc.) or even an extension I can add to the stream, like a cache function.
Another way I could do it is to create the streams beforehand and store them in fields that consumers can access. I don't like this approach because I don't want to make a field for every config section; I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects); // 1
        cached(1);
        trace(sideeffects); // 1
        cached(3);
        trace(sideeffects); // 2
        cached(3);
        trace(sideeffects); // 2
    }

    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return
            function (input:In)
                return switch m[input] {
                    case null: m[input] = f(input);
                    case output: output;
                }
    }
}
You may be able to find a more "functional" implementation of memoize down the road. But the important thing is that it is a separate concern now, and you can use it at will.
You may choose to memoize(parseYaml) so that toggling between two states of the file becomes very cheap once both have been parsed. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.
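For instance, an (untested) sketch of the memoize(parseYaml) idea applied to the question's getConfigSection; the stream helpers are the ones from the question, and parseYamlCached would be created once (e.g. as a field or in the constructor):

// identical file contents are only parsed once
var parseYamlCached = memoize(parseYaml);

function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYamlCached);
}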
I'm running on Magento 1.4, but have also verified the problem in 1.7.
Working with an instance of Varien_Data_Collection gives you Varien_Data_Collection::removeItemByKey. In my case, I'm removing items from the collection and later trying to get the updated size of that collection, like so:
$body = $this->getTable()->getBody();
echo $body->getHeight(); // Outputs 25
$body->deleteRow(1);
echo $body->getHeight(); // Still outputs 25
...
class My_Model_Body_Class extends Mage_Core_Model_Abstract {

    /** @var Varien_Data_Collection $_rows */
    protected $_rows;

    public function deleteRow($index) {
        $this->_rows->removeItemByKey($index);
        return $this;
    }

    public function getHeight() {
        return $this->_rows->getSize();
    }
}
...
Code limited for brevity.
So if you call my deleteRow method, the item will in fact be removed from the collection, but subsequent calls to get the size of that collection will always return the original count. Therefore, if I have 25 items in the collection, and remove 1, then a call to getSize on the collection returns 25.
I traced this back to the parent class, in Varien_Data_Collection::getSize:
/**
 * Retrieve collection all items count
 *
 * @return int
 */
public function getSize()
{
    $this->load();
    if (is_null($this->_totalRecords)) {
        $this->_totalRecords = count($this->getItems());
    }
    return intval($this->_totalRecords);
}
We see that the count hinges on the NULL status of the _totalRecords property. So it looks like a bug in core code. Am I reading this correctly? Should I just rely on a call to count on the items?
We see that the count hinges on the NULL status of the _totalRecords property. So it looks like a bug in core code. Am I reading this correctly?
Whether to interpret said behaviour as a bug or a feature lies in the eye of the beholder.
This behaviour is not 1.4 specific, btw; it works the same way up to the current CE version (1.8.1).
public function getSize()
{
    $this->load();
    if (is_null($this->_totalRecords)) {
        $this->_totalRecords = count($this->getItems());
    }
    return intval($this->_totalRecords);
}
Most people for sure expect a method named getSize() to always return the current size, so they may call it a bug, perhaps.
But if you take a closer look at the Varien_Data_Collection class, you'll notice that getSize() is not the only method that looks somewhat... "weird".
Take the addItem() and removeItemByKey() methods, for example.
Why don't they increment/decrement the _totalRecords property, when getSize() uses it?
Lazy Loading
The reason for these "weird" behaviours is that Varien_Data_Collection is basically designed around the Lazy Loading pattern. That means it allows loading of the collection to be delayed until the data is really needed.
To accomplish this, Varien_Data_Collection implements the IteratorAggregate and Countable interfaces. Their implementation points are the getIterator() and count() methods:
public function getIterator()
{
    $this->load();
    return new ArrayIterator($this->_items);
}

public function count()
{
    $this->load();
    return count($this->_items);
}
As you can see, both of these methods call load() first.
The result of this is that whenever you use foreach or the PHP function count() on the collection, the load() method will automatically be called first.
Now, for a default Varien_Data_Collection nothing special will happen, when load() is called, because load() only calls loadData() and loadData() only returns $this.
But when it comes to its heavily used child class Varien_Data_Collection_Db, then the call to load() will result in loading the collection and setting _totalRecords to the SQL count of loaded records.
For performance reasons the collection usually will be loaded once only (if it hasn't been loaded yet).
So you see, depending on the context in which you use Varien_Data_Collection, it makes perfect sense why getSize() behaves this way.
Should I just rely on a call to count on the items?
If your collection is not bound to complex or slow I/O, I'd say: yes.
You can simply use:
$n = count($this->_rows->getItems());
Or even this way:
$n = count($this->_rows->getIterator());
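Applied to the getHeight() method from your question, that could look like this (just a sketch; since the collection also implements Countable, count($this->_rows) would end up in the same count($this->_items) shown above):

public function getHeight() {
    // counts the items that are actually in the collection right now,
    // so it reflects removeItemByKey() immediately
    return count($this->_rows->getItems());
}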