I want to implement a drag-to-sort list, functioning like drag-sort-recyclerview/gridview, but using Jetpack Compose.
Use this library, it helps:
implementation("org.burnoutcrew.composereorderable:reorderable:0.6.1")
And this is an example of my usage:
val state = rememberReorderState()
val list = notes.toMutableList()

LazyColumn(
    state = state.listState,
    modifier = Modifier.reorderable(state, { a, b -> list.move(a, b) })
) {
    items(list, { it.id }) { noteIndex ->
        Note(
            Modifier
                .draggedItem(state.offsetByKey(noteIndex.id))
                .detectReorderAfterLongPress(state),
            note = noteIndex,
            onNoteClick = onNoteClick,
            onNoteCheckedChange = onNoteCheckedChange
        )
    }
}
I have two Publishers A and B. They are imbalanced, in that A will emit 3 values and then complete, while B will only emit 1 value and then complete (A can actually emit a variable number of values; B will stay at 1 value, if that helps):
A => 1, 2, 3
B => X
B also runs asynchronously and will likely only emit its value after A has already emitted its second value (see the diagram above). (B might also emit at any time, including after A has already completed.)
I'd like to publish tuples of A's values combined with B's values:
(1, X) (2, X) (3, X)
combineLatest is not up for the job as it will skip the first value of A and only emit (2, X) and (3, X). zip on the other hand will not work for me, because B only emits a single value.
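For illustration, here is a minimal sketch of what I mean, with both sources as PassthroughSubjects:

import Combine

let a = PassthroughSubject<Int, Never>()
let b = PassthroughSubject<String, Never>()

// combineLatest only pairs the *latest* value of A with X,
// so (1, "X") is never emitted once 2 has already arrived.
let cancellable = a.combineLatest(b)
    .sink { print($0) }

a.send(1)
a.send(2)
b.send("X")   // prints (2, "X")
a.send(3)     // prints (3, "X")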
I am looking for an elegant way to accomplish this. Thanks!
Edit and approach to a solution
A bit philosophical, but I think there is a fundamental question of whether you want to go the zip route or the combineLatest route. You definitely need some kind of storage for the faster publisher, to buffer events while you wait for the slower one to start emitting values.
One solution might be to create a publisher that collects events from A until B emits and then emits all of the collected events and continues emitting what A gives. This is actually possible through
let bufferedSubject1 = Publishers.Concatenate(
    prefix: Publishers.PrefixUntilOutput(upstream: subject1, other: subject2)
        .collect()
        .flatMap(\.publisher),
    suffix: subject1)
PrefixUntilOutput will collect everything until B (subject2) emits, and then switch to just passing A's output through regularly.
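Concretely, with the timeline from my example, bufferedSubject1 on its own should behave like this (a rough trace):

let check = bufferedSubject1.sink { print($0) }

subject1.send(1)    // buffered by collect()
subject1.send(2)    // buffered by collect()
subject2.send("X")  // prefix completes; 1 and 2 are flushed and printed
subject1.send(3)    // passed straight through by the suffix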
However, if you run
let cancel = bufferedSubject1.combineLatest(subject2)
    .sink(receiveCompletion: { c in
        print(c)
    }, receiveValue: { v in
        print(v)
    })
you are still missing the first value from A (1,X) -- this seems to be a bit like a race condition: Will bufferedSubject1 have all values emitted first or does subject2 provide a value to combineLatest first?
What I think is interesting is that without any async calls, the behavior seems to be undefined. If you run the sample below, sometimes™️ you get all values emitted; sometimes you are missing out on (1, X). Since there are no async calls and no DispatchQueue switching here, I would even assume this is a bug.
You can "dirty fix" the race condition by providing a delay or even just a receive(on: DispatchQueue.main) between bufferedSubject1 and combineLatest, so that before we continue the pipeline, we hand back control to the DispatchQueue and let subject2 emit to combineLatest.
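For reference, that dirty fix would look roughly like this (a sketch, not what I would call a solution):

let cancel = bufferedSubject1
    .receive(on: DispatchQueue.main)  // hand control back to the queue so subject2 can reach combineLatest first
    .combineLatest(subject2)
    .sink(receiveCompletion: { c in
        print(c)
    }, receiveValue: { v in
        print(v)
    })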
However, I would not deem that elegant, and I am still looking for a solution that uses zip semantics but does not require creating an infinite collection of the same value (which does not play well with sequential processing and unlimited demand, the way I see it).
Sample:
import Combine

var subject1 = PassthroughSubject<Int, Never>()
var subject2 = PassthroughSubject<String, Never>()

let bufferedSubject1 = Publishers.Concatenate(
    prefix: Publishers.PrefixUntilOutput(upstream: subject1, other: subject2)
        .collect()
        .flatMap(\.publisher),
    suffix: subject1)
let bufferedSubject2 = Publishers.Concatenate(
    prefix: Publishers.PrefixUntilOutput(upstream: subject2, other: subject1)
        .collect()
        .flatMap(\.publisher),
    suffix: subject2)
let cancel = bufferedSubject1.combineLatest(subject2)
    .sink(receiveCompletion: { c in
        print(c)
    }, receiveValue: { v in
        print(v)
    })
subject1.send(1)
subject1.send(2)
subject2.send("X")
subject2.send(completion: .finished)
subject1.send(3)
subject1.send(completion: .finished)
Ok, this was an interesting challenge and though it seemed deceptively simple, I couldn't find a simple elegant way.
Here's a working approach (though hardly elegant) that doesn't seem to suffer from the race condition of the PrefixUntilOutput/Concatenate combo.
The idea is to use combineLatest, but one that emits as soon as the first publisher emits, with the other value being nil, so that we don't lose the initial values. Here's a convenience operator that does that, which I called combineLatestOptional:
extension Publisher {
    func combineLatestOptional<Other: Publisher>(_ other: Other)
        -> AnyPublisher<(Output?, Other.Output?), Failure>
        where Other.Failure == Failure {
        self.map { Optional.some($0) }.prepend(nil)
            .combineLatest(
                other.map { Optional.some($0) }.prepend(nil)
            )
            .dropFirst() // drop the first (nil, nil)
            .eraseToAnyPublisher()
    }
}
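For example, fed with the A/B timeline from the question, combineLatestOptional emits roughly the following (a hypothetical trace, assuming PassthroughSubject sources a and b):

let a = PassthroughSubject<Int, Never>()
let b = PassthroughSubject<String, Never>()

let trace = a.combineLatestOptional(b)
    .sink { print($0) }

a.send(1)     // -> (Optional(1), nil)
a.send(2)     // -> (Optional(2), nil)
b.send("X")   // -> (Optional(2), Optional("X"))
a.send(3)     // -> (Optional(3), Optional("X"))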
Armed with the above, the second step in the pipeline uses scan to collect values into an accumulator until the other publisher emits its first value. There are 4 possible states of the accumulator, which I'm representing with a State<L, R> type:
fileprivate enum State<L, R> {
    case initial           // before any one publisher emitted
    case left([L])         // left emitted; right hasn't emitted
    case right([R])        // right emitted; left hasn't emitted
    case final([L], [R])   // final steady-state
}
And the final operator combineLatestLossless is implemented like so:
extension Publisher {
    func combineLatestLossless<Other: Publisher>(_ other: Other)
        -> AnyPublisher<(Output, Other.Output), Failure>
        where Failure == Other.Failure {
        self.combineLatestOptional(other)
            .scan(State<Output, Other.Output>.initial, { state, tuple in
                switch (state, tuple.0, tuple.1) {
                case (.initial, let l?, nil):       // left emits first value
                    return .left([l])               // -> collect left values
                case (.initial, nil, let r?):       // right emits first value
                    return .right([r])              // -> collect right values
                case (.left(let ls), let l?, nil):  // left emits another
                    return .left(ls + [l])          // -> append to left values
                case (.right(let rs), nil, let r?): // right emits another
                    return .right(rs + [r])         // -> append to right values
                case (.left(let ls), _, let r?):    // right emits after left
                    return .final(ls, [r])          // -> go to steady-state
                case (.right(let rs), let l?, _):   // left emits after right
                    return .final([l], rs)          // -> go to steady-state
                case (.final, let l?, let r?):      // final steady-state
                    return .final([l], [r])         // -> pass the values as-is
                default:
                    fatalError("shouldn't happen")
                }
            })
            .flatMap { status -> AnyPublisher<(Output, Other.Output), Failure> in
                if case .final(let ls, let rs) = status {
                    return ls.flatMap { l in rs.map { r in (l, r) } }
                        .publisher
                        .setFailureType(to: Failure.self)
                        .eraseToAnyPublisher()
                } else {
                    return Empty().eraseToAnyPublisher()
                }
            }
            .eraseToAnyPublisher()
    }
}
The final flatMap creates a Publishers.Sequence publisher from all the accumulated values. In the final steady-state, each array would just have a single value.
The usage is simple:
let c = pub1.combineLatestLossless(pub2)
    .sink { print($0) }
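As a quick sanity check against the scenario in the question (a sketch, assuming PassthroughSubject sources):

let pub1 = PassthroughSubject<Int, Never>()
let pub2 = PassthroughSubject<String, Never>()

let check = pub1.combineLatestLossless(pub2)
    .sink { print($0) }

pub1.send(1)
pub1.send(2)
pub2.send("X")   // should print (1, "X") and then (2, "X")
pub1.send(3)     // should print (3, "X")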
zip on the other hand will not work for me, because B only emits a single value.
Correct, so fix it so that that's no longer true. Start a pipeline at B. Using flatMap, turn its signal into a publisher for a sequence of that signal, repeated. Zip that with A.
Example:
import UIKit
import Combine

func delay(_ delay: Double, closure: @escaping () -> ()) {
    let when = DispatchTime.now() + delay
    DispatchQueue.main.asyncAfter(deadline: when, execute: closure)
}

class ViewController: UIViewController {
    var storage = Set<AnyCancellable>()
    let s1 = PassthroughSubject<Int, Never>()
    let s2 = PassthroughSubject<String, Never>()

    override func viewDidLoad() {
        super.viewDidLoad()
        let p1 = s1
        let p2 = s2.flatMap { (val: String) -> AnyPublisher<String, Never> in
            let seq = Array(repeating: val, count: 100)
            return seq.publisher.eraseToAnyPublisher()
        }
        p1.zip(p2)
            .sink { print($0) }
            .store(in: &storage)
        delay(1) {
            self.s1.send(1)
        }
        delay(2) {
            self.s1.send(2)
        }
        delay(3) {
            self.s1.send(3)
        }
        delay(2.5) {
            self.s2.send("X")
        }
    }
}
Result:
(1, "X")
(2, "X")
(3, "X")
Edit
After stumbling on this post I wonder if the problem in your example is not related to the PassthroughSubject:
PassthroughSubject will drop values if the downstream has not made any demand for them.
and in fact, using:
var subject1 = Timer.publish(every: 1, on: .main, in: .default, options: nil)
    .autoconnect()
    .measureInterval(using: RunLoop.main, options: nil)
    .scan(DateInterval()) { res, interval in
        .init(start: res.start, duration: res.duration + interval.magnitude)
    }
    .map(\.duration)
    .map { Int($0) }
    .eraseToAnyPublisher()

var subject2 = PassthroughSubject<String, Never>()

let bufferedSubject1 = Publishers.Concatenate(
    prefix: Publishers.PrefixUntilOutput(upstream: subject1, other: subject2)
        .collect()
        .flatMap(\.publisher),
    suffix: subject1)

let cancel = bufferedSubject1.combineLatest(subject2)
    .sink(receiveCompletion: { c in
        print(c)
    }, receiveValue: { v in
        print(v)
    })

subject2.send("X")

DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
    subject2.send("Y")
}
I get this output:
(1, "X")
(2, "X")
(3, "X")
(3, "Y")
(4, "Y")
(5, "Y")
(6, "Y")
And that seems to be the desired behavior.
I don't know if it is an elegant solution, but you can try to use Publishers.CollectByTime:
import PlaygroundSupport
import Combine

PlaygroundPage.current.needsIndefiniteExecution = true

let queue = DispatchQueue(label: "com.foo.bar")

// Source subjects (declarations assumed; not shown in the original snippet)
let letters = PassthroughSubject<String, Never>()
let indices = PassthroughSubject<Int, Never>()

let cancellable = letters
    .combineLatest(indices
        .collect(.byTimeOrCount(queue, .seconds(1), .max))
        .flatMap { indices in indices.publisher })
    .sink { letter, index in print("(\(letter), \(index))") }
indices.send(1)

DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
    indices.send(2)
    indices.send(3)
}
DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
    letters.send("X")
}
DispatchQueue.main.asyncAfter(deadline: .now() + 3.3) {
    indices.send(4)
}
DispatchQueue.main.asyncAfter(deadline: .now() + 3.5) {
    letters.send("Y")
}
DispatchQueue.main.asyncAfter(deadline: .now() + 3.7) {
    indices.send(5)
    indices.send(6)
}
Output:
(X, 1)
(X, 2)
(X, 3)
(Y, 3)
(Y, 4)
(Y, 5)
(Y, 6)
Algorithmically speaking, you need to:
collect all elements that A emits until B emits its event
store the element you just received from B
emit the pairs for the elements A has emitted so far
emit the rest of the elements that A emits after B has emitted its element
An implementation of the above algorithm can be done like this:
// `share` makes sure that we don't cause unwanted side effects,
// like restarting the work `A` does, as we subscribe multiple
// times to this publisher
let sharedA = a.share()

// state, state, state :)
var latestB: String!

var cancel = sharedA
    // take all elements until `B` emits
    .prefix(untilOutputFrom: b.handleEvents(receiveOutput: { latestB = $0 }))
    // wait on those elements
    .collect()
    // uncollect them
    .flatMap { $0.publisher }
    // make sure we deliver the rest of elements from `A`
    .append(sharedA)
    // now, pair the outputs together
    .map { ($0, latestB) }
    .sink(receiveValue: { print("\($0)") })
Maybe there's a way to avoid the state (latestB) and use a pure pipeline; I couldn't find it yet, though.
P.S. As an added bonus, if B is expected to emit more than one element, then with a simple change we can support this scenario too:
let sharedA = a.share()
var latestB: String!
let sharedB = b.handleEvents(receiveOutput: { latestB = $0 }).share()

var cancel = sharedA.prefix(untilOutputFrom: sharedB)
    .collect()
    .flatMap { $0.publisher }
    .append(sharedA)
    .map { ($0, latestB) }
    .sink(receiveValue: { print("\($0)") })
I want to clear all elements within a Skip list, like this:
Module mod = current()
Skip skip = create()
put(skip, 1, "test")
put(skip, 2, mod)
clearSkip(skip) // Removes all elements
Example script for deleting Skips of custom types; here, for the type OutLinkInfo:
struct OutLinkInfo {}
OutLinkInfo createOutLinkInfo_() { DxlObject d = new(); OutLinkInfo x = (addr_ d) OutLinkInfo; return(x) }
DxlObject DxlObjectOf(OutLinkInfo x) { return((addr_ x) DxlObject) }
void deleteOutLinkInfo(OutLinkInfo &x) { DxlObject d = DxlObjectOf(x); delete(d); x = null; return() }
Skip deleteOutLinkInfo(Skip sk)
{
OutLinkInfo x = null OutLinkInfo
for x in sk do { deleteOutLinkInfo(x) }
delete(sk); sk = null
return(sk)
}
You can use the setempty(Skip) function, although this specific overload is undocumented as far as I know.
Here is roughly the code that I want to change:
final List<Object> list = evaluators.parallelStream()
    .map(evaluator -> evaluator.evaluate())
    .flatMap(List::stream)
    .collect(Collectors.toList());
I want to change the evaluator.evaluate() method to return a Pair<List, List> instead. Something like:
final Pair<List<Object>, List<String>> pair = evaluators.parallelStream()
    .map(evaluator -> evaluator.evaluate())
    ...?
Such that if evaluatorA returned Pair<[1,2], [a,b]> and evaluatorB returned Pair<[3], [c,d]> then the end result is a Pair<[1,2,3], [a,b,c,d]>.
Thanks for your help.
I ended up implementing a custom collector for the Pair of Lists:
...
.collect(
    // supplier
    () -> Pair.of(new ArrayList<>(), new ArrayList<>()),
    // accumulator
    (accumulatedResult, evaluatorResult) -> {
        accumulatedResult.getLeft().addAll(evaluatorResult.getLeft());
        accumulatedResult.getRight().addAll(evaluatorResult.getRight());
    },
    // combiner
    (a, b) -> {
        a.getLeft().addAll(b.getLeft());
        a.getRight().addAll(b.getRight());
    }
);
I'm trying to build a stream that reads from an Avro topic, does a simple transformation, and then sends the result back in Avro format to another topic, and I'm kind of stuck on the final serialization part.
I have an Avro schema created; I'm importing it and using it to create the SpecificAvroSerde. But I don't know how to serialize the movie object back to Avro using this Serde.
This is the stream class:
class StreamsProcessor(val brokers: String, val schemaRegistryUrl: String) {

    private val logger = LogManager.getLogger(javaClass)

    fun process() {
        val streamsBuilder = StreamsBuilder()

        val avroSerde = GenericAvroSerde().apply {
            configure(mapOf(Pair("schema.registry.url", schemaRegistryUrl)), false)
        }
        val movieAvro = SpecificAvroSerde<Movie>().apply {
            configure(mapOf(Pair("schema.registry.url", schemaRegistryUrl)), false)
        }

        val movieAvroStream: KStream<String, GenericRecord> = streamsBuilder
            .stream(movieAvroTopic, Consumed.with(Serdes.String(), avroSerde))

        val movieStream: KStream<String, StreamMovie> = movieAvroStream.map { _, movieAvro ->
            val movie = StreamMovie(
                movieId = movieAvro["name"].toString() + movieAvro["year"].toString(),
                director = movieAvro["director"].toString(),
            )
            KeyValue("${movie.movieId}", movie)
        }

        // This is where I'm stuck: the call is wrong because movieStream is not a <String, movieAvro> stream
        movieStream.to(movieTopic, Produced.with(Serdes.String(), movieAvro))

        val topology = streamsBuilder.build()

        val props = Properties()
        props["bootstrap.servers"] = brokers
        props["application.id"] = "movies-stream"

        val streams = KafkaStreams(topology, props)
        streams.start()
    }
}
Thanks
The type of your result stream is KStream<String, StreamMovie>, and thus the value Serde used should be of type SpecificAvroSerde<StreamMovie>.
Why do you try to use SpecificAvroSerde<Movie>? If Movie is the desired output type, you should create a Movie object in your map step instead of a StreamMovie object, and change the value type of the result KStream accordingly.
Compare https://github.com/confluentinc/kafka-streams-examples/blob/5.4.1-post/src/test/java/io/confluent/examples/streams/SpecificAvroIntegrationTest.java