I'm trying to find a reasonably realistic way to stub an ExTwitter method that returns a Cursor, so that I can test recursively fetching data, rate limiting, and so on.
I've created a behaviour with the relevant callback, and I'm trying to use Mox to stub the friends function. It seems like I don't understand how pattern matching works with keyword-list arguments, though, because the second stub is overriding the first rather than the two stubs matching successive calls.
twitter_client
|> stub(:friends, fn _handle, [cursor: -1, count: _count] ->
  %ExTwitter.Model.Cursor{
    items: [active_user, inactive_user],
    next_cursor: 1,
    previous_cursor: -1
  }
end)

twitter_client
|> stub(:friends, fn _handle, [cursor: 1, count: _count] ->
  %ExTwitter.Model.Cursor{
    items: [active_user, inactive_user],
    next_cursor: 0,
    previous_cursor: 1
  }
end)
Note what the Mox documentation for stub/3 says: a later stub/3 call for the same function replaces any earlier one, so only your second stub is ever consulted; the arguments are never matched against both. If you want successive calls to return different values, use expect/4 instead: expectations are queued and consumed in order, so you can expect the cursor: -1 call first and the cursor: 1 call second.
I'm having a bit of a mental block using the iOS Combine framework.
I'm converting some code from "manual" fetching from a remote API to using Combine. Basically, the API is SQL and REST (in actual fact it's Salesforce, but that's irrelevant to the question). What the code used to do is call a REST query method that takes a completion handler. What I'm doing is replacing this everywhere with a Combine Future. So far, so good.
The problem arises when the following scenario happens (and it happens a lot):
We do a REST query and get back an array of "objects".
But these "objects" are not completely populated. Each one of them needs additional data from some related object. So for each "object", we do another REST query using information from that "object", thus giving us another array of "objects".
This might or might not allow us to finish populating the first "objects"; if not, we might have to do another REST query using information from each of the second "objects", and so on.
The result was a lot of code structured like this (this is pseudocode):
func fetchObjects(completion: @escaping ([Object]) -> Void) {
    let restQuery = ...
    RESTClient.performQuery(restQuery) { results in
        let partialObjects = results.map { ... }
        let group = DispatchGroup()
        for partialObject in partialObjects {
            let restQuery = ... // something based on partialObject
            group.enter()
            RESTClient.performQuery(restQuery) { results in
                let partialObjects2 = results.map { ... }
                partialObject.property1 = // something from partialObjects2
                partialObject.property2 = // something from partialObjects2
                // and we could go down yet _another_ level in some cases
                group.leave() // leave only after the object is populated
            }
        }
        group.notify(queue: .main) {
            completion(partialObjects)
        }
    }
}
Every time I write "results in" in the pseudocode, that's the completion handler of an asynchronous networking call.
Okay, well, I see well enough how to chain asynchronous calls in Combine, for example by using Futures and flatMap (pseudocode again):
let future1 = Future...
future1.map {
    // do something
}.flatMap {
    let future2 = Future...
    return future2.map {
        // do something
    }
}
// ...
In that code, the way we form future2 can depend upon the value we received from the execution of future1, and in the map on future2 we can modify what we received from upstream before it gets passed on down the pipeline. No problem. It's all quite beautiful.
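To make that concrete, here is a minimal compilable version of that chaining; fetchCount and fetchName are hypothetical stand-ins for real requests:
import Combine

// Toy version of the chaining pseudocode above; the second request
// depends on the value produced by the first.
func fetchCount() -> Future<Int, Error> {
    Future { promise in promise(.success(2)) }
}

func fetchName(for count: Int) -> Future<String, Error> {
    Future { promise in promise(.success("name-\(count)")) }
}

let cancellable = fetchCount()
    .map { $0 * 10 }                // transform the first result
    .flatMap { fetchName(for: $0) } // form the second future from it
    .sink(receiveCompletion: { print($0) },
          receiveValue: { print($0) }) // prints "name-20"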
But that doesn't give me what I was doing in the pre-Combine code, namely the loop. Here I was, doing multiple asynchronous calls in a loop, held in place by a DispatchGroup before proceeding. The question is:
What is the Combine pattern for doing that?
Remember the situation. I've got an array of some object. I want to loop through that array, doing an asynchronous call for each object in the loop, fetching new info asynchronously and modifying that object on that basis, before proceeding on down the pipeline. And each loop might involve a further nested loop gathering even more information asynchronously:
Fetch info from online database, it's an array
|
V
For each element in the array, fetch _more_ info, _that's_ an array
|
V
For each element in _that_ array, fetch _more_ info
|
V
Loop thru the accumulated info and populate that element of the original array
The old code for doing this was horrible-looking, full of nested completion handlers and loops held in place by DispatchGroup enter/leave/notify. But it worked. I can't get my Combine code to work the same way. How do I do it? Basically, my pipeline's output is an array of something; I feel like I need to split that array into individual elements, do something asynchronous to each element, and then put the elements back together into an array. How?
The way I've been solving this works, but doesn't scale, especially when an asynchronous call needs information that arrived several steps back in the pipeline chain. I've been doing something like this (I got this idea from https://stackoverflow.com/a/58708381/341994):
An array of objects arrives from upstream.
I enter a flatMap and map the array to an array of publishers, each headed by a Future that fetches further online stuff related to one object, and followed by a pipeline that produces the modified object.
Now I have an array of pipelines, each producing a single object. I merge that array and produce that publisher (a MergeMany) from the flatMap.
I collect the resulting values back into an array.
But this still seems like a lot of work, and even worse, it doesn't scale when each sub-pipeline itself needs to spawn an array of sub-pipelines. It all becomes incomprehensible, and information that used to arrive easily into a completion block (because of Swift's scoping rules) no longer arrives into a subsequent step in the main pipeline (or arrives only with difficulty because I pass bigger and bigger tuples down the pipeline).
There must be some simple Combine pattern for doing this, but I'm completely missing it. Please tell me what it is.
With your latest edit and this comment below:
I literally am asking is there a Combine equivalent of "don't proceed to the next step until this step, involving multiple asynchronous steps, has finished"
I think this pattern can be achieved with a .flatMap to an array publisher (Publishers.Sequence), which emits the elements one by one and then completes, followed by whatever per-element async processing is needed, and finalized with a .collect, which waits for all elements to complete before proceeding.
So, in code, assuming we have these functions:
func getFoos() -> AnyPublisher<[Foo], Error>
func getPartials(for: Foo) -> AnyPublisher<[Partial], Error>
func getMoreInfo(for: Partial, of: Foo) -> AnyPublisher<MoreInfo, Error>
We can do the following:
getFoos()
    .flatMap { fooArr in
        fooArr.publisher.setFailureType(to: Error.self)
    }
    // per-foo element async processing
    .flatMap { foo in
        getPartials(for: foo)
            .flatMap { partialArr in
                partialArr.publisher.setFailureType(to: Error.self)
            }
            // per-partial of foo async processing
            .flatMap { partial in
                getMoreInfo(for: partial, of: foo)
                    // build completed partial with more info
                    .map { moreInfo in
                        var newPartial = partial
                        newPartial.moreInfo = moreInfo
                        return newPartial
                    }
            }
            .collect()
            // build completed foo with all partials
            .map { partialArr in
                var newFoo = foo
                newFoo.partials = partialArr
                return newFoo
            }
    }
    .collect()
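For completeness, here is a minimal sketch of how the assumed functions might be stubbed out so the pipeline above can actually run; Foo, Partial, MoreInfo, and the Just-based bodies are all hypothetical:
import Combine

struct MoreInfo { let detail: String }
struct Partial { var moreInfo: MoreInfo? }
struct Foo { var partials: [Partial] = [] }

// Stand-in fetches; real code would hit the network here.
func getFoos() -> AnyPublisher<[Foo], Error> {
    Just([Foo(), Foo()])
        .setFailureType(to: Error.self)
        .eraseToAnyPublisher()
}

func getPartials(for foo: Foo) -> AnyPublisher<[Partial], Error> {
    Just([Partial(), Partial()])
        .setFailureType(to: Error.self)
        .eraseToAnyPublisher()
}

func getMoreInfo(for partial: Partial, of foo: Foo) -> AnyPublisher<MoreInfo, Error> {
    Just(MoreInfo(detail: "fetched"))
        .setFailureType(to: Error.self)
        .eraseToAnyPublisher()
}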
Using the accepted answer, I wound up with this structure:
head // [Entity]
    .flatMap { entities -> AnyPublisher<Entity, Error> in
        Publishers.Sequence(sequence: entities).eraseToAnyPublisher()
    }.flatMap { entity -> AnyPublisher<Entity, Error> in
        self.makeFuture(for: entity) // [Derivative]
            .flatMap { derivatives -> AnyPublisher<Derivative, Error> in
                Publishers.Sequence(sequence: derivatives).eraseToAnyPublisher()
            }
            .flatMap { derivative -> AnyPublisher<Derivative2, Error> in
                self.makeFuture(for: derivative).eraseToAnyPublisher() // Derivative2
            }.collect().map { derivative2s -> Entity in
                self.configuredEntity(entity, from: derivative2s)
            }.eraseToAnyPublisher()
    }.collect()
That has exactly the elegant tightness I was looking for! So the idea is:
We receive an array of something, and we need to process each element asynchronously. The old way would have been a DispatchGroup and a for...in loop. The Combine equivalent is:
The equivalent of the for...in line is flatMap and Publishers.Sequence.
The equivalent of the DispatchGroup (which handled the asynchronous completion) is a further flatMap (on the individual element) and some publisher. In my case I start with a Future based on the individual element we just received.
The equivalent of the right curly brace at the end is collect(), waiting for all elements to be processed and putting the array back together again.
So to sum up, the pattern is:
flatMap the array to a Sequence.
flatMap the individual element to a publisher that launches the asynchronous operation on that element.
Continue the chain from that publisher as needed.
collect back into an array.
By nesting that pattern, we can take advantage of Swift scoping rules to keep the thing we need to process in scope until we have acquired enough information to produce the processed object.
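Once you see it, the pattern is mechanical enough to package up. Here is a hypothetical helper, as a sketch (mapEachAsync is my own name, not a Combine API):
import Combine

extension Publisher {
    // For a publisher of arrays: run an asynchronous transform per element,
    // then collect the results back into a single array.
    func mapEachAsync<Element, Result>(
        _ transform: @escaping (Element) -> AnyPublisher<Result, Failure>
    ) -> AnyPublisher<[Result], Failure> where Output == [Element] {
        flatMap { array in
            array.publisher
                .setFailureType(to: Failure.self)
                .flatMap(transform) // the per-element asynchronous work
                .collect()          // wait for every element to finish
        }
        .eraseToAnyPublisher()
    }
}
One caveat: flatMap merges the per-element publishers concurrently, so collect() gathers results in completion order, not input order; pass maxPublishers: .max(1) to the inner flatMap if ordering matters.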
I am using Lua 5.3 from my C/C++ game to allow certain parts of its behavior to be scripted.
From the C++ program, every frame I call the Lua function main in the following manner:
lua_getglobal(VMState, "main"); // Lua 5.3: LUA_GLOBALSINDEX is gone, use lua_getglobal
int result = lua_pcall(VMState, 0, 0, 0);
I expect the script to define a function called main, which does a bunch of stuff. For example, I can have a script that does something like this:
local f = function()
    draw_something({visible = true, x = 0, y = 0})
end

main = function()
    f()
end
draw_something() is a callback to the C code, which does something interesting with the parameters passed:
lua_getfield(VMState, 1, "visible");
bool visible = (bool)lua_toboolean(VMState, 2);
lua_pop(VMState, 1);

if (!visible)
    return;

// Do some other stuff
Of interest is that by the time this callback is called, the anonymous table I passed as a parameter to draw_something on the Lua side is at stack position 1, so I can call lua_getfield() from the C side to access the "visible" field and do something with it.
This works pretty well, and I've done lots of stuff like this for years.
Now, I want to convert the Lua call to f into a coroutine, so I do something like this on the Lua side:
local f = function()
    draw_something({visible = true, x = 0, y = 0})
end

local g = coroutine.create(function()
    while true do
        f()
        coroutine.yield()
    end
end)

main = function()
    coroutine.resume(g)
end
The result should be the same. However, it now turns out that by moving the call to draw_something() inside a coroutine, the parameter I had passed to the function, which should have been a table, is now a thread? (lua_istable() returns 0, while lua_isthread() returns 1).
Interestingly, it doesn't matter how many parameters I pass to my function: 0, 1, 4, or 50; from inside the callback I only ever get one parameter, and it is a thread.
For some reason, this is happening with some functions that I exported, but not all. I can't see any difference in the way I'm exporting the different functions though.
Is there any reason why lua would switch my parameters to a thread?
I found the answer.
It turns out that the lua_State passed to your lua_CFunction is not guaranteed to be the same as the one you first got from lua_newstate().
Each coroutine gets its own lua_State with its own stack. If you always do stuff on the lua_State you got from lua_newstate(), you will be reading the wrong stack whenever the call comes from inside a coroutine (which would explain the single thread-valued "parameter" I was seeing), so you have to make sure you always use the lua_State that was passed to your lua_CFunction.
I have a sequence made up of multiple operators. There are a total of 7 places where errors can be generated during this sequence's processing. I'm running into an issue where the sequence does not behave as I expected, and I'm looking for an elegant solution to the problem:
let inputRelay = PublishRelay<Int>()
let outputRelay = PublishRelay<Result<Int>>()

inputRelay
    .map { /* may throw multiple errors */ }
    .flatMap { /* may throw error */ }
    .map { }
    .filter { }
    .map { _ -> Result<Int> in ... }
    .catchError { }
    .bind(to: outputRelay)
I thought that catchError would simply catch the error, allow me to convert it to a failure result, and prevent the sequence from being disposed. However, I see that the first time an error is caught, the entire sequence is disposed and no more events go through.
Without this behavior, I'm left with fugly Result<>s all over the place and have to branch my sequence multiple times to direct each Result.failure(Error) to the output. These are non-recoverable errors, so retry(n) is not an option:
let firstOp = inputRelay
    .map { /* may throw multiple errors */ }
    .share()

//--Handle first error results--
firstOp
    .filter { /* errorResults only */ }
    .bind(to: outputRelay)

let secondOp = firstOp
    .flatMap { /* may throw error */ }
    .share()

//--Handle second error results--
secondOp
    .filter { /* errorResults only */ }
    .bind(to: outputRelay)

secondOp
    .map { }
    .filter { }
    .map { _ -> Result<Int> in ... }
    .catchError { }
    .bind(to: outputRelay)
^ Which is very bad, because there are around 7 places where errors can be thrown and I cannot just keep branching the sequence each time.
How can RxSwift operators catch all errors and emit a failure result at the end, but NOT dispose the entire sequence on first error?
The first trick to come to mind is using materialize. This would convert every Observable<T> to Observable<Event<T>>, so an Error would just be a .next(.error(Error)) and won't cause the termination of the sequence.
In this specific case, though, another trick is needed: put your entire "trigger" chain inside a flatMap as well, and materialize that specific piece. This is needed because a materialized sequence can still complete; a completion would terminate a regular chain, but it will not terminate a flatMapped chain (inside a flatMap, completion just means the inner sequence is successfully done).
inputRelay
    .flatMapLatest { val in
        return Observable.just(val)
            .map { value -> Int in
                if value == 1 { throw SomeError.randomError }
                return value + value
            }
            .flatMap { value in
                return Observable<String>.just("hey\(value)")
            }
            .materialize()
    }
    .debug("k")
    .subscribe()

inputRelay.accept(1)
inputRelay.accept(2)
inputRelay.accept(3)
inputRelay.accept(4)
This will output the following for k:
k -> subscribed
k -> Event next(error(randomError))
k -> Event next(next(hey4))
k -> Event next(completed)
k -> Event next(next(hey6))
k -> Event next(completed)
k -> Event next(next(hey8))
k -> Event next(completed)
Now all you have to do is filter just the "next" events from the materialized sequence.
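If you don't want to pull in RxSwiftExt, that filtering can be done by hand. A minimal sketch, assuming RxSwift 5; process and SomeError are just stand-ins for the chain shown above:
import RxSwift
import RxRelay

enum SomeError: Error { case randomError }

// Stand-in for the throwing chain from the example above.
func process(_ value: Int) -> Observable<String> {
    Observable.just(value)
        .map { v -> Int in
            if v == 1 { throw SomeError.randomError }
            return v + v
        }
        .map { "hey\($0)" }
}

let inputRelay = PublishRelay<Int>()

let events = inputRelay
    .flatMapLatest { process($0).materialize() }
    .share() // one subscription drives both branches below

let elements = events.compactMap { $0.element } // only the .next payloads
let errors = events.compactMap { $0.error }     // only the wrapped errors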
If you have RxSwiftExt, you can simply use the errors() and elements() operators:
stream.elements()
    .debug("elements")
    .subscribe()

stream.errors()
    .debug("errors")
    .subscribe()
This will provide the following output:
errors -> Event next(randomError)
elements -> Event next(hey4)
elements -> Event next(hey6)
elements -> Event next(hey8)
When using this strategy, don't forget to add share() after your flatMap, so multiple subscriptions don't each redo the processing.
You can read more about why you should use share in this situation here: http://adamborek.com/how-to-handle-errors-in-rxswift/
Hope this helps!
Yes, it's a pain. I've thought about the idea of making a new library where the grammar doesn't require the stream to end on an error, but trying to reproduce the entire Rx ecosystem for it seems pointless.
There are reactive libraries that allow you to specify Never as the error type (meaning an error can't be emitted at all), and in RxCocoa you can use Driver (which can't error out), but you are still left with the whole Result dance. "Monads in my Monads!"
To deal with it properly, you need a set of Monad transformers. With these, you can do all the mapping/flatMapping you want and not worry about looking at the errors until the very end.
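For illustration, here is a sketch of one such helper in that spirit, assuming RxSwift 5 and Swift's Result; mapResult is hypothetical, not a standard operator:
import RxSwift

// Hypothetical transformer-style helper: map over the success side of an
// Observable<Result<T, Error>> without unwrapping at every step.
extension ObservableType {
    func mapResult<T, U>(
        _ transform: @escaping (T) throws -> U
    ) -> Observable<Result<U, Error>> where Element == Result<T, Error> {
        map { result in
            result.flatMap { value in
                Result { try transform(value) } // a throw becomes .failure
            }
        }
    }
}
With a family of such helpers (a flatMapResult, and so on), the chain stays alive on errors and only the final subscriber needs to branch on the failure case.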
Okay, I'm using Meck and I'm lost. My first language (which I've been writing for about 7 months) is Ruby, so I can't seem to wrap my brain around Meck mocking yet. I do get Ruby mocking, though. Hoping someone can help me. Also, I've only been writing Erlang for a week.
Updated Code (but mocking still isn't working)...
I have an Erlang console_io prompter module that looks like this:
-module(prompter).
-export([prompt/1, guess/0]).

prompt(Message) ->
    console_io:gets(Message).

gets() ->
    {_, [Input]} = io:fread("Enter: ", "~s"),
    Input.

guess() ->
    Guess_Input = gets(),
    Guess_List = convert_guess_to_list(Guess_Input).

convert_guess_to_list(Guess_Input) ->
    re:split(Guess_Input, "", [{return, list}, trim]).
My test now looks like this:
-module(prompter_test).
-include_lib("eunit/include/eunit.hrl").

guess_1_test() ->
    meck:new(prompter),
    meck:expect(prompter, gets, fun() -> "aaaa" end),
    ?assertEqual(prompter:guess(), ["a","a","a","a"]),
    ?assert(meck:validate(prompter)),
    meck:unload(prompter).
The error I'm getting is this:
Eshell V5.9.3.1 (abort with ^G)
1> prompter_test: guess_1_test (module 'prompter_test')...*failed*
in function prompter:guess/0
called as guess()
in call from prompter_test:guess_1_test/0 (test/prompter_test.erl, line 10)
in call from prompter_test:guess_1_test/0
**error:undef
I want to mock (stub?) the gets function in my test so that gets returns "aaaa", and then when I assert on guess() it should equal ["a", "a", "a", "a"].
How do I do this?
There are two problems:
You only mock one function (gets) with meck:expect, but the prompter module exports other functions as well. By default, Meck creates a replacement module that contains only the functions you explicitly mock, so everything else (including guess/0) becomes undefined; that's where the undef error comes from. You can change that by using the passthrough option:
meck:new(prompter, [passthrough]),
When you mock the gets function, all module-prefixed calls (i.e. prompter:gets()) are intercepted, but Meck has no way (yet?) of intercepting internal calls (e.g. the gets() call in the guess function), so you would still get the unmocked version of the function. There is no completely satisfactory way to avoid this. You could change the call in guess to prompter:gets(), or you could move gets into a separate module and mock that.
The first line says to create a new mocked module, my_library_module:
meck:new(my_library_module),
Next, we mock the function fib in my_library_module to return 21 when 8 is passed in:
meck:expect(my_library_module, fib, fun(8) -> 21 end),
We have some eunit assertions to test our mocked function. The code_under_test:run call is what you want to replace with the function that uses your mocked module, and the 21 is the result you are expecting from the function call:
?assertEqual(21, code_under_test:run(fib, 8)), % Uses my_library_module
?assert(meck:validate(my_library_module)),
Then we unload the mocked module:
meck:unload(my_library_module).
If you wanted to write the same test for your module, you could write:
my_test() ->
    meck:new(console_io),
    meck:expect(console_io, gets, fun() -> "aaaa" end),
    ?assertEqual(["a", "a", "a", "a"], console_io:get_guess()), % Uses console_io
    ?assert(meck:validate(console_io)),
    meck:unload(console_io).
I have skimmed through the Mochiweb code, but have not found any sign of the State variable.
Does something similar to gen_server's State variable exist in Mochiweb?
I need to store a small amount of state-related (not session-related) data on the server, and I do not want to use ETS or Mnesia for that.
I think you have somewhat misunderstood what gen_server state is.
First, let me explain briefly how mochiweb works.
Mochiweb doesn't produce a gen_server process per client. Instead, it just spawns a new process using proc_lib:spawn/3 and creates a parametrized module, which is, basically, a tuple of the following kind:
{mochiweb_request, #Port<0.623>, get, "/users", {1, 1}, []}
which is
{mochiweb_request, Socket, Method, RawPath, HTTPVersion, Headers}
This tuple is used as an argument to a function that you pass as a loop parameter to mochiweb_http:start/1. So, when this "loop" function is called, it will look like this:
handle_request(Req) ->
    %% The pattern matching below just shows what Req really is
    {mochiweb_request, _, _, _, _, _} = Req,
    ...
Now, on to the explanation of gen_server state.
Basically, gen_server is a process with approximately the following structure. Of course, IRL it's more complicated, but this should give you the general idea:
init(Options) ->
    State = ...,
    loop(Module, State).

loop(Module, State) ->
    NewState = receive
        {call, Msg, From} -> Module:handle_call(Msg, From, State);
        {cast, Msg} -> Module:handle_cast(Msg, State);
        Info -> Module:handle_info(Info, State)
    end,
    loop(Module, NewState).
So, state is just an argument that you drag through all the function calls and change inside your loop. It doesn't actually matter whether your process is a gen_server or not, or what lifetime it has. In the following example, the term [1, 2, 3] is a state too:
a() ->
    b([1, 2, 3], now()).

b(State, Timestamp) ->
    Result = do_something(Timestamp),
    c(State, Result).

c(State, Payload) ->
    exit({State, Payload}).
Now, back to mochiweb.
If you need to create a state of your own, you can just add an extra function argument:
handle_request(Req) ->
    User = Req:get(path),
    UserData = load_user_data(User),
    handle_request(Req, UserData).

handle_request(Req, UserData) ->
    ...
Now UserData is a state too. You can loop this process, or let it respond and end right away – but you won't lose UserData as long as you pass it as an argument.
Finally, if you really want to make this process a gen_server (which is really unreasonable in most cases), you can use the gen_server:enter_loop/3 function, which turns your current process into a gen_server. The third argument of this function is the state that will be stored inside the started gen_server.