I followed the steps from this link: https://substrate.dev/docs/en/tutorials/create-your-first-substrate-chain/setup
but I'm getting errors after running cargo build --release:
error[E0277]: the trait bound `<System as Trait>::BlockNumber: From<i32>` is not satisfied
   --> /Users/ion/.cargo/registry/src/github.com-1ecc6299db9ec823/frame-executive-2.0.0/src/lib.rs:471:37
    |
471 |         header.number().saturating_sub(1.into())
    |                                          ^^^^ the trait `From<i32>` is not implemented for `<System as Trait>::BlockNumber`
    |
    = note: required because of the requirements on the impl of `Into<<System as Trait>::BlockNumber>` for `i32`
help: consider further restricting the associated type
    |
449 |     pub fn offchain_worker(header: &System::Header) where <System as Trait>::BlockNumber: From<i32> {
Actually, there were a couple of other errors, but I managed to get past those. Any tips on how to resolve this one?
I have the following in my Google Cloud Storage:
Advertiser | Event
-----------|------------
100        | Click
101        | Impression
100        | Impression
100        | Impression
101        | Impression
The output of my pipeline should be something like:
Advertiser | Count
-----------|------
100        | 3
101        | 2
First I used GroupByKey; the output looks like:
100  Click, Impression, Impression
101  Impression, Impression
How do I proceed from here?
Instead of a GroupByKey, you may want to use a combining transform, which is a composite that can optimize by combining values both before and after the group-by-key step. Your pipeline can look something like this:
Python
collection_contents = [(100, 'Click'),
                       (101, 'Impression'),
                       (100, 'Impression'),
                       (100, 'Impression'),
                       (101, 'Impression')]
input_collection = pipeline | beam.Create(collection_contents)
counts = input_collection | Count.PerKey()
This should output a collection with the shape you are looking for. The Count family of transforms is available as apache_beam.transforms.combiners.Count.
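For reference, a minimal self-contained version might look like this (a sketch that assumes the Beam Python SDK is installed and uses the default DirectRunner):

import apache_beam as beam
from apache_beam.transforms.combiners import Count

with beam.Pipeline() as pipeline:  # DirectRunner by default
    counts = (pipeline
              | beam.Create([(100, 'Click'),
                             (101, 'Impression'),
                             (100, 'Impression'),
                             (100, 'Impression'),
                             (101, 'Impression')])
              | Count.PerKey()   # yields (100, 3) and (101, 2)
              | beam.Map(print))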
Java
The same transforms exist for Java in the org.apache.beam.sdk.transforms package:
PCollection<KV<Integer, Long>> resultColl = inputColl.apply(Count.perKey());
This counting pattern is described in the 'word count' sample of Apache Beam; see wordcount.py in the Apache Beam samples on GitHub. The counting starts at line 95.
I've inherited a binary file format with the following specification:
   | F | E | D | C | B | A | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
0: | Status bit | ------- 15-bit unsigned integer --------
1: | Status bit | ----- uint:10 ----- | ---- uint:5 ----
Bit matching in Erlang is awesome. So I'd love to do something like this:
<<StatBit1:1, ValA:15/unsigned>> = <<2#1000000000101010:16>>.
<<StatBit2:1, ValB:10/unsigned, ValC:5/unsigned>> = <<2#0000001010100111:16>>.
The problem is that the file I need to process stores each 16-bit word in little-endian byte order. So the very first 8 bits of the file in the example above would be 00101010, then 10000000, etc.
{ok, S} = file:open("datafile", [read, binary, raw]).
{ok, <<Byte1:8, Byte2:8, Byte3:8, Byte4:8>>} = file:read(S, 4).
io:format(
    " ~8.2.0B | ~8.2.0B | ~8.2.0B | ~8.2.0B ~n ",
    [Byte1, Byte2, Byte3, Byte4]).
% 00101010 | 10000000 | 10100111 | 00000010
% ok
So I resort to reading and swapping the bytes:
<<StatBit1:1, ValA:15/unsigned>> = <<Byte2:8, Byte1:8>>.
<<StatBit2:1, ValB:10/unsigned, ValC:5/unsigned>> = <<Byte4:8, Byte3:8>>.
Alternatively, I can read 16-bit little-endian words and then "parse" them:
{ok, S} = file:open("datafile", [read, binary, raw]).
{ok, <<DW1:16/little, DW2:16/little>>} = file:read(S,4).
<<StatBit1:1, ValA:15/unsigned>> = <<DW1:16>>.
<<StatBit2:1, ValB:10/unsigned, ValC:5/unsigned>> = <<DW2:16>>.
Both solutions make me equally frustrated. I still suspect that there is a nice way of dealing with this type of situation. Is there?
I'd first look into changing the application generating these files to write the data in network (big-endian) order. If that's not possible, then you're stuck with byte swapping like you're already doing. You could wrap the swapping into a function to keep it out of your decoding logic:
%% Read one 16-bit word from F and return it with its two bytes swapped.
byteswap16(F) ->
    case file:read(F, 2) of
        {ok, <<B1:8,B2:8>>} -> {ok, <<B2:8,B1:8>>};
        Else -> Else
    end.
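Used together with the bit matching from the question, decoding the two example words might look like this (a sketch; byteswap16/1 as defined above):

{ok, S} = file:open("datafile", [read, binary, raw]).
{ok, W1} = byteswap16(S).
{ok, W2} = byteswap16(S).
<<StatBit1:1, ValA:15/unsigned>> = W1.
<<StatBit2:1, ValB:10/unsigned, ValC:5/unsigned>> = W2.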
Alternatively, perhaps you could preprocess the file. You mentioned in your comment that the files are huge, so maybe this isn't practical for your case, but if each file fits comfortably in memory you could use file:read_file/1 to read the whole file and then preprocess the contents using a binary comprehension:
byteswap16(Filename) ->
    {ok, Bin} = file:read_file(Filename),
    << <<B2:8,B1:8>> || <<B1:8,B2:8>> <= Bin >>.
Both these solutions assume the entire file is written in 16-bit little endian format.
As an explanation of why the binary syntax (as it is) can't solve your problem, consider that the bits in your file really are in order 7, ...0, F, E, ...8. The status bit is in F, but if you say "the next field is 15 bits long, and is a little-endian unsigned integer", you'll get bits 7, ...0, F, E, ...9 (the next 15 bits), which will then be interpreted as little-endian. You can't express the fact that you'd like to skip bit F, use E-8 instead, and then go back and pick up bit F for the status. If you could byte-swap the file first, e.g. with "dd if=infile of=outfile conv=swab", you'd make your life a whole lot easier.
Did you try something like this?
[edit] Made some corrections, but I can't test this on my tablet.
%% Word 0: low byte first (A), then the status bit (1) and the high 7 bits (B).
decode(<<A:8, 1:1, B:7>>) -> {status1, B*256 + A};
%% Word 1: low byte holds the 3 low bits of uint:10 (A) and uint:5 (C),
%% then the status bit (0) and the high 7 bits of uint:10 (B).
decode(<<A:3, C:5, 0:1, B:7>>) -> {status2, B*8 + A, C}.
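For example, reading one 16-bit word at a time and decoding it (a sketch; assumes decode/1 above is compiled into the calling module, and reuses the file-reading code from the question — the first example word yields {status1, 42}):

{ok, S} = file:open("datafile", [read, binary, raw]).
{ok, Word} = file:read(S, 2).
decode(Word).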
Does anyone know if this is possible with SpecFlow? In the spirit of not having more than one assertion per test, I was hoping SpecFlow would treat each "Then" as a separate test.
I have a Scenario with multiple "Then" steps that looks something like this (snippet):
When a summary for day <day number> is requested
Then the summary well id should be "134134"
And the summary well name should be "My Well --oops!"
And the summary violated rules should be <violations>
And the summary variance should be <variance>
Examples:
| day number | violations      | variance |
| 0          | BadTotal,HpOnLp | -33      |
| 3          | BadTotal        | -133.33  |
| 5          |                 | 0        |
The second assertion, "My Well --oops!", should fail. What I want is for SpecFlow to run the assertions that follow it as well.
What I get:
When a summary for day 0 is requested
-> done: DprSummarySteps.WhenASummaryForDayIsRequested(0) (0.0s)
Then the summary well id should be "134134"
-> done: DprSummarySteps.ThenTheSummaryWellIdShouldBe("134134") (0.0s)
And the summary well name should be "My Well --oops!"
-> error: Assert.AreEqual failed. Expected:<My Well --oops!>. Actual:<My Well>.
And the summary violated rules should be BadTotal,HpOnLp
-> skipped because of previous errors
And the summary variance should be -33
-> skipped because of previous errors
Assert.AreEqual failed. Expected:<My Well --oops!>. Actual:<My Well>.
What I want:
When a summary for day 0 is requested
-> done: DprSummarySteps.WhenASummaryForDayIsRequested(0) (0.0s)
Then the summary well id should be "134134"
-> done: DprSummarySteps.ThenTheSummaryWellIdShouldBe("134134") (0.0s)
And the summary well name should be "My Well --oops!"
-> error: Assert.AreEqual failed. Expected:<My Well --oops!>. Actual:<My Well>.
And the summary violated rules should be BadTotal,HpOnLp
-> done: DprSummarySteps.ThenTheViolatedRulesShouldBe("BadTotal,HpOnLp") (0.0s)
And the summary variance should be -33
-> done: DprSummarySteps.ThenTheVarianceShouldBe(-33) (0.0s)
Assert.AreEqual failed. Expected:<My Well --oops!>. Actual:<My Well>.
I don't believe that you can do this, but my question to you would be: why do you want that? With any unit test, it will fail at the first assertion. You wouldn't expect a test to continue execution after an assertion failed, and this is no different. Surely knowing that the test failed for some reason is enough. In this specific case you might be able to make separate independent assertions which provide useful info, but in the general case, assertions that come after one that has failed might be completely meaningless.
If you want each assertion to be independent of the others, then you need to break the scenario into several scenarios, with each of your current And steps as its own Then step (see the sketch below). You might be able to use a Background step to do the common setup.
I'm not sure that this will help you, though, as it seems like your assertions are all related to the examples, so you end up needing to repeat the examples.
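For instance, something like this (a sketch based on the scenario above; note that the Examples table has to be repeated for each outline):

Scenario Outline: Summary well name
    When a summary for day <day number> is requested
    Then the summary well name should be "My Well"

    Examples:
      | day number |
      | 0          |

Scenario Outline: Summary variance
    When a summary for day <day number> is requested
    Then the summary variance should be <variance>

    Examples:
      | day number | variance |
      | 0          | -33      |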
You can do this now if you upgrade to SpecFlow 2.4 and are using NUnit; see the Multiple Asserts feature described at https://github.com/nunit/docs/wiki/Multiple-Asserts:
Assert.Multiple(() =>
{
    Assert.AreEqual(5.2, result.RealPart, "Real part");
    Assert.AreEqual(3.9, result.ImaginaryPart, "Imaginary part");
});
I have identified one way to do this. Internally, before running any step, SpecFlow's TestExecutionEngine checks the scenario's last execution status and does not proceed if it is not OK. So what we can do is add the following line in an [AfterStep] hook:
typeof(ScenarioContext).GetProperty("ScenarioExecutionStatus").SetValue(this.ScenarioContext, ScenarioExecutionStatus.OK);
Replace this.ScenarioContext with whatever object of type ScenarioContext exists in your binding class; a complete hook is sketched below. This resets the current status to OK and allows execution to proceed to the next step.
Keep in mind this won't allow you to catch all assert failures in a single step.
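Put together, the whole hook might look like this (a sketch; the class name is arbitrary, and constructor injection of ScenarioContext is assumed to be available):

using TechTalk.SpecFlow;

[Binding]
public class ContinueOnErrorHooks
{
    private readonly ScenarioContext _scenarioContext;

    // ScenarioContext is injected by SpecFlow's built-in DI.
    public ContinueOnErrorHooks(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [AfterStep]
    public void ResetStatusAfterFailedStep()
    {
        // Force the scenario status back to OK so the next step still runs.
        typeof(ScenarioContext)
            .GetProperty("ScenarioExecutionStatus")
            .SetValue(_scenarioContext, ScenarioExecutionStatus.OK);
    }
}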
I'm writing a hook to be run before the execution of every step. The hook function basically manipulates the arguments given to the step.
Here is the code I'm using (the last two lines are for testing):
/** @BeforeStep */
public function beforeStep($event) {
    $step_node = $event->getStep();
    $args = $step_node->getArguments();
    print_r($args);
    die();
}
$step_node is an instance of StepNode
$args is supposed to be an array of arguments relating to that step.
For any given step I test this on, the argument array is always empty. I also tried printing out the arguments using the AfterStep hook and the array is still empty.
Am I missing something as to how behat grabs arguments and deals with steps?
getArguments() returns an array of multiline step arguments such as Behat\Gherkin\Node\TableNode, which gives access to table rows; it will be empty for steps that have no table (or pystring) attached. For example:
Given the following users:
| name | followers |
| everzet | 147 |
| avalanche123 | 142 |
| kriswallsmith | 274 |
| fabpot | 962 |
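A step definition receiving that table might look like this (a sketch; the step wording matches the example above):

/**
 * @Given /^the following users:$/
 */
public function theFollowingUsers(\Behat\Gherkin\Node\TableNode $table)
{
    // getHash() returns one associative array per row, keyed by the header.
    foreach ($table->getHash() as $row) {
        // $row['name'], $row['followers']
    }
}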
You can try parsing the arguments out of $step_node->getText(), but it would probably be better to use a transformation, which allows you to process any arguments before the step is run. Here is one example from the Behat Mink documentation:
/**
 * @Transform /^user (.*)$/
 */
public function castUsernameToUser($username)
{
    return new User($username);
}
I have an Erlang application that is getting a little too resource-hungry to stay on one node. I'm in the process of making gen_servers move from one node to another - which turns out to be relatively easy. I'm at the last hurdle: getting the factory process that creates these gen_servers to spawn them on the remote node instead of the local one. The default behavior of start_link is clearly to start locally only, but I don't see any option to change that.
It would seem that I'm going to have to be inventive with the solution and wanted to see if anyone out there had already implemented something like this with any success. IOW, what's the recommended solution?
EDIT
I'm looking at the chain of calls that are triggered by calling:
gen_server:start_link(?MODULE, Args, [])
gen_server:start_link/3:

start_link(Mod, Args, Options) ->
    gen:start(?MODULE, link, Mod, Args, Options).

gen:start/5:

start(GenMod, LinkP, Mod, Args, Options) ->
    do_spawn(GenMod, LinkP, Mod, Args, Options).

gen:do_spawn/5:

do_spawn(GenMod, link, Mod, Args, Options) ->
    Time = timeout(Options),
    proc_lib:start_link(?MODULE, init_it,
                        [GenMod, self(), self(), Mod, Args, Options],
                        Time,
                        spawn_opts(Options));

proc_lib:start_link/5:

start_link(M, F, A, Timeout, SpawnOpts) when is_atom(M), is_atom(F), is_list(A) ->
    Pid = ?MODULE:spawn_opt(M, F, A, ensure_link(SpawnOpts)),
    sync_wait(Pid, Timeout).
Which finally gets us to the interesting bit. There is a spawn_opt/4 that matches:

spawn_opt(M, F, A, Opts) when is_atom(M), is_atom(F), is_list(A) ->
    ...

BUT, there is one that would actually be useful to me:

spawn_opt(Node, M, F, A, Opts) when is_atom(M), is_atom(F), is_list(A) ->
    ...
It boggles my mind that this isn't exposed. I realize that there is a risk that a careless programmer might try to gen_server:start_link a process on an Erlang node that happens to be running on Mars, blocking the call for half an hour, but surely that's the programmer's lookout. Am I really stuck with modifying OTP or writing some sort of ad-hoc solution?
We don't start_link a server on the remote node directly. For good program structure and simplicity, we start a separate application on the remote node and delegate the creation of remote processes to a certain process running in that remote application (see the sketch after the diagram below).
Since linking to a process is mainly for the purpose of supervising or monitoring, we prefer doing the linking with local supervisors instead of remote processes. If you need the liveness status of any remote process, I recommend erlang:monitor and erlang:demonitor.
A typical distributed set-up:
Node1                                     Node2
+---------------+                         +---------------+
| App1          |                         | App2          |
|  Supervisor1  |  Proc Creation Request  |  Supervisor2  |
|   Processes   | ----------------------> |       |       |
|   ......      |                         |       | Create Children
|   ......      |         Monitor         |       v       |
|   ......      | ----------------------> |   Processes   |
+---------------+                         |   ......      |
                                          +---------------+
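The request side might look something like this (a sketch; the registered name factory and the {start_child, Args} protocol are assumptions, not a standard API):

%% On Node1: ask a factory process registered on Node2 to start the child,
%% then monitor it locally instead of linking across nodes.
start_remote(Node, Args) ->
    {ok, Pid} = gen_server:call({factory, Node}, {start_child, Args}),
    MRef = erlang:monitor(process, Pid),
    {ok, Pid, MRef}.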
Maybe the rpc module helps you, especially the function async_call.
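For example (a sketch; my_factory:start_worker/1 stands in for whatever function the remote node exports to spawn the server):

%% Kick off the spawn on the remote node without blocking the caller.
start_worker_async(Node, Args) ->
    Key = rpc:async_call(Node, my_factory, start_worker, [Args]),
    %% ... do other work here, then collect the result (blocks until ready):
    rpc:yield(Key).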