What does 'winning validator' mean in Sawtooth Network? - hyperledger

Need help understanding the statements below, quoted from this documentation:
local : A transaction must be signed by the same key as the block. This rule takes a list of transaction indices in the block and enforces the rule on each. This rule is useful in combination with the other rules to ensure a client is not submitting transactions that should only be injected by the winning validator.
Question:
What does winning validator mean here?

The "winning" validator is the one that gets to publish the next block on the blockchain. The losing validators do not get to publish the next block on the blockchain and instead accepts the published block from the winning validator.

Should I specify full key names when using Lua in Redis Cluster, or can I just pass the hashtags?

I have a Lua script which I'm considering migrating to Redis Cluster.
Should I specify full key names when calling EVAL?
Or can I get away with specifying just the hash tags?
For example, I wish to pass only {UNIQUE_HASH_TAG} instead of {UNIQUE_HASH_TAG}/key1, {UNIQUE_HASH_TAG}/key2, etc.
I have lots of keys, and the logic is pretty complicated; sometimes I end up generating key names dynamically, but always within the same hash tag.
Would I violate some specifications by passing just hash tags instead of key names?
Should I specify full key names
That's the recommended practice.
Would I violate some specifications
No, the specs do not state that key names need to be explicitly passed. The KEYS/ARGV mechanism was put in place in preparation for the cluster, but before the cluster actually came to be. At that time, hash tags were not part of the cluster's design, so the recommendation was to avoid hard-coding or dynamically generating key names in scripts, as there was no assurance they would end up in the same cluster hash slot.
Your approach is perfectly valid and would work as expected. I do want to emphasize that this only makes sense if you're managing a lot of these so-called {UNIQUE_HASH_TAG}s; otherwise you'll be hitting just a few slots, which could become a scalability challenge.
EDIT: All that said, you really should always explicitly pass the key names to the script rather than relying on this trick. While it isn't currently blocked, this unspecified behavior may change in the future and break your code.
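
For illustration, using the placeholder names from the question, the two styles look like this from redis-cli (a sketch of the syntax only):

# recommended: declare every key the script touches; the cluster client can see and route them
redis-cli -c EVAL "return {redis.call('GET', KEYS[1]), redis.call('GET', KEYS[2])}" 2 {UNIQUE_HASH_TAG}/key1 {UNIQUE_HASH_TAG}/key2

# works because both derived keys share the hash tag, but the real key names are invisible to the client
redis-cli -c EVAL "return {redis.call('GET', KEYS[1] .. '/key1'), redis.call('GET', KEYS[1] .. '/key2')}" 1 {UNIQUE_HASH_TAG}

Both calls end up on the slot of {UNIQUE_HASH_TAG}; the difference is only whether the key names are passed through KEYS or reconstructed inside the Lua script.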

Dereferencing Microdata item type URLs: "should not" vs. "must not"

In W3C’s HTML Microdata, it says
(and it’s currently the same in WHATWG’s HTML Living Standard):
Except if otherwise specified by that specification, the URLs given as the item types should not be automatically dereferenced.
Note: A specification could define that its item type can be dereferenced to provide the user with help information, for example. In fact, vocabulary authors are encouraged to provide useful information at the given URL.
And this is directly followed by:
Item types are opaque identifiers, and user agents must not dereference unknown item types, or otherwise deconstruct them, in order to determine how to process items that use them.
I’m confused about this. In the first paragraph it says that URLs in the itemtype attribute "should not be automatically dereferenced" (should, not must; so according to this paragraph, user agents are allowed to dereference). But in the last paragraph it says that user agents "must not dereference unknown item types".
Is this a contradiction or do they mean something different?
Maybe it is only about known vs. unknown (although the first paragraph doesn't mention "known" at all, so I'd assume it applies to all vocabularies, whether known or not)? But why should it make a difference whether a user agent knows a vocabulary? And what exactly does it mean to "know" a vocabulary in the first place?
Or maybe the "in order to determine how to process items that use them" part is the crux of the matter here? So user agents are allowed to dereference for any reason except to determine how to process the items?
I think, but I do not know for sure ...
The distinction is known vs unknown. As for what "known" means, remember that this is just data, and user agents are not necessarily browsers. For example, a particular data set could, in theory at least, be interpreted to control real world machinery.
The first part is saying that if the UA knows the data type, it shouldn't need to dereference it, because the UA already knows what the resource obtained by that dereference will contain. So the dereference is just network traffic overhead, in the same way that UAs shouldn't dereference DTDs whose contents they already know. It's a should because it's impossible to say that, for an arbitrary known-to-the-UA data type, there is no circumstance where a dereference might yield a useful result.
The latter part is saying that if the UA doesn't know the data type, there is no protocol defined by which dereferencing will yield a meaningful resource, so the UA would at best merely be guessing. There is no value to any system in doing the dereference, and there is some network cost, so it must not do it.

Unifying model for 'static' / 'historical' / 'streaming' data in F#

An API I use exposes data with different characteristics:
'static' reference data: you ask for it and get one value, which supposedly does not change
'historical' values, where you can query for a date range
'subscription' values, where you register yourself to receive updates
From a semantic point of view, though, those fields are one and the same, and only the consumption mode differs. Reference data can be viewed as a constant function yielding the same result through time. Historical data is just streaming data that happened in the past.
I am trying to find a unifying model against which to program all the semantics of my queries, and to distinguish them from the consumption mode.
That means the same quotation could be evaluated in a "real-time" way, which would turn fields into their appropriate IObservable form (when available); in a 'historical' way, which takes a 'clock' as an argument and yields values when ticked; or in a 'reference' way, which just yields one value (still decorated with the historical date at which the query is run).
I am not sure which programming tools in F# would be the most natural fit for that purpose, but I am thinking of quotations, which I have never really used.
Would they be well suited for such a task?
you said it yourself: just go with IObservable
your static case is just OnNext with 1 value
in your historical case you OnNext for each value in your query (at once, when an observer is registered)
and the subscription case is just the ordinary IObservable pattern - you don't need something like quotations for this.
I have done something very similar (not the static case, but the streaming and historical case), and IObservable is definitely the right tool for the job. In reality, IEnumerable and IObservable are dual and most things you can write for one you can also write for the other. However, a push based model (IObservable) is more flexible, and the operators you get as part of Rx are more complete and appropriate than those as part of regular IEnumerable LINQ.
Using quotations just means you need to build the above from scratch.
You'll find the following useful:
Observable.Return(value) for a single static value
list.ToObservable() for turning a historical enumerable into an observable
A direct IObservable (for example a Subject) for streaming live values
Also note that you can use virtual schedulers for ticking the observable if this helps (most of the above accept a scheduler). I imagine this is what you want for the historical case.
http://channel9.msdn.com/Series/Rx-Workshop/Rx-Workshop-Schedulers
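
A minimal F# sketch of the three modes converging on IObservable (field names and values are made up; assumes the System.Reactive package):

open System
open System.Reactive.Linq
open System.Reactive.Subjects

// 'reference' data: a constant observable that yields one value and completes
let referencePrice : IObservable<decimal> = Observable.Return(42.0m)

// 'historical' data: replay a past series as an observable (a virtual scheduler could tick it)
let history = [ DateTime(2020, 1, 1), 100.0m; DateTime(2020, 1, 2), 101.5m ]
let historicalPrices = history.ToObservable()

// 'subscription' data: push updates into a Subject as the API delivers them
let livePrices = new Subject<DateTime * decimal>()
livePrices.OnNext((DateTime.UtcNow, 102.0m))

// the query logic only ever sees IObservable, whatever the consumption mode
historicalPrices.Subscribe(fun (d, p) -> printfn "%O %M" d p) |> ignore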

Is it acceptable to store the previous state as a global variable?

One of the biggest problems with designing a lexical analyzer/parser combination is overzealousness in designing the analyzer. (f)lex isn't designed to have parser logic, which can sometimes interfere with the design of mini-parsers (by means of yy_push_state(), yy_pop_state(), and yy_top_state()).
My goal is to parse a document of the form:
CODE1 this is the text that might appear for a 'CODE' entry
SUBCODE1 the CODE group will have several subcodes, which
may extend onto subsequent lines.
SUBCODE2 however, not all SUBCODEs span multiple lines
SUBCODE3 still, however, there are some SUBCODES that span
not only one or two lines, but any number of lines.
this makes it a challenge to use something like \r\n
as a record delimiter.
CODE2 Moreover, it's not guaranteed that a SUBCODE is the
only way to exit another SUBCODE's scope. There may
be CODE blocks that accomplish this.
In the end, I've decided that this section of the project is better left to the lexical analyzer, since I don't want to create a pattern that matches each line (and identifies continuation records). Part of the reason is that I want the parser to have knowledge of the contents of each line, without incorporating its own tokenizing logic. That is to say, if I matched ^SUBCODE[ ][ ].{71}\r\n (all records are blocked in 80-character records), I would not be able to harness the power of flex to tokenize the structured data residing in .{71}.
Given these constraints, I'm thinking about doing the following:
Entering a CODE1 state from the <INITIAL> start condition results in calls to:
yy_push_state(CODE_STATE)
yy_push_state(CODE_CODE1_STATE)
(do something with the contents of the CODE1 state identifier, if such contents exist)
yy_push_state(SUBCODE_STATE) (to tell the analyzer to expect SUBCODE states belonging to the CODE_CODE1_STATE). This is where the analyzer begins to masquerade as a parser.
The <SUBCODE1_STATE> start condition is nested as follows: <CODE_STATE>{ <CODE_CODE1_STATE>{ <SUBCODE_STATE>{ <SUBCODE1_STATE>{ (perform actions based on the matching patterns) } } } }. It also sets the global previous_state variable to yy_top_state(), to wit SUBCODE1_STATE.
Within <SUBCODE1_STATE>'s scope, \r\n will call yy_pop_state(). If a continuation record is present (which is a pattern at the highest scope, against which all text is matched), yy_push_state(continuation_record_states[previous_state]) is called, bringing us back to the scope described in the previous step. continuation_record_states[] maps each state to its continuation-record state, which is used by the parser.
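A minimal flex sketch of that bookkeeping (not a complete scanner: the state names, the continuation pattern, and continuation_record_states[] are illustrative, and YY_START is used to record the state being left):

%option stack noyywrap
%x CODE_STATE CODE_CODE1_STATE SUBCODE_STATE SUBCODE1_STATE

%{
/* state we were in when the record ended, so a continuation can resume it */
static int previous_state;
/* maps a state to the state handling its continuation records; filled in at startup */
static int continuation_record_states[32];
%}

%%
<INITIAL>^"CODE1"          { yy_push_state(CODE_STATE);
                             yy_push_state(CODE_CODE1_STATE);
                             yy_push_state(SUBCODE_STATE); }

<SUBCODE_STATE>^"SUBCODE1" { yy_push_state(SUBCODE1_STATE); }

<SUBCODE1_STATE>\r\n       { previous_state = YY_START;   /* i.e. SUBCODE1_STATE */
                             yy_pop_state(); }

<*>^[ ]+                   { /* continuation record: resume the state we just left */
                             yy_push_state(continuation_record_states[previous_state]); }
%%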
As you can see, this is quite complicated, which leads me to conclude that I'm massively over-complicating the task.
Questions
For states lacking an extremely clear token signifying the end of their scope, is my proposed solution acceptable?
Given that I want to tokenize the input using flex, is there any way to do so without start conditions?
The biggest problem I'm having is that each record (beginning with the (SUB)CODE prefix) is unique, but the information appearing after the (SUB)CODE prefix is not. Therefore, it almost appears mandatory to have multiple states like this, and the abstract CODE_STATE and SUBCODE_STATE states would act as groupings for each of the concrete SUBCODE[0-9]+_STATE and CODE[0-9]+_STATE states.
I would look at how the OMeta parser handles these things.

Password validation (regex?)

I need to write some validation rules for a user password with the following requirements. C# ASP.NET MVC.
Passwords must be 6 - 8 characters
Must include at least one character each from at least three of the following categories:
Upper-case letters
Lower-case letters
Numeric digits
Non-alpha-numeric characters (e.g., !##$%...)
Must not contain any sequence of 3 or more characters in common with the username
Must not repeat any of the previous 1 passwords
Must be changed if the password is believed to be compromised in any way
Currently I've written a bunch of really messy validation rules using if statements and loops (especially for the "3 characters in sequence with the username" part). It's functional, but it just feels wrong. Is there a better approach I can take?
Thank you
I wrote one very similar to what you are describing. It can be done as a regular expression, and when complete (at least for myself) it was a very rewarding accomplishment.
To accomplish this you are going to need to use a regex feature called lookaheads. See the information on the regular-expressions.info site for all the gory details.
The second thing you will need is a real-time regular expression tester to help you prototype your regex. I suggest you check out Rubular. Create several passwords that should work, and some that shouldn't, and use those as your starting point.
Edit:
To elaborate on my comment above: not every one of your requirements can or should be solved via a regex. Namely, the requirements you listed as:
Must not contain any sequence of 3 or more characters in common with the username
Must not repeat any of the previous 1 passwords
Must be changed if the password is believed to be compromised in any way
Should probably be handled separately from the main password validation regex, as these are highly contextual. The "sequence of 3 or more characters in common with the username" can probably be handled on the client side. However, the other two items are probably best left handled on the server side.
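
For the length and "three of the four categories" rules on their own, a lookahead-based pattern along these lines is one possible starting point (a sketch; adjust the character classes to your actual policy):

^(?=.{6,8}$)(?:(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])|(?=.*[A-Z])(?=.*[a-z])(?=.*[^A-Za-z0-9])|(?=.*[A-Z])(?=.*[0-9])(?=.*[^A-Za-z0-9])|(?=.*[a-z])(?=.*[0-9])(?=.*[^A-Za-z0-9])).*$

The first lookahead enforces the 6-8 character length; each alternative in the group asserts one of the four ways to cover three of the four categories. The username-overlap and password-history checks stay in ordinary C# code.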
