Why is tuple formatting limited to 12 items in Rust?

I just started a tutorial in Rust and I can't get my head around the limitation of tuple printing:
fn main() {
    // Tuple definitions
    let short = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);
    let long = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12);
    println!("{:?}", short); // Works fine
    println!("{:?}", long);  // error: `({integer}, ..., {integer})` cannot be formatted using `{:?}` because it doesn't implement `std::fmt::Debug`
}
In my ignorant view, printing could easily be achieved by iterating over the entire tuple, which would allow displaying it without any size constraint. If the solution were that simple, it would surely have been implemented already, so what am I missing here?

Printing tuples is currently implemented using a macro that only works up to 12 elements.
Functionality to statically iterate over and manipulate tuples has been proposed, but has been postponed (see e.g. this RFC). There were some concerns about the implementation. For example, you would expect to be able to get the head and tail of a tuple, but there is actually no guarantee that a tuple is stored in the order you specified, because the compiler is allowed to reorder fields to optimize for space, which means getting the tail would not be a trivial operation.
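To make the macro approach concrete, here is a minimal sketch (not the actual standard-library source) of how a macro can generate one trait implementation per tuple arity. `MyDebug`, `impl_my_debug!`, and `Wrapper` are illustrative names; the real `Debug` impls in std work along the same lines, with one macro invocation per arity up to 12.

```rust
use std::fmt;

// Hypothetical stand-in for std::fmt::Debug, to show the pattern.
trait MyDebug {
    fn my_fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result;
}

macro_rules! impl_my_debug {
    ( $( $name:ident ),+ ) => {
        impl<$($name: fmt::Debug),+> MyDebug for ($($name,)+) {
            fn my_fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                // Destructure the tuple by reference; one binding per element.
                #[allow(non_snake_case)]
                let ($(ref $name,)+) = *self;
                let mut builder = f.debug_tuple("");
                $( builder.field($name); )+
                builder.finish()
            }
        }
    };
}

// Each arity needs its own invocation -- this is why std stops at 12.
impl_my_debug!(A);
impl_my_debug!(A, B);
impl_my_debug!(A, B, C);

// Adapter so we can print via the normal {:?} machinery.
struct Wrapper<T>(T);
impl<T: MyDebug> fmt::Debug for Wrapper<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        self.0.my_fmt(f)
    }
}

fn main() {
    println!("{:?}", Wrapper((1, true, "x"))); // arity 3 is covered
    // A 4-tuple would fail to compile here, just like a 13-tuple with std's Debug.
}
```

The key point is that each tuple length is a distinct type, so the macro has to be invoked once per arity; there is no way to write a single impl covering all lengths without variadic generics.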
As for why you need special support for that, consider the following tuple:
let mixed = (42, true, 3.14, "foo");
How would you iterate over this tuple, given that all of its elements have different types? This can't simply be done using regular iterators and a for loop. You would need some new type-level syntax, which Rust currently lacks.
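To illustrate the point, here is a small sketch. The commented-out loop is what you would naively want to write; the array of trait objects below it is one workaround available today, at the cost of erasing the element types by hand (the variable names are my own, not from the question):

```rust
fn main() {
    let mixed = (42, true, 3.14, "foo");

    // This does not compile: tuples are not iterable, and `item`
    // would need a different type on every step of the loop.
    // for item in mixed { println!("{:?}", item); }

    // Workaround: erase each element's type behind a &dyn Debug.
    let items: [&dyn std::fmt::Debug; 4] = [&mixed.0, &mixed.1, &mixed.2, &mixed.3];
    for item in items {
        println!("{:?}", item);
    }
}
```

Note that even this workaround has to name every field (`mixed.0`, `mixed.1`, ...) explicitly; there is no way to loop over the fields themselves without the type-level machinery the answer mentions.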

Debug is only implemented for tuples of up to 12 elements. This is why printing short works, but long fails.

Related

Dart - When To Use Collection-For-In vs .Map() on Collections

Both the collection-for-in operation and .map() method can return some manipulation of elements from a previous collection. Is there ever any reason to prefer using one over the other?
var myList = [1, 2, 3];
var alteredList1 = [for (int i in myList) i + 2]; // [3, 4, 5]
var alteredList2 = myList.map((e) => e + 2).toList(); // [3, 4, 5]
Use whichever is easier and more readable.
That's a deliberately vague answer, because it depends on what you are doing.
Any time you have something ending in .toList() I'd at least consider making it into a list literal. If the body of the map or where is simple, you can usually rewrite it directly as a list literal using for/in (with an if clause taking the place of where).
And then, sometimes it gets complicated: you need to use the same variable twice, the map computation uses a while loop, or something else just doesn't fit into the list literal syntax.
Then you can either keep the helper function and do [for (var e in something) helperFunction(e)] or just do something.map((e) { body of helper function }).toList(). In many cases the latter is then more readable.
So, consider using a list literal if your iterable code ends in toList, but if the literal gets too convoluted, don't feel bad about using the .map(...).toList() approach.
Readability is all that really matters.
Not an expert, but personally I prefer the first method. Some reasons:
You can include other elements (independent of the for loop) in the same list:
var a = [1, 2, 3];
bool include5 = true;
var b = [
  1,
  for (var i in a) i + 1,
  if (include5) 5,
];
print(b); // [1, 2, 3, 4, 5]
Sometimes, when mapping models to a list of widgets, the .map().toList() method will produce a List<dynamic>, and implicit casting won't work. When you come across such an error, just avoid the second method.

How can I filter Flux with state?

I'd like to apply filter on my Flux based on a state calculated from previous values. However, it is recommended to avoid using state in operators according to the javadoc
Note that using state in the java.util.function / lambdas used within Flux operators should be avoided, as these may be shared between several Subscribers.
For example, Flux#distinct filters out items that have appeared earlier. How can we implement our own version of distinct?
I have found an answer to my question. Flux#distinct can take a Supplier that provides the initial state and a BiPredicate that performs the "distinct" check, so we can keep arbitrary state in that store and decide whether to keep each element.
The following code shows how to keep the first 3 elements of each mod-2 group without changing the order.
// Get first 3 elements per mod 2.
Flux<Integer> first3PerMod2 =
    Flux.fromIterable(ImmutableList.of(9, 3, 7, 4, 5, 10, 6, 8, 2, 1))
        .distinct(
            // Group by mod 2; the key (0 or 1) is what the predicate sees.
            num -> num % 2,
            // Counter storing how many elements have been seen per group.
            () -> new HashMap<Integer, Integer>(),
            // Increment the counter (or set it to 1),
            // and return whether at most 3 elements have been published.
            (map, key) -> map.merge(key, 1, Integer::sum) <= 3,
            // Clean up the state.
            map -> map.clear());
StepVerifier.create(first3PerMod2).expectNext(9, 3, 7, 4, 10, 6).verifyComplete();

How does the increase of Array capacity work in Swift?

I read in Apple Docs:
"When you add elements to an array and that array begins to exceed its reserved capacity, the array allocates a larger region of memory and copies its elements into the new storage. The new storage is a multiple of the old storage’s size."
So, I opened the Playground and created some examples. The first example seems correct:
var array = [1, 2, 3, 4, 5]
array.capacity //5
array.append(contentsOf: [6, 7, 8, 9, 10])
array.capacity //10
array.append(11)
array.capacity //20
But I didn't understand the second example:
var array = [1, 2, 3, 4, 5]
array.capacity //5
array.append(contentsOf: [6, 7, 8, 9, 10, 11])
array.capacity //12
array.append(12)
array.capacity //12
Why is the capacity 12 in the second example? I didn't understand even reading the documentation and searching in Google.
I recommend checking Rob's and Matt's answers here.
Even though there's a reserveCapacity(_:) function, using it is generally not advised. From the docs:
The Array type's append(_:) and append(contentsOf:) methods take care of this detail for you, but reserveCapacity(_:) allocates only as much space as you tell it to (padded to a round value), and no more. This avoids over-allocation, but can result in insertion not having amortized constant-time performance.

Binary comprehension on Elixir

Is it possible, and if so, how can I use binary comprehensions in Elixir? I can do it in Erlang like so:
[One || <<One, _rest:3/binary>> <= <<1,2,3,4>>].
What in Erlang is:
1> [Red || <<Red:2/binary, _Blue:2/binary>> <= <<1, 2, 3, 4, 5, 6, 7, 8>> ].
[<<1,2>>,<<5,6>>]
In Elixir is:
iex(1)> for <<red::8, green::8, blue::16 <- <<1, 2, 3, 4, 5, 6, 7, 8>> >>, do: <<red, green>>
[<<1, 2>>, <<5, 6>>]
Note that the Elixir above explicitly declares sizes in bits, whereas the Erlang uses a type to make the size calculation chop off a number of bytes. There is probably a cleaner way to do that in Elixir (at least I hope there is) and I might even hunt around for it, but most of the time when I want to do this stuff extensively I stick to Erlang just for readability/universality.
Addendum
#aronisstav asked an interesting question: "Shouldn't there be a part matching the green pattern in the Erlang code?"
The answer is that there would be a Green variable in Erlang if that code dealt with bitstrings instead of binaries. Erlang's bit syntax provides a few arbitrary binary type specifiers that correspond to default sizes. Above I matched Red:2/binary, which means I want to match a sequence of 2 bytes, and this is how we get the result [<<1,2>>, <<5,6>>]: two sequences of two bytes.
An Erlang example that is exactly equivalent to the Elixir code above would be:
[<<Red/bitstring, Green/bitstring>>
|| <<Red:8/bitstring, Green:8/bitstring, _Blue:2/binary>>
<= <<1, 2, 3, 4, 5, 6, 7, 8>> ].
But that is just silly to do, as Erlang's syntax for bytes is much more concise.
I found the solution in the documentation:
for <<one, _rest::binary-size(3) <- <<1, 2, 3, 4>> >>, do: one

In which case System.UnicodeString.Format is used?

My environment is RADStudio XE4 Update1 on Windows 7 pro (32bit).
I found that in C++ Builder there is a System::UnicodeString::Format() static method.
Format() can be used as follows. However, the same thing can be carried out by using String().sprintf(), I think.
String str;
// --- (1) ---
str = String::Format(L"%2d, %2d, %2d", ARRAYOFCONST((10, 2, 3)));
ShowMessage(str); // 10, 2, 3
// --- (2) ---
str = String().sprintf(L"%2d, %2d, %2d", 10, 2, 3);
ShowMessage(str); // 10, 2, 3
My question is: in which cases is Format() better than other functions?
Or is this just a matter of taste?
Internally, UnicodeString::Format() calls Sysutils::Format(), which is Delphi's formatting function. Remember that the majority of the RTL/VCL/FMX is written in Delphi, not C++.
Internally, UnicodeString::sprintf() calls the C runtime's vsnwprintf() function instead.
Different frameworks, different interfaces, different formatting rules, similar results.
One benefit of using UnicodeString::Format() instead of UnicodeString::sprintf() is that Format() performs type checking at runtime. If it detects a mismatch between the formatting string and the input parameters, it will raise an exception. UnicodeString::sprintf() does not do that; it interprets the input values exactly as you specify in the formatting string, so if you get something wrong you risk corrupting the output or outright crashing your code.
But, if you are careful with your parameters, the differences come down to a matter of taste.