What's the difference between 'a[,,] and 'a[][][]? They both represent 3-d arrays.
It makes me write array3d.[x].[y].[z] instead of array3d.[x, y, z].
Why can't I do the following?
> let array2d : int[,] = Array2D.zeroCreate 10 10;;
> let array1d = array2d.[0];;
error FS0001: This expression was expected to have type
'a []
but here has type
int [,]
The difference is that 'a[][] represents an array of arrays (whose rows may have different lengths), while 'a[,] represents a rectangular 2D array. The first kind is called a jagged array and the second a multidimensional array. The distinction is the same as in C#, so you may want to look at the C# documentation for jagged arrays and multidimensional arrays. There is also excellent documentation in the F# WikiBook.
To demonstrate this using a picture, a value of type 'a[][] can look like this:
0 1 2 3 4
5 6
7 8 9 0 1
While a value of type 'a[,] is always a rectangle and may, for example, look like this:
0 1 2 3
4 5 6 7
8 9 0 1
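For example, here is a quick sketch of both kinds in F#, mirroring the pictures above (the names are just illustrative):
// Jagged array: an array of arrays; rows may have different lengths
let jagged : int[][] = [| [|0; 1; 2; 3; 4|]; [|5; 6|]; [|7; 8; 9; 0; 1|] |]
let firstRow = jagged.[0]      // int[], this works because each element is itself an array
// Rectangular 2D array: every row has the same length
let rect : int[,] = array2D [ [0; 1; 2; 3]; [4; 5; 6; 7]; [8; 9; 0; 1] ]
let element = rect.[1, 2]      // 6, indexed with a single pair of indices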
To get a single "line" of a multidimensional array, you can use the slice notation:
let row = array2d.[0,*];;
See https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/arrays#array-slicing-and-multidimensional-arrays
As of F# 3.1 (2013) things are simpler:
As of F# 3.1, you can decompose a multidimensional array into subarrays of the same or lower dimension. For example, you can obtain a vector from a matrix by specifying a single row or column.
// Get row 3 from a matrix as a vector:
matrix.[3, *]
// Get column 3 from a matrix as a vector:
matrix.[*, 3]
See https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/arrays#array-slicing-and-multidimensional-arrays
Related
I am trying to output the reverse of the intersection of two arrays of different lengths. So far I can print the intersection, but not in reverse order. I already have some code. How do I modify it to print the intersection of the two arrays in reverse order? The arrays in question are not sorted.
#include<iostream>
#include<stack>
using namespace std;
int main(){
    int n1,n2,i,j;
    cin>>n1>>n2;
    int arr2[n2];
    int arr1[n1];
    stack <int> s;
    for(i=0;i<n1;i++){
        cin>>arr1[i];
    }
    for(i=0;i<n2;i++){
        cin>>arr2[i];
    }
    for(i=0;i<n1;i++){
        for(j=0;j<n2;j++){
            if(arr1[i]==arr2[j]){
                s.push(arr1[i]);
                cout<<s.top()<<endl;
            }
        }
    }
}
Sample input:
6 4
1 2 3 4 5 6
2 6 4 1
Sample output:
1
2
4
6
Your program is incorrect in that expressions like
int arr2[n2];
are erroneous and work only because of compiler extensions to the standard. Use std::vector instead of plain arrays if you don't know the array length at compile time. Also, don't underestimate the importance of source code indentation.
Now to the main point. The proper way of achieving your goal is this:
copy the arrays to std::vectors (or just sort them, if you don't care if they are sorted),
sort these vectors,
apply std::set_intersection from <algorithm> standard library to these sorted vectors,
print out the contents of the resulting vector in reversed order.
See https://en.cppreference.com/w/cpp/algorithm/set_intersection for an example of code that uses std::set_intersection.
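Putting those steps together, here is a minimal sketch that reads input in the same format as the original program (the reverse loop prints the sorted intersection from largest to smallest):
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>
using namespace std;

int main() {
    int n1, n2;
    cin >> n1 >> n2;

    vector<int> a(n1), b(n2);
    for (int& x : a) cin >> x;
    for (int& x : b) cin >> x;

    // std::set_intersection requires both input ranges to be sorted
    sort(a.begin(), a.end());
    sort(b.begin(), b.end());

    vector<int> common;
    set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                     back_inserter(common));

    // print the intersection in reverse order
    for (auto it = common.rbegin(); it != common.rend(); ++it)
        cout << *it << '\n';
}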
I am reading iOS 13 Programming Fundamentals with Swift, got to the part about reduce() and I think I understand it more or less, but then there is reduce(into:) and this piece of code:
let nums = [1,2,3,4,5]
let result = nums.reduce(into: [[],[]]) { temp, i in
    temp[i%2].append(i)
}
// result is now [[2,4],[1,3,5]]
So this code takes an array of Int and splits it into 2 arrays, even and odd. The problem is that I have no idea what's happening inside the brackets {}.
In the case of reduce, the first parameter is the initial value for the iteration, and then the closure is supposed to process all the items one after the other, similar to map() but more powerful (here one loop is enough to get the two arrays, but with map() I would need 2 loops, according to the book).
I cannot understand the syntax here, especially what "temp" stands for and that use of "in". And how does "append()" append the value to the proper array?
Inside the closure, "temp" is the accumulating result, which has the shape [[],[]], and "i" is each number. As you said, it processes all the numbers in a loop. The % operator returns the division remainder, so for odd numbers like 1, 3, 5 it returns 1, and for even numbers it returns 0, which means that "temp" appends each value to the array at the corresponding index.
So if we debug and replace the variables with concrete values, the results would be:
temp[1].append(1) // 1 % 2 = 1, so 1 goes to index 1: [[], [1]]
temp[0].append(2) // 2 % 2 = 0, so 2 goes to index 0: [[2], [1]]
temp[1].append(3) // 3 % 2 = 1: [[2], [1, 3]]
temp[0].append(4) // 4 % 2 = 0: [[2, 4], [1, 3]]
temp[1].append(5) // 5 % 2 = 1: [[2, 4], [1, 3, 5]]
According to the documentation, the closure is called sequentially with a mutable accumulating value (initialized to the into: argument) and each element of the sequence; when the sequence is exhausted, the accumulated value is returned to the caller.
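If it helps, here is the same computation with the closure's parameters written out explicitly (a sketch; the names accumulator and number are just illustrative):
let nums = [1, 2, 3, 4, 5]

// `accumulator` plays the role of `temp`, `number` plays the role of `i`;
// `in` merely separates the closure's parameter list from its body.
let result = nums.reduce(into: [[Int](), [Int]()]) { (accumulator: inout [[Int]], number: Int) in
    accumulator[number % 2].append(number)
}
print(result)  // [[2, 4], [1, 3, 5]]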
I am trying some things in Dafny. I want to code a simple datastructure that holds an uncompressed image in memory:
datatype image' = image(width: int, height: int, data: array<byte>)
newtype byte = b: int | 0 <= b <= 255
Actually using it:
method Main() {
  var dat := [1,2,3];
  var im := image(1, 3, dat);
}
datatype image' = image(width: int, height: int, data: array<byte>)
newtype byte = b: int | 0 <= b <= 255
leads Dafny to complain:
stdin.dfy(3,24): Error: incorrect type of datatype constructor argument (found seq, expected array)
1 resolution/type errors detected in stdin.dfy
I might also want to demand that the byte array is not null, and that the size of the byte array is equal to width * height * 3 (to store three bytes representing the RGB value of each pixel).
How should I enforce this? I looked into newtype, which lets you put constraints on variables of a certain type, but this works only for numeric types.
Dafny supports both immutable sequences (which are like mathematical sequences of elements) and mutable arrays (which are, like in C and Java, pointers to elements). The error you're getting is telling you that you're calling the image constructor with a seq<byte> value where an array<byte> value is expected.
You can fix the problem by replacing your definition of dat with:
var dat := new byte[3];
dat[0], dat[1], dat[2] := 1, 2, 3;
However, the more typical thing, if you're using a datatype (which is immutable), would be to use a sequence. So, you probably want to instead change your definition of image to:
datatype image = image(width: int, height: int, data: seq<byte>)
Btw, note that Dafny allows you to name a type and one of its constructors the same, so there's no reason to name one of them with a prime (unless you want to, of course).
Another matter of style is to use a half-open interval in your definition of byte:
newtype byte = b: int | 0 <= b < 256
Since half-open intervals are prevalent in computer science, Dafny's syntax favors them. For example, for a sequence s, the expression s[52..57] denotes a subsequence of s of length 5 (that is, 57 minus 52) starting in s at index 52. One more thing, you can also leave out the type int of b if you want, since Dafny will infer it:
newtype byte = b | 0 <= b < 256
You also asked about the possibility of adding a type constraint, so that the sequence in your datatype will always be of length 3. As you discovered, you cannot do this with a newtype, because newtype (at least for now) only works with numeric types. You can (almost) use a subset type, however. This would be done as follows:
type triple = s: seq<byte> | |s| == 3
(In this example, the first vertical bar is like the one in the newtype declaration and says "such that", whereas the next two denote the length operator on sequences.) The trouble with this declaration is that types must be nonempty and Dafny isn't convinced that there are any values that satisfy the constraint of triple. Well, Dafny is not trying very hard. The plan is to add a witness clause to the type (and newtype) declaration, so that a programmer can show Dafny a value that belongs to the triple type. However, this support is waiting for some implementation changes that will allow customized initial values, so you cannot use this constraint at this time.
Not that you want it here, but Dafny would let you give a weaker constraint that admits the empty sequence:
type triple = s: seq<byte> | |s| <= 3
So, instead, if you want to talk about that an image value has a data component of length 3, then introduce a predicate:
predicate GoodImage(img: image)
{
  |img.data| == 3
}
and use this predicate in specifications like pre- and postconditions.
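For example, a caller can then rely on the predicate as a precondition (a small sketch; ProcessImage is just an illustrative name):
method ProcessImage(img: image)
  requires GoodImage(img)
{
  // The precondition makes these three index accesses provably in bounds.
  var r, g, b := img.data[0], img.data[1], img.data[2];
}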
Program safely,
Rustan
In section 4, Tables, of The Implementation of Lua 5.0, there is an example:
local t = {100, 200, 300, x = 9.3}
So we have t[4] == nil. If I write t[0] = 0, this will go to the hash part.
If I write t[5] = 500, where will it go? The array part or the hash part?
I would be eager to hear the answer for the Lua 5.1, Lua 5.2 and LuaJIT 2 implementations if there is a difference.
Contiguous integer keys starting from 1 always go in the array part.
Keys that are not positive integers always go in the hash part.
Other than that, it is unspecified, so you cannot predict where t[5] will be stored according to the spec (and it may or may not move between the two, for example if you create then delete t[4].)
LuaJIT 2 is slightly different - it will also store t[0] in the array part.
If you need it to be predictable (which is probably a design smell), stick to pure-array tables (contiguous integer keys starting from 1; if you want to leave a gap, use a value of false instead of nil) or pure hash tables (avoid non-negative integer keys).
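For instance, a small sketch of the two predictable shapes:
-- Pure array shape: contiguous integer keys from 1; use false for a "missing" slot
local a = {10, 20, false, 40}       -- a[3] is a placeholder, #a is reliably 4

-- Pure hash shape: no integer keys at all
local h = {x = 9.3, name = "pixel"}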
Quoting from The Implementation of Lua 5.0:
The array part tries to store the values corresponding to integer keys from 1 to some limit n. Values corresponding to non-integer keys or to integer keys outside the array range are stored in the hash part.
The index of the array part starts from 1; that's why t[0] = 0 will go to the hash part.
The computed size of the array part is the largest n such that at least half the slots between 1 and n are in use (to avoid wasting space with sparse arrays) and there is at least one used slot between n/2+1 and n (to avoid a size n when n/2 would do).
According from this rule, in the example table:
local t = {100, 200, 300, x = 9.3}
The array part which holds 3 elements, may have a size of 3, 4 or 5. (EDIT: the size should be 4, see #dualed's comment.)
Assume that the array part has a size of 4. When writing t[5] = 500, the array part can no longer hold the element t[5], so what if the array part resizes to 8? With a size of 8, the array part holds 4 elements, which is equal to (so, not less than) half of the array size, and the range n/2+1 to n, which in this case is 5 to 8, has one used slot: t[5]. So an array size of 8 satisfies the requirement, and in this case t[5] will go to the array part.
I'm puzzling over how to map a set of sequences to consecutive integers.
All the sequences follow this rule:
A_0 = 1
A_n >= 1
A_n <= max(A_0 .. A_n-1) + 1
I'm looking for a solution that will be able to, given such a sequence, compute an integer for doing a lookup into a table, and, given an index into the table, generate the sequence.
Example: for length 3, there are 5 valid sequences. A fast function for doing the following map (preferably in both directions) would be a good solution:
1,1,1 0
1,1,2 1
1,2,1 2
1,2,2 3
1,2,3 4
The point of the exercise is to get a packed table with a 1-1 mapping between valid sequences and cells.
The size of the set is bounded only by the number of unique sequences possible.
I don't yet know what the length of the sequence will be, but it will be a small (<12) constant known in advance.
I'll get to this sooner or later, but thought I'd throw it out for the community to have "fun" with in the meantime.
these are different valid sequences
1,1,2,3,2,1,4
1,1,2,3,1,2,4
1,2,3,4,5,6,7
1,1,1,1,2,3,2
these are not
1,2,2,4
2,
1,1,2,3,5
Related to this
There is a natural sequence indexing, but it is not so easy to calculate.
Let look for A_n for n>0, since A_0 = 1.
Indexing is done in 2 steps.
Part 1:
Group sequences by the places where A_n = max(A_0 .. A_n-1) + 1. Call these places steps.
At step positions the values are forced: they are the consecutive numbers 2, 3, 4, 5, ...
At a non-step position k we can put any number from 1 up to the current maximum, which is one more than the number of steps with index less than k.
Each group can be represented as a binary string where 1 marks a step and 0 a non-step. E.g. 001001010 means the group with pattern 112aa3b4c, a<=2, b<=3, c<=4. Because groups are indexed by a binary number, there is a natural indexing of groups, from 0 to 2^length - 1. Let's call the value of a group's binary representation its group order.
Part 2:
Index the sequences inside a group. Since a group fixes the step positions, only the numbers at non-step positions are variable, and they vary over known ranges. With that, it is easy to index a sequence of a given group inside that group, using lexicographic order over the variable places.
It is also easy to calculate the number of sequences in one group: it is a product of the form 1^i_1 * 2^i_2 * 3^i_3 * ..., where i_j is the number of non-step positions at which the current maximum is j.
Combining:
This gives a 2-part key: <Steps, Group>. This then needs to be mapped to the integers. To do that we have to find how many sequences are in groups whose order is less than some value. For that, let's first find how many sequences there are in groups of a given length. That can be computed by passing through all groups and summing the number of sequences, or with a recurrence. Let T(l, n) be the number of sequences of length l (A_0 is omitted) where the maximal value of the first element can be n+1. Then:
T(l,n) = n*T(l-1,n) + T(l-1,n+1)
T(1,n) = n
Because l + n <= sequence length + 1, there are ~sequence_length^2/2 values of T(l,n), which can easily be precomputed.
Next is to calculate the number of sequences in groups of order less than or equal to a given value. That can be done by summing T(l,n) values. E.g. the number of sequences in groups with order <= 1001010 (binary) is equal to
T(7,1) + # for 1000000
2^2 * T(4,2) + # for 001000
2^2 * 3 * T(2,3) # for 010
Optimizations:
This will give a mapping, but the direct implementation of combining the key parts is more than O(1) even in the best case. On the other hand, the Steps portion of the key is small, and by computing the range of Groups for each Steps value, a lookup table can reduce this to O(1).
I'm not 100% sure about the formula above, but it should be something like it.
With these remarks and the recurrence it is possible to make functions sequence -> index and index -> sequence. But it's not so trivial :-)
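As a concrete illustration of the counting idea (a direct lexicographic rank/unrank, not exactly the group scheme above), here is a small Python sketch built on the recurrence "ways to finish k positions with current maximum m"; it reproduces the length-3 table from the question:
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(k, m):
    # Number of ways to fill k remaining positions when the current maximum is m:
    # the next value is either one of 1..m (max unchanged) or m+1 (max grows).
    if k == 0:
        return 1
    return m * ways(k - 1, m) + ways(k - 1, m + 1)

def rank(seq):
    # seq[0] must be 1; returns the 0-based lexicographic index of seq.
    r, m = 0, 1
    for i in range(1, len(seq)):
        rem = len(seq) - 1 - i
        r += (seq[i] - 1) * ways(rem, m)   # skip over all smaller choices at position i
        m = max(m, seq[i])
    return r

def unrank(r, length):
    # Inverse of rank: rebuild the sequence of the given length from its index.
    seq, m = [1], 1
    for i in range(1, length):
        rem = length - 1 - i
        block = ways(rem, m)               # completions for each choice 1..m
        if r < m * block:
            v = r // block + 1
            r -= (v - 1) * block
        else:                              # the choice m+1 opens a new maximum
            v = m + 1
            r -= m * block
            m += 1
        seq.append(v)
    return seq

# For length 3 this reproduces the table from the question:
# [1,1,1]->0, [1,1,2]->1, [1,2,1]->2, [1,2,2]->3, [1,2,3]->4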
I think a hash without sorting should be the thing.
As A_0 is always the same, maybe we can think of the sequence as a number in base 12 and use its base-10 value as the key for lookup. (Still not sure about this.)
This is a Python function which can do the validity check for you, assuming you have these values stored in a file and pass the lines to the function:
def valid_lines(lines):
    for line in lines:
        values = [int(v) for v in line.split(",") if v.strip()]
        # valid: starts with 1, and every later element is between 1 and
        # one more than the maximum of the elements before it
        if values and values[0] == 1 and all(
                1 <= v <= max(values[:i]) + 1
                for i, v in enumerate(values) if i > 0):
            yield values

lines = (line for line in open('/tmp/numbers.txt'))
for valid_line in valid_lines(lines):
    print(valid_line)
Given the sequence, I would sort it, then use the hash of the sorted sequence as the index of the table.