How can I make a method take infinite arguments in Objective-C? [duplicate] - ios

This question already has answers here:
How to create variable argument methods in Objective-C
(3 answers)
Closed 9 years ago.
I want to make a method that takes infinitely many arguments in Objective-C and adds these arguments together.

I'm assuming by "infinite" you mean "as many as I want, as long as memory allows".
Pass a single argument of type NSArray. Fill it with as many arguments as you wish.
If all arguments are guaranteed to be non-object datatypes - i.e. int, long, char, float, double, structs and arrays of them - you might be better off with an NSData. But identifying individual values in a binary blob will be trickier.
Since you want to add them up, I assume they're numbers. Are they all the same datatype? Then pass an array (an old-style C array) or a pointer, along with the number of elements.
EDIT: now that I think of it, the whole design is fishy. You want a method that takes an arbitrarily large number of arguments and adds them up. But the typing effort required for passing them into a function is comparable to that of summing them up. If you have a varargs function, Sum(a,b,c,d,e) takes less typing than a+b+c+d+e. If you have a container class (NSArray, NSData, etc), you have to loop through the addends; while you're doing that, you might as well sum them up.
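For illustration, the NSArray approach suggested above might look like this (the method name sumOfArray: is my own example, not from the question):

- (NSInteger)sumOfArray:(NSArray<NSNumber *> *)numbers {
    NSInteger sum = 0;
    for (NSNumber *n in numbers) {
        sum += [n integerValue];   // unbox each NSNumber and accumulate
    }
    return sum;
}

// Usage: [self sumOfArray:@[@1, @2, @3]] returns 6.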

That's not possible on a finite machine (that is, all existing computers).
If you're good with a variable, yet finite, number of arguments, there are C's ... variadic argument functions.
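A minimal sketch of such a variadic method in Objective-C, assuming a nil-terminated argument list (the method name and the sentinel convention are my choices, not from the question):

// Declaration (in the @interface); the list must end with nil so we know where it stops.
- (NSInteger)sumOfNumbers:(NSNumber *)first, ... NS_REQUIRES_NIL_TERMINATION;

// Implementation:
- (NSInteger)sumOfNumbers:(NSNumber *)first, ... {
    NSInteger sum = 0;
    va_list args;
    va_start(args, first);             // start after the last named argument
    for (NSNumber *n = first; n != nil; n = va_arg(args, NSNumber *)) {
        sum += [n integerValue];       // unbox and accumulate
    }
    va_end(args);
    return sum;
}

// Usage: [self sumOfNumbers:@1, @2, @3, nil] returns 6.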

Related

Julia create array of functions with matching arguments to execute in a loop

It's easy to create an array of functions and execute them in a loop.
It's easy to provide arguments either in a corresponding array of the same length, or as an array of tuples (fn, arg).
For the second (tuple) option, the loop is just
for fn_ar in arr  # arr is [(myfunc, [1,2,3]), (func2, [10,11,12]), ...]
    fn_ar[1](fn_ar[2])
end
Here is the problem: the arguments I am using are arrays of very large arrays. With the tuple approach, the argument the function will be called with is the value of the array at the time the tuple entry is created. What I need is to provide the array names as the argument and defer evaluation of the arguments until the corresponding function is run in the loop body.
I could provide the input arrays as an expression and eval the expression in the loop to supply the needed arguments. But eval doesn't evaluate in local scope.
What I did that worked (sort of) was to create a closure for each function that captured the arrays (which are really just references to storage). This works because the only argument to each function that varies in the loop body turns out to be the loop counter. The functions in question update the arrays in place, and the array argument is really just a reference to the storage location, so each function executed in the loop body sees the latest values of the arrays. It worked, and it wasn't hard to do. But it is very, very slow. This is a known challenge in Julia.
I tried the recommended hints in the performance section of the manual: make sure the captured variables are typed before they are captured so the JIT knows what they are (no effect on performance), and put the definition of the curried function, with the data for the closure, in a let block (also no effect on performance). It's possible I implemented the hints incorrectly; I can provide a code fragment if it helps.
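For concreteness, a minimal sketch of the closure-plus-let pattern just described (the names and toy arrays are illustrative, not the actual project code):

xs = [zeros(3), zeros(3)]        # stand-ins for the large arrays

# The closure captures xs (a reference to the storage), so it always sees
# the latest values; only the loop counter is supplied at call time.
step! = let xs = xs
    hl -> (xs[hl] .+= 1.0)
end

for hl in 1:2
    step!(hl)
end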
But, I'd rather just ask the question about what I am trying to do and not muddy the waters with my past effort, which might not be going down the right path.
Here is a small fragment that is more realistic than the above:
Just a couple of functions and arguments:
(affine!, "(dat.z[hl], dat.a[hl-1], nnw.theta[hl], nnw.bias[hl])")
(relu!, "(dat.a[hl], dat.z[hl])")
Of course, the arguments could be wrapped as an expression with Meta.parse. dat.z and dat.a are matrices used in machine learning. hl indexes the layer of the model for the linear result and non-linear activation.
A simplified version of the loop where I want to run through the stack of functions across the model layers:
function feedfwd!(dat::Union{Batch_view,Model_data}, nnw, hp, ff_execstack)
    for lr in 1:hp.n_layers
        for f in ff_execstack[lr]
            f(lr)
        end
    end
end
So, closures over the arrays are too slow, and I can't get eval to work.
Any suggestions...?
Thanks,
Lewis
I solved this with the beauty of function composition.
Here is the loop that runs through the feed forward functions for all layers:
for lr in 1:hp.n_layers
    for f in ff_execstack[lr]
        f(argfilt(dat, nnw, hp, bn, lr, f)...)
    end
end
The inner call, argfilt, filters a generic list of all the inputs down to a tuple of just the arguments needed by the specific function f; the result is then splatted into f. This takes advantage of the beauty of method dispatch. Note that the function f is itself an input to argfilt. Function types are singletons: each function has a unique type, as in typeof(relu!), for example. So, without any crazy if branching, method dispatch enables argfilt to return just the arguments needed. The performance cost compared to passing the arguments directly to a function is about 1.2 ns per call. This happens in a very hot loop that typically runs 24,000 times, so that is about 29 microseconds for an entire training pass.
The other great thing is that this runs in less than 1/10 of the time of the version using closures. I am getting slightly better performance than my original version that used some function variables and a bunch of if statements in the hot loop for feedfwd.
Here is what a couple of the methods for argfilt look like:
function argfilt(dat::Union{Model_data, Batch_view}, nnw::Wgts, hp::Hyper_parameters,
                 bn::Batch_norm_params, hl::Int, fn::typeof(affine!))
    (dat.z[hl], dat.a[hl-1], nnw.theta[hl], nnw.bias[hl])
end

function argfilt(dat::Union{Model_data, Batch_view}, nnw::Wgts, hp::Hyper_parameters,
                 bn::Batch_norm_params, hl::Int, fn::typeof(relu!))
    (dat.a[hl], dat.z[hl])
end
Background: I got here by reasoning that I could pass the same list of arguments to all of the functions: the union of all possible arguments, which is not that bad as there are only 9 args. Ignored arguments waste some space on the stack, but it's tiny because for structs and arrays an argument is a pointer reference, not all of the data. The downside is that every one of these functions (around 20 or so) would need a big argument list. OK, but goofy: it doesn't make much sense when you look at the code of any of the functions. But if I can filter the arguments down to just those needed, the function signatures don't need to change.
It's sort of a cool pattern. No introspection or eval needed; just functions.

Manipulating Unbounded Integers using custom data structure

Full disclosure: my question pertains to a project I am working on for my Data Structures class. I know this is usually frowned upon, but I am hoping it may be okay because I have the data structure itself done and I'm just seeking assistance in creating a method.
The project is to implement a custom data structure to represent unbounded integers using a custom linked list. I cannot use the BigInteger or LinkedList classes. I implemented the data structure using the IntNode class provided with the project.
The class takes in a string of numbers, breaks it into 3-character chunks, converts those chunks into integers, and stores each chunk in a custom "linked list" of IntNode objects.
For example: 123456789123 is represented as 4 IntNodes, <123> <456> <789> <123>
The method I am having difficulty implementing is:
UnboundedInt multiply(UnboundedInt)
A method that multiplies the current UnboundedInt by a passed-in one. The return is a new UnboundedInt.
There is also an 'add' method, which was easy to implement. I do realize I could handle multiplication by looping the 'add' method as many times as the value of one of the UnboundedInt objects; however, how would I handle the loop variable when it, itself, exceeds the limit of an integer?
I do realize I could handle multiplication by looping the 'add' method as many times as one of the UnboundedInt objects
That's not going to be the answer, because it would be too slow if either operand is non-trivial.
There is also an 'add' method which was easy to implement
That's good, because that's going to be part of the solution.
How did you implement that?
Probably following the steps you would do if you had to do it on paper.
You can implement multiplication the same way.
How do you multiply two numbers on paper?
You multiply the number on the left with each digit on the right, one by one.
After you have the single-digit partial products, you add them up, each shifted into its place.
For example, let's say you were to multiply 123456789123 with 234. It would go like this:
123456789123 * 234
------------------
  246913578246          (123456789123 * 2, shifted two places)
   370370367369         (123456789123 * 3, shifted one place)
+   493827156492        (123456789123 * 4)
==================
  28888888654782
Multiplying an UnboundedInt by a 1-digit number should be easy, and you already have the implementation of add, so the complete solution is not far away. To sum it up, what you still need to implement (a sketch follows the list):
Multiply by a 1-digit number
Multiply by 10
Combine the above two to compute the total
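A rough Java sketch of how those three pieces could fit together; multiplyByDigit, multiplyByTen, and digitsLowToHigh are hypothetical helpers standing in for whatever your class provides, not actual project code:

public UnboundedInt multiply(UnboundedInt other) {
    UnboundedInt total = new UnboundedInt("0");
    UnboundedInt shifted = this;                      // holds this * 10^i as i grows
    // Walk the other operand's decimal digits from least to most significant.
    for (int digit : other.digitsLowToHigh()) {       // hypothetical digit iterator
        // Step 1: multiply by a 1-digit number.
        UnboundedInt partial = shifted.multiplyByDigit(digit);
        // Step 3: combine using the add method you already have.
        total = total.add(partial);
        // Step 2: multiply by 10 before handling the next digit.
        shifted = shifted.multiplyByTen();
    }
    return total;
}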

How does array.inject(:+) work? [duplicate]

This question already has answers here:
How does this ruby injection magic work?
(3 answers)
Closed 8 years ago.
I know hash.map(&:key) works like hash.map{|element| element.key} because & calls to_proc on the symbol :key.
But why does array.inject(:+) work same as array.inject{|sum,x| sum + x }?
Thank you.
When the sole argument to inject is not a symbol, it is used as the initial value; otherwise, the symbol names the method used to combine elements, as if to_proc and & had been applied to it and the result used as a block. When there are two arguments, the first one is used as the initial value, and the second one must be a symbol, used as just described.
A drawback of this is that you cannot use a symbol as the initial value of inject, but it is probably considered that there is no use case for that. I don't think this specification is clean.
array.inject(:+)
In the Ruby inject method, when no block is passed in, it checks whether the first argument is a symbol (e.g. :+) naming the method to use. In this case it will recognize :+ as a symbol and know it needs to sum the entire list.
It is possible to use
array.inject(&:+)
which will call to_proc first and is slightly less efficient.
You may want to use
array.inject(0, :+)
to return 0 (instead of nil) when your array has length 0. In this case your first argument is not a symbol, so Ruby will look at the second argument for the method to use.
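An irb-style illustration of the forms discussed above (the array contents are arbitrary):

array = [1, 2, 3, 4]

array.inject { |sum, x| sum + x }  # => 10  explicit block
array.inject(:+)                   # => 10  symbol names the combining method
array.inject(&:+)                  # => 10  to_proc on the symbol, used as a block
array.inject(0, :+)                # => 10  0 is the initial value

[].inject(:+)                      # => nil
[].inject(0, :+)                   # => 0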

Negative array indexing and placement in memory (pointing)

In fortran you can declare an array with any suitable (integral) range, for example:
real*8 array(-10:10)
I believe that fortran, when passing by reference, will always pass around array(1) as the reference, but I'm not sure.
I'm using fortran pointers, and I believe that fortran is pointing the "1st" element address, i.e. array(1), not array(-10). However I'm not sure.
How does Fortran deal with negative array indexing in memory? And is it implementation defined?
Edit: To add a little more detail, I'm passing a malloc'd block from C to fortran by means of using a fortran pointer to point at the the address, which is done by calling a fortran routine from within C. I.e. C goes:
void * pointer = malloc(blockSize*sizeof(double));
fortranpoint_(pointer);
And the fortran point routine looks like:
real*8, target :: block(5, -6:6, 0:0)
real*8, pointer :: array(:,:,:)
entry fortranPoint(block)
array => block
return
The problem is that sometimes when it later tries to access say:
array(1, -6, 0)
I am not sure if this is accessing the address at the beginning of the block or somewhere before it. I now think this is implementation defined, but would like to know the details of each implementation.
Fortran array argument ABI depends on the compiler, and perhaps more crucially, on whether the called procedure has an explicit or implicit interface.
For an implicit interface, typically the address of the first element is passed [1]. In the callee, the procedure then adds an offset depending on how the array dummy argument is declared. E.g. if the array dummy argument is declared somearray(-10:10), then a reference to somearray(x) is calculated as
address_of_first_element_passed_in_to_the_procedure + x + 10
If the procedure has an explicit interface, typically an array descriptor structure is passed rather than the address of the first element. In this structure, the callee can find information on the bounds of each dimension and, of course, a pointer to the actual data, allowing it to calculate the correct offset, similarly to the case of an implicit interface.
[1] Note that this is the first element in memory, that is, the lowest index for each dimension. Not somearray(1) regardless of how the array was declared.
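A small sketch of this implicit-interface behavior and the footnote above (the names are illustrative; real*8 matches the question's style):

program demo
    real*8 a(-10:10)
    a(-10) = 1.0d0
    call show(a)            ! implicit interface: the address of a(-10) is passed
end program

subroutine show(b)
    real*8 b(21)            ! the callee sees one-based indexing over the same memory
    print *, b(1)           ! prints 1.0: b(1) is the caller's a(-10)
end subroutine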
To answer your updated question, for C/Fortran interoperability, use the ISO_C_BINDING feature which is nowadays widely available. This provides a standardized way to pass information between C and Fortran.
If the dummy argument for a regular array in Fortran is declared A(:) (or with more dimensions), the SHAPE is passed, not the specific index range. So the procedure will default to one-indexing. You can override this with a declaration in the procedure of A(-10:), or A(StartIndex:), where StartIndex is another argument.
Fortran pointers do include the index range, but the passing mechanism will be compiler dependent, and code interfacing this to C is likely to be OS and compiler dependent. As already suggested, I'd use a regular array and the ISO C Binding; it is MUCH easier than the old way of figuring out compiler passing mechanisms, and it is standard and portable. If you have a large existing Fortran code, you could write a "glue" Fortran procedure that maps between the regular Fortran variable declarations and the ISO C Binding names. While the types will have formally different names, in practice they will be the same if you select the correct ISO C types. The ISO C Binding has been available for many years now; can you upgrade the compiler on the problem target platform? If not, I'd use a regular Fortran array and either use zero-indexing on the C side or explicitly pass the desired indices as arguments.
There are examples of ISO C Binding usage on other Stack Overflow questions.
The interface to a procedure is explicit if it is declared so that it is known to the compiler in the caller. The simplest way it to place the procedures in a module and "use" the module in the caller. Having explicit interfaces helps avoid bugs since the compiler can check consistency between arguments of the caller and callee. It is a little bit like C header files, only easier.
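For illustration, a minimal sketch of the ISO_C_BINDING approach (the subroutine name and shape are my own; the lower bounds mirror the block from the question):

! C side, for reference:
!   double *p = malloc(5 * 13 * 1 * sizeof(double));
!   fortran_point(p);
subroutine fortran_point(p) bind(c, name="fortran_point")
    use iso_c_binding, only: c_ptr, c_f_pointer, c_double
    type(c_ptr), value :: p
    real(c_double), pointer :: tmp(:,:,:), array(:,:,:)

    ! Associate a Fortran pointer with the C memory; lower bounds default to 1.
    call c_f_pointer(p, tmp, [5, 13, 1])
    ! Remap to the desired lower bounds (Fortran 2003 pointer bounds remapping).
    array(1:, -6:, 0:) => tmp
end subroutine fortran_point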

id values of different variables in python 3

I am able to understand immutability in Python (surprisingly simple, too). Let's say I assign a number:
x = 42
print(id(x))
print(id(42))
On both counts, the value I get is
505494448
My question is: does the Python interpreter allot ids to all the numbers, letters, and True/False in memory before the environment loads? If it doesn't, how are the ids kept track of? Or am I looking at this the wrong way? Can someone explain, please?
What you're seeing is an implementation detail (an internal optimization) called interning. This is a technique (used by implementations of a number of languages, including Java and Lua) which aliases names or variables so that they refer to a single object instance where that's possible or feasible.
You should not depend on this behavior. It's not part of the language's formal specification and there are no guarantees that separate literal references to a string or integer will be interned nor that a given set of operations (string or numeric) yielding a given object will be interned against otherwise identical objects.
The CPython implementation does include a cache of small integers (from -5 through 256 in current versions) as statically instantiated immutable objects. I suspect that other very high-level language runtime libraries are likely to include similar optimizations: small integers are used very frequently by most non-trivial fragments of code.
In terms of how such things are implemented ... for strings and larger integers it would make sense for Python to maintain these as dictionaries. Thus any expression yielding an integer (and perhaps even floats) and strings (at least sufficiently short strings) would be hashed, looked up in the appropriate (internal) object dictionary, added if necessary and then returned as references to the resulting object.
You can do your own similar interning of any sort of custom object you like by wrapping instantiation in a lookup against your own class-level dictionary.
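A short demonstration of both points; the Symbol class is a made-up example, not a standard library facility:

# CPython's small-integer cache: even runtime-created small ints are shared.
print(int("256") is int("256"))      # True: both are the cached 256 object
print(int("10000") is int("10000"))  # False: large ints are created fresh

# Hand-rolled interning: a class-level dictionary maps keys to instances.
class Symbol:
    _interned = {}

    def __new__(cls, name):
        try:
            return cls._interned[name]
        except KeyError:
            obj = super().__new__(cls)
            obj.name = name
            cls._interned[name] = obj
            return obj

print(Symbol("foo") is Symbol("foo"))  # True: same interned instance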
