need of pointer objects in objective c - ios

A very basic question, but really important for understanding the concepts.
In C++ or C, we usually don't use pointer variables to store values; that is, values are stored simply as-is:
int a = 10;
But here in the iOS SDK, in Objective-C, most of the objects we use are initialized by denoting a pointer with them, as in:
NSArray *myArray = [NSArray array];
So the question arises: what are the benefit and need of using pointer objects (that's what we call them here; if that is not correct, please do tell)?
Also, I sometimes get confused with memory allocation fundamentals when using pointer objects. Where can I find good explanations?

In C++ or C, we usually don't use pointer variables to store values
I would take that "or C" part out. C++ programmers do frown upon the use of raw pointers, but C programmers don't. C programmers love pointers and regard them as an inevitable silver bullet solution to all problems. (No, not really, but pointers are still very frequently used in C.)
But here in the iOS SDK, in Objective-C, most of the objects we use are initialized by denoting a pointer with them
Oh, look closer:
most of the objects
Even closer:
objects
So you are talking about Objective-C objects, amirite? (Disregard the subtlety that the C standard essentially describes all values and variables as an "object".)
It's really just Objective-C objects that are always pointers in Objective-C. Since Objective-C is a strict superset of C, all of the C idioms and programming techniques still apply when writing iOS apps (or OS X apps, or any other Objective-C based program for that matter). It's pointless, superfluous, wasteful, and as such, it is even considered an error to write something like
int *i = malloc(sizeof(int));
for (*i = 0; *i < 10; ++*i)
    /* ... */;
free(i);
just because we are in Objective-C land. Primitives (or more correctly "plain old datatypes" with C++ terminology) still follow the "don't use a pointer if not needed" rule.
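For contrast, the idiomatic version keeps the counter as a plain automatic variable (a trivial sketch):

for (int i = 0; i < 10; ++i) {
    /* loop body */
}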
what are the benefit and need of using pointer objects
So, why they are necessary:
Objective-C is an object-oriented and dynamic language. These two, strongly related properties of the language make it possible for programmers to take advantage of technologies such as polymorphism, duck-typing and dynamic binding (yes, these are hyperlinks, click them).
The way these features are implemented makes it necessary for all objects to be represented by a pointer to them. Let's see an example.
A common task when writing a mobile application is retrieving some data from a server. Modern web-based APIs use the JSON data exchange format for serializing data. This is a simple textual format which can be parsed (for example, using the NSJSONSerialization class) into various types of data structures and their corresponding collection classes, such as an NSArray or an NSDictionary. This means that the JSON parser class/method/function has to return something generic, something that can represent both an array and a dictionary.
So now what? We can't return a non-pointer NSArray or NSDictionary struct (Objective-C objects are really just plain old C structs under the hood on all platforms I know Objective-C works on), because they are of different sizes, they have different memory layouts, etc. The compiler couldn't make sense of the code. That's why we return a pointer to a generic Objective-C object, of type id.
The C standard mandates that pointers to structs (and as such, to objects) have the same representation and alignment requirements (C99 §6.2.5/27), i.e. a pointer to any struct can be safely cast to a pointer to any other struct. Thus, this approach is correct, and we can now return any object. Using runtime introspection, it is also possible to determine the exact type (class) of the object dynamically and then use it accordingly.
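Here is a minimal, illustrative sketch of that pattern: NSJSONSerialization really does return id, and runtime introspection recovers the concrete class.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSData *data = [@"[1, 2, 3]" dataUsingEncoding:NSUTF8StringEncoding];
        // The parser's return type is id: it may be an NSArray or an NSDictionary.
        id parsed = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
        if ([parsed isKindOfClass:[NSArray class]]) {
            NSLog(@"array with %lu elements", (unsigned long)[parsed count]);
        } else if ([parsed isKindOfClass:[NSDictionary class]]) {
            NSLog(@"dictionary with %lu keys", (unsigned long)[parsed count]);
        }
    }
    return 0;
}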
And why they are convenient or better (in some aspects) than non-pointers:
Using pointers, there is no need to pass around multiple copies of the same object. Creating many copies (for example, each time an object is assigned or passed to a function) can be slow and lead to performance problems. A moderately complex object, such as a view or a view controller, can have dozens of instance variables, so a single instance may measure literally hundreds of bytes. If a function that takes an object type is called thousands or millions of times in a tight loop, re-assigning and copying it is quite painful (for the CPU, anyway); it's much easier and more straightforward to pass in a pointer to the object, which is smaller in size and hence faster to copy over. Furthermore, Objective-C, being a reference-counted language, even kind of "discourages" excessive copying anyway: retaining and releasing are preferred over explicit copying and deallocation.
Also, I sometimes get confused with memory allocation fundamentals when using pointer objects
Then you are most probably confused enough even without pointers. Don't blame it on the pointers; it's rather a programmer error ;-)
So here's...
...the official documentation and memory management guide by Apple;
...the earliest related Stack Overflow question I could find;
...something you should read before trying to continue Objective-C programming #1 (i.e. learn C first);
...something you should read before trying to continue Objective-C programming #2;
...something you should read before trying to continue Objective-C programming #3;
...and an old Stack Overflow question regarding C memory management rules, techniques and idioms;
Have fun! :-)

Anything more complex than an int or a char or similar is usually passed as
a pointer even in C. In C you could, of course, pass a struct of data
from function to function by value, but this is rarely seen.
Consider the following code:
struct some_struct {
    int an_int;
    char a_char[1234];
};

void function2(struct some_struct s);

void function1(void)
{
    struct some_struct s;
    function2(s);
}

void function2(struct some_struct s)
{
    /* do something with some_struct s */
}
The some_struct data s will be put on the stack frame of function1. When function2
is called, the data is copied and put on the stack again for use in function2.
This requires the data to be on the stack twice, as well as the copying itself,
which is not very efficient. Also note that changing the values
of the struct in function2 will not affect the struct in function1; they
are different data in memory.
Instead consider the following code:
#include <stdlib.h>

struct some_struct {
    int an_int;
    char a_char[1234];
};

void function2(struct some_struct *s);

void function1(void)
{
    struct some_struct *s = malloc(sizeof(struct some_struct));
    function2(s);
    free(s);
}

void function2(struct some_struct *s)
{
    /* do something with some_struct *s */
}
The some_struct data will now be put on the heap instead of the stack. Only
a pointer to this data is put on the stack for function1; in the call to
function2, just that pointer is copied onto function2's stack frame. This is a
lot more efficient than the previous example. Also note that any changes
function2 makes to the struct's data will now affect the struct seen by
function1; they are the same data in memory.
These are basically the fundamentals on which higher-level programming languages
such as Objective-C are built, and the reason such languages benefit from being
built this way.

The benefit and need of a pointer is that it behaves like a mirror: it reflects what it points to. One main place where pointers are very useful is sharing data between functions or methods. Local variables are not guaranteed to keep their values after a function returns, and they're visible only inside their own function, yet you may still want to share data between functions or methods. You can use return, but that works only for a single value. You can also use global variables, but if you store all your data that way you soon have a mess. So we need some way to share data between functions or methods, and that is where pointers come to our rescue. You just create the data and pass around the memory address (the unique ID) pointing to it. Using this pointer, the data can be accessed and altered in any function or method. In terms of writing modular code, that's the most important purpose of a pointer: to share data in many different places in a program.
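A tiny C sketch of that idea (the names are made up for illustration):

#include <stdio.h>

/* Both calls below work on the same memory: the pointer shares the data,
 * so updates made inside the function are visible to the caller. */
static void increment(int *counter) {
    ++*counter;
}

int main(void) {
    int hits = 0;
    increment(&hits);
    increment(&hits);
    printf("%d\n", hits); /* prints 2: both increments touched the same int */
    return 0;
}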

The main difference between C and Objective-C in this regard is that arrays in Objective-C are commonly implemented as objects. (Arrays are always implemented as objects in Java, BTW, and C++ has several common classes resembling NSArray.)
Anyone who has considered the issue carefully understands that "bare" C-like arrays are problematic -- awkward to deal with and very frequently the source of errors and confusion. ("An array 'decays' to a pointer" -- what is that supposed to mean, anyway, other than to admit in a backhanded way "Yes, it's confusing"??)
Allocation in Objective-C is a bit confusing, in large part because it's in transition. The old manual reference-counting scheme was easy enough to understand (if not so easy to deal with in implementations), but ARC, while simpler to deal with, is far harder to truly understand, and understanding both simultaneously is even harder. But both are easier to deal with than C, where "zombie pointers" are almost a given due to the lack of reference counting. (C may seem simpler, but only because you don't attempt things as complex as you would in Objective-C, given the difficulty of controlling it all.)

You always use a pointer when referring to something on the heap, and sometimes, but usually not, when referring to something on the stack.
Since Objective-C objects are always allocated on the heap (with the exception of Blocks, but that is orthogonal to this discussion), you always use pointers to Objective-C objects. Both the id and Class types are really pointers.
Where you don't use pointers is with certain primitive types and simple structures. NSPoint, NSRange, int, NSUInteger, etc. all typically live on the stack, and you typically do not use pointers to them.
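A short illustration of that split (the variable names are just for show):

#import <Foundation/Foundation.h>

void example(void) {
    NSRange range = NSMakeRange(0, 3);  // struct: a plain stack value, no pointer
    NSUInteger count = range.length;    // primitive: a plain stack value as well
    // Objective-C objects are always reached through a pointer into the heap:
    NSString *text = [NSString stringWithFormat:@"%lu items", (unsigned long)count];
    NSLog(@"%@", text);
}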

Related

How does Rust store types at runtime?

A u32 takes 4 bytes of memory; a String takes 3 pointer-sized integers on the stack (for the location, size, and reserved capacity), plus some amount on the heap.
This to me implies that Rust doesn't know, when the code is executed, what type is stored at a particular location, because that knowledge would require more memory.
But at the same time, does it not need to know what type is stored at 0xfa3d2f10, in order to be able to interpret the bytes at that location? For example, to know that the next bytes form the spec of a String on the heap?
How does Rust store types at runtime?
It doesn't, generally.
Rust doesn't know, when the code is executed, what type is stored at a particular location
Correct.
does it not need to know what type is stored
No; the bytes in memory should be correct, and the rest of the code assumes as much. The offsets of fields in a struct are baked into the generated machine code.
When does Rust store something like type information?
When performing dynamic dispatch, a fat pointer is used. This is composed of a pointer to the data and a pointer to a vtable, a collection of functions that make up the interface in question. The vtable could be considered a representation of the type, but it doesn't have a lot of the information that you might think goes into "a type" (unless the trait requires it). Dynamic dispatch isn't super common in Rust as most people prefer static dispatch when it's possible, but both techniques have their benefits.
There are also concepts like TypeId, which can represent one specific type, but only for a subset of types. It also doesn't provide much capability besides answering "are these the same type or not?".
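A small sketch of both mechanisms (illustrative; the size relation holds on mainstream targets, where a trait-object reference is a data pointer plus a vtable pointer):

use std::any::{Any, TypeId};
use std::mem::size_of;

trait Speak {
    fn speak(&self) -> String;
}

struct Dog;

impl Speak for Dog {
    fn speak(&self) -> String {
        "woof".to_string()
    }
}

fn main() {
    // Fat pointer: a &dyn reference carries a data pointer and a vtable pointer.
    assert_eq!(size_of::<&dyn Speak>(), 2 * size_of::<&Dog>());

    let d = Dog;
    let s: &dyn Speak = &d; // dynamic dispatch goes through the vtable
    println!("{}", s.speak());

    // TypeId can only answer "same type or not?".
    assert_eq!(TypeId::of::<Dog>(), d.type_id());
    assert_ne!(TypeId::of::<u32>(), TypeId::of::<i32>());
}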
Isn't this all terribly brittle?
Yes, it can be, which is one of the things that makes Rust so interesting.
In a language like C or C++, there's not much that safeguards the programmer from making dumb mistakes that go out and mess up those bytes floating around in memory. Making those mistakes is what leads to memory-safety bugs. Instead of interpreting your password as a password, it's interpreted as your username and printed out to an attacker (oops!).
Rust provides safeguards against that in the form of a strong type system and tools like the borrow checker, but still all done at compile time. Unsafe Rust enables these dangerous tools with the tradeoff that the programmer is now expected to uphold all the guarantees themselves, much like if they were writing C or C++ again.
See also:
When does type binding happen in Rust?
How does Rust implement reflection?
How do I print the type of a variable in Rust?
How to introspect all available methods and members of a Rust type?

Delphi object without Create? [duplicate]

In Delphi, sane people use a class to define objects.
In Turbo Pascal for Windows we used object, and today you can still use object to create an object.
The difference is that an object lives on the stack and a class lives on the heap.
And of course object is deprecated.
Putting all that aside:
is there a benefit to be had, speed-wise, by using object instead of class?
I know that object is broken in Delphi 2009, but I've got a special use case 1) where speed matters, and I'm trying to find out whether using object will make my thing faster without making it buggy.
This code base is in Delphi 7, but I may port it to Delphi 2007; I haven't decided yet.
1) Conway's game of life
Long comment
Thanks all for pointing me in the right direction.
Let me explain a bit more. I'm trying to do a faster implementation of hashlife; see also here or here for simple source code.
The current record holder is golly, but golly uses a straight translation of Bill Gosper's original Lisp code (which is brilliant as an algorithm, but not optimized at the micro level at all). Hashlife enables you to calculate a generation in O(log(n)) time.
It does this by using a space/time trade-off. For this reason hashlife needs a lot of memory; gigabytes are not unheard of. In return you can calculate generation 2^128 (340282366920938463463374607431768211456) from generation 2^127 (170141183460469231731687303715884105728) in O(1) time.
Because hashlife needs to compute hashes for all sub-patterns that occur in a larger pattern, allocation of objects needs to be fast.
Here's the solution I've settled upon:
Allocation optimization
I allocate one big block of physical memory (user-settable), let's say 512 MB. Inside this blob I allocate what I call cheese stacks. This is a normal stack where I push and pop, but a pop can also take an element from the middle of the stack. When that happens I mark the slot on the free list (which is itself a normal stack). When pushing I check the free list first; if nothing is free, I push as normal. I'll be using records, as advised; that looks like the solution with the least overhead.
Because of the way hashlife works, very little popping takes place and there are a lot of pushes. I keep separate stacks for structures of different sizes, making sure to keep memory access aligned on 4/8/16-byte boundaries.
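A rough Delphi sketch of such a pool with a free list (my reconstruction for illustration, not the actual code; Data is a placeholder field):

type
  TCell = record
    Data: Integer;      // placeholder payload; real cells hold hashlife nodes
    NextFree: Integer;  // index of the next free slot, -1 if none
  end;

var
  Pool: array of TCell; // carved out of the one big preallocated block
  Top: Integer = 0;     // first never-used slot
  FreeHead: Integer = -1;

function Alloc: Integer;
begin
  if FreeHead >= 0 then
  begin
    Result := FreeHead;                 // reuse a slot freed from the middle
    FreeHead := Pool[Result].NextFree;
  end
  else
  begin
    Result := Top;                      // nothing on the free list: push as normal
    Inc(Top);
  end;
end;

procedure Release(I: Integer);
begin
  Pool[I].NextFree := FreeHead;         // mark the slot on the free list
  FreeHead := I;
end;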
Other optimizations
recursion removal
cache optimization
use of inline
precalculation of hashes (akin to rainbow tables)
detection of pathological cases and use of fall-back algorithm
use of GPU
For normal OOP programming, you should always use the class kind. You'll have the most powerful object model in Delphi, including interfaces and generics (in later Delphi versions).
1. Records, pointers and objects
Records can be evil (a slow hidden copy if you forget to declare a parameter as const, hidden slow record cleanup code, a FillChar that would turn any string in the record into a memory leak...), but they are sometimes very convenient for accessing a binary structure (e.g. some "smallish value") via a pointer.
A dynamic array of tiny records (e.g. with one integer and one double field) will be much faster than a TList of small classes; with our TDynArray wrapper, you will have high-level access to the records, with serialization, sorting, hashing and such.
If you use pointers, you must know what you are doing. It's definitely preferable to stick with classes, and with TPersistent if you want to use the magical "VCL component ownership model".
Inheritance is not allowed for records. You'll need either to use a "variant record" (using the case keyword in its type definition) or to use nested records. When working with a C-like API, you'll sometimes have to use object-oriented structures; using nested records or variant records for those is IMHO much less clear than the good old object inheritance model.
2. When to use object
But there are some places where objects are a good way of accessing already existing data.
The object model is even better than the new record model in one respect: it handles simple inheritance.
In a blog entry last summer, I posted some cases where it still makes sense to use objects:
A memory mapped file, which I want to parse very quickly: a pointer to such an object is just great, and you still have methods at hand; I use this for TFileHeader or TFileInfo which map the .zip header, in SynZip.pas;
A Win32 structure, as defined by an API call, into which I put handy methods for easy access to the data (for that you could use record, but if there is some object orientation in the struct - which is very common - you'd have to nest records, which is not very handy);
A temporary structure defined on the stack, just used during a procedure: I use this for TZStream in SynZip.pas, or for our RTTI-related classes, which map the Delphi-generated RTTI in an object-oriented way rather than the function/procedure-oriented TypeInfo way. By mapping the RTTI memory content directly, our code is faster than using the new RTTI classes created on the heap. We don't instantiate anything on the heap, which, for an ORM framework like ours, is good for speed. We need a lot of RTTI info, but we need it quick, and we need it directly.
3. How object implementation is broken in modern Delphi
The fact that object is broken in modern Delphi is a shame, IMHO.
Normally, if you define a record on the stack, containing some reference-counted variables (like a string), it will be initialized by some compiler magic code, at the begin level of the method/function:
type
  TObj = object
    Int: integer;
    Str: string;
  end;

procedure Test;
var
  O: TObj;
begin // here, an _InitializeRecord(@O, TypeInfo(TObj)) call is made
  O.Str := 'test';
  // (...)
end; // here, a _FinalizeRecord(@O, TypeInfo(TObj)) call is made
Those _InitializeRecord and _FinalizeRecord will "prepare" then "release" the O.Str variable.
With Delphi 2010, I found out that this _InitializeRecord() call was sometimes not made.
If the object type has only non-public fields, the hidden calls are sometimes not generated by the compiler.
Just build the same source again, and there they are...
The only solution I found out was using the record keyword instead of object.
So here is how the resulting code looks like:
/// used to store and retrieve Words in a sorted array
// - is defined either as an object or as a record, due to a bug
// in the Delphi 2010 compiler (at least): this structure is not initialized
// if defined as a record on the stack, but it is if defined as an object
TSortedWordArray = {$ifdef UNICODE}record{$else}object{$endif}
public
  Values: TWordDynArray;
  Count: integer;
  /// add a value into the sorted array
  // - returns the index of the newly inserted value in the Values[] array
  // - returns -(foundindex+1) if this value is already in the Values[] array
  function Add(aValue: Word): PtrInt;
  /// return the index of the supplied value in the Values[] array
  // - returns -1 if not found
  function IndexOf(aValue: Word): PtrInt; {$ifdef HASINLINE}inline;{$endif}
end;
The {$ifdef UNICODE}record{$else}object{$endif} is awful... but the code-generation error hasn't occurred since.
The resulting modifications to the source code are not huge, but a bit disappointing. I also found out that older versions of the IDE (e.g. Delphi 6/7) are not able to parse such a declaration, so the class hierarchy will be broken in the editor... :(
Backward compatibility should include regression tests. A lot of Delphi users stay with this product because of their existing code. Breaking features are very problematic for Delphi's future, IMHO: if you have to rewrite a lot of code, why shouldn't you just switch the project to C# or Java?
Object was not the Delphi 1 method of setting up objects; it was the short-lived Turbo Pascal method of setting up objects, which was replaced by the Delphi TObject model in Delphi 1. It was kept around for backwards compatibility, but it should be avoided for a few reasons:
As you noted, it's broken in more recent versions. And AFAIK there are no plans to fix it.
It's a conceptually wrong object model. The entire point of object-oriented programming, the one thing that really distinguishes it from procedural programming, is Liskov substitution (inheritance and polymorphism), and inheritance and value types don't mix.
You lose support for a lot of features that require TObject descendants.
If you really need value types that don't need to be dynamically allocated and initialized, you can use records instead. You can't inherit from them, but you can't do that very well with object either so you're not losing anything here.
As for the rest of the question, there aren't all that many speed benefits. The TObject model is plenty fast, especially if you're using the FastMM memory manager to speed up creation and destruction of objects, and if your objects contain lots of fields they can even be faster than records in a lot of cases, because they're passed by reference and don't have to be copied around for each function call.
When given a choice between "fast and possibly broken" and "fast and correct," always choose the latter.
Old-style objects offer no speed incentive over plain old records, so wherever you might be tempted to use old-style objects, you can use records instead without the risk of having uninitialized compiler-managed types or broken virtual methods. If your version of Delphi doesn't support records with methods, then just use standalone procedures instead.
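For instance, a record with methods (a minimal sketch; this needs Delphi 2006 or later):

type
  TPoint3 = record
    X, Y, Z: Double;
    function Length: Double;  // a stack-allocated value type, yet with behaviour
  end;

function TPoint3.Length: Double;
begin
  Result := Sqrt(X * X + Y * Y + Z * Z);
end;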
Way back, in older versions of Delphi that did not support records with methods, using object was the way to get your objects allocated on the stack. Very occasionally that would yield worthwhile performance benefits. Nowadays record is better. The only feature missing from record is the ability to inherit from another record.
You give up a lot when you change from class to record so only consider it if the performance benefits are overwhelming.

unsafe casting in F# with zero copy semantics

I'm trying to achieve a static-cast-like coercion that doesn't result in copying of any data.
A naive static cast does not work
let pkt = byte_buffer :> PktHeader
error FS0193: Type constraint mismatch. The type 'byte[]' is not compatible with the type 'PktHeader'
where the packet is initially held in a byte array because of the way System.Net.Sockets.Socket.Receive() is defined.
The low level packet struct is defined something like this
[<Struct; StructLayout(LayoutKind.Explicit)>]
type PktHeader =
    [<FieldOffset(0)>] val mutable field1: uint16
    [<FieldOffset(2)>] val mutable field2: uint16
    [<FieldOffset(4)>] val mutable field3: uint32
    // .... many more fields follow ....
Efficiency is important in this real world scenario because wasteful copying of data could rule out F# as an implementation language.
How do you achieve zero copy efficiencies in this scenario?
EDIT on Nov 29
my question was predicated on the implicit belief that a C/C++/C#-style unsafe static cast is a useful construct, as if this were self-evident. However, on second thought, this kind of cast is not idiomatic in F#, since it is inherently an imperative-language technique fraught with peril. For this reason I've accepted the answer by V.B., in which SBE/FlatBuffers-style data access is promulgated as best practice.
A pure F# approach for conversion
open System.Runtime.InteropServices

let convertByteArrayToStruct<'a when 'a : struct> (byteArr : byte[]) =
    let handle = GCHandle.Alloc(byteArr, GCHandleType.Pinned)
    let structure = Marshal.PtrToStructure (handle.AddrOfPinnedObject(), typeof<'a>)
    handle.Free()
    structure :?> 'a
This is a minimal example, but I'd recommend introducing some checks on the length of the byte array because, as written, it will produce undefined results if you give it a byte array which is too short. You could check against Marshal.SizeOf(typeof<'a>).
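For example (an illustrative sketch; here the buffer is just zero-filled instead of coming from a real Socket.Receive call):

let buffer : byte[] = Array.zeroCreate 1024
// in the real program, System.Net.Sockets.Socket.Receive(buffer) fills this in
if buffer.Length >= Marshal.SizeOf(typeof<PktHeader>) then
    let header = convertByteArrayToStruct<PktHeader> buffer
    printfn "field1 = %d" header.field1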
There is no pure F# solution to do a less safe conversion than this (and this is already an approach prone to runtime failure). Alternative options could include interop with C# to use unsafe and fixed to do the conversion.
Ultimately, though, you are asking for a way to subvert the F# type system, which is not really what the language is designed for. One of the principal advantages of F# is the power of the type system and its ability to help you produce statically verifiable code.
F# and very low-level performance optimizations are not best friends, but then... some smart people do magic even with Java, which has neither value types nor real generic collections for them.
1) I am a big fan of the flyweight pattern lately. If your architecture allows for it, you could wrap a byte array and access struct members via offsets (see the sketch at the end of this answer). A C# example is here. SBE/FlatBuffers even have tools to generate a wrapper automatically from a definition.
2) If you could stay within unsafe context in C# to do the work, pointer casting is very easy and efficient. However, that requires pinning the byte array and keeping its handle for later release, or staying within fixed keyword. If you have many small ones without a pool, you could have problems with GC.
3) The third option is abusing the .NET type system and casting a byte array with IL like this (it could be coded in F#, if you insist :) ):

static T UnsafeCast(object value) {
    ldarg.1 //load type object
    ret //return type T
}
I tried this option and even have a snippet somewhere if you need it, but this approach makes me uncomfortable because I do not understand its consequences for the GC. We have two objects backed by the same memory; what happens when one of them is collected? I was going to ask a new question on SO about this detail and will post it soon.
The last approach could be good for arrays of structs, but for a single struct it will box it or copy it anyway. Since structs live on the stack and are passed by value, you will probably get better results just by casting a pointer to byte[] in unsafe C#, or by using Marshal.PtrToStructure as in another answer here, and then copying by value. Copying is not the worst thing, especially on the stack; allocation of new objects and GC pressure are the enemy. So you need pooled byte arrays, and that will add much more to overall performance than your struct-casting issue.
But if your struct is very big, option 1 could still be better.
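And here is the flyweight sketch promised in option 1 (illustrative only; it assumes the PktHeader layout from the question and little-endian byte order):

open System

type PktHeaderView(buf: byte[]) =
    // each member reads directly out of the wrapped buffer at its field offset
    member this.Field1 = BitConverter.ToUInt16(buf, 0)
    member this.Field2 = BitConverter.ToUInt16(buf, 2)
    member this.Field3 = BitConverter.ToUInt32(buf, 4)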

What is the idiomatic way to write a linked list with a tail pointer?

As a learning project for Rust, I have a very simple (working, if incomplete) implementation of a singly linked list. The declaration of the structs looks like this:
type NodePtr<T> = Option<Box<Node<T>>>;

struct Node<T> {
    data: T,
    next: NodePtr<T>,
}

pub struct LinkedList<T> {
    head: NodePtr<T>,
}
Implementing size and push_front were both reasonably straightforward, although doing size iteratively did involve some "fighting with the borrow checker."
The next thing I wanted to try was adding a tail pointer to the LinkedList structure, to enable an efficient push_back operation. Here I've run into a bit of a wall. At first I attempted to use Option<&Box<Node<T>>> and then Option<&Node<T>>. Both of those led to sprinkling 'a lifetime annotations everywhere, but still eventually being unable to promise the lifetime checker that tail would be valid.
I have since come to the tentative conclusion that it is impossible with these definitions: that there is no way to guarantee to the compiler that tail would be valid in the places that I think it is valid. The only way I can possibly accomplish this is to have all my pointers be Rc<_> or Rc<RefCell<_>>, because those are the only safe ways to have two things pointing at the same object (the final node).
My question: is this the correct conclusion? More generally: what is the idiomatic Rust solution for unowned pointers inside data structures? To my mind, reference counting seems awfully heavy-weight for something so simple, so I think I must be missing something. (Or perhaps I just haven't gotten my mind into the right mindset for memory safety yet.)
Yes, if you want to write a singly-linked-list with a tail-pointer you have three choices:
Safe and Mutable: Use NodePtr = Option<Rc<RefCell<Node<T>>>>
Safe and Immutable: Use NodePtr = Option<Rc<Node<T>>>
Unsafe and Mutable: Use tail: *mut Node<T>
The *mut is going to be more efficient, and it's not like the Rc is actually going to prevent you from producing completely nonsensical states (as you correctly deduced). It's just going to guarantee that they don't cause segfaults (and with RefCell it may still cause runtime crashes...).
Ultimately, any linked list more complex than vanilla singly-linked has an ownership story that's too complex to encode in Rust's ownership system safely and efficiently (it's not a tree). I personally favour just embracing the unsafety at that point and leaning on unit tests to get to the finish line in one piece (why write a suboptimal data structure...?).
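A minimal sketch of the third option (my illustration, not the asker's code): keep head as the owner and tail as a raw pointer into the last box.

use std::ptr;

struct Node<T> {
    data: T,
    next: Option<Box<Node<T>>>,
}

pub struct LinkedList<T> {
    head: Option<Box<Node<T>>>,
    tail: *mut Node<T>, // null when the list is empty
}

impl<T> LinkedList<T> {
    pub fn new() -> Self {
        LinkedList { head: None, tail: ptr::null_mut() }
    }

    pub fn push_back(&mut self, data: T) {
        let mut node = Box::new(Node { data, next: None });
        let raw: *mut Node<T> = &mut *node;
        if self.tail.is_null() {
            self.head = Some(node);            // first element: head owns it
        } else {
            // SAFETY: tail points into a Box owned by this list,
            // so it stays valid as long as we maintain that invariant.
            unsafe { (*self.tail).next = Some(node); }
        }
        self.tail = raw;
    }
}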

id values of different variables in python 3

I am able to understand immutability in Python (it's surprisingly simple, too). Let's say I assign a number:
x = 42
print(id(x))
print(id(42))
On both counts, the value I get is
505494448
My question is: does the Python interpreter allot IDs to all the numbers, letters, and True/False in memory before the environment loads? If it doesn't, how are the IDs kept track of? Or am I looking at this the wrong way? Can someone explain, please?
What you're seeing is an implementation detail (an internal optimization) called interning. This is a technique (used by implementations of a number of languages, including Java and Lua) which aliases names or variables so that they refer to a single object instance where that's possible or feasible.
You should not depend on this behavior. It's not part of the language's formal specification and there are no guarantees that separate literal references to a string or integer will be interned nor that a given set of operations (string or numeric) yielding a given object will be interned against otherwise identical objects.
I've heard that the CPython implementation does include a set of small integers (currently -5 through 256) as statically instantiated immutable objects. I suspect that other very high-level language runtimes are likely to include similar optimizations: small integers are used very frequently by most non-trivial fragments of code.
In terms of how such things are implemented ... for strings and larger integers it would make sense for Python to maintain these as dictionaries. Thus any expression yielding an integer (and perhaps even floats) and strings (at least sufficiently short strings) would be hashed, looked up in the appropriate (internal) object dictionary, added if necessary and then returned as references to the resulting object.
You can do your own similar interning of any sort of custom object you like by wrapping instantiation in your own class-static dictionary.
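A quick (CPython-specific, illustrative) way to watch interning in action:

import sys

x = 42
print(id(x) == id(42))          # True: small ints (-5..256) are cached objects

n = 10
big = n ** 20
print(id(big) == id(n ** 20))   # False: each large int is built fresh at runtime

# Explicit interning is available for strings:
a = sys.intern("key-" + str(n))
b = sys.intern("key-" + str(n))
print(a is b)                   # True: sys.intern returns one shared object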
