Do references take up memory?

I am currently a month or two into learning Rust and wanted to write a small program to learn with, which consists of objects that hold a struct containing two u8s. My question is: whenever that struct is requested immutably, would cloning/copying it be more memory efficient than handing out a reference, since the reference would presumably take up 8 bytes on a 64-bit system while the struct itself only uses 2 bytes?

A struct consisting of two u8s is basically a u16. Since u16 implements Copy, you probably want to make your struct implement Copy as well. For such small types, a reference will always be larger than the type itself.
#[derive(Copy, Clone)]
struct U16 {
    a: u8,
    b: u8,
}
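A quick way to check this (a minimal sketch; the reference size is 8 bytes on a typical 64-bit target):

use std::mem::size_of;

#[derive(Copy, Clone)]
struct U16 {
    a: u8,
    b: u8,
}

fn main() {
    println!("struct:    {} bytes", size_of::<U16>());  // 2
    println!("reference: {} bytes", size_of::<&U16>()); // 8 on a 64-bit target
}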

Related

Rust - Why such a big difference in memory usage between malloc/alloc and more 'idiomatic' approaches

I've been using and modifying this library https://github.com/sile/patricia_tree
One thing that bothered me a bit was how much unsafe is used in node.rs; in particular, the node is defined as just a pointer to some heap location. When running the first benchmark listed on the readme page (wikipedia inputs), the PatriciaSet uses ~700mb (PatriciaSet just holds a Node at its root):
pub struct Node<V> {
    // layout:
    // all these fields accessed with ptr.offset
    // - flags: u8
    // - label_len: u8
    // - label: [u8; label_len]
    // - value: Option<V>
    // - child: Option<Node<V>>
    // - sibling: Option<Node<V>>
    ptr: *mut u8,
    _value: PhantomData<V>,
}
and uses malloc for allocation:
let ptr = unsafe { libc::malloc(block_size) } as *mut u8;
I was told this memory is not aligned properly, so I tried to switch to the new alloc API and use Layout/alloc. This is also still not aligned properly; it just seems to 'work' (full PR):
let layout = Layout::array::<u8>(block_size).expect("Failed to get layout");
let ptr = unsafe { alloc::alloc(layout) as *mut u8 };
This single change, which also holds the layout in the block of memory pointed to by ptr, caused memory consumption to go up 40% under the performance tests for very large trees. The layout type is just 2 words wide, so this was unexpected. For the same tests this uses closer to ~1000mb (compared to previous 700)
In another attempt, I tried to remove most of the unsafe and go with something a bit more Rust-y (full PR here):
pub struct Node<V> {
    value: Option<V>,
    child: Option<*mut Node<V>>,
    sibling: Option<*mut Node<V>>,
    label: SmallVec<[u8; 10]>,
    _value: PhantomData<V>,
}
creating the node in a manner you might expect in Rust:
let child = child.map(|c| Box::into_raw(Box::new(c)));
let sibling = sibling.map(|c| Box::into_raw(Box::new(c)));
Node {
    value,
    child,
    sibling,
    label: SmallVec::from_slice(label),
    _value: PhantomData,
}
Performance-wise, it's about equivalent to the original unmodified library, but its memory consumption appears to be not much better than just inserting every single item into a HashSet, using around ~1700mb for the first benchmark.
The final data structure that uses Node is a compressed trie, or 'patricia tree'. No other code was changed other than the structure of the Node and the implementations of some of its methods that idiomatically fall out of those changes.
I was hoping someone could tip me off as to what exactly is causing such a drastic difference in memory consumption between these implementations. In my mind, they should be about equivalent. They all allocate around the same number of fields with about the same widths. The first, unsafe one is able to store a dynamically sized label in-line, so that could be one reason. But SmallVec should be able to do something similar for smaller label sizes (using just Vec was even worse).
Looking for any suggestions or help on why the end results are so different. If curious, the code to run these is here, though it is spread across the original author's repo and my own.
I'd be open to hearing about tools for investigating differences like these as well!
You're seeing increased memory use for a couple of reasons. I'll assume a standard 64-bit Unix system.
First, a pointer is 8 bytes. An Option<*mut Node<V>> is 16 bytes because raw pointers aren't subject to the nullable-pointer optimization that happens with references. References can never be null, so the compiler can represent an Option<&'a V> as a null pointer when the value is None and a regular pointer when it's Some, but raw pointers can legitimately be null, so that can't happen here. The enum therefore needs separate space for its discriminant, and alignment rounds that up to another full 8 bytes, so you use 16 bytes per pointer field here.
The easiest and most type-safe way to deal with this is just to use Option<NonNull<Node<V>>>. Doing that drops your structure by 16 bytes total.
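A quick way to see this (a minimal sketch; sizes are as reported on a typical 64-bit target):

use std::ptr::NonNull;

struct Node;

fn main() {
    println!("{}", std::mem::size_of::<Option<&Node>>());         // 8: references get the nullable optimization
    println!("{}", std::mem::size_of::<Option<*mut Node>>());     // 16: raw pointers may be null
    println!("{}", std::mem::size_of::<Option<NonNull<Node>>>()); // 8: NonNull restores the optimization
}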
Second, your SmallVec is 32 bytes in size. They avoid needing a heap allocation in some cases, but they aren't, despite the name, necessarily small. You can use a regular Vec or a boxed slice, which will likely result in lower memory usage at the cost of an additional allocation.
With those changes and using a Vec, your structure will be 48 bytes in size. With a boxed slice, it will be 40. The original used 72. How much savings you see will depend on how big your labels are, since you'll need to allocate space for them.
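For instance, the boxed-slice variant can be size-checked the same way as the full example at the end of this answer (a sketch; the 40 assumes a typical 64-bit target):

use std::marker::PhantomData;
use std::ptr::NonNull;

pub struct Node<V> {
    value: Option<V>,
    child: Option<NonNull<Node<V>>>,
    sibling: Option<NonNull<Node<V>>>,
    label: Box<[u8]>, // fat pointer: 16 bytes, no capacity field
    _value: PhantomData<V>,
}

fn main() {
    println!("size: {}", std::mem::size_of::<Node<()>>()); // 40
}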
The required alignment for this structure is 8 bytes because the largest alignment of any type (the pointer) is 8 bytes. Even on architectures like x86-64 where alignment is not required for all types, it is still faster, and sometimes significantly so, so the compiler always does it.
The original code was not properly aligned at all and will either outright fail (on SPARC), perform badly (on PowerPC), or require an alignment trap into the kernel if they're enabled (on MIPS) or fail if they're not. An alignment trap into the kernel for unaligned access performs terribly because you have to do a full context switch just to load and shift two words, so most people turn them off.
The reason that this is not properly aligned is because Node contains a pointer and it appears in the structure at an offset which is not guaranteed to be a multiple of 8. If it were rewritten such that the child and sibling attributes came first, then it would be properly aligned provided the memory were suitably aligned (which malloc guarantees but your Rust allocation does not). You could create a suitable Layout with Layout::from_size_align(block_size, std::mem::align_of::<*mut Node>()).
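A hedged sketch of that allocation (all thin raw pointers share the same alignment, so *mut u8 stands in for *mut Node<V> here; remember the same Layout must be passed to dealloc):

use std::alloc::{alloc, dealloc, Layout};

fn block_layout(block_size: usize) -> Layout {
    Layout::from_size_align(block_size, std::mem::align_of::<*mut u8>())
        .expect("invalid block layout")
}

fn main() {
    let layout = block_layout(64);
    let ptr = unsafe { alloc(layout) };
    assert!(!ptr.is_null());
    // The start of the block is now pointer-aligned; the pointer-sized fields
    // inside it still need to sit at offsets that are multiples of 8.
    assert_eq!(ptr as usize % layout.align(), 0);
    unsafe { dealloc(ptr, layout) };
}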
So while the original code worked on x86-64 and saved a bunch of memory, it performed badly and was not portable.
The code I used for this example is simply the following, plus some knowledge about how Rust does nullable types and knowledge about C and memory allocation:
extern crate smallvec;

use smallvec::SmallVec;
use std::marker::PhantomData;
use std::ptr::NonNull;

pub struct Node<V> {
    value: Option<V>,
    child: Option<NonNull<Node<V>>>,
    sibling: Option<NonNull<Node<V>>>,
    label: Vec<u8>,
    _value: PhantomData<V>,
}

fn main() {
    println!("size: {}", std::mem::size_of::<Node<()>>());
}

Will using struct over class have an impact on the allowable data size?

Currently, I'm designing a note taking app.
Its overall usage should be
class Note {
    var title: String?
    var body: String?
}

var notes = [Note]()
notes.append(Note())
I have a strong temptation to design Note as a struct:
struct Note {
    var title: String?
    var body: String?
}

var notes = [Note]()
notes.append(Note())
But I also worry that this might impose limits on:
Maximum allowable size per Note instance, as a Note's string body can be >10MB
Maximum allowable array size for an array of Note structs
As far as I know, an instance of a struct is created in stack memory, and an instance of a class is created in heap memory. Stack memory is much smaller than heap memory - Is the stack size of iPhone fixed?
Will using struct over class have an impact on the allowable data size?
In a word, no.
There is no difference in the maximum sizes of structs versus classes, or arrays of structs or classes.
Besides, as Martin said in his comment, your struct/class actually contains pointers to strings, not the strings themselves. Thus neither structs nor classes change size with different-sized strings.
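As a rough illustration (a minimal sketch; MemoryLayout reports the inline size of a value, not the heap storage its strings own):

struct Note {
    var title: String?
    var body: String?
}

let small = Note(title: "a", body: "b")
let huge = Note(title: "a", body: String(repeating: "x", count: 10_000_000))

// Both print the same small, fixed size: the string contents live in
// separately allocated heap storage either way.
print(MemoryLayout.size(ofValue: small))
print(MemoryLayout.size(ofValue: huge))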

Accessing field on original C struct in Go

I'm trying to use OpenCV from Go. OpenCV defines a struct CvMat that has a data field:
typedef struct CvMat
{
    ...
    union
    {
        uchar* ptr;
        short* s;
    } data;
}
I'm using the go bindings for opencv found here. This has a type alias for CvMat:
type Mat C.CvMat
Now I have a Mat object and I want to access the data field on it. How can I do this? If I try to access _data, it doesn't work. I printed out the fields on the Mat object with the reflect package and got this:
...
{data github.com/lazywei/go-opencv/opencv [8]uint8 24 [5] false}
...
So there is a data field on it, but it's not even the same type. It's an array of 8 uint8s! I'm looking for a uchar* that is much longer than 8 characters. How do I get to this uchar?
The short answer is that you can't do this without modifying go-opencv. There are a few impediments here:
When you import a package, you can only use identifiers that have been exported. In this case, data does not start with an upper case letter, so is not exported.
Even if it was an exported identifier, you would have trouble because Go does not support unions. So instead the field has been represented by a byte array that matches the size of the underlying C union (8 bytes in this case, which matches the size of a 64-bit pointer).
Lastly, it is strongly recommended not to expose cgo types from packages. So even in cases like this where it may be possible to directly access the underlying C structure, I would recommend against it.
Ideally go-opencv would provide an accessor for the information you are after (presumably one that could check which branch of the union is in use, rather than silently returning bad data). I would suggest you either file a bug report on the package (possibly with a patch), or create a private copy with the required modifications if you need the feature right away.

Binary file IO in Swift

EDIT: TLDR: In C-family languages you can represent arbitrary data (ints, floats, doubles, structs) as byte streams via casting and pack them into streams or buffers. And you can do the reverse to get the data back out. And of course you can byte-swap for endianness correctness.
Is this possible in idiomatic Swift?
Now the original question:
If I were writing in C/C++/ObjC I might cast a struct to unsigned char * and write its bytes to a FILE*, or memcpy them to a buffer. Same for ints, doubles, etc. I know there are endianness concerns to deal with, but this is for an iOS app, and I don't expect endianness to change any time soon for the platform. Swift's type system doesn't seem like it would allow this behavior (casting arbitrary data to unsigned 8-bit ints and passing the address), but I don't know.
I'm learning Swift, and would like an idiomatic way to write my data. Note that my data is highly numeric and is ultimately going to be sent over the wire, so it needs to be compact; textual formats like JSON are out.
I could use NSKeyedArchiver, but I want to learn here. Also, I don't want to write off an Android client at some point in the future, so a simple binary coding seems to be where it's at.
Any suggestions?
As noted in Using Swift with Cocoa and Objective-C, you can pass/assign an array of a Swift type to a parameter/variable of pointer type, and vice versa, to get a binary representation. This even works if you define your own struct types, much like in C.
Here's an example -- I use code like this for packaging up 3D vertex data for the GPU (with SceneKit, OpenGL, etc.):
struct Float3 {
    var x, y, z: GLfloat
}
struct Vertex {
    var position, normal: Float3
}

var vertices: [Vertex] // initialization omitted for brevity
let data = NSData(bytes: vertices, length: vertices.count * sizeof(Vertex))
Inspect this data and you'll see a pattern of 32 * 3 * 2 bits of IEEE 754 floating-point numbers (just like you'd get from serializing a C struct through pointer access).
For going the other direction, you might sometimes need unsafeBitCast.
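For example, a minimal sketch of reading those bytes back into an array (same Swift 2-era API and the Float3/Vertex/data definitions from above; assumes data holds a whole number of Vertex values):

// Preallocate the right number of elements, then copy the raw bytes over them.
let zero = Float3(x: 0, y: 0, z: 0)
var loaded = [Vertex](count: data.length / sizeof(Vertex),
                      repeatedValue: Vertex(position: zero, normal: zero))
data.getBytes(&loaded, length: data.length)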
If you're using this for data persistence, be sure to check or enforce endianness.
The kind of format you're discussing is well explored with MessagePack. There are a few early attempts at doing this in Swift:
https://github.com/briandw/SwiftPack
https://github.com/yageek/SwiftMsgPack
I'd probably start with the yageek version. In particular, look at how packing is done into [Byte] data structures. I'd say this is pretty idiomatic Swift, without losing endian management (which you shouldn't ignore; chips do change, and the numeric types give you endian handling via bigEndian):
extension Int32 : MsgPackMarshable {
    public func msgpack_marshal() -> Array<Byte> {
        let bigEndian: UInt32 = UInt32(self.bigEndian)
        return [0xce, Byte((bigEndian & 0xFF000000) >> 24), Byte((bigEndian & 0xFF0000) >> 16),
                Byte((bigEndian & 0xFF00) >> 8), Byte(bigEndian & 0x00FF)]
    }
}
This is also fairly similar to how you'd write it in C or C++ if you were managing byte order (which C and C++ should always do, so the fact that they could splat their bytes into memory doesn't make correct implementations trivial). I'd probably drop Byte (which comes from Foundation) and use UInt8 (which is defined in core Swift). But either is fine. And of course it's more idiomatic to say [UInt8] rather than Array<UInt8>.
That said, as Zaph notes, NSKeyedArchiver is idiomatic for Swift. But that doesn't mean MessagePack isn't a good format for this kind of problem, and it's very portable.

C-Struct vs Object

I am currently working on a Conway's Game of Life simulator for the iPhone and I had a few questions about memory management. Note that I am using ARC.
For my application, I am going to need a large amount of either C style structs or Objective-C objects to represent cells. There may be a couple thousand of these, so obviously, memory management came to mind.
Structs: My argument for structs is that the cells do not need typical OO properties. The only thing they will be holding is two BOOL values, so there will not be a huge amount of memory chewed up by these cells. Also, I need to utilize a two-dimensional array. With structs, I can use C-style 2D arrays; as far as I know, there is no replacement for this in Objective-C. I feel that it is overkill to create an object for just two boolean values.
Objective-C objects: My argument (and most other people's) is that the memory management around Objective-C objects is very easy and efficient with ARC. Also, I have seen arguments that a struct is not that much of a memory reduction compared to an object.
So, my question: should I go with the old-school, lean structs that are compatible with two-dimensional arrays? Or should I stick with the typical Objective-C objects and risk the extra memory used?
Afterthoughts: If you recommend Objective-C objects, please provide an alternative storage method that represents a two-dimensional array. This is critical and is one of the biggest downsides of going with Objective-C objects.
Thank you.
"Premature optimization is the root of all evil"... If you are trying to build a Game of Life server with 100,000 users playing concurrently, memory footprint might matter. For a single-person implementation on any modern device, even a mobile one, memory size is pretty academic.
Therefore, do whatever either gets the game up and running fastest or (better) makes the code most readable and maintainable. Human cycles cost more than computer cycles. Suppose you needed a third boolean for each cell of the game... wouldn't an object you could extend save a ton of time rather than hardcoded array indices? (A struct is a lot better than an array of primitives for this reason...)
I've certainly used denser representations of data when I need to, but the overhead in programmer time has to be worth it. Just my $.02...
If it is just 2 BOOL values that you are going to store for every cell, then you could just use an array of integers to do the job. For example:
Let us assume that the two bool values are boolX and boolY; we could combine them into an int as:
int combinedBool = boolY + (10*boolX);
So you can retrieve the two bool values like:
BOOL boolX, boolY;
boolX = combinedBool/10;
boolY = combinedBool%10;
And then you can store the whole board in a single-dimension array of integers, with the index of each cell given by ((yIndex*width)+xIndex), where width is the number of cells left-to-right on your board and xIndex and yIndex are the X and Y coordinates of the cell on your board.
Hope this helps with your memory management and cell organisation.
You could build one and test its size with malloc_size(myObject). Thousands of pairs of bools will be small enough. In fact, you'll be able to make the objects larger and enjoy the benefits of the OO design. For example, what if the cells also kept pointers to their neighboring cells? The cells could compute their own t+1 state with cached access to their neighbors.
