How can I create a HashMap on the heap in Zig? - zig

The conventional way to make an object on the heap is to make a create fn:
const std = @import("std");
const Allocator = std.mem.Allocator;

const Something = struct {
    a: Allocator,
    b: [1000]u8,

    pub fn create(a: Allocator) !*Something {
        const mem = try a.create(Something);
        mem.* = .{
            .a = a,
            .b = undefined,
        };
        return mem;
    }
};
But what if I want to put a std lib ArrayHashMap on the heap? For example:
const StringStringArrayHashMap = std.StringArrayHashMap([]const u8);

fn makeMap(a: Allocator) StringStringArrayHashMap {
    return StringStringArrayHashMap.init(a);
}

const WithMap = struct {
    map: StringStringArrayHashMap,
};

fn fillMap(a: Allocator) !WithMap {
    var map = makeMap(a);
    try map.put("a", "hello");
    try map.put("b", "world");
    return WithMap{ .map = map };
}

fn badMemory(a: Allocator) !void {
    const with_map = try fillMap(a);
    _ = with_map;
}
badMemory will receive a WithMap, but its internal map, having been made on the stack in fillMap, will be freed at the end of fillMap and consequently unsafe in badMemory.
I can't see any way to make a valid HashMap without somehow hacking the internals of the Zig stdlib.

You can put the map on the heap the same way you did with the Something type:
var map: *StringStringArrayHashMap = try allocator.create(StringStringArrayHashMap);
map.* = StringStringArrayHashMap.init(allocator);
badMemory will receive a WithMap, but its internal map, having been made on the stack in fillMap, will be freed at the end of fillMap and consequently unsafe in badMemory.
This is false. The map is safe to use in badMemory because it is copied to the stack in badMemory. (Or maybe the compiler could decide to pass it as a pointer; I'm not sure whether the parameter pass-by-value rule applies to return values. It doesn't matter here.) But you should probably be careful when copying the map, or you might step into the same issue as this question.
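For completeness, here is a minimal sketch of the heap-allocation approach applied to the fillMap example above. It assumes the same managed std.StringArrayHashMap API used in the question (init/put/deinit); making WithMap hold a pointer and adding a deinit helper are illustrative changes of mine, not part of the original code:

const std = @import("std");
const Allocator = std.mem.Allocator;
const StringStringArrayHashMap = std.StringArrayHashMap([]const u8);

const WithMap = struct {
    map: *StringStringArrayHashMap,

    // Frees the map's entries, then the heap allocation holding the map itself.
    pub fn deinit(self: *WithMap, a: Allocator) void {
        self.map.deinit();
        a.destroy(self.map);
    }
};

fn fillMap(a: Allocator) !WithMap {
    const map = try a.create(StringStringArrayHashMap);
    errdefer a.destroy(map);
    map.* = StringStringArrayHashMap.init(a);
    errdefer map.deinit();
    try map.put("a", "hello");
    try map.put("b", "world");
    return WithMap{ .map = map };
}

With this layout, copying a WithMap only copies the pointer, so every copy refers to the same heap-allocated map and deinit must be called exactly once.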

Related

type safe create Lua tables in Haxe without runtime overhead and without boilerplate

I am trying to write some externs for some Lua libraries that require passing dictionary tables, and I want to make them type safe.
So far, I have been declaring abstract classes with public inline constructors, but this gets tedious really fast:
abstract JobOpts(Table<String, Dynamic>) {
    public inline function new(command:String, args:Array<String>) {
        this = Table.create(null, {
            command: command,
            arguments: Table.create(args)
        });
    }
}
Is there a better way that allows me to keep things properly typed but that does not require that much boilerplate?
Please note that typedefs and anonymous structures are not valid options, because they introduce nasty fields in the created table and also perform a function call to assign a metatable to them:
--typedef X = {cmd: String}
_hx_o({__fields__={cmd=true},cmd="Yo"})
My abstract code example compiles to a clean Lua table, but it is a lot of boilerplate.
Some targets support @:nativeGen to strip Haxe-specific metadata from objects, but this does not seem to be the case for typedefs on the Lua target. Fortunately, Haxe has a robust macro system, so you can make the code write itself. Say,
Test.hx:
import lua.Table;

class Test {
    public static function main() {
        var q = new JobOpts("cmd", ["a", "b"]);
        Sys.println(q);
    }
}

@:build(TableBuilder.build())
abstract JobOpts(Table<String, Dynamic>) {
    extern public inline function new(command:String, args:Array<String>) this = throw "no macro!";
}
TableBuilder.hx:
import haxe.macro.Context;
import haxe.macro.Expr;

class TableBuilder {
    public static macro function build():Array<Field> {
        var fields = Context.getBuildFields();
        for (field in fields) {
            if (field.name != "_new") continue; // look for new()
            var f = switch (field.kind) { // ... that's a function
                case FFun(_f): _f;
                default: continue;
            }
            // abstract "constructors" transform `this = val;`
            // into `{ var this; this = val; return this; }`
            var val = switch (f.expr.expr) {
                case EBlock([_decl, macro this = $x, _ret]): x;
                default: continue;
            }
            //
            var objFields:Array<ObjectField> = [];
            for (arg in f.args) {
                var expr = macro $i{arg.name};
                if (arg.type.match(TPath({ name: "Array", pack: [] }))) {
                    // if the argument's an array, make an unwrapper for it
                    expr = macro lua.Table.create($expr, null);
                }
                objFields.push({ field: arg.name, expr: expr });
            }
            var objExpr:Expr = { expr: EObjectDecl(objFields), pos: Context.currentPos() };
            val.expr = (macro lua.Table.create(null, $objExpr)).expr;
        }
        return fields;
    }
}
And thus...
Test.main = function()
    local this1 = ({command = "cmd", args = ({"a","b"})});
    local q = this1;
    _G.print(Std.string(q));
end
Do note, however, that Table.create is a bit of a risky function - you will only be able to pass in array literals, not variables containing arrays. This can be remedied by making a separate "constructor" function with the same logic but without array➜Table.create unwrapping.

Idiomatic way to write/pass type to generic function in ziglang

This is a much simplified version of my real issue but hopefully demonstrates my question in a concise manner.
My question is about the interface to printKeys. I have to pass in the type of the data to be printed as a comptime parameter, and the easiest way to get this is to use @TypeOf on the map at the point of calling it.
Coming from C++ this seems slightly inelegant that the type can't be inferred, although I do like being explicit too.
Is there a more idiomatic way to have a generic function in zig that doesn't need this use of @TypeOf at the point of calling, or a better way to do this in general?
const std = @import("std");

fn printKeys(comptime MapType: type, data: MapType) void {
    var iter = data.keyIterator();
    while (iter.next()) |value| {
        std.debug.print("Value is {}\n", .{value.*});
    }
}

pub fn main() !void {
    const allocator = std.heap.page_allocator;
    var map = std.AutoHashMap(i32, []const u8).init(allocator);
    defer map.deinit();
    try map.put(10, "ten");
    try map.put(12, "twelve");
    try map.put(5, "five");
    printKeys(@TypeOf(map), map);
}
Use anytype. You can find more examples in the Zig docs: hit Ctrl+F and search for anytype.
fn printKeys(data: anytype) void {
    ...
}

pub fn main() !void {
    ...
    printKeys(map);
}
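For reference, here is the full anytype version spelled out; it is just the question's program with the comptime type parameter removed, assuming nothing beyond the std.AutoHashMap API already used above:

const std = @import("std");

// The map's type is inferred from the argument at the call site.
fn printKeys(data: anytype) void {
    var iter = data.keyIterator();
    while (iter.next()) |key| {
        std.debug.print("Key is {}\n", .{key.*});
    }
}

pub fn main() !void {
    const allocator = std.heap.page_allocator;
    var map = std.AutoHashMap(i32, []const u8).init(allocator);
    defer map.deinit();
    try map.put(10, "ten");
    try map.put(12, "twelve");
    try map.put(5, "five");
    printKeys(map);
}

The trade-off against the explicit comptime MapType parameter is that with anytype the requirements on the argument are only checked when the function is instantiated, so compile errors point inside printKeys rather than at the call site.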

FatalExecutionEngineError on accessing a pointer set with memcpy_s

See update 1 below for my guess as to why the error is happening
I'm trying to develop an application with some C#/WPF and C++. I am having a problem on the C++ side on a part of the code that involves optimizing an object using GNU Scientific Library (GSL) optimization functions. I will avoid including any of the C#/WPF/GSL code in order to keep this question more generic and because the problem is within my C++ code.
For the minimal, complete and verifiable example below, here is what I have. I have a class Foo. And a class Optimizer. An object of class Optimizer is a member of class Foo, so that objects of Foo can optimize themselves when it is required.
The way GSL optimization functions take in external parameters is through a void pointer. I first define a struct Params to hold all the required parameters. Then I define an object of Params and convert it into a void pointer. A copy of this data is made with memcpy_s and a member void pointer optimParamsPtr of Optimizer class points to it so it can access the parameters when the optimizer is called to run later in time. When optimParamsPtr is accessed by CostFn(), I get the following error.
Managed Debugging Assistant 'FatalExecutionEngineError' : 'The runtime
has encountered a fatal error. The address of the error was at
0x6f25e01e, on thread 0x431c. The error code is 0xc0000005. This error
may be a bug in the CLR or in the unsafe or non-verifiable portions of
user code. Common sources of this bug include user marshaling errors
for COM-interop or PInvoke, which may corrupt the stack.'
Just to ensure the validity of the void pointer I made, I call CostFn() at line 81 with the void * pointer passed as an argument to InitOptimizer() and everything works. But in line 85 when the same CostFn() is called with the optimParamsPtr pointing to data copied by memcpy_s, I get the error. So I am guessing something is going wrong with the memcpy_s step. Anyone have any ideas as to what?
#include "pch.h"
#include <iostream>
using namespace System;
using namespace System::Runtime::InteropServices;
using namespace std;
// An optimizer for various kinds of objects
class Optimizer // GSL requires this to be an unmanaged class
{
public:
double InitOptimizer(int ptrID, void *optimParams, size_t optimParamsSize);
void FreeOptimizer();
void * optimParamsPtr;
private:
double cost = 0;
};
ref class Foo // A class whose objects can be optimized
{
private:
int a; // An internal variable that can be changed to optimize the object
Optimizer *fooOptimizer; // Optimizer for a Foo object
public:
Foo(int val) // Constructor
{
a = val;
fooOptimizer = new Optimizer;
}
~Foo()
{
if (fooOptimizer != NULL)
{
delete fooOptimizer;
}
}
void SetA(int val) // Mutator
{
a = val;
}
int GetA() // Accessor
{
return a;
}
double Optimize(int ptrID); // Optimize object
// ptrID is a variable just to change behavior of Optimize() and show what works and what doesn't
};
ref struct Params // Parameters required by the cost function
{
int cost_scaling;
Foo ^ FooObj;
};
double CostFn(void *params) // GSL requires cost function to be of this type and cannot be a member of a class
{
// Cast void * to Params type
GCHandle h = GCHandle::FromIntPtr(IntPtr(params));
Params ^ paramsArg = safe_cast<Params^>(h.Target);
h.Free(); // Deallocate
// Return the cost
int val = paramsArg->FooObj->GetA();
return (double)(paramsArg->cost_scaling * val);
}
double Optimizer::InitOptimizer(int ptrID, void *optimParamsArg, size_t optimParamsSizeArg)
{
optimParamsPtr = ::operator new(optimParamsSizeArg);
memcpy_s(optimParamsPtr, optimParamsSizeArg, optimParamsArg, optimParamsSizeArg);
double ret_val;
// Here is where the GSL stuff would be. But I replace that with a call to CostFn to show the error
if (ptrID == 1)
{
ret_val = CostFn(optimParamsArg); // Works
}
else
{
ret_val = CostFn(optimParamsPtr); // Doesn't work
}
return ret_val;
}
// Release memory used by unmanaged variables in Optimizer
void Optimizer::FreeOptimizer()
{
if (optimParamsPtr != NULL)
{
delete optimParamsPtr;
}
}
double Foo::Optimize(int ptrID)
{
// Create and initialize params object
Params^ paramsArg = gcnew Params;
paramsArg->cost_scaling = 11;
paramsArg->FooObj = this;
// Convert Params type object to void *
void * paramsArgVPtr = GCHandle::ToIntPtr(GCHandle::Alloc(paramsArg)).ToPointer();
size_t paramsArgSize = sizeof(paramsArg); // size of memory block in bytes pointed to by void pointer
double result = 0;
// Initialize optimizer
result = fooOptimizer->InitOptimizer(ptrID, paramsArgVPtr, paramsArgSize);
// Here is where the loop that does the optimization will be. Removed from this example for simplicity.
return result;
}
int main()
{
Foo Foo1(2);
std::cout << Foo1.Optimize(1) << endl; // Use orig void * arg in line 81 and it works
std::cout << Foo1.Optimize(2) << endl; // Use memcpy_s-ed new void * public member of Optimizer in line 85 and it doesn't work
}
Just to reiterate, I need to copy the params to a member in the optimizer because the optimizer will run throughout the lifetime of the Foo object. So it needs to exist as long as the Optimizer object exists, and not just in the scope of Foo::Optimize().
/clr support needs to be selected in project properties for the code to compile. Running on an x64 solution platform.
Update 1: While trying to debug this, I got suspicious of the way I get the size of paramsArg at line 109. Looks like I am getting the size of paramsArg as the size of int cost_scaling plus the size of the memory storing the address of FooObj, instead of the size of the memory storing FooObj itself. I realized this after stumbling across this answer to another post. I confirmed this by checking the value of paramsArgSize after adding some new dummy double members to the Foo class. As expected, the value of paramsArgSize doesn't change. I suppose this explains why I get the error. A solution would be to write code to correctly calculate the size of a Foo class object and use that as paramsArgSize instead of using sizeof. But that is turning out to be too complicated and is probably another question in itself. For example, how do you get the size of a ref class object? Anyway, hopefully someone will find this helpful.

Secure Memory For Swift Objects

I am writing a Swift application that requires handling private keys in memory. Because of the sensitivity of such objects, the keys need to be cleared (i.e. written to all zeros) when the object is deallocated, and the memory must not be paged to disk (which is typically prevented using mlock()).
In Objective-C, you can provide a custom CFAllocator object, which allows you to use your own functions to allocate/deallocate/reallocate the memory used by an object.
So one solution is to just implement a "SecureData" object in objective-c, which internally creates an NSMutableData object using a custom CFAllocator (also in objective-c).
However, is there any way for me to provide my own custom memory allocation functions for a pure swift object (for example, a struct or a [UInt8])? Or is there a better, "proper" way to implement secure memory like this in swift?
If you want complete control over a region of memory you allocate yourself, you can use UnsafePointer and co:
// allocate enough memory for ten Ints
var ump = UnsafeMutablePointer<Int>.alloc(10)
// memory is in an uninitialized raw state
// initialize that memory with Int objects
// (here, from a collection)
ump.initializeFrom(reverse(0..<10))
// memory property gives you access to the underlying value
ump.memory // 9
// UnsafeMutablePointer acts like an IndexType
ump.successor().memory // 8
// and it has a subscript, but it's not a CollectionType
ump[3] // = 6
// wrap it in an UnsafeMutableBufferPointer to treat it
// like a collection (or UnsafeBufferPointer if you don't
// need to be able to alter the values)
let col = UnsafeMutableBufferPointer(start: ump, count: 10)
col[3] = 99
println(",".join(map(col,toString)))
// prints 9,8,7,99,5,4,3,2,1,0
ump.destroy(10)
// now the allocated memory is back in a raw state
// you could re-allocate it...
ump.initializeFrom(0..<10)
ump.destroy(10)
// when you're done, deallocate the memory
ump.dealloc(10)
You can also have UnsafePointer point to other memory, such as memory you’re handed by some C API.
UnsafePointer can be passed into C functions that take a pointer to a contiguous block of memory. So for your purposes, you could then pass this pointer into a function like mlock:
let count = 10
let ump = UnsafeMutablePointer<Int>.alloc(count)
mlock(ump, UInt(sizeof(Int) * count))
// initialize, use, and destroy the memory
munlock(ump, UInt(sizeof(Int) * count))
ump.dealloc(count)
You can even hold your own custom types:
struct MyStruct {
    let a: Int
    let b: Int
}
var pointerToStruct = UnsafeMutablePointer<MyStruct>.alloc(1)
pointerToStruct.initialize(MyStruct(a: 1, b: 2))
pointerToStruct.memory.b // 2
pointerToStruct.destroy()
pointerToStruct.dealloc(1)
However be aware if doing this with classes, or even arrays or strings (or a struct that contains them), that all you will be holding in your memory is pointers to other memory that these objects allocate and own. If this matters to you (i.e. you are doing something special to this memory such as securing it, in your example), this is probably not what you want.
So either you need to use fixed-size objects, or make further use of UnsafePointer to hold pointers to more memory regions. If they don't need to dynamically resize, then just a single allocation of an unsafe pointer, possibly wrapped in a UnsafeBufferPointer for a collection interface, could do it.
If you need more dynamic behavior, below is a very bare-bones implementation of a collection that can resize as necessary, that could be enhanced to cover specialty memory-handling logic:
// Note this is a class not a struct, so it does NOT have value semantics,
// changing a copy changes all copies.
public class UnsafeCollection<T> {
    private var _len: Int = 0
    private var _buflen: Int = 0
    private var _buf: UnsafeMutablePointer<T> = nil

    public func removeAll(keepCapacity: Bool = false) {
        _buf.destroy(_len)
        _len = 0
        if !keepCapacity {
            _buf.dealloc(_buflen)
            _buflen = 0
            _buf = nil
        }
    }

    public required init() { }
    deinit { self.removeAll(keepCapacity: false) }

    public var count: Int { return _len }
    public var isEmpty: Bool { return _len == 0 }
}
To cover the requirements of MutableCollectionType (i.e. CollectionType plus assignable subscript):
extension UnsafeCollection: MutableCollectionType {
    typealias Index = Int
    public var startIndex: Int { return 0 }
    public var endIndex: Int { return _len }

    public subscript(idx: Int) -> T {
        get {
            precondition(idx < _len)
            return _buf[idx]
        }
        set(newElement) {
            precondition(idx < _len)
            let ptr = _buf.advancedBy(idx)
            ptr.destroy()
            ptr.initialize(newElement)
        }
    }

    typealias Generator = IndexingGenerator<UnsafeCollection>
    public func generate() -> Generator {
        return Generator(self)
    }
}
And ExtensibleCollectionType, to allow for dynamic growth:
extension UnsafeCollection: ExtensibleCollectionType {
    public func reserveCapacity(n: Index.Distance) {
        if n > _buflen {
            let newBuf = UnsafeMutablePointer<T>.alloc(n)
            newBuf.moveInitializeBackwardFrom(_buf, count: _len)
            _buf.dealloc(_buflen)
            _buf = newBuf
            _buflen = n
        }
    }

    public func append(x: T) {
        if _len == _buflen {
            reserveCapacity(Int(Double(_len) * 1.6) + 1)
        }
        _buf.advancedBy(_len++).initialize(x)
    }

    public func extend<S: SequenceType where S.Generator.Element == T>
        (newElements: S) {
        var g = newElements.generate()
        while let x: T = g.next() {
            self.append(x)
        }
    }
}
I know this question is old, but something for those who land here: since iOS 10 you can use the Secure Enclave to store private keys securely. The way it works is that all the operations that require decryption are performed inside the Secure Enclave, so you do not have to worry about runtime hooking of your classes or memory leaks.
Take a look here: https://developer.apple.com/documentation/security/certificate_key_and_trust_services/keys/storing_keys_in_the_secure_enclave

Assign function/method to variable in Dart

Does Dart support the concept of variable functions/methods? So to call a method by its name stored in a variable.
For example in PHP this can be done not only for methods:
// With functions...
function foo()
{
    echo 'Running foo...';
}
$function = 'foo';
$function();

// With classes...
public static function factory($view)
{
    $class = 'View_' . ucfirst($view);
    return new $class();
}
I did not find it in the language tour or API. Are there other ways to do something like this?
To store the name of a function in a variable and call it later, you will have to wait until reflection arrives in Dart (or get creative with noSuchMethod). You can, however, store functions directly in variables like in JavaScript:
main() {
  var f = (String s) => print(s);
  f("hello world");
}
and even inline them, which comes in handy if you are doing recursion:
main() {
  g(int i) {
    if (i > 0) {
      print("$i is larger than zero");
      g(i - 1);
    } else {
      print("zero or negative");
    }
  }
  g(10);
}
The functions stored can then be passed around to other functions
main() {
  var function;
  function = (String s) => print(s);
  doWork(function);
}

doWork(f(String s)) {
  f("hello world");
}
I may not be the best explainer, but consider this example for a broader view of assigning a function to a variable and also passing a closure as a parameter to another function.
void main() {
  // a closure function assigned to a variable
  var fun = (int n) => n * 2;
  // a variable assigned the result of the function defined below
  var newFuncResult = newFunc(9, fun);
  print(newFuncResult); // Output: 27
}

// A function with two parameters (the 1st an int, the 2nd a closure)
int newFunc(int a, int Function(int) fun) {
  int x = a;
  int y = fun(x);
  return x + y;
}
