How do I get the full path of a `std.fs.Dir`? - zig

Is there any way to access the full path of a std.fs.Dir struct? I've looked through all of the methods in the source but I can't find anything that gets path-related information on the directory.

One option is realpath
std.fs.Dir.realpathAlloc(self: Dir, allocator: Allocator, pathname: []const u8) ![]u8
const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    const alloc = arena.allocator();

    std.log.info("cwd: {s}", .{
        try std.fs.cwd().realpathAlloc(alloc, "."),
    });
}
realpath internally calls std.os.getFdPath, and there doesn't seem to be a function on Dir that exposes that directly right now.
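If you want to avoid the allocation, a minimal sketch that calls std.os.getFdPath on the directory's file descriptor directly could look like this (assuming a Zig version where std.os.getFdPath and the Dir.fd field are available):
const std = @import("std");

pub fn main() !void {
    const dir = std.fs.cwd();
    // getFdPath wants a caller-supplied buffer of MAX_PATH_BYTES.
    var buf: [std.fs.MAX_PATH_BYTES]u8 = undefined;
    const path = try std.os.getFdPath(dir.fd, &buf);
    std.log.info("cwd: {s}", .{path});
}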

Related

using mem.eql - unable to evaluate constant expression

Can someone explain to me why this piece of code can't be compiled?
const std = @import("std");

const ParseError = error{NotAValidField};

const TestEnum = enum {
    field_1,
    field_2,

    pub fn fromString(str: []const u8) !TestEnum {
        switch (true) {
            std.mem.eql(u8, "field_1", str) => TestEnum.field_1,
            std.mem.eql(u8, "field_2", str) => TestEnum.field_2,
            else => ParseError.NotAValidField,
        }
    }
};

pub fn main() void {
    const field = "field_1";
    try TestEnum.fromString(field);
}
It results in an error:
./example.zig:11:40: error: unable to evaluate constant expression
std.mem.eql(u8, "field_1", str) => TestEnum.field_1,
Is the compiler trying to figure out str at compile time even though it is passed as an argument? Here's the code in godbolt: https://zig.godbolt.org/z/reK6xv7h5
P.S. I already know there is a std.meta.stringToEnum function.
The compiler sees str in the mem.eql call as a runtime value, hence the error. To tell the compiler that str is never used at run time, add the comptime keyword like so:
pub fn fromString(comptime str: []const u8) TestEnum {
    return switch (true) {
        std.mem.eql(u8, "field_1", str) => TestEnum.field_1,
        std.mem.eql(u8, "field_2", str) => TestEnum.field_2,
    };
}
Note that all of this comes from wanting to match the mem.eql results against true. That approach limits the number of enum values to exactly two and requires the string to be known at compile time. What std.meta.stringToEnum does instead is perform the lookup at run time.
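For comparison, a minimal sketch of the run-time version built on std.meta.stringToEnum (the function mentioned in the question) might look like this:
const std = @import("std");

const ParseError = error{NotAValidField};

const TestEnum = enum {
    field_1,
    field_2,

    // Runtime lookup: no comptime requirement on str, any number of fields.
    pub fn fromString(str: []const u8) ParseError!TestEnum {
        return std.meta.stringToEnum(TestEnum, str) orelse ParseError.NotAValidField;
    }
};

pub fn main() !void {
    const field = try TestEnum.fromString("field_1");
    std.debug.print("{s}\n", .{@tagName(field)});
}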

How to read a file in zig?

How can I read a file in Zig and iterate over it line by line?
I did find os.File.openRead, but it seems outdated, because the compiler says that container 'std.os' has no member named 'File'.
std.io.reader.readUntilDelimiterOrEof lets you read any std.io reader line by line. You usually get the reader of something like a file by calling its reader() method. So for example:
var file = try std.fs.cwd().openFile("foo.txt", .{});
defer file.close();

var buf_reader = std.io.bufferedReader(file.reader());
var in_stream = buf_reader.reader();

var buf: [1024]u8 = undefined;
while (try in_stream.readUntilDelimiterOrEof(&buf, '\n')) |line| {
    // do something with line...
}
The std.io.bufferedReader isn’t mandatory but recommended for better performance.
I muddled through this by looking at the Zig library source/docs, so this might not be the most idiomatic way:
const std = @import("std");

pub fn main() anyerror!void {
    // Get an allocator
    var gp = std.heap.GeneralPurposeAllocator(.{ .safety = true }){};
    defer _ = gp.deinit();
    const allocator = &gp.allocator;

    // Get the path
    var path_buffer: [std.fs.MAX_PATH_BYTES]u8 = undefined;
    const path = try std.fs.realpath("./src/main.zig", &path_buffer);

    // Open the file
    const file = try std.fs.openFileAbsolute(path, .{ .read = true });
    defer file.close();

    // Read the contents
    const buffer_size = 2000;
    const file_buffer = try file.readToEndAlloc(allocator, buffer_size);
    defer allocator.free(file_buffer);

    // Split by "\n" and iterate through the resulting slices of "const []u8"
    var iter = std.mem.split(file_buffer, "\n");
    var count: usize = 0;
    while (iter.next()) |line| : (count += 1) {
        std.log.info("{d:>2}: {s}", .{ count, line });
    }
}
The above is a little demo program that you should be able to drop into the default project created by zig init-exe; it will just print out its own contents, with line numbers.
You can also do this without allocators, provided you supply the required buffers.
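As a rough illustration of the allocator-free route, here is a minimal sketch that reads into a fixed-size buffer instead of allocating; it assumes the file fits in the buffer and uses the same two-argument std.mem.split signature as the program above:
const std = @import("std");

pub fn main() !void {
    var file = try std.fs.cwd().openFile("./src/main.zig", .{ .read = true });
    defer file.close();

    // No allocator: the whole file is read into this stack buffer.
    var buf: [4096]u8 = undefined;
    const len = try file.readAll(&buf);

    var iter = std.mem.split(buf[0..len], "\n");
    var count: usize = 0;
    while (iter.next()) |line| : (count += 1) {
        std.log.info("{d:>2}: {s}", .{ count, line });
    }
}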
I'd also recommend checking out this great resource: https://ziglearn.org/chapter-2/#readers-and-writers
Note: I'm currently running a development version of Zig from master (reporting 0.9.0), but I think this has been working for the last few official releases.
To open a file and get a file descriptor back:
std.os.open
https://ziglang.org/documentation/0.6.0/std/#std;os.open
To read from the file:
std.os.read
https://ziglang.org/documentation/0.6.0/std/#std;os.read
I can't find a .readlines() style function in the zig standard library. You'll have to write your own loop to find the \n characters.
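For example, a minimal sketch of such a loop, scanning each chunk for '\n' yourself (using the higher-level std.fs API rather than raw std.os.open/std.os.read, and ignoring the case where a line spans two reads), could look like this:
const std = @import("std");

pub fn main() !void {
    var file = try std.fs.cwd().openFile("foo.txt", .{});
    defer file.close();

    var buf: [4096]u8 = undefined;
    while (true) {
        const n = try file.read(&buf);
        if (n == 0) break; // end of file
        var rest: []const u8 = buf[0..n];
        // Find the newline characters by hand.
        while (std.mem.indexOfScalar(u8, rest, '\n')) |i| {
            std.debug.print("{s}\n", .{rest[0..i]});
            rest = rest[i + 1 ..];
        }
    }
}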
Below is a test case that shows how to create a file, write to it, then open the same file and read its contents.
const std = @import("std");
const testing = std.testing;
const expect = testing.expect;

test "create a file and then open and read it" {
    // This creates a directory under ./zig-cache/tmp/{hash}; test_file is created inside it below.
    var tmp_dir = testing.tmpDir(.{});
    // defer tmp_dir.cleanup(); // commented out so you can inspect the file after the test finishes

    var file1 = try tmp_dir.dir.createFile("test_file", .{ .read = true });
    defer file1.close();

    const write_buf: []const u8 = "Hello Zig!";
    try file1.writeAll(write_buf);

    var file2 = try tmp_dir.dir.openFile("test_file", .{});
    defer file2.close();

    const read_buf = try file2.readToEndAlloc(testing.allocator, 1024);
    defer testing.allocator.free(read_buf);

    try testing.expect(std.mem.eql(u8, write_buf, read_buf));
}
Check out the fs package tests on GitHub or on your local machine under <zig-install-dir>/lib/std/fs/test.zig.
Also note that the testing allocator only works inside tests. In your actual source code you need to choose an appropriate allocator.
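As a rough sketch, the same read outside of a test might swap in a general-purpose allocator like this (the allocator-handle syntax varies a bit between Zig versions):
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var file = try std.fs.cwd().openFile("test_file", .{});
    defer file.close();

    const read_buf = try file.readToEndAlloc(allocator, 1024);
    defer allocator.free(read_buf);
    std.debug.print("{s}\n", .{read_buf});
}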

How can I get a file checksum in Deno?

Just starting with Deno, I am trying to figure out how to calculate a binary file's checksum. It seems to me that the problem is not with the methods provided by the hash module of the standard library, but with the way the file is streamed and/or the type of the chunks fed to the hash.update method.
I have been trying a few alternatives related to file opening and chunk types, with no success. A simple example follows:
import { createHash } from "https://deno.land/std@0.80.0/hash/mod.ts";

const file = new File(["my_big_folder.tar.gz"], "./my_big_folder.tar.gz");
const iterator = file.stream().getIterator();

const hash = createHash("md5");
for await (const chunk of iterator) {
    hash.update(chunk);
}
console.log(hash.toString()); // b35edd0be7acc21cae8490a17c545928
This code compiles and runs without errors, but unfortunately the result differs from what I get with Node's crypto module and with md5sum from the Linux coreutils. Any suggestions?
Node.js code:
const crypto = require('crypto');
const fs = require('fs');

const hash = crypto.createHash('md5');
const file = './my_big_folder.tar.gz';
const stream = fs.ReadStream(file);

stream.on('data', data => { hash.update(data); });
stream.on('end', () => {
    console.log(hash.digest('hex')); // c18f5eac67656328f7c4ec5d0ef5b96f
});
The same result in bash:
$ md5sum ./my_big_folder.tar.gz
c18f5eac67656328f7c4ec5d0ef5b96f ./my_big_folder.tar.gz
On Windows 10 this can be used:
CertUtil -hashfile ./my_big_folder.tar.gz md5
The File API isn't the way to read a file in Deno. Instead, you need to use the Deno.open API and then turn the file into an async iterable, like this:
import { createHash } from "https://deno.land/std@0.80.0/hash/mod.ts";

const hash = createHash("md5");
const file = await Deno.open(new URL(
    "./BigFile.tar.gz",
    import.meta.url, // needed because paths are resolved relative to the main script, not the current file
));

for await (const chunk of Deno.iter(file)) {
    hash.update(chunk);
}

console.log(hash.toString());
Deno.close(file.rid);
import { crypto, toHashString } from 'https://deno.land/std@0.176.0/crypto/mod.ts';

const getFileBuffer = (filePath: string) => {
    const file = Deno.openSync(filePath);
    const buf = new Uint8Array(file.statSync().size);
    file.readSync(buf);
    file.close();
    return buf;
};

const getMd5OfBuffer = (data: BufferSource) => toHashString(crypto.subtle.digestSync('MD5', data));

export const getFileMd5 = (filePath: string) => getMd5OfBuffer(getFileBuffer(filePath));

Current Way to Get User Input in Zig

I'm following this blog post on 'comptime' in Zig.
The following line no longer compiles in Zig 0.6.0.
const user_input = try io.readLineSlice(buf[0..]);
Below is the full function:
fn ask_user() !i64 {
    var buf: [10]u8 = undefined;
    std.debug.warn("A number please: ");

    const user_input = try io.readLineSlice(buf[0..]);
    return fmt.parseInt(i64, user_input, 10);
}
What is the equivalent in the current version (of getting user input)?
You can use the readUntilDelimiterOrEof method of stdin's reader instead:
const stdin = std.io.getStdIn().reader();
pub fn readUntilDelimiterOrEof(self: @TypeOf(stdin), buf: []u8, delimiter: u8) !?[]u8
So, the code can be:
fn ask_user() !i64 {
    const stdin = std.io.getStdIn().reader();
    const stdout = std.io.getStdOut().writer();

    var buf: [10]u8 = undefined;
    try stdout.print("A number please: ", .{});

    if (try stdin.readUntilDelimiterOrEof(buf[0..], '\n')) |user_input| {
        return std.fmt.parseInt(i64, user_input, 10);
    } else {
        return @as(i64, 0);
    }
}
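For completeness, a minimal sketch of wiring this into main (same version assumptions as above; ask_user is the function shown in this answer):
const std = @import("std");

pub fn main() !void {
    // ask_user is defined above in this answer.
    const n = try ask_user();
    std.debug.print("You entered: {d}\n", .{n});
}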
See also: Zig 0.7.0 documentation.

How can I efficiently extract the first element of a futures::Stream in a blocking manner?

I've got the following method:
pub fn load_names(&self, req: &super::MagicQueryType) -> ::grpcio::Result<::grpcio::ClientSStreamReceiver<String>> {
My goal is to get the very first element of grpcio::ClientSStreamReceiver; I don't care about the other names:
let name: String = load_names(query)?.wait().nth(0)?;
It seems inefficient to call wait() before nth(0) as I believe wait() blocks the stream until it receives all the elements.
How can I write a more efficient solution (i.e., nth(0).wait()) without triggering build errors? Rust's build errors for futures::stream::Stream look extremely confusing to me.
The Rust playground doesn't support grpcio = "0.4.4" so I cannot provide a link.
To extract the first element of a futures::Stream in a blocking manner, you should convert the Stream to an iterator by calling executor::block_on_stream and then call Iterator::next.
use futures::{executor, stream, Stream}; // 0.3.4
use std::iter;

fn example() -> impl Stream<Item = i32> {
    stream::iter(iter::repeat(42))
}

fn main() {
    let v = executor::block_on_stream(example()).next();
    println!("{:?}", v);
}
If you are using Tokio, you can convert the Stream into a Future with StreamExt::into_future and annotate a function with #[tokio::main]:
use futures::{stream, Stream, StreamExt}; // 0.3.4
use std::iter;
use tokio; // 0.2.13

fn example() -> impl Stream<Item = i32> {
    stream::iter(iter::repeat(42))
}

#[tokio::main]
async fn just_one() -> Option<i32> {
    let (i, _stream) = example().into_future().await;
    i
}

fn main() {
    println!("{:?}", just_one());
}
See also:
How do I synchronously return a value calculated in an asynchronous Future in stable Rust?
How to select between a future and stream in Rust?
