How can I get a file checksum in Deno? - stream

Just starting with Deno, I am trying to figure out how to calculate a binary file's checksum. It seems to me that the problem is not with the methods provided by the hash module of the standard library, but with the file streaming method and/or the type of the chunks fed to the hash.update method.
I have been trying a few alternatives related to file opening and chunk types, with no success. A simple example follows:
import { createHash } from "https://deno.land/std@0.80.0/hash/mod.ts";
const file = new File(["my_big_folder.tar.gz"], "./my_big_folder.tar.gz");
const iterator = file.stream().getIterator();
const hash = createHash("md5");
for await (const chunk of iterator) {
  hash.update(chunk);
}
console.log(hash.toString()); // b35edd0be7acc21cae8490a17c545928
This code compiles and runs with no errors, but unfortunately the result differs from what I get with the functions of the crypto module provided by Node and with the md5sum provided by the Linux coreutils. Any suggestions?
Node.js code:
const crypto = require('crypto');
const fs = require('fs');
const hash = crypto.createHash('md5');
const file = './my_big_folder.tar.gz';
const stream = fs.ReadStream(file);
stream.on('data', data => { hash.update(data); });
stream.on('end', () => {
  console.log(hash.digest('hex')); // c18f5eac67656328f7c4ec5d0ef5b96f
});
The same result in bash:
$ md5sum ./my_big_folder.tar.gz
c18f5eac67656328f7c4ec5d0ef5b96f  ./my_big_folder.tar.gz
On Windows 10 this can be used:
CertUtil -hashfile ./my_big_folder.tar.gz md5

The File API isn't what you use to read a file in Deno; in the snippet above, the File constructor simply wraps the literal string "my_big_folder.tar.gz" as the file's contents, which is why a different digest comes out. To read the actual file you need the Deno.open API and then turn the file into an async iterable, like this:
import { createHash } from "https://deno.land/std@0.80.0/hash/mod.ts";
const hash = createHash("md5");
const file = await Deno.open(new URL(
  "./BigFile.tar.gz",
  import.meta.url, // needed because relative paths are resolved against the main script, not the current file
));
for await (const chunk of Deno.iter(file)) {
  hash.update(chunk);
}
console.log(hash.toString());
Deno.close(file.rid);

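In more recent versions of the Deno standard library the hash module was removed in favor of std/crypto, which can compute the same digest; the following uses std 0.176.0: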
import { crypto, toHashString } from 'https://deno.land/std@0.176.0/crypto/mod.ts';

const getFileBuffer = (filePath: string) => {
  const file = Deno.openSync(filePath);
  const buf = new Uint8Array(file.statSync().size);
  file.readSync(buf);
  file.close();
  return buf;
};

const getMd5OfBuffer = (data: BufferSource) => toHashString(crypto.subtle.digestSync('MD5', data));

export const getFileMd5 = (filePath: string) => getMd5OfBuffer(getFileBuffer(filePath));
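Usage, with the file from the question:
console.log(getFileMd5('./my_big_folder.tar.gz'));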


How to read a file in Zig?

How can I read a file in Zig and run over it line by line?
I did find os.File.openRead, but it seems outdated, because the compiler says that container 'std.os' has no member called 'File'.
std.io.reader.readUntilDelimiterOrEof lets you read any std.io.reader line by line. You usually get the reader of something like a file by calling its reader() method. For example:
var file = try std.fs.cwd().openFile("foo.txt", .{});
defer file.close();

var buf_reader = std.io.bufferedReader(file.reader());
var in_stream = buf_reader.reader();

var buf: [1024]u8 = undefined;
while (try in_stream.readUntilDelimiterOrEof(&buf, '\n')) |line| {
    // do something with line...
}
The std.io.bufferedReader isn’t mandatory but recommended for better performance.
I muddled through this by looking at the Zig library source/docs, so this might not be the most idiomatic way:
const std = @import("std");

pub fn main() anyerror!void {
    // Get an allocator
    var gp = std.heap.GeneralPurposeAllocator(.{ .safety = true }){};
    defer _ = gp.deinit();
    const allocator = &gp.allocator;

    // Get the path
    var path_buffer: [std.fs.MAX_PATH_BYTES]u8 = undefined;
    const path = try std.fs.realpath("./src/main.zig", &path_buffer);

    // Open the file
    const file = try std.fs.openFileAbsolute(path, .{ .read = true });
    defer file.close();

    // Read the contents
    const buffer_size = 2000;
    const file_buffer = try file.readToEndAlloc(allocator, buffer_size);
    defer allocator.free(file_buffer);

    // Split by "\n" and iterate through the resulting slices of "const []u8"
    var iter = std.mem.split(file_buffer, "\n");
    var count: usize = 0;
    while (iter.next()) |line| : (count += 1) {
        std.log.info("{d:>2}: {s}", .{ count, line });
    }
}
The above is a little demo program that you should be able to drop into the default project created by zig init-exe; it'll just print out its own contents, with line numbers.
You can also do this without allocators, provided you supply the required buffers.
I'd also recommend checking out this great resource: https://ziglearn.org/chapter-2/#readers-and-writers
Note: I'm currently running a development version of Zig from master (reporting 0.9.0), but I think this has been working for the last few official releases.
To open a file and get a file descriptor back
std.os.open
https://ziglang.org/documentation/0.6.0/std/#std;os.open
To read from the file
std.os.read
https://ziglang.org/documentation/0.6.0/std/#std;os.read
I can't find a .readlines() style function in the zig standard library. You'll have to write your own loop to find the \n characters.
Below is a test case that shows how to create a file, write to it then open the same file and read its content.
const std = @import("std");
const testing = std.testing;
const expect = testing.expect;

test "create a file and then open and read it" {
    var tmp_dir = testing.tmpDir(.{}); // This creates a directory under ./zig-cache/tmp/{hash}/test_file
    // defer tmp_dir.cleanup(); // commented out so you can see the file after execution finishes

    var file1 = try tmp_dir.dir.createFile("test_file", .{ .read = true });
    defer file1.close();

    const write_buf: []const u8 = "Hello Zig!";
    try file1.writeAll(write_buf);

    var file2 = try tmp_dir.dir.openFile("test_file", .{});
    defer file2.close();

    const read_buf = try file2.readToEndAlloc(testing.allocator, 1024);
    defer testing.allocator.free(read_buf);

    try testing.expect(std.mem.eql(u8, write_buf, read_buf));
}
Check out the fs package tests on GitHub, or on your local machine under <zig-install-dir>/lib/fs/test.zig.
Also note that the testing allocator only works inside tests. In your actual source code you need to choose an appropriate allocator.

session.run(LOAD CSV) issues

When using LOAD CSV with session.run() to execute Cypher statements against Neo4j, it doesn't return anything. I have tried removing LOAD CSV, and then it works perfectly fine at creating nodes.
This is the code:
import { neo4jgraphql } from "neo4j-graphql-js";
import fs from "fs";
import path from "path";
const { createWriteStream } = require("fs");
const { GraphQLUpload } = require("apollo-server");
import { session } from "./index.js";

/*
 * Check for GRAPHQL_SCHEMA environment variable to specify schema file
 * fallback to schema.graphql if GRAPHQL_SCHEMA environment variable is not set
 */
const files = [];

export const typeDefs = fs
  .readFileSync(
    process.env.GRAPHQL_SCHEMA || path.join(__dirname, "schema.graphql")
  )
  .toString("utf-8");

export const resolvers = {
  // Upload: GraphQLUpload,
  Query: {
    files: () => files
  },
  Mutation: {
    uploadFile: async (_, { file }) => {
      const { createReadStream, filename } = await file;
      await new Promise(res =>
        createReadStream()
          .pipe(createWriteStream(path.join(__dirname, "./uploads", filename)))
          .on("close", res)
      );
      session.run("LOAD CSV WITH HEADERS FROM 'file:///Connections.csv' AS csvLine CREATE(n)"); // This does not work with LOAD CSV (works with only CREATE(n))
      return true;
    }
  }
};
That LOAD CSV command will just invoke CREATE (n) once per line which will create a node with no label or properties. Try changing it to something like CREATE (n:CsvNode) SET n.prop_1 = csvLine.header1 (where header1 is one of the headers from your CSV) and see if nodes labelled :CsvNode get created.
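A minimal sketch of what that could look like inside the resolver, assuming Connections.csv has a header named header1 and that session is an open neo4j-driver session (both names are only illustrative); awaiting the call also lets you inspect what actually happened instead of discarding the result:
const result = await session.run(
  "LOAD CSV WITH HEADERS FROM 'file:///Connections.csv' AS csvLine " +
  "CREATE (n:CsvNode) SET n.prop_1 = csvLine.header1"
);
// The summary counters report how many nodes and properties were created
console.log(result.summary.counters);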

How can I save a received image or document file from a WhatsApp sender using Twilio?

I have a scenario where a user sends an image or PDF file to my Twilio WhatsApp number, and I need to save that image/PDF into a folder so it can be processed at the next level.
How can I save the files? I am using the Node SDK.
Thanks in advance.
This assumes you've already configured the webhook on your sandbox page, so that messages containing media from WhatsApp reach your app.
As the documentation says, you'll receive MediaContentType{N} and MediaUrl{N} along with the body and the other parameters. The following snippet was translated, more or less, from a Python example in the official documentation:
const Fs = require('fs')
const Path = require('path')
const Axios = require('axios')

const num_media = parseInt(req.body.NumMedia, 10);
const media_files = []
for (let i = 0; i < num_media; i++) {
  const id = req.body.MessageSid
  const media_url = req.body[`MediaUrl${i}`];
  const mime_type = req.body[`MediaContentType${i}`];
  media_files.push({ 'media_url': media_url, 'mime_type': mime_type });
  download(media_url, `${id}-${i}`); // include the index so multiple attachments don't overwrite each other
}

async function download(url, name) {
  const path = Path.resolve(__dirname, 'files', name)
  const writer = Fs.createWriteStream(path)
  const response = await Axios({
    url,
    method: 'GET',
    responseType: 'stream'
  })
  response.data.pipe(writer)
  return new Promise((resolve, reject) => {
    writer.on('finish', resolve)
    writer.on('error', reject)
  })
}
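For completeness, the snippet above assumes it runs inside a webhook handler that exposes req. A minimal Express wrapper might look like the sketch below (the /whatsapp route and port are only placeholders), keeping in mind that Twilio posts webhooks as application/x-www-form-urlencoded:
const express = require('express')

const app = express()
app.use(express.urlencoded({ extended: false })) // Twilio sends form-encoded webhooks

app.post('/whatsapp', (req, res) => {
  // ...media handling from the snippet above goes here...
  res.type('text/xml')
  res.send('<Response></Response>') // empty TwiML so no reply message is sent
})

app.listen(3000)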

How to use/consume an event stream from wolkenkit-eventstore

I want to use wolkenkit's eventstore and was trying to set up a quick example. But I'm not able to simply output an event stream.
Simplified example:
const eventstore = require("wolkenkit-eventstore/inmemory");
const Stream = require("stream");
const uuidv4 = require("uuid/v4");
const Event = require("commands-events/dist/Event");

const main = async () => {
  await eventstore.initialize();

  const aggregateId = uuidv4();
  const event = new Event({ ... });
  event.metadata.revision = 1;
  await eventstore.saveEvents({ events: event });

  const writableStream = new Stream.Writable();
  writableStream._write = (chunk, encoding, next) => {
    console.log(chunk.toString());
    next();
  };

  const readableStream = eventstore.getUnpublishedEventStream();
  readableStream.pipe(writableStream);
};

main();
As far as I understand, getUnpublishedEventStream returns a readable stream. I followed these instructions, but it didn't work as expected.
All I get is the following error:
(node:10988) UnhandledPromiseRejectionWarning: TypeError: readableStream.pipe is not a function
According to the documentation of wolkenkit-eventstore, getUnpublishedEventStream is an async function, i.e. you have to call it with await. Otherwise, you don't get a stream back, but a promise (and a promise doesn't have a pipe function).
So, this line
const readableStream = eventstore.getUnpublishedEventStream();
should be:
const readableStream = await eventstore.getUnpublishedEventStream();
I have not taken a closer look at your code apart from this, but this is the reason why you get the current error message.
PS: Please note that I am one of the core developers of wolkenkit, so please take my answer with a grain of salt.

How does one 'read' a file from a Dart VM program?

How does one 'read' a file from a Dart program?
http://api.dartlang.org/index.html
Dart would be running on the client side, so taking files as input should be allowed.
You can find a usage of files in Dart's testing framework:
status_file_parser.dart (search for 'File').
In short:
File file = new File(path);
if (!file.existsSync()) <handle missing file>;
InputStream file_stream = file.openInputStream();
StringInputStream lines = new StringInputStream(file_stream);
lines.lineHandler = () {
  String line;
  while ((line = lines.readLine()) != null) {
    ...
  }
};
lines.closeHandler = () {
  ...
};
Note that the API is not yet finalized and could change at any moment.
Edit: API has changed. See Introduction to new IO
Your question implies you want to do this from the client-side, that is, the browser. The dart:io library only works in the stand-alone VM on the command line.
If you do want to read a file from within the VM, there's now an easier way:
import 'dart:io';

main() {
  var filename = new Options().script;
  var file = new File(filename);
  if (!file.existsSync()) {
    print("File $filename does not exist");
    return;
  }
  var contents = file.readAsStringSync();
  print(contents);
}
If you do not want to block while the whole file is read, you can use the async version of readAsString which returns a Future:
file.readAsString().then((contents) {
  print(contents);
});
