How to use GCrypt AES128 in Vala?

I tried this code, but the result is not as expected; maybe I went wrong somewhere. Please help.
gcrypt.vapi is from: https://gitlab.gnome.org/GNOME/vala-extra-vapis
using GCrypt;
using Posix;
void main () {
    GCrypt.Cipher.Cipher cipher;
    GCrypt.Error err = GCrypt.Cipher.Cipher.open(out cipher, Cipher.Algorithm.AES128, Cipher.Mode.CBC, Cipher.Flag.SECURE);
    if (err != 0) {
        print("Error: %s\n", err.to_string());
        Process.exit(EXIT_FAILURE);
    }
    string iv = "1111111111111111";
    string key = "2222222222222222";
    err = cipher.set_key(key.data);
    if (err != 0) {
        print("Error key: %s\n", err.to_string());
        Process.exit(EXIT_FAILURE);
    }
    err = cipher.set_iv(iv.data);
    if (err != 0) {
        print("Error iv: %s\n", err.to_string());
        Process.exit(EXIT_FAILURE);
    }
    string str = "Hello World!!!!!";
    uchar[] ary = new uchar[str.length];
    print("ary: %d\n", ary.length);
    err = cipher.encrypt(ary, str.data);
    if (err != 0) {
        print("Error encrypt: %s\n", err.to_string());
        Process.exit(EXIT_FAILURE);
    }
    string result = Base64.encode(ary);
    print("ary: %d\n", ary.length);
    print("result: %s\n", result); // tus0150r+OSFg63kxluXpg==
    // expect result: tus0150r+OSFg63kxluXpmlrUQOsLMbbgx51GhLZats=
    cipher.close();
    Process.exit(EXIT_SUCCESS);
}

Your output buffer ary is sized too small, so you're losing data. You can't assume that the encrypted output will be the same size as the input data:
uchar[] ary = new uchar[str.length];
Edit: I was incorrect. Since AES is a block cipher, the output is the same size as its (block-aligned) input; however, the padding scheme must still be considered.
Here's an example of creating an output buffer of the correct size (granted, it's for AES-256, not AES-128, so you'll need to make some adjustments): https://github.com/JCWasmx86/LFinance/blob/e6057b38a594ecbef728b212d9aa7df8cd8e869b/src/crypto/encryption.vala
That code links out to another Stack Overflow post about determining the correct size: Size of data after AES/CBC and AES/ECB encryption.
Unfortunately I don't know enough about AES to give you the exact size you'll need, but hopefully this gets you going in the right direction!
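For what it's worth, the expected ciphertext above decodes to 32 bytes for a 16-byte plaintext, which is exactly what PKCS#7 padding produces (and note that libgcrypt applies no padding itself, so the caller has to pad the plaintext before encrypting). A minimal sketch of the size arithmetic, in Swift only because the arithmetic is the same in any language:

// Sketch of the output-size arithmetic for AES-CBC with PKCS#7 padding.
// AES always uses 16-byte blocks, regardless of key size (128/192/256).
func pkcs7PaddedSize(plaintextLength: Int, blockSize: Int = 16) -> Int {
    // PKCS#7 always adds at least one padding byte, so an already
    // block-aligned plaintext grows by one whole block.
    return (plaintextLength / blockSize + 1) * blockSize
}

print(pkcs7PaddedSize(plaintextLength: 16)) // 32, matching the 32-byte expected ciphertext above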

Related

iOS 13.1.3 VTDecompressionSessionDecodeFrame can't decode right

CVPixelBufferRef outputPixelBuffer = NULL;
CMBlockBufferRef blockBuffer = NULL;
void* buffer = (void*)[videoUnit bufferWithH265LengthHeader];
OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                     buffer,
                                                     videoUnit.length,
                                                     kCFAllocatorNull,
                                                     NULL, 0, videoUnit.length,
                                                     0, &blockBuffer);
if (status == kCMBlockBufferNoErr) {
    CMSampleBufferRef sampleBuffer = NULL;
    const size_t sampleSizeArray[] = {videoUnit.length};
    status = CMSampleBufferCreateReady(kCFAllocatorDefault,
                                       blockBuffer,
                                       _decoderFormatDescription,
                                       1, 0, NULL, 1, sampleSizeArray,
                                       &sampleBuffer);
    if (status == kCMBlockBufferNoErr && sampleBuffer && _deocderSession) {
        VTDecodeFrameFlags flags = 0;
        VTDecodeInfoFlags flagOut = 0;
        OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(_deocderSession,
                                                                  sampleBuffer,
                                                                  flags,
                                                                  &outputPixelBuffer,
                                                                  &flagOut);
        if (decodeStatus == kVTInvalidSessionErr) {
            NSLog(@"IOS8VT: Invalid session, reset decoder session");
        } else if (decodeStatus == kVTVideoDecoderBadDataErr) {
            NSLog(@"IOS8VT: decode failed status=%d(Bad data)", decodeStatus);
        } else if (decodeStatus != noErr) {
            NSLog(@"IOS8VT: decode failed status=%d", decodeStatus);
        }
        CFRelease(sampleBuffer);
    }
    CFRelease(blockBuffer);
}
return outputPixelBuffer;
This is my code to decode the stream data. It was working well on an iPhone 6s, but when the code runs on an iPhone X or iPhone 11, outputPixelBuffer returns nil. Can anyone help?
Without seeing the code for your decompression session creation, it is hard to say. It could be that your decompression session is providing the output buffer to the callback function provided at creation, so I highly recommend you add that part of your code too.
Providing &outputPixelBuffer in:
OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(_deocderSession,
                                                          sampleBuffer,
                                                          flags,
                                                          &outputPixelBuffer,
                                                          &flagOut);
only means that you've provided the reference; it does not mean that it will be filled synchronously.
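To illustrate that first point: if the session was created with an output callback record, decoded frames are delivered to that callback rather than through the pointer you pass to VTDecompressionSessionDecodeFrame. A rough Swift sketch of that creation path (formatDescription stands in for your CMVideoFormatDescription; everything else here is illustrative, not taken from the question):

import VideoToolbox

// A session created this way hands frames to the callback, not to the
// &outputPixelBuffer argument of the decode call.
var callbackRecord = VTDecompressionOutputCallbackRecord(
    decompressionOutputCallback: { _, _, status, _, imageBuffer, _, _ in
        // decoded frames (or a per-frame error status) arrive here
        print("decoded frame, status: \(status), buffer: \(String(describing: imageBuffer))")
    },
    decompressionOutputRefCon: nil)

var session: VTDecompressionSession?
let createStatus = VTDecompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    formatDescription: formatDescription, // placeholder
    decoderSpecification: nil,
    imageBufferAttributes: nil,
    outputCallback: &callbackRecord,
    decompressionSessionOut: &session)
print("VTDecompressionSessionCreate status: \(createStatus)")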
I also recommend that you print out the OSStatus for:
CMBlockBufferCreateWithMemoryBlock
and
CMSampleBufferCreateReady
If there are issues at those steps, you'll be able to tell.
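A Swift sketch of what that checking might look like for the first call (the helper and its parameters are mine, not from the question; the same pattern applies to CMSampleBufferCreateReady):

import CoreMedia

// Surface failures from the earlier pipeline steps too, so a bad block
// buffer is not silently ignored. The caller keeps `bytes` alive, as in
// the question's code (kCFAllocatorNull means no copy is made).
func makeBlockBuffer(bytes: UnsafeMutableRawPointer, length: Int) -> CMBlockBuffer? {
    var blockBuffer: CMBlockBuffer?
    let status = CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault,
        memoryBlock: bytes,
        blockLength: length,
        blockAllocator: kCFAllocatorNull,
        customBlockSource: nil,
        offsetToData: 0,
        dataLength: length,
        flags: 0,
        blockBufferOut: &blockBuffer)
    guard status == kCMBlockBufferNoErr else {
        print("CMBlockBufferCreateWithMemoryBlock failed: \(status)")
        return nil
    }
    return blockBuffer
}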

Read UInt32 from InputStream

I need to communicate with a server that has a special message format: each message begins with 4 bytes (together an unsigned long / UInt32 in big-endian format) which determine the length of the following message. After those 4 bytes, the message is sent as a normal string.
So I first need to read 4 bytes into an integer (32-bit unsigned). In Java I do this like:
DataInputStream is;
...
int len = is.readInt();
How can I do this in Swift 4?
At the moment I use
var lengthbuffer = [UInt8](repeating: 0, count: 4)
let bytecount = istr.read(&lengthbuffer, maxLength: 4)
let lengthbytes = lengthbuffer[0...3]
let bigEndianValue = lengthbytes.withUnsafeBufferPointer {
    ($0.baseAddress!.withMemoryRebound(to: UInt32.self, capacity: 1) { $0 })
}.pointee
let bytes_expected = Int(UInt32(bigEndian: bigEndianValue))
But this doesn't look like the most elegant way. Furthermore, sometimes (I cannot reproduce it reliably) a wrong value is read (too big). When I then try to allocate memory for the following message, the app crashes:
let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bytes_expected)
let bytes_read = istr.read(buffer, maxLength: bytes_expected)
So what is the Swift way to read a UInt32 from an InputStream?
EDIT:
My current code (implemented things from the comments. Thanks!) looks like this:
private let inputStreamAccessQueue = DispatchQueue(label: "SynchronizedInputStreamAccess") // NOT concurrent!!!

// This is called on Stream.Event.hasBytesAvailable
func handleInput() {
    self.inputStreamAccessQueue.sync(flags: .barrier) {
        guard let istr = self.inputStream, istr.hasBytesAvailable else {
            log.error(self.buildLogMessage("handleInput() called when inputstream has no bytes available"))
            return
        }
        let lengthbuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: 4)
        defer { lengthbuffer.deallocate(capacity: 4) }
        let lenbytes_read = istr.read(lengthbuffer, maxLength: 4)
        guard lenbytes_read == 4 else {
            self.errorHandler(NetworkingError.InputError("Input Stream received \(lenbytes_read) (!=4) bytes"))
            return
        }
        let bytes_expected = Int(UnsafeRawPointer(lengthbuffer).load(as: UInt32.self).bigEndian)
        log.info(self.buildLogMessage("expect \(bytes_expected) bytes"))
        let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bytes_expected)
        let bytes_read = istr.read(buffer, maxLength: bytes_expected)
        guard bytes_read == bytes_expected else {
            print("Error: Expected \(bytes_expected) bytes, read \(bytes_read)")
            return
        }
        guard let message = String(bytesNoCopy: buffer, length: bytes_expected, encoding: .utf8, freeWhenDone: true) else {
            log.error("ERROR WHEN READING")
            return
        }
        self.handleMessage(message)
    }
}
This works most of the time, but sometimes istr.read() does not read bytes_expected bytes, only bytes_read < bytes_expected. This results in another hasBytesAvailable event, and handleInput() is called again. This time, of course, the first 4 bytes that are read do not contain the length of a new message but some content of the last message. But my code does not know that, so the first bytes are interpreted as the length. In many cases this is a really big value => allocating too much memory => crash.
I think this is the explanation for the bug. But how do I solve it?
Call read() on the stream while hasBytesAvailable = true? Is there maybe a better solution?
I would assume that when I loop, the hasBytesAvailable event would still happen after every read() => handleInput would still be called again too early... How can I avoid this?
EDIT 2: I have implemented the loop now; unfortunately, it is still crashing with the same error (and probably for the same reason). Relevant code:
let bytes_expected = Int(UnsafeRawPointer(lengthbuffer).load(as: UInt32.self).bigEndian)
var message = ""
var bytes_missing = bytes_expected
while bytes_missing > 0 {
    print("missing", bytes_missing)
    let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bytes_missing)
    let bytes_read = istr.read(buffer, maxLength: bytes_missing)
    guard bytes_read > 0 else {
        print("bytes_read not <= 0: \(bytes_read)")
        return
    }
    guard bytes_read <= bytes_missing else {
        print("Read more bytes than expected. missing=\(bytes_missing), read=\(bytes_read)")
        return
    }
    guard let partial_message = String(bytesNoCopy: buffer, length: bytes_expected, encoding: .utf8, freeWhenDone: true) else {
        log.error("ERROR WHEN READING")
        return
    }
    message = message + partial_message
    bytes_missing -= bytes_read
}
My console output when it crashes:
missing 1952807028
malloc: *** mach_vm_map(size=1952808960) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
So it seems that the whole handleInput() method is called too early, although I use the barrier! What am I doing wrong?
I'd do it like this (ready to be pasted into a playground):
import Foundation
var stream = InputStream(data: Data([0,1,0,0]))
stream.open()
defer { stream.close() }
var buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: 4)
defer { buffer.deallocate(capacity: 4) }
guard stream.read(buffer, maxLength: 4) >= 4 else {
    // handle all cases: end of stream, error, waiting for more data to arrive...
    fatalError()
}
let number = UnsafeRawPointer(buffer).load(as: UInt32.self)
number // 256
number.littleEndian // 256
number.bigEndian // 65536
Using UnsafeRawPointer.load directly (without explicit rebinding) is safe for trivial types according to the documentation. Trivial types are generally those that don't require ARC operations.
Alternatively, you can access the same memory as a different type without rebinding through untyped memory access, so long as the bound type and the destination type are trivial types.
I would suggest load(as:) to convert the buffer to the UInt32, and I would make sure you make the endianness explicit, e.g.
let value = try stream.read(type: UInt32.self, endianness: .little)
Where:
enum InputStreamError: Error {
    case readFailure
}

enum Endianness {
    case little
    case big
}

extension InputStream {
    func read<T: FixedWidthInteger>(type: T.Type, endianness: Endianness = .little) throws -> T {
        let size = MemoryLayout<T>.size
        var buffer = [UInt8](repeating: 0, count: size)
        let count = read(&buffer, maxLength: size)
        guard count == size else {
            throw InputStreamError.readFailure
        }
        return buffer.withUnsafeBytes { pointer -> T in
            switch endianness {
            case .little: return T(littleEndian: pointer.load(as: T.self))
            case .big:    return T(bigEndian: pointer.load(as: T.self))
            }
        }
    }

    func readFloat(endianness: Endianness) throws -> Float {
        return try Float(bitPattern: read(type: UInt32.self, endianness: endianness))
    }

    func readDouble(endianness: Endianness) throws -> Double {
        return try Double(bitPattern: read(type: UInt64.self, endianness: endianness))
    }
}
Note, I made read(type:endianness:) generic, so it can be reused with any of the standard integer types. I have also thrown in readFloat and readDouble for good measure.
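One further note, not from the answers above but aimed at the partial-read crash described in the edits: because hasBytesAvailable can fire in the middle of a message, one common approach (a sketch; all names are hypothetical) is to accumulate incoming bytes in a persistent buffer and only consume a frame once its complete body has arrived:

import Foundation

// Persistent buffer; survives across hasBytesAvailable events.
var pending = Data()

func drain(_ stream: InputStream) {
    // Move everything currently available into the persistent buffer.
    var chunk = [UInt8](repeating: 0, count: 4096)
    while stream.hasBytesAvailable {
        let n = stream.read(&chunk, maxLength: chunk.count)
        guard n > 0 else { break }
        pending.append(chunk, count: n)
    }
    // Consume as many complete frames as the buffer holds.
    while pending.count >= 4 {
        // Big-endian 4-byte length prefix, assembled byte by byte to
        // avoid any alignment concerns.
        let length = pending.prefix(4).reduce(0) { ($0 << 8) + Int($1) }
        guard pending.count >= 4 + length else { return } // wait for more data
        let body = pending.subdata(in: 4 ..< 4 + length)
        pending.removeSubrange(0 ..< 4 + length)
        if let message = String(data: body, encoding: .utf8) {
            print("message:", message) // hand off to the real handler here
        }
    }
}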

One function gives several results in swift

I have a method in Objective-C which I call from Swift. It worked pretty well in Swift 2, but in Swift 3 the behaviour has changed: it gives me three different results even though I pass the same parameters.
Sometimes it doesn't find pfile, sometimes it fails on PIN checking, and sometimes it works fine and gives me the X.509 certificate.
char* ParsePKCS12(unsigned char* pkcs12_path, unsigned char * pin) {
    printf("PARSE PATH: %s\n", pkcs12_path);
    printf("PASSWORD: %s\n", pin);
    NSString *pfile = [NSString stringWithUTF8String:pkcs12_path];
    FILE *fp;
    PKCS12 *p12;
    EVP_PKEY *pkey;
    X509 *cert;
    BIO *databio = BIO_new(BIO_s_mem());
    STACK_OF(X509) *ca = NULL;
    if ([[NSFileManager defaultManager] fileExistsAtPath:pfile]) {
        NSLog(@"ok, pfile exists!");
    } else {
        NSLog(@"error, pfile does not exist!");
        return "-1";
    }
    OpenSSL_add_all_algorithms();
    ERR_load_crypto_strings();
    fp = fopen([pfile UTF8String], "rb");
    p12 = d2i_PKCS12_fp(fp, NULL);
    fclose(fp);
    if (!p12) {
        fprintf(stderr, "Error reading PKCS#12 file\n");
        ERR_print_errors_fp(stderr);
        return "-1";
    }
    if (!PKCS12_parse(p12, (const char *)pin, &pkey, &cert, &ca)) { // Error at parsing or pin error
        fprintf(stderr, "Error parsing PKCS#12 file\n");
        ERR_print_errors_fp(stderr);
        ERR_print_errors(databio);
        return "-1";
    }
    BIO *bio = NULL;
    char *pem = NULL;
    if (NULL == cert) {
        //return NULL;
        return "-1";
    }
    bio = BIO_new(BIO_s_mem());
    if (NULL == bio) {
        return "-1";
    }
    if (0 == PEM_write_bio_X509(bio, cert)) {
        BIO_free(bio);
        //return NULL;
    }
    pem = (char *) malloc(bio->num_write + 1);
    if (NULL == pem) {
        BIO_free(bio);
        return "-1";
    }
    memset(pem, 0, bio->num_write + 1);
    BIO_read(bio, pem, bio->num_write);
    BIO_free(bio);
    PKCS12_free(p12);
    return pem;
}
I call this code from Swift like this:
self.x509 = String(cString: ParsePKCS12(UnsafeMutablePointer<UInt8>(mutating: self.path),
                                        UnsafeMutablePointer<UInt8>(mutating: "123456"))!)
Your call
self.x509 = String(cString: ParsePKCS12(UnsafeMutablePointer<UInt8>(mutating: self.path),
                                        UnsafeMutablePointer<UInt8>(mutating: "123456"))!)
does not work reliably, because in both
UnsafeMutablePointer<UInt8>(mutating: someSwiftString)
calls the compiler creates a temporary C string representation of the Swift string and passes that to the function. But that C string is only valid until the UnsafeMutablePointer constructor returns, which means that the second string conversion can overwrite the first, or cause other undefined behaviour.
The simplest solution would be to change the C function to
take constant C strings (and use the default signedness):
char* ParsePKCS12(const char * pkcs12_path, const char * pin)
Then you can simply call it as
self.x509 = String(cString: ParsePKCS12(self.path, "123456"))
and the compiler creates temporary C strings which are valid during
the call of ParsePKCS12().

AudioConverter#FillComplexBuffer returns -50 and does not convert anything

I'm closely following this Xamarin sample (based on this Apple sample) to convert a LinearPCM file to an AAC file.
The sample works great, but implemented in my project, the FillComplexBuffer method returns error -50 and the InputData event is not triggered once; thus nothing is converted.
The error only appears when testing on a device. When testing on the simulator, everything goes fine and I get a correctly encoded AAC file at the end.
I tried a lot of things today, and I don't see any difference between my code and the sample code. Do you have any idea where this may come from?
I don't know if this is in any way related to Xamarin; it doesn't seem so, since the Xamarin sample works great.
Here's the relevant part of my code:
protected void Encode(string path)
{
    // In class setup. File at TempWavFilePath has DecodedFormat as format.
    //
    // DecodedFormat = AudioStreamBasicDescription.CreateLinearPCM();
    // AudioStreamBasicDescription encodedFormat = new AudioStreamBasicDescription()
    // {
    //     Format = AudioFormatType.MPEG4AAC,
    //     SampleRate = DecodedFormat.SampleRate,
    //     ChannelsPerFrame = DecodedFormat.ChannelsPerFrame,
    // };
    // AudioStreamBasicDescription.GetFormatInfo (ref encodedFormat);
    // EncodedFormat = encodedFormat;

    // Setup converter
    AudioStreamBasicDescription inputFormat = DecodedFormat;
    AudioStreamBasicDescription outputFormat = EncodedFormat;

    AudioConverterError converterCreateError;
    AudioConverter converter = AudioConverter.Create(inputFormat, outputFormat, out converterCreateError);
    if (converterCreateError != AudioConverterError.None)
    {
        Console.WriteLine("Converter creation error: " + converterCreateError);
    }
    converter.EncodeBitRate = 192000; // AAC 192kbps

    // get the actual formats back from the Audio Converter
    inputFormat = converter.CurrentInputStreamDescription;
    outputFormat = converter.CurrentOutputStreamDescription;

    /*** INPUT ***/

    AudioFile inputFile = AudioFile.OpenRead(NSUrl.FromFilename(TempWavFilePath));

    // init buffer
    const int inputBufferBytesSize = 32768;
    IntPtr inputBufferPtr = Marshal.AllocHGlobal(inputBufferBytesSize);

    // calc number of packets per read
    int inputSizePerPacket = inputFormat.BytesPerPacket;
    int inputBufferPacketSize = inputBufferBytesSize / inputSizePerPacket;
    AudioStreamPacketDescription[] inputPacketDescriptions = null;

    // init position
    long inputFilePosition = 0;

    // define input delegate
    converter.InputData += delegate(ref int numberDataPackets, AudioBuffers data, ref AudioStreamPacketDescription[] dataPacketDescription)
    {
        // how much to read
        if (numberDataPackets > inputBufferPacketSize)
        {
            numberDataPackets = inputBufferPacketSize;
        }

        // read from the file
        int outNumBytes;
        AudioFileError readError = inputFile.ReadPackets(false, out outNumBytes, inputPacketDescriptions, inputFilePosition, ref numberDataPackets, inputBufferPtr);
        if (readError != 0)
        {
            Console.WriteLine("Read error: " + readError);
        }

        // advance input file packet position
        inputFilePosition += numberDataPackets;

        // put the data pointer into the buffer list
        data.SetData(0, inputBufferPtr, outNumBytes);

        // add packet descriptions if required
        if (dataPacketDescription != null)
        {
            if (inputPacketDescriptions != null)
            {
                dataPacketDescription = inputPacketDescriptions;
            }
            else
            {
                dataPacketDescription = null;
            }
        }

        return AudioConverterError.None;
    };

    /*** OUTPUT ***/

    // create the destination file
    var outputFile = AudioFile.Create(NSUrl.FromFilename(path), AudioFileType.M4A, outputFormat, AudioFileFlags.EraseFlags);

    // init buffer
    const int outputBufferBytesSize = 32768;
    IntPtr outputBufferPtr = Marshal.AllocHGlobal(outputBufferBytesSize);
    AudioBuffers buffers = new AudioBuffers(1);

    // calc number of packets per write
    int outputSizePerPacket = outputFormat.BytesPerPacket;
    AudioStreamPacketDescription[] outputPacketDescriptions = null;

    if (outputSizePerPacket == 0)
    {
        // if the destination format is VBR, we need to get max size per packet from the converter
        outputSizePerPacket = (int)converter.MaximumOutputPacketSize;

        // allocate memory for the PacketDescription structures describing the layout of each packet
        outputPacketDescriptions = new AudioStreamPacketDescription[outputBufferBytesSize / outputSizePerPacket];
    }
    int outputBufferPacketSize = outputBufferBytesSize / outputSizePerPacket;

    // init position
    long outputFilePosition = 0;
    long totalOutputFrames = 0; // used for debugging

    // write magic cookie if necessary
    if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
    {
        outputFile.MagicCookie = converter.CompressionMagicCookie;
    }

    // loop to convert data
    Console.WriteLine("Converting...");
    while (true)
    {
        // create buffer
        buffers[0] = new AudioBuffer()
        {
            NumberChannels = outputFormat.ChannelsPerFrame,
            DataByteSize = outputBufferBytesSize,
            Data = outputBufferPtr
        };

        int writtenPackets = outputBufferPacketSize;

        // LET'S CONVERT (it's about time...)
        AudioConverterError converterFillError = converter.FillComplexBuffer(ref writtenPackets, buffers, outputPacketDescriptions);
        if (converterFillError != AudioConverterError.None)
        {
            Console.WriteLine("FillComplexBuffer error: " + converterFillError);
        }

        if (writtenPackets == 0) // EOF
        {
            break;
        }

        // write to output file
        int inNumBytes = buffers[0].DataByteSize;

        AudioFileError writeError = outputFile.WritePackets(false, inNumBytes, outputPacketDescriptions, outputFilePosition, ref writtenPackets, outputBufferPtr);
        if (writeError != 0)
        {
            Console.WriteLine("WritePackets error: {0}", writeError);
        }

        // advance output file packet position
        outputFilePosition += writtenPackets;

        if (FlowFormat.FramesPerPacket != 0)
        {
            // the format has constant frames per packet
            totalOutputFrames += (writtenPackets * FlowFormat.FramesPerPacket);
        }
        else
        {
            // variable frames per packet require doing this for each packet (adding up the number of sample frames of data in each packet)
            for (var i = 0; i < writtenPackets; ++i)
            {
                totalOutputFrames += outputPacketDescriptions[i].VariableFramesInPacket;
            }
        }
    }

    // write out any of the leading and trailing frames for compressed formats only
    if (outputFormat.BitsPerChannel == 0)
    {
        Console.WriteLine("Total number of output frames counted: {0}", totalOutputFrames);
        WritePacketTableInfo(converter, outputFile);
    }

    // write the cookie again - sometimes codecs will update cookies at the end of a conversion
    if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
    {
        outputFile.MagicCookie = converter.CompressionMagicCookie;
    }

    // Clean everything
    Marshal.FreeHGlobal(inputBufferPtr);
    Marshal.FreeHGlobal(outputBufferPtr);
    converter.Dispose();
    outputFile.Dispose();

    // Remove temp file
    File.Delete(TempWavFilePath);
}
I already saw this SO question, but its rather terse C++/Obj-C answer doesn't seem to fit my problem.
Thanks!
I finally found the solution!
I just had to set the AVAudioSession category before converting the file.
AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.AudioProcessing);
AVAudioSession.SharedInstance().SetActive(true);
Since I also use an AudioQueue to RenderOffline, I must in fact set the category to AVAudioSessionCategory.PlayAndRecord so that both the offline rendering and the audio conversion work.
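For reference, the same fix in a native Swift project would look roughly like this (a sketch; the Xamarin calls above are the actual answer):

import AVFoundation

// Configure the session category before starting the conversion.
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: [])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("AVAudioSession setup failed: \(error)")
}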

iOS Game Center identity verification in Go

I am trying to write a service in Go that takes the parameters given by Game Center in:
//GKLocalPlayer
- (void)generateIdentityVerificationSignatureWithCompletionHandler:(void (^)(NSURL *publicKeyUrl, NSData *signature, NSData *salt, uint64_t timestamp, NSError *error))completionHandler
Inside the completionHandler of that method, I send the public key URL, base64-encoded signature, base64-encoded salt, timestamp, and the user's Game Center ID to my Go service. Inside my Go service (on Google App Engine), this is what I am doing:
1. Get the certificate from the public key URL
2. Decode the signature and salt
3. Form the payload based on player ID, bundle ID, timestamp and salt
4. Use X509.CheckSignature to verify that the payload matches the signature when it's hashed with the public key

*I know that I still need to verify with the certificate authority, but I am skipping that for now (if you know how to do that in Go for this case, please please please share!)
Problem: CheckSignature is returning crypto/rsa: verification error, and I really think I am doing everything as instructed by Apple:
https://developer.apple.com/library/ios/documentation/GameKit/Reference/GKLocalPlayer_Ref/Reference/Reference.html#//apple_ref/doc/uid/TP40009587-CH1-SW25
And the code that I have so far:
func (v *ValidationRequest) ValidateGameCenter(publicKeyUrl string, playerId string, bundleId string, signature string, salt string, timestamp uint64) error {
    client := urlfetch.Client(v.Context)
    resp, err := client.Get(publicKeyUrl)
    if err != nil {
        v.Context.Errorf("%v", err.Error())
        return err
    }
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        v.Context.Errorf("%v", err.Error())
        return err
    }
    cert, err := x509.ParseCertificate(body)
    if err != nil {
        v.Context.Errorf("%v", err.Error())
        return err
    }
    signatureBytes, err := base64.StdEncoding.DecodeString(signature)
    saltBytes, err := base64.StdEncoding.DecodeString(salt)
    payload, err := formPayload(v, playerId, bundleId, timestamp, saltBytes)
    if err != nil {
        v.Context.Errorf("%v", err.Error())
        return err
    }
    err = cert.CheckSignature(cert.SignatureAlgorithm, payload, signatureBytes)
    if err != nil {
        v.Context.Errorf("%v", err.Error())
        return err
    }
    return nil
}
func formPayload(v *ValidationRequest, playerId string, bundleId string, timestamp uint64, salt []byte) ([]byte, error) {
    bundleIdBytes := []byte(bundleId)
    playerIdBytes := []byte(playerId)
    payloadBuffer := new(bytes.Buffer)
    written, err := payloadBuffer.Write(playerIdBytes)
    if err != nil {
        return nil, err
    }
    written, err = payloadBuffer.Write(bundleIdBytes)
    if err != nil {
        return nil, err
    }
    var bigEndianTimestamp []byte = make([]byte, 8)
    binary.BigEndian.PutUint64(bigEndianTimestamp, timestamp)
    if written != len(bundleIdBytes) {
        return nil, errors.New(fmt.Sprintf("Failed writing all bytes. Written: %d Length: %d", written, len(bundleIdBytes)))
    }
    written, err = payloadBuffer.Write(bigEndianTimestamp)
    if err != nil {
        return nil, err
    }
    if written != len(bigEndianTimestamp) {
        return nil, errors.New(fmt.Sprintf("Failed writing all bytes. Written: %d Length: %d", written, len(bigEndianTimestamp)))
    }
    written, err = payloadBuffer.Write(salt)
    if err != nil {
        return nil, err
    }
    if written != len(salt) {
        return nil, errors.New(fmt.Sprintf("Failed writing all bytes. Written: %d Length: %d", written, len(salt)))
    }
    return payloadBuffer.Bytes(), nil
}
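For completeness, the client side that produces these parameters looks roughly like this in Swift (a sketch; sendToGoService is a hypothetical helper that performs the HTTP request to the service above):

import GameKit

let player = GKLocalPlayer.localPlayer()
player.generateIdentityVerificationSignature { publicKeyURL, signature, salt, timestamp, error in
    guard let publicKeyURL = publicKeyURL, let signature = signature,
          let salt = salt, error == nil else {
        print("identity verification failed: \(String(describing: error))")
        return
    }
    // Mirrors the parameters the Go service expects: public key URL,
    // base64 signature, base64 salt, timestamp and player ID.
    sendToGoService(publicKeyUrl: publicKeyURL.absoluteString,
                    playerId: player.playerID ?? "",
                    signature: signature.base64EncodedString(),
                    salt: salt.base64EncodedString(),
                    timestamp: timestamp)
}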
