I have a log of a program's state. This log can be saved to a file manually or at a time interval for persistent storage. Before it is saved to the file, it is encrypted with RNCryptor.
My current appending (saving) to file flow (a rough sketch follows the steps):
Read file
Decrypt the information from the string that was read
Concatenate the decrypted string with the new string
Encrypt the concatenated string
Write it to file
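Simplified, the current save looks roughly like this (error handling trimmed; the method name and path handling are just illustrative, and I'm using RNCryptor's RNEncryptor/RNDecryptor convenience calls):

    #import "RNEncryptor.h"
    #import "RNDecryptor.h"

    // Current approach: decrypt the whole log, append the new entry, re-encrypt everything.
    - (void)saveLogEntry:(NSString *)newEntry toFile:(NSString *)path password:(NSString *)password
    {
        NSError *error = nil;

        // 1. Read the file.
        NSData *encryptedFile = [NSData dataWithContentsOfFile:path];

        // 2. Decrypt the information from what was read (empty log if there is no file yet).
        NSString *log = @"";
        if (encryptedFile.length > 0) {
            NSData *decrypted = [RNDecryptor decryptData:encryptedFile withPassword:password error:&error];
            log = [[NSString alloc] initWithData:decrypted encoding:NSUTF8StringEncoding];
        }

        // 3. Concatenate the decrypted string with the new string.
        log = [log stringByAppendingString:newEntry];

        // 4. Encrypt the concatenated string.
        NSData *reEncrypted = [RNEncryptor encryptData:[log dataUsingEncoding:NSUTF8StringEncoding]
                                          withSettings:kRNCryptorAES256Settings
                                              password:password
                                                 error:&error];

        // 5. Write it to the file.
        [reEncrypted writeToFile:path atomically:YES];
    }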
What I imagine:
Encrypt the new string
Append it to the file
When I read this back I will have to build a string from all the encrypted blocks. But I don't know how to decrypt a file with multiple encrypted blocks in it: how do I tell where one ends and another begins?
Also, is this the best choice for performance? The text in the file could at maximum get to 100 MB (possibly it will never get this big).
Is using Core Data viable? Each append could be a different record or something similar. And the Core Data store could be encrypted, so there would be no need for RNCryptor.
I would appreciate code in Objective-C, if any.
There are many things you can do:
Easiest would be to encode the ciphertexts as text (e.g. with Base64) and write each encoded ciphertext on a new line. You need the encoding for that, because the raw ciphertext might contain bytes that can be interpreted as newline control characters, which cannot happen with a text encoding. The problem with this is that it blows up the logs unnecessarily (e.g. by 33% if Base64 is used). A sketch of this option follows after this list.
You can prepend each unencoded ciphertext with its length (e.g. encoded as a big-endian int32) and write both as-is to a file in binary mode. If you read the file from the beginning, you can distinguish the ciphertexts, because you know how long the following ciphertext is and where the next encoded length starts. The blowup is only as big as the encoded length in front of each ciphertext. A sketch of this option also follows after the list.
Use a binary delimiter such as 0x0101 between ciphertexts. Such a delimiter might still appear inside a ciphertext, so you need to escape it wherever you find it in the ciphertext (and unescape it again when reading). This is a little tricky to get right.
If the amount of log data is small (a few MB), then you can find a library to append to a ZIP file.
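A minimal sketch of the first (Base64-per-line) option, assuming each RNCryptor ciphertext arrives as an NSData; the method names and path handling are only illustrative:

    // Option 1: append one ciphertext as a Base64-encoded line.
    - (void)appendCiphertext:(NSData *)ciphertext toLogAtPath:(NSString *)path
    {
        NSString *line = [[ciphertext base64EncodedStringWithOptions:0] stringByAppendingString:@"\n"];
        NSData *lineData = [line dataUsingEncoding:NSUTF8StringEncoding];

        NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
        if (handle == nil) {
            [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
            handle = [NSFileHandle fileHandleForWritingAtPath:path];
        }
        [handle seekToEndOfFile];
        [handle writeData:lineData];
        [handle closeFile];
    }

    // Reading back: one Base64-encoded ciphertext per line; decrypt each block separately.
    - (NSArray<NSData *> *)ciphertextsAtPath:(NSString *)path
    {
        NSString *contents = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:NULL];
        NSMutableArray<NSData *> *ciphertexts = [NSMutableArray array];
        for (NSString *line in [contents componentsSeparatedByString:@"\n"]) {
            if (line.length == 0) continue;
            NSData *ciphertext = [[NSData alloc] initWithBase64EncodedString:line options:0];
            if (ciphertext != nil) [ciphertexts addObject:ciphertext];
        }
        return ciphertexts;
    }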
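And a sketch of the second (length-prefix) option: a big-endian 4-byte length before each raw ciphertext, then walking the file by those lengths when reading. Again, the names are just for illustration:

    // Option 2: append one raw ciphertext, prefixed with its length as a big-endian uint32.
    - (void)appendLengthPrefixedCiphertext:(NSData *)ciphertext toLogAtPath:(NSString *)path
    {
        uint32_t lengthBE = CFSwapInt32HostToBig((uint32_t)ciphertext.length);
        NSMutableData *record = [NSMutableData dataWithBytes:&lengthBE length:sizeof(lengthBE)];
        [record appendData:ciphertext];

        NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
        if (handle == nil) {
            [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
            handle = [NSFileHandle fileHandleForWritingAtPath:path];
        }
        [handle seekToEndOfFile];
        [handle writeData:record];
        [handle closeFile];
    }

    // Reading back: walk the file from the beginning, length by length.
    - (NSArray<NSData *> *)lengthPrefixedCiphertextsAtPath:(NSString *)path
    {
        NSData *file = [NSData dataWithContentsOfFile:path];
        NSMutableArray<NSData *> *ciphertexts = [NSMutableArray array];
        NSUInteger offset = 0;
        while (offset + sizeof(uint32_t) <= file.length) {
            uint32_t lengthBE = 0;
            [file getBytes:&lengthBE range:NSMakeRange(offset, sizeof(lengthBE))];
            uint32_t length = CFSwapInt32BigToHost(lengthBE);
            offset += sizeof(lengthBE);
            if (offset + length > file.length) break; // truncated/corrupted tail
            [ciphertexts addObject:[file subdataWithRange:NSMakeRange(offset, length)]];
            offset += length;
        }
        return ciphertexts;
    }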
You can use an array to store the information and then read and write that array to a file. Find an example here; a rough sketch also follows the steps below.
Steps:
Read the array from the file.
Add the new encrypted string to the array.
Write the array back to the file.
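A minimal sketch of those steps, assuming each encrypted entry is an NSData and the whole array is written out as a property list (the function name and path are illustrative):

    // Append one encrypted entry to an on-disk array of NSData blobs.
    static void AppendEncryptedEntry(NSData *encryptedEntry, NSString *path)
    {
        // 1. Read the array from the file (or start empty if the file doesn't exist yet).
        NSArray *existing = [NSArray arrayWithContentsOfFile:path];
        NSMutableArray *entries = existing ? [existing mutableCopy] : [NSMutableArray array];

        // 2. Add the new encrypted string (as NSData) to the array.
        [entries addObject:encryptedEntry];

        // 3. Write the array back to the file. NSData is a valid property-list type,
        //    so an array of encrypted blobs can be serialized directly.
        [entries writeToFile:path atomically:YES];
    }

Note that this still rewrites the whole file on every save; it only avoids decrypting and re-encrypting the entire log.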
Azure Data Factory is not encoding special characters properly.
For example, the CSV file has the word sún, which gets converted into sÃºn after performing the transformation through a data flow and writing it to the blob storage container.
There are many files with different encodings in my container that the data flow picks up to apply the transformation, and these encodings include UTF-8, ANSI, etc.
So if I set the encoding to WINDOWS-1252 in the DelimitedText dataset, it works fine for an ANSI-encoded CSV file, but if the encoding type is UTF-8, I have to set it to UTF-8; only then does the data flow generate proper output for these special characters.
Dataset Image
My CSV file data screenshot is here: CSV file data
Is there any generic way to generate proper output for such characters, irrespective of the encoding type of the file?
I think I got it, if I understand you correctly. For Data Factory, we must first choose one encoding type to read the file. If your files have many encodings and you want to preserve the data across those encodings, that is limited by the encoding type, not by Data Factory. If the output encoding can't represent the data, it will be converted to another character. Data Factory only provides these encoding types for us to read/write data.
Data Factory can't detect the encoding type of the files, even with the Get Metadata activity. Maybe you can achieve that at the code level; try a function or a notebook, that's the only way.
HTH.
I typed in
file -I *
to look at the encodings of all the CSV files in an entire directory. A lot of the file encodings come back as charset=binary. I'm not too familiar with this encoding.
Does anyone know how to handle this encoding?
Thanks a lot for your time.
"Binary" encoding pretty much means that the encoding is unknown.
Everything is binary data under the hood. In text files, each byte or sequence of bytes represents a specific character, and which character in particular depends on the encoding the file was encoded with, or the encoding you're interpreting the file with. Some encodings are unambiguously recognisable, others aren't (e.g. any file is valid in any single-byte encoding, so you can't easily distinguish one single-byte encoding from another). What file is telling you with charset=binary is that it doesn't have any more specific information than that the file contains bits and bytes (Capt'n Obvious to the rescue). It's up to you to interpret the file in the correct encoding, or to interpret it as the correct file format.
I am trying to read a binary file to which I have been appending data using a BinaryWriter object. I keep getting this error:
"The output char buffer is too small to contain the decoded
characters, encoding 'Unicode (UTF-8)' fallback
'System.Text.DecoderReplacementFallback'."
My file has characters like |, which I suspect are the problem, but I don't know how to solve it.
The most probable reason is that your file contains some binary data that does not represent a valid UTF-8 code point at the place from which you are trying to read a UTF-8 character.
This can happen if your read algorithm loses "synchronization" with your write algorithm and tries to read a character from the wrong place, where something else (not a character) was written.
My requirements are to write binary records to a file. The binary records can be thought of as raw bytes in memory. I need a way to delimit each record so that I can do something similar to binary search on the file: for example, start in the middle of the file, find the next delimited record, and start the search from there.
My question is: can an ASCII string such as "START-RECORD" be used to delimit the binary records?
START-RECORD, data-length, .......binary data...........START-RECORD, data-length, .......binary data...........
When starting from an arbitrary position within the file, I can simply search for the ASCII string "START-RECORD". Is this approach feasible?
Not in a single pass, whether you're reading in binary mode or not. If you insert some string or other pattern as a "delimiter", you'd need to search for its binary representation while reading the file.
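A rough sketch of that search in Objective-C, assuming the layout from the question (an ASCII marker, then a 4-byte big-endian payload length, then the payload); the marker string and names are only illustrative. Note that the marker bytes could in principle also occur inside a payload, so a match is only a candidate record start unless you escape or validate it:

    #import <Foundation/Foundation.h>

    // Find the next record at or after `startOffset`, assuming the layout:
    // "START-RECORD" marker, 4-byte big-endian payload length, payload bytes.
    // Returns the payload, or nil if no further complete record is found.
    static NSData *NextRecord(NSData *file, NSUInteger startOffset, NSUInteger *nextOffset)
    {
        if (startOffset >= file.length) return nil;

        NSData *marker = [@"START-RECORD" dataUsingEncoding:NSASCIIStringEncoding];
        NSRange searchRange = NSMakeRange(startOffset, file.length - startOffset);
        NSRange found = [file rangeOfData:marker options:0 range:searchRange];
        if (found.location == NSNotFound) return nil;

        NSUInteger offset = found.location + found.length;
        if (offset + sizeof(uint32_t) > file.length) return nil;

        uint32_t lengthBE = 0;
        [file getBytes:&lengthBE range:NSMakeRange(offset, sizeof(lengthBE))];
        uint32_t length = CFSwapInt32BigToHost(lengthBE);
        offset += sizeof(lengthBE);

        if (offset + length > file.length) return nil; // truncated record
        if (nextOffset) *nextOffset = offset + length;
        return [file subdataWithRange:NSMakeRange(offset, length)];
    }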
I have a text file containing what I am told are Unicode characters, for example:
\320\222\320\21015-25'ish per main or \320\222\320\21020-40'ish per starter
Which should read:
£15-25'ish per main or £20-40'ish per main starter
However, when viewing this text in Firefox, the output is mangled with various unwanted characters.
So, are these really Unicode characters? And if so, how can I convert them to a form which displays correctly?
You need to:
know the encoding of the text file
read the data without losing information (either by reading it as binary or by reading it as text with the right encoding)
write the data with the right encoding (either by writing it out in binary and specifying the original encoding, or writing it out as text in an encoding which you also specify in the headers)
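For example, a minimal sketch of the read-with-known-encoding / write-with-chosen-encoding idea (in Objective-C, to match the examples above; the encodings and paths are placeholders you would replace with the real ones):

    #import <Foundation/Foundation.h>

    // Read a text file, telling Foundation which encoding its bytes are in,
    // then write it back out in an explicitly chosen encoding.
    static BOOL ReencodeFile(NSString *inPath, NSStringEncoding inEncoding,
                             NSString *outPath, NSStringEncoding outEncoding,
                             NSError **error)
    {
        // Reading: this encoding must match how the file was actually written.
        NSString *text = [NSString stringWithContentsOfFile:inPath encoding:inEncoding error:error];
        if (text == nil) return NO;

        // Writing: pick the output encoding explicitly (and declare the same
        // encoding wherever the file is served, e.g. in HTTP headers).
        return [text writeToFile:outPath atomically:YES encoding:outEncoding error:error];
    }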
Try to separate out the problem into "reading" and/or "writing". Do you know the encoding of the file? What do you have to do with the file? When you've written it with backslashes, is that actually what's in the file (i.e. an escaped form) or is it actually just a "normal" text encoding such as UTF-8?