Compressing a file using Huffman coding - huffman-code

While compressing a file using Huffman coding, after assigning a Huffman code to each character, those characters are replaced with their equivalent Huffman codes in the compressed file. How, then, are the original characters recovered from those Huffman codes when the file is decompressed? Does the compressed file contain some extra information for decoding the Huffman codes?

Yes. You need to send a description of the Huffman code in order to decode the data.
The usual implementation is to encode using a canonical Huffman code and then send just the code length for each symbol. The description of the code can itself be compressed.
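As a rough illustration of why the lengths are enough, here is a minimal Swift sketch (the symbols and lengths are made-up examples, not from the question) that rebuilds a canonical Huffman code from per-symbol code lengths, which is essentially what a decompressor does after reading the stored lengths:

import Foundation

// Canonical code assignment: symbols sorted by (code length, symbol value)
// receive consecutive codes, left-shifted whenever the length increases.
func canonicalCodes(from lengths: [Character: Int]) -> [Character: (code: UInt32, bits: Int)] {
    var result: [Character: (code: UInt32, bits: Int)] = [:]
    let sorted = lengths.sorted { ($0.value, $0.key) < ($1.value, $1.key) }
    var code: UInt32 = 0
    var previousLength = sorted.first?.value ?? 0
    for (symbol, length) in sorted {
        code <<= (length - previousLength)   // widen the code when the length grows
        result[symbol] = (code, length)
        code += 1
        previousLength = length
    }
    return result
}

// Example: the compressed file only needs to store these four lengths.
let lengths: [Character: Int] = ["a": 1, "b": 2, "c": 3, "d": 3]
for (symbol, entry) in canonicalCodes(from: lengths).sorted(by: { $0.key < $1.key }) {
    let bits = String(entry.code, radix: 2)
    print(symbol, String(repeating: "0", count: entry.bits - bits.count) + bits)
}
// a 0
// b 10
// c 110
// d 111

Because both sides follow the same deterministic rule, the compressed file never has to store the codes themselves, only the lengths.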

Related

NSKeyedArchiver sometimes makes a broken file

My iOS app saves NSCoding objects in the Documents directory.
NSKeyedArchiver archives them. It usually works, but it sometimes produces broken files.
The broken files have the following two patterns.
Lack of data
I can convert them to ASCII strings (as in "How do I convert an NSData object with hex data to ASCII in Swift?") and recover meaningful data.
They have the bplist prefix, but they don't have the trailers.
Total loss
I cannot convert them to ASCII strings.
It looks as if all the bytes have been shifted.
Here is one of the headers from a broken file compared with the correct header.
broken (the sequence of characters seems to be different in every file):
Nè\à¡<99>K<80>^_È<97>▸T§:Æñã9µú▸Ñ1^LË^VYGfM^A%KÍ<95
expected:
bplist00Ô^A^B^C^D^E^H01T$topX$objectsX$versionY$
Has anyone experienced the same case?

Multiple encryption append to a file

I have a log of a program's state. This log can be saved to a file, manually or at a time interval, for persistent storage. Before it is saved to the file, it is encrypted with RNCryptor.
My current append (save) to file flow:
Read file
Decrypt the information from the read string
Concatenate the decrypted string with the new string
Encrypt the concatenated string
Write it to file
What I imagine:
Encrypt the new string
Append to file
When I read this back, I will have to rebuild the string from all the encrypted blocks. But I don't know how to decrypt a file with multiple encrypted blocks in it: how do I tell where one block ends and the next begins?
Also, is this the best choice for performance? The text in the file could reach 100MB at most (though it will probably never get that big).
Is using Core Data viable? Each append could be a separate record or something similar, and Core Data can be encrypted, so there would be no need for RNCryptor.
I would appreciate code in Objective-C, if any.
There are many things you can do:
The easiest would be to encode the ciphertexts as text (e.g. with Base64) and write each encoded ciphertext on its own line. You need the encoding because the ciphertext itself might contain bytes that can be interpreted as newline control characters, which won't happen with a text encoding. The problem with this is that it blows up the logs unnecessarily (e.g. by 33% if Base64 is used).
You can prepend each unencoded ciphertext with its length (e.g. big-endian int32 encoding) and write both as-is to a file in binary mode. If you read the file from the beginning, you can distinguish each ciphertext, because you know how long the following ciphertext is and where the next encoded length starts (see the sketch after this list). The blowup is only as big as the length encoding for each ciphertext.
Use a binary delimiter such as 0x0101 between ciphertexts, but such a delimiter might still appear in the ciphertexts, so you need to escape it if you find it somewhere in the ciphertext. This is a little tricky to get right.
If the amount of logs is small (a few MB), then you can find a library to append to a ZIP file.
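A minimal Swift sketch of that length-prefix framing, assuming the encryption step itself is done elsewhere with RNCryptor (the function names here are just illustrative, and the same framing translates directly to Objective-C with NSFileHandle and NSMutableData):

import Foundation

// Append one ciphertext: 4-byte big-endian length prefix, then the raw bytes.
func appendRecord(_ ciphertext: Data, to fileURL: URL) throws {
    var length = UInt32(ciphertext.count).bigEndian
    var record = Data(bytes: &length, count: 4)
    record.append(ciphertext)
    if let handle = try? FileHandle(forWritingTo: fileURL) {
        handle.seekToEndOfFile()
        handle.write(record)
        handle.closeFile()
    } else {
        try record.write(to: fileURL)   // file does not exist yet: create it
    }
}

// Read the file back and split it into the individual ciphertexts.
func readRecords(from fileURL: URL) throws -> [Data] {
    let blob = try Data(contentsOf: fileURL)
    var records: [Data] = []
    var offset = 0
    while offset + 4 <= blob.count {
        // Decode the 4-byte big-endian length prefix.
        let length = blob[offset..<offset + 4].reduce(0) { ($0 << 8) | Int($1) }
        offset += 4
        guard offset + length <= blob.count else { break }     // truncated last record
        records.append(Data(blob[offset..<offset + length]))   // one complete ciphertext
        offset += length
    }
    return records
}

Each element of the returned array is one complete ciphertext, so it can be handed to the decryption routine on its own; the overhead is only 4 bytes per log entry.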
You can use an array to store the information, and then read and write that array to a file. You can find an example here.
Steps:
Read the array from the file.
Add the new encrypted string to the array.
Write the array back to the file.
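A minimal Swift sketch of that array approach, persisting the array as a property list (the function names are hypothetical, and each chunk is assumed to be the Data produced by the encryption step):

import Foundation

// Read the existing array, append the new encrypted chunk, write it all back.
func appendChunk(_ encryptedChunk: Data, to fileURL: URL) {
    var chunks = (NSArray(contentsOf: fileURL) as? [Data]) ?? []   // empty on first run
    chunks.append(encryptedChunk)
    _ = (chunks as NSArray).write(to: fileURL, atomically: true)   // plist of <data> elements
}

func readChunks(from fileURL: URL) -> [Data] {
    return (NSArray(contentsOf: fileURL) as? [Data]) ?? []
}

Note that this rewrites the whole array on every save, so it avoids the delimiter problem at the cost of re-reading and re-writing the file, which matters if the log really approaches 100MB.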

file encoding on a mac, charset=binary

I typed in
file -I *
to look at the encodings of all the CSV files in a directory. A lot of the file encodings are charset=binary. I'm not too familiar with this encoding format.
Does anyone know how to handle this encoding?
Thanks a lot for your time.
"Binary" encoding pretty much means that the encoding is unknown.
Everything is binary data under the hood. In a text file each byte, or sequence of bytes, represents a specific character, and which character in particular depends on the encoding the file was written with (and the encoding you interpret it with). Some encodings are unambiguously recognisable, others aren't (e.g. any file is valid in any single-byte encoding, so you can't easily distinguish one single-byte encoding from another). What file is telling you with charset=binary is that it doesn't have any more specific information than that the file contains bits and bytes (Capt'n Obvious to the rescue). It's up to you to interpret the file with the correct encoding, or as the correct file format.
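If you do have a shortlist of plausible encodings for those CSVs, a small probe like the following Swift sketch can narrow things down (the candidate list and the function name are just illustrative), with the caveat from above that single-byte encodings will "succeed" for almost any input:

import Foundation

// Try to decode the raw bytes with each candidate encoding in turn.
let candidates: [String.Encoding] = [.utf8, .utf16, .isoLatin1, .windowsCP1252]

func guessEncoding(of fileURL: URL) -> String.Encoding? {
    guard let raw = try? Data(contentsOf: fileURL) else { return nil }
    // The first encoding that decodes without error is only a guess, not proof.
    return candidates.first { String(data: raw, encoding: $0) != nil }
}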

iOS write to CSV file: which encoding to use

In my iOS app, I have a feature that writes data to a CSV file. This works fine in most cases with the following:
[csvString writeToFile: filePath atomically:YES encoding: NSUTF8StringEncoding error:&error];
I recently got an email from a Japanese user that the CSV file exported has weird symbols instead of Japanese characters. So I switched to using NSUTF16StringEncoding and it seems to work fine for Japanese characters as well.
So the question is: is it better to use NSUTF16StringEncoding, or are there any drawbacks to doing this? It seems that other examples I've seen for writing to CSV files (including CHCSVParser) use NSUTF8StringEncoding, so I'm not sure which one to prefer.
Thanks.
There's no "better" encoding.
UTF-8 uses a variable number of bytes per character, from 1 to 4. UTF-16 uses 2 bytes for most characters (4 for characters outside the Basic Multilingual Plane). What is best is really up to you and your users. In theory, if your users are mostly based in Asia and primarily use non-ASCII characters, files encoded in UTF-16 are smaller. If your users primarily live in the Western world and use Latin-based alphabets, UTF-8 makes every file roughly 50% smaller.
I believe your problem is not the choice of encoding, but rather the presentation. Text editors cannot guess the encoding of a file, so it's possible that your Japanese user was using a program that doesn't assume UTF-8 by default and thus was unable to interpret the UTF-8 byte sequences correctly.
The solution to this problem is to use the BOM sequence, as per this SO answer: https://stackoverflow.com/a/2585194/192024 (in short: just add those 3 BOM bytes at the beginning of the file to tell editors which encoding to use).
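For illustration, here is a minimal Swift sketch of that approach: it prepends the 3-byte UTF-8 BOM (0xEF 0xBB 0xBF) to the CSV text and writes the result. csvString and filePath are the names from the question; the same idea works with NSMutableData in Objective-C:

import Foundation

func writeCSV(_ csvString: String, to filePath: String) throws {
    var data = Data([0xEF, 0xBB, 0xBF])    // UTF-8 byte order mark
    data.append(Data(csvString.utf8))      // the CSV body itself, encoded as UTF-8
    try data.write(to: URL(fileURLWithPath: filePath), options: .atomic)
}

With the BOM in place, editors that would otherwise guess a legacy local encoding can detect UTF-8, so the file stays small for Latin text while still displaying Japanese correctly.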

How to read unicode characters accurately

I have a text file containing what I am told are unicode characters, for example:
\320\222\320\21015-25'ish per main or \320\222\320\21020-40'ish per starter
Which should read:
£15-25'ish per main or £20-40'ish per main starter
However, when viewing this text in Firefox, the output is mangled with various unwanted characters.
So, are these really unicode characters? And if so, how can I convert them to a form which is displayable correctly?
You need to:
know the encoding of the text file
read the data without losing information (either by reading it as binary or by reading it as text with the right encoding)
write the data with the right encoding (either by writing it out in binary and specifying the original encoding, or writing it out as text in an encoding which you also specify in the headers)
Try to separate out the problem into "reading" and/or "writing". Do you know the encoding of the file? What do you have to do with the file? When you've written it with backslashes, is that actually what's in the file (i.e. an escaped form) or is it actually just a "normal" text encoding such as UTF-8?
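As a minimal Swift sketch of the "read without losing information" step (the file paths and the choice of UTF-8 are assumptions for illustration, not something known from the question):

import Foundation

do {
    // Step 1: read the raw bytes with no interpretation.
    let raw = try Data(contentsOf: URL(fileURLWithPath: "input.txt"))
    // Step 2: decode them with the encoding you believe the file actually uses.
    if let text = String(data: raw, encoding: .utf8) {
        // Step 3: write the text back out, stating the encoding explicitly.
        try text.write(toFile: "output.txt", atomically: true, encoding: .utf8)
    } else {
        print("Not valid UTF-8; try another candidate encoding")
    }
} catch {
    print("I/O error: \(error)")
}

If the file literally contains the backslash-and-digits form shown above (an escaped representation rather than raw bytes), you would first have to unescape those octal sequences back into bytes before decoding; which case you are in is exactly what the last question above is asking.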
