How does one do checksum verification of network-transmitted data - network-programming

I am making my own 'protocol' where I have to design the header. I understand how to verify the integrity of the first header I send.
What I'm confused about is how one verifies the integrity of the data. Do I transmit a header appended to the front of the data that contains the checksum of the header plus the data fragment itself, so the server knows to re-transmit it if it's wrong?
Or is there a way to verify that the data hasn't been damaged via some extra prepping at the beginning?
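One common way to do what is described here is to put a checksum field in the header and compute it over the header fields plus the payload; the receiver recomputes it and requests a retransmission on a mismatch. Below is a minimal Python sketch of that idea; the frame layout (type, length, CRC32) and the field sizes are made up for illustration and are not part of any existing protocol.

import struct
import zlib

# Hypothetical frame layout: 2-byte type, 4-byte payload length, 4-byte CRC32,
# then the payload. The CRC is computed over the type and length fields plus
# the payload, so a single check covers both the header and the data.
HEADER = struct.Struct("!HII")      # type, length, crc32

def pack_frame(ftype, payload):
    prefix = struct.pack("!HI", ftype, len(payload))
    crc = zlib.crc32(prefix + payload)
    return prefix + struct.pack("!I", crc) + payload

def unpack_frame(frame):
    ftype, length, crc = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    if zlib.crc32(struct.pack("!HI", ftype, length) + payload) != crc:
        raise ValueError("checksum mismatch, ask the sender to retransmit")
    return ftype, payload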

Related

How to keep attached-file data out of the TIdLog log file (Indy) for a message

The Indy log file for an email sent via SMTP includes the contents of the attached files, which I don't need.
Logging the attachment data greatly increases the size of the log file and makes it hard for me to read. I keep this file in a "blob" field of the database, and reading that field is causing me problems.
Do you have an example of code that logs this information without the attached files?
The default TIdLog... components are meant to log whatever raw data is being transmitted/received over the socket connection, for purposes of debugging and replaying sessions. There are no real filtering capabilities.
If you don't want portions of the emails being logged, you will have to use TIdLogEvent or TIdConnectionIntercept, or derive a custom TIdLog... or TIdConnectionIntercept... based component, to parse the raw data yourself, essentially re-implementing the SMTP and RFC822 protocols so you can choose to log only what you want.
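As a rough illustration of the kind of filtering such a custom component would have to perform, here is a Python sketch of the idea (not Indy/Delphi code, and it assumes you already have the complete raw message rather than streamed socket data): parse the raw RFC822 message and log only the text parts, replacing each attachment with a short placeholder.

from email import message_from_bytes
from email.policy import default

def loggable_text(raw_message):
    # Parse the raw RFC822 message; keep text parts, drop attachment bodies.
    msg = message_from_bytes(raw_message, policy=default)
    lines = []
    for part in msg.walk():
        if part.get_content_disposition() == "attachment":
            lines.append("[attachment %r omitted]" % part.get_filename())
        elif part.get_content_type().startswith("text/"):
            lines.append(part.get_content())
    return "\n".join(lines)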

YAWS webserver - how to know if a download was successful?

I let people download files using HTTP GET from Yaws. I have implemented it the way it is done in yaws_appmod_dav.erl, and it works fine.
case file:read(Fd, PPS) of
    {ok, Data} when size(Data) < PPS ->
        ?DEBUG("only chunk~n"),
        status(200, H, {content, Mimetype, Data});
    {ok, Data} ->
        ?DEBUG("first chunk~n"),
        spawn(fun() -> deliver_rest(Pid, Fd) end),
        status(200, H, {streamcontent, Mimetype, Data});
    eof ->
        status(200, {content, "application/octet-stream", <<>>});
    {error, Reason} ->
        Response = [{'D:error', [{'xmlns:D', "DAV:"}], [Reason]}],
        status(500, {xml, Response})
end;
I would like to mark a successful download on the server, i.e. when the client has accepted the last packet.
How do I do that?
A minor question: in the WebDAV appmod for Yaws, yaws_api:stream_chunk_deliver is used instead of yaws_api:stream_chunk_deliver_blocking when getting a file. (See line 449 in https://github.com/klacke/yaws/blob/master/src/yaws_appmod_dav.erl)
Why isn't this a problem? According to http://yaws.hyber.org/stream.yaws, "Whenever the producer of the stream is faster than the consumer, that is the WWW client, we must use a synchronous version of the code." I notice that both versions work fine; is it just the amount of memory on the server that is affected?
The HTTP protocol doesn't specify a way for the client to notify the server that a download has been successful. The client either gets the requested data confirmed by the result code 200 (or 206) or it doesn't, and in that case it gets one of the error codes. Then the client is free to re-request that data. So, there isn't a reliable way of achieving what you want.
You could record the fact that the last chunk of data has been sent to the client and assume that it has been successful, unless the client re-requests that data, in which case you can invalidate the previous assumption.
Also, please note that the HTTP specification allows the client to request any part of the data by sending a GET request with a Range header. See an example in this fusefs-httpfs implementation and some more info in this SO post. How can you determine whether the download has been successful if you don't know which of the GET requests using the Range header is the last one (e.g. the client may download the whole file in chunks, in backward order)?
This may also answer your minor question. The client controls the flow by requesting a specified range of bytes from the given file. I don't know the implementation of the WebDAV protocol, but it's possible that it doesn't request the whole file at once, so the server can deliver data in chunks and never overwhelm the client.
The HTTP Range header is separate from the TCP window size, which operates at the TCP protocol level (HTTP is an application-level protocol implemented on top of TCP). Say the client requested the whole file and the server sends it like that; that doesn't mean the whole file has been sent through the network yet. The data to be sent is buffered in the kernel and sent in chunks according to the TCP window size. Had the client requested only part of the data with the Range header, only that part would be buffered in the kernel.
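As an illustration of the Range mechanism mentioned above, here is a minimal Python client (not Yaws/Erlang code; the URL is only a placeholder) that asks for the first kilobyte of a file. A server that honours the range replies with 206 Partial Content and a Content-Range header.

import urllib.request

req = urllib.request.Request(
    "http://example.com/files/big.iso",        # placeholder URL
    headers={"Range": "bytes=0-1023"},         # request the first 1024 bytes only
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                         # 206 if the server honoured the range
    print(resp.headers.get("Content-Range"))   # e.g. "bytes 0-1023/10485760"
    chunk = resp.read()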

AFNetworking iOS JSON Parsing incorrect in just Lebanon

My application has a weird problem. I have a login web service which is used to authenticate users, and it works well for everyone except a tester who is in Lebanon. For her, the request always fails. It turns out that the JSON response is not getting parsed for her.
My first guess was that her network location is using a proxy server that converts JSON to HTML, so I asked her to switch to a cellular network, but that didn't solve the problem either.
Please refer to the debug message in the screenshot below.
Any suggestions on what might be wrong would be greatly appreciated.
You'd really need the exact data that was received. JSON parsing is totally independent of any localisation; on the other hand, whatever service produced the JSON data may not be. There is a good chance that, being in Lebanon, that customer receives non-ASCII data (which should be just fine), while other customers don't. It is possible that the server sends that data not in UTF-8 but, say, in some Windows encoding. That would be fine for ASCII but not for non-ASCII data. Or it could be that the server figures out that full UTF-8 is needed rather than plain ASCII and transmits a byte order mark, which is not legal JSON and would likely produce the error message that you received.
To reproduce, I'd try to set things up so that non-ASCII data is involved, for example a username or password containing non-ASCII characters.
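To see the byte-order-mark failure mode in isolation, here is a small Python sketch (purely illustrative, unrelated to AFNetworking, with a made-up non-ASCII username):

import json

body = '{"user": "نور", "status": "ok"}'.encode("utf-8")
print(json.loads(body))                  # parses fine; non-ASCII itself is not the problem

try:
    json.loads(b"\xef\xbb\xbf" + body)   # the same bytes with a UTF-8 BOM prepended
except ValueError as err:
    print("parse failed:", err)          # a strict JSON parser rejects the BOM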

Sending Signed XML to secure WebService returns BadSignature

I am using Delphi 7's HTTPReqResp component to send a digitally signed SOAP XML document to an HTTPS web service. I use Eldos XML BlackBox and have set all the transformAlgorithms, CanonicalizationMethod, signaturemethod, etc. to the ones the web service requires, and have confirmed this with a tech support officer.
I have validated the signature using XML BlackBox and also this XML Verifier website.
Both ways confirm the signature is correct. However, when I send the XML document via HTTPReqResp.execute, the response I get back is BadSignature (The signature value is invalid).
Originally, I received different error messages due to XML errors (malformed, etc.). It appears that the service does all the standard formatting checks first, then attempts to validate the signature. Since I get back the BadSignature response, the rest of the XML must be correct.
I suppose I have 2 questions here.
Does the HTTPReqResp component alter the XML?
Is it likely the web service alters the XML?
The site is using Access Manager WebSEAL.
It's very likely that the receiving partner is getting a modified document somehow. Some minor modifications shouldn't affect the signature (that's the idea, at least), so you may want to check the following:
"Recommended" encoding used by the receiving partner. A very annoying practice by some receiving partners is to favor one form of encoding and completely ignore others; XML signatures should use UTF-8, but I've seen servers that only accept ISO-8859-1.
Make sure you don't accidentally change the encoding after signing (see the sketch after this list).
Verify that the receiving partner is using a sane canonicalization method.
Verify with your receiving partner that no extraneous elements are being added to your document.
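A quick way to see why the encoding points above matter: the signature is computed over bytes, and re-encoding the same characters produces different bytes. A tiny Python illustration (unrelated to XML BlackBox itself):

import hashlib

xml = '<doc><name>Müller</name></doc>'
print(hashlib.sha256(xml.encode("utf-8")).hexdigest())
print(hashlib.sha256(xml.encode("iso-8859-1")).hexdigest())
# The two digests differ, so a signature computed over the UTF-8 bytes
# will not verify if the document is re-encoded before transmission.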
Also, have you tried posting this using the SecureBlackBox components? They also have an HTTP client that can do SSL, which can be used to verify the exact bytes being sent over the wire.

Reliable way to tell if wrong key is used in aes256 decryption

I have some code that I am using to encrypt and decrypt some strings in an iOS application. The code involves the use of CCCrypt. Is there a reliable way to test the validity of the key used without actually storing the key anywhere? From my research it seems as though the only way to come close to telling whether the key is valid is by using key lengths and key hashes. Can anyone point me in the right direction?
Getting to the answer requires a little bit of background about proper encryption. You may know this already, but most people do this wrong so I'm covering it. (If you're encrypting with a password and don't encode at least an HMAC, two salts, and an IV, you're doing it wrong.)
First, you must use an HMAC (see CCHmac()) any time you encrypt with an unauthenticated mode (such as AES-CBC). Otherwise attackers can modify your ciphertext in ways that cause it to decrypt into a different message. See modaes for an example of this attack. An HMAC is a cryptographically secure hash based on a key.
Second, if you are using password-based encryption, you must use a KDF to convert the password into a key. The most common is PBKDF2. You cannot just copy password bytes into a key.
Assuming you're using a password this way, you generally generate two keys, one for encryption and one for HMAC.
OK, with those parts in place, you can verify that the password is correct because the HMAC will fail if it isn't. This is how RNCryptor does it.
There are two problems with this simple approach: you have to process the entire file before you can verify the password, and there is no way to detect file corruption vs bad password.
To fix these issues somewhat, you can add a small block of extra data that you HMAC separately. You then verify that small block rather than the whole file. This is basically how aescrypt does it. Specifically, they generate a "real" key for encrypting the entire file, and then encrypt that key with a PBKDF2-generated key and HMAC that separately. Some forms of corruption still look like bad passwords, but it's a little easier to tell them apart this way.
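For the general shape of that scheme, here is a rough Python sketch (not CCCrypt or RNCryptor code; the salt sizes, iteration count and blob layout are arbitrary). The point is only that a wrong password derives a wrong HMAC key, so verification fails without the key ever being stored.

import hashlib
import hmac
import os

def derive_keys(password, salt_enc, salt_mac):
    # Two independent keys from one password: one for encryption, one for the HMAC.
    enc_key = hashlib.pbkdf2_hmac("sha256", password, salt_enc, 100000)
    mac_key = hashlib.pbkdf2_hmac("sha256", password, salt_mac, 100000)
    return enc_key, mac_key

def seal(password, ciphertext):
    # Prepend the salts and an HMAC tag computed over the ciphertext.
    salt_enc, salt_mac = os.urandom(16), os.urandom(16)
    _, mac_key = derive_keys(password, salt_enc, salt_mac)
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return salt_enc + salt_mac + tag + ciphertext

def password_matches(password, blob):
    salt_enc, salt_mac, tag, ct = blob[:16], blob[16:32], blob[32:64], blob[64:]
    _, mac_key = derive_keys(password, salt_enc, salt_mac)
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)   # False: wrong password or corrupted data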
You can store a known value encrypted with the key in your database. Validating whether the key is correct is then straightforward: you encrypt the known string and compare the result to the encrypted output stored in the database. If you stick with a single block of data, you don't have to worry about modes of operation and you can keep it simple.
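A sketch of that idea in Python, assuming the third-party pyca/cryptography package (CCCrypt would be the iOS equivalent): a single 16-byte block encrypted in ECB mode serves as the stored check value.

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KNOWN_BLOCK = b"key-check-block!"    # exactly one 16-byte AES block

def key_check_value(key):
    # key is the 32-byte AES-256 key; encrypt the fixed block with it.
    enc = Cipher(algorithms.AES(key), modes.ECB(), backend=default_backend()).encryptor()
    return enc.update(KNOWN_BLOCK) + enc.finalize()

# At setup time, store key_check_value(key) in the database. Later, a
# candidate key is accepted only if it reproduces the stored value:
def key_matches(candidate_key, stored_value):
    return key_check_value(candidate_key) == stored_value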
It is also possible to store a hash of the key, but I would treat the key as a password, and take all the defensive measures you would take in storing a password in your database (e.g. use bcrypt, salt the hash, etc).
If you can't store these values, you can decrypt something whose actual contents you don't know, but whose properties you do (e.g. it is ASCII text, it has today's date somewhere in the string, etc.), and test the decrypted message for those properties. If the decrypted block doesn't have those properties (e.g. it has bytes with the MSB set, or no instance of the date), you know the key is invalid. There is a possibility of a false positive in this case, but the chances are very low.
Generally I agree with Peter Elliott. However, I have a couple of additional comments:
a) If the keys were randomly generated, then storing hashes of the keys is safe.
b) You can always attach a hash of the original message to the encrypted message (if you control that). In that case, you can decrypt the message, hash the decrypted message, and compare it with the hash of the original. If they are equal, the correct key was used for decryption.

Resources