Encrypt / Decrypt plain text in 2022 - dart

I need to encrypt plain text and send it as plain text via JSON
I also need to receive the encrypted text from JSON and decrypt it back to the original plain text.
All examples I can find suggest using encrypt 5.0.1
I can see how they encrypt the text and then decrypt the resulting encrypted object, but there is no example that takes encrypted text received as a plain string and decrypts it back to the original text.
The String Encryption package is no longer maintained, and Flutter will not accept it as a valid package anymore.
Could you please give me an idea where to look to solve the issue?
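For readers landing here with the same problem: the usual missing piece is that the ciphertext has to be base64-encoded to travel inside JSON, and the receiver has to base64-decode it before decrypting with the same key and IV. Here is a minimal sketch of that round trip, written in Java for illustration (the key handling, IV size, and message are assumptions, not the asker's code):

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        // Hypothetical shared key; in practice sender and receiver hold the same key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        // Fresh random IV per message; send it alongside the ciphertext.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = enc.doFinal("secret message".getBytes(StandardCharsets.UTF_8));

        // Base64 makes the ciphertext safe to put in a JSON string field.
        String wire = Base64.getEncoder().encodeToString(ct);

        // Receiving side: base64-decode, then decrypt with the same key and IV.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] pt = dec.doFinal(Base64.getDecoder().decode(wire));
        System.out.println(new String(pt, StandardCharsets.UTF_8)); // "secret message"
    }
}

As far as I can tell, the Dart encrypt package follows the same pattern: the encrypted value has a base64 form for transport, and decryption needs the same key and IV that were used to encrypt.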

Related

Can I decrypt data which is encrypted by Keyczar using Google Tink?

I have been using Google Keyczar for encrypting data in my Java app, and I want to change the crypto solution to Google Tink.
But the problem is the data already encrypted by Keyczar. Can I decrypt it with Tink?
If yes, I want to change the crypto solution from Keyczar to Tink. If not, I have to think about another solution.
Thank you.
I did it.
Keyczar is using AES. So I use TinyAES.
Keyczar is also using HMAC. So I use HMAC of avr-crypto-lib.
Just one thing: I had to extract the raw key from the Keyczar key.
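For reference, plain Tink usage on the re-encrypted data looks something like this; a minimal sketch using Tink's Java AEAD API (the key template name is one of Tink's stock templates), not a Keyczar-compatibility path, since Tink cannot read Keyczar keysets directly:

import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeyTemplates;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadConfig;

public class TinkDemo {
    public static void main(String[] args) throws Exception {
        AeadConfig.register(); // register the AEAD key types once at startup

        // Fresh keyset: migrating from Keyczar means decrypting with the old
        // stack and re-encrypting under a handle like this one.
        KeysetHandle handle = KeysetHandle.generateNew(KeyTemplates.get("AES256_GCM"));
        Aead aead = handle.getPrimitive(Aead.class);

        byte[] ciphertext = aead.encrypt("hello".getBytes(), new byte[0]);
        byte[] plaintext = aead.decrypt(ciphertext, new byte[0]);
        System.out.println(new String(plaintext)); // prints "hello"
    }
}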

image encrypt/decrypt between php and Android

I want to encrypt an image using PHP and decrypt it in an Android app. I found a suggestion to use MCrypt. However, I noticed that ImageMagick, which I use to convert PDFs into JPGs, seems to have encryption support. Can I use ImageMagick to encrypt the JPG on the server side and decrypt it using Java? Thanks very much.
As per documentation
"ImageMagick only scrambles the image pixels. The image metadata remains untouched and readable by anyone with access to the image file.
ImageMagick uses the AES cipher in Counter mode. We use the first half of your passphrase to derive the nonce. The second half is the cipher key."
To decrypt the image on the client side, you would have to keep the image header as is and decrypt the remainder of the file using the password with which it was encrypted. That will require custom coding with knowledge of the image format internals. You will also have to find out how the nonce is derived from the passphrase.
You can alternatively use an SSL connection between the client and server, or use any cryptographic scheme available in both PHP and Java, with either symmetric-key or public-key encryption, as per your requirements.
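If you do go the custom-decryption route, the core would look something like this in Java. A rough sketch only: the pixel-data offset and the nonce derivation are the format-specific parts, so both appear here as hypothetical inputs you would have to work out from ImageMagick's source.

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class PixelDecrypt {
    // file: the full encrypted image; pixelOffset: where the pixel data starts
    // (the header before it was never encrypted); nonce: the 16-byte counter
    // block derived from the first half of the passphrase.
    static byte[] decryptPixels(byte[] file, int pixelOffset,
                                byte[] keyBytes, byte[] nonce) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(keyBytes, "AES"),
               new IvParameterSpec(nonce));
        byte[] out = file.clone();
        byte[] pixels = c.doFinal(file, pixelOffset, file.length - pixelOffset);
        System.arraycopy(pixels, 0, out, pixelOffset, pixels.length);
        return out; // header untouched, pixel bytes replaced with plaintext
    }
}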

What is the reason for the convention of sending base64 encoded images to Rails applications?

I develop for iOS primarily and I'm flirting with Rails as I divorce PHP, so I'm having my first encounter with Paperclip.
Looking for a simple example of the request format Paperclip is expecting, it seems that everyone is encoding their images to base64 on the client before sending the data to Rails. But when Rails receives the data, they just unpack the base64 and pass the image into Paperclip.
Why do people encode and decode their image data when sending it to rails?
Is there any way that a plaintext png byte stream would get corrupted where base64 wouldn't?
Or is this just an early optimization for security reasons?
Here is a related question about why base64 encoding is used: Why do we use Base64? And here is a quote from there that relates to embedding images in HTML.
Historically it has been used to encode binary data in email messages where the email server might modify line-endings. A more modern example is the use of Base64 encoding to embed image data directly in HTML source code. Here it is necessary to encode the data to avoid characters like '<' and '>' being interpreted as tags.
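To make the "survives transport" point concrete, here is the round trip in Java (the file name is hypothetical). Every character of the encoded form comes from a 64-character ASCII alphabet plus '=', so no text-based layer can mangle it:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class B64Demo {
    public static void main(String[] args) throws Exception {
        byte[] png = Files.readAllBytes(Path.of("image.png")); // any binary file

        // Encode for transport inside JSON, a form field, an email body, etc.
        String wire = Base64.getEncoder().encodeToString(png);

        // Decode on the server; the result is byte-for-byte identical.
        byte[] back = Base64.getDecoder().decode(wire);
        System.out.println(java.util.Arrays.equals(png, back)); // prints true
    }
}

The cost is the familiar one: base64 inflates the payload by roughly a third (3 bytes become 4 characters).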

Reliable way to tell if wrong key is used in aes256 decryption

I have some code that I am using to encrypt and decrypt some strings in an iOS application. The code involves the use of CCCrypt. Is there a reliable way to test the validity of a key without actually storing the key anywhere? From my research, it seems as though the only way to come close to telling whether the key is valid is by using key lengths and key hashes. Can anyone point me in the proper direction?
Getting to the answer requires a little bit of background about proper encryption. You may know this already, but most people do this wrong so I'm covering it. (If you're encrypting with a password and don't encode at least an HMAC, two salts, and an IV, you're doing it wrong.)
First, you must use an HMAC (see CCHmac()) any time you encrypt with an unauthenticated mode (such as AES-CBC). Otherwise attackers can modify your ciphertext in ways that cause it to decrypt into a different message. See modaes for an example of this attack. An HMAC is a cryptographically secure hash based on a key.
Second, if you are using password-based encryption, you must use a KDF to convert the password into a key. The most common is PBKDF2. You cannot just copy password bytes into a key.
Assuming you're using a password this way, you generally generate two keys, one for encryption and one for HMAC.
OK, with those parts in place, you can verify that the password is correct because the HMAC will fail if it isn't. This is how RNCryptor does it.
There are two problems with this simple approach: you have to process the entire file before you can verify the password, and there is no way to detect file corruption vs bad password.
To fix these issues somewhat, you can add a small block of extra data that you HMAC separately. You then verify that small block rather than the whole file. This is basically how aescrypt does it. Specifically, they generate a "real" key for encrypting the entire file, and then encrypt that key with a PBKDF2-generated key and HMAC that separately. Some forms of corruption still look like bad passwords, but it's a little easier to tell them apart this way.
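Here is the shape of that scheme in Java; a sketch under assumed parameters (PBKDF2-HMAC-SHA256, 100,000 iterations, separate stored salts for the encryption and HMAC keys), not RNCryptor's exact format:

import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class PasswordCheck {
    // Derive a 256-bit key from the password; each derived key gets its own salt.
    static byte[] derive(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return f.generateSecret(new PBEKeySpec(password, salt, 100_000, 256)).getEncoded();
    }

    // Recompute the HMAC over the ciphertext and compare in constant time.
    // A mismatch means a wrong password or a corrupted/tampered message;
    // as noted above, this check alone cannot tell those two cases apart.
    static boolean verify(char[] password, byte[] hmacSalt,
                          byte[] ciphertext, byte[] storedTag) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(derive(password, hmacSalt), "HmacSHA256"));
        return MessageDigest.isEqual(mac.doFinal(ciphertext), storedTag);
    }
}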
You can store a known value, encrypted with the key, in your database. Validating whether the key is correct is then straightforward: you encrypt the known string and compare it to the encrypted output in the database. If you stick with a single block of data, then you don't have to worry about modes of operation and you can keep it simple (see the sketch after this answer).
It is also possible to store a hash of the key, but I would treat the key as a password, and take all the defensive measures you would take in storing a password in your database (e.g. use bcrypt, salt the hash, etc).
If you can't store these values, you can decrypt something whose actual contents you don't know, but where you know some properties of the message (e.g. it's ASCII text, or it has today's date somewhere in the string), and test the decrypted message for those properties. If the decrypted block doesn't have those properties (e.g. it has bytes with the MSB set, or no instance of the date), you know the key is invalid. There is a possibility of a false positive in this case, but the chances are very low.
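A sketch of that known-value check; the fixed 16-byte block is an illustrative assumption, and restricting it to exactly one cipher block is what makes modes of operation irrelevant:

import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;

public class KeyCheck {
    // One fixed, non-secret block: exactly 16 bytes, one AES block.
    static final byte[] KNOWN = "known-value-0123".getBytes();

    // Encrypt the known block and compare with the value stored at setup time.
    static boolean keyMatches(SecretKey key, byte[] storedCheckValue) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key);
        return MessageDigest.isEqual(c.doFinal(KNOWN), storedCheckValue);
    }
}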
Generally I agree with Peter Elliott. However, I have a couple of additional comments:
a) If the keys were randomly generated, then storing hashes of the keys is safe.
b) You can always attach a hash of the original message to the encrypted message (if you can control that). In that case, you can decrypt the message, hash the decrypted message, and compare it with the hash of the original message. If they are equal, then the correct key was used for decryption.
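Point (b) in code form, as a minimal sketch. (Caveat: a plain hash of the message, unlike an HMAC, lets anyone verify a guessed plaintext, so only use it where that leak is acceptable.)

import java.security.MessageDigest;

public class PlainHashCheck {
    // The sender ships SHA-256(original plaintext) alongside the ciphertext.
    // The receiver decrypts, hashes the result, and compares the two hashes.
    static boolean decryptedWithRightKey(byte[] decrypted, byte[] originalHash) throws Exception {
        byte[] h = MessageDigest.getInstance("SHA-256").digest(decrypted);
        return MessageDigest.isEqual(h, originalHash);
    }
}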

UTF-8 characters mangled in HTTP Basic Auth username

I'm trying to build a web service using Ruby on Rails. Users authenticate themselves via HTTP Basic Auth. I want to allow any valid UTF-8 characters in usernames and passwords.
The problem is that the browser is mangling characters in the Basic Auth credentials before it sends them to my service. For testing, I'm using 'カタカナカタカナカタカナカタカナカタカナカタカナカタカナカタカナ' as my username (no idea what it means - AFAIK it's some random characters our QA guy came up with - please forgive me if it is somehow offensive).
If I take that as a string and do username.unpack("h*") to convert it to hex, I get: '3e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a8'. That seems about right for 32 katakana characters (3 bytes/6 hex digits each).
If I do the same with the username that's coming in via HTTP Basic auth, I get:
'bafbbaacbafbbaacbafbbaacbafbbaacbafbbaacbafbbaacbafbbaacbafbbaac'. It's obviously much shorter. Using the Firefox Live HTTP Headers plugin, here's the actual header that's being sent:
Authorization: Basic q7+ryqu/q8qrv6vKq7+ryqu/q8qrv6vKq7+ryqu/q8o6q7+ryqu/q8qrv6vKq7+ryqu/q8qrv6vKq7+ryqu/q8o=
That looks like the 'bafbba...' string with the high and low nibbles swapped (at least when I paste it into Emacs, base64-decode it, then switch to hexl mode). That might be a UTF-16 representation of the username, but I haven't gotten anything to display it as anything but gibberish.
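For anyone who wants to reproduce this analysis, the token is easy to inspect with a few lines of plain Java (hex-dumping the decoded bytes):

import java.util.Base64;

public class HeaderDump {
    public static void main(String[] args) {
        String token = "q7+ryqu/q8qrv6vKq7+ryqu/q8qrv6vKq7+ryqu/q8o6"
                     + "q7+ryqu/q8qrv6vKq7+ryqu/q8qrv6vKq7+ryqu/q8o=";
        byte[] raw = Base64.getDecoder().decode(token);
        StringBuilder hex = new StringBuilder();
        for (byte b : raw) hex.append(String.format("%02x", b));
        System.out.println(hex); // shows exactly which bytes the browser sent
    }
}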
Rails is setting the content-type header to UTF-8, so the browser should be sending in that encoding. I get the correct data for form submissions.
The problem happens in both Firefox 3.0.8 and IE 7.
So... is there some magic sauce for getting web browsers to send UTF-8 characters via HTTP Basic Auth? Am I handling things wrong on the receiving end? Does HTTP Basic Auth just not work with non-ASCII characters?
I want to allow any valid UTF-8 characters in usernames and passwords.
Abandon all hope. Basic Authentication and Unicode don't mix.
There is no standard(*) for how to encode non-ASCII characters into a Basic Authentication username:password token before base64ing it. Consequently every browser does something different:
Opera uses UTF-8;
IE uses the system's default codepage (which you have no way of knowing, other than that it's never UTF-8), and silently mangles characters that don't fit into it using the Windows ‘guess a random character that looks a bit like the one you wanted, or maybe just not’ secret recipe;
Mozilla uses only the lower byte of character codepoints, which has the effect of encoding to ISO-8859-1 and mangling the non-8859-1 characters irretrievably... except when doing XMLHttpRequests, in which case it uses UTF-8;
Safari and Chrome encode to ISO-8859-1, and fail to send the authorization header at all when a non-8859-1 character is used.
*: some people interpret the standard to say that either:
it should always be ISO-8859-1, due to that being the default encoding for raw 8-bit characters included directly in headers;
it should be encoded using RFC2047 rules, somehow.
But neither of these proposals is on topic for inclusion in a base64-encoded auth token, and the RFC2047 reference in the HTTP spec really doesn't work at all, since all the places it might potentially be used are explicitly disallowed by the ‘atom context’ rules of RFC2047 itself, even if HTTP headers honoured the rules and extensions of the RFC822 family, which they don't.
In summary: ugh. There is little-to-no hope of this ever being fixed in the standard or in the browsers other than Opera. It's just one more factor driving people away from HTTP Basic Authentication in favour of non-standard and less-accessible cookie-based authentication schemes. Shame really.
It's a known shortcoming that Basic authentication does not provide support for non-ISO-8859-1 characters.
Some UAs are known to use UTF-8 instead (Opera comes to mind), but there's no interoperability for that either.
As far as I can tell, there's no way to fix this, except by defining a new authentication scheme that handles all of Unicode. And getting it deployed.
HTTP Digest authentication is no solution for this problem, either. It suffers from the same problem of the client being unable to tell the server what character set it's using and the server being unable to correctly assume what the client used.
Have you tested using something like curl to make sure it's not a Firefox issue? The HTTP Auth RFC is silent on ASCII vs. non-ASCII, but it does say the value passed in the header is the username and the password separated by a colon, and I can't find a colon in the string that Firefox is reporting sending.
If you are coding for Windows 8.1, note that the sample in the documentation for HttpCredentialsHeaderValue (wrongly) uses UTF-16 encoding. A reasonably good fix is to switch to UTF-8 (as ISO-8859-1 is not supported by CryptographicBuffer.ConvertStringToBinary).
See http://msdn.microsoft.com/en-us/library/windows/apps/windows.web.http.headers.httpcredentialsheadervalue.aspx.
Here is a workaround we used today to circumvent the issue of non-ASCII characters in the password of a colleague:
curl -u "USERNAME:`echo -n 'PASSWORD' | iconv -f ISO-8859-1 -t UTF-8`" 'URL'
Replace USERNAME, PASSWORD, and URL with your values. This example uses shell command substitution to convert the password's character encoding to UTF-8 before executing the curl command.
Note: I used a ` ... ` evaluation here instead of $( ... ) because it doesn't fail if the password contains a ! character... [shells love ! characters ;-)]
Illustration of what happens with non-ASCII characters:
echo -n 'zz<zz§zz$zz-zzäzzözzüzzßzz' | iconv -f ISO-8859-1 -t UTF-8
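The same idea in Java, for clients where you construct the header yourself; note that this only helps if the server also decodes the token as UTF-8 (names here are illustrative):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth {
    // Explicitly encode the credentials as UTF-8 before base64ing, rather than
    // leaving the choice of encoding to the browser or HTTP client.
    static String header(String user, String password) {
        byte[] token = (user + ":" + password).getBytes(StandardCharsets.UTF_8);
        return "Basic " + Base64.getEncoder().encodeToString(token);
    }
}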
I might be totally ignorant, but I came to this post while looking into a problem with sending a UTF-8 string as a header inside an ajax call.
I could solve my problem by encoding the string in Base64 right before sending it. That means that with some simple JS you can convert the form to base64 right before submitting, and that way it can be converted back on the server side.
This simple tool allowed me to have UTF-8 strings sent as simple ASCII. I found it thanks to this simple sentence:
base64 (this encoding is designed to make binary data survive transport through transport layers that are not 8-bit clean). http://www.webtoolkit.info/javascript-base64.html
I hope this helps somehow. Just trying to give back a little bit to the community!
