Control over format when using RequestImageFileAsync in Blazor WebAssembly - image-processing

Blazor WebAssembly has a convenience method that converts an IBrowserFile containing an image into a resized version, which is handy for shrinking large images before uploading them.
This method takes a format as a string, which determines the format of the output file.
Is there a list anywhere of the valid formats this parameter will accept? Can you specify compression or bit-depth values for the resulting file?
Currently, if I take an existing .jpg file and convert it using a format string of "jpg", the resulting file, although smaller in pixel dimensions, is actually about double the size on disk. A 4000x3000 image at about 2.8MB can be "reduced" to a 2000x1500 image that's 7.7MB, which is obviously not helping when the purpose is to reduce upload size. I could easily upload the 2.8MB file and resize it more efficiently on the server.
var imageFile = await file.RequestImageFileAsync("jpg", 2000, 2000);
This suggests I'm using the method incorrectly, but Microsoft's documentation on this method gives no clues as to what valid format strings might be, only stating that the parameter is a string. I've tried ".jpg", "JPEG", and "jpg", all of which seem to produce the same valid jpg file. What should I be passing here to actually reduce the file size?

See https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Image_types
It's actually not "image/jpg" but "image/jpeg". If you specify a non-existent format, the fallback (at least for me) seems to be "image/png". That's why you always got a valid image, but with the same file size.
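In other words, the format string is a MIME type. A minimal sketch of the corrected call from the question (the 2000x2000 bounds are just the values used above):
// Pass a full MIME type such as "image/jpeg" rather than a bare "jpg";
// unrecognised strings appear to fall back to PNG output, which is why
// the "resized" file ballooned in size.
var imageFile = await file.RequestImageFileAsync("image/jpeg", 2000, 2000);

// Optional sanity check that the browser really produced a smaller JPEG.
Console.WriteLine($"{imageFile.ContentType}, {imageFile.Size} bytes");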

I think this method uses the standard HTML/browser MIME types.
Try "image/jpg".
Be careful, though, this is a request to the browser, and the browser can send back whatever it wants. I believe this will work fine on all browsers, but you'd better check some of the common culprits (Hi, Opera!) to confirm.

Related

Data URIs: are PNG and JPEG MIME types interchangeable in modern browsers?

I have noticed that if you take a base64 string representing the raw bytes of either a JPG or a PNG, call this <B>, and you send a data URI to the browser using either:
data:image/png;base64,<B>
or
data:image/jpeg;base64,<B>
all four combinations work (by "work" I mean Chrome renders them), the four combinations being:
<B> is a raw PNG image, and the data URI uses the PNG type
<B> is a raw PNG image, and the data URI uses the JPEG type (was expecting a failure!)
<B> is a raw JPEG image, and the data URI uses the PNG type (was expecting a failure!)
<B> is a raw JPEG image, and the data URI uses the JPEG type
Why is this? The binary encodings of JPEG and PNG are not the same. I was expecting that if <B> was the raw bytes of a PNG, the JPEG data URI would fail to render, and vice versa.
Data URLs are described in the RFC 2397 proposed standard (The "data" URL scheme) from August 1998:
data:[<mediatype>][;base64],<data>
This document doesn't really go into implementation details such as error handling.
It's worth noting that the media type part is optional and defaults to plain text in 7-bit US-ASCII:
If <mediatype> is omitted, it defaults to text/plain;charset=US-ASCII.
Now, from the context of your question I assume you're really talking about HTML documents and <img> tags. Whether the information inside the src attribute comes from an inline data URL string or an HTTP network request is possibly secondary and I suspect that raw binary data is handled by the same routines inside the browser no matter its source.
You can emulate the same behaviour by sending arbitrary Content-Type headers with regular image files. This can be accomplished by (mis)configuring your web server, writing a download script in a server-side language, or just renaming the files. In fact, you don't even need an HTML document to observe it.
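One way to sketch the "download script" route in C# (the port number and file name here are made up): an HttpListener that serves genuine PNG bytes but deliberately labels them image/jpeg, so you can watch how a browser handles the mismatch.
using System.IO;
using System.Net;

var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/");   // arbitrary local port
listener.Start();

while (true)
{
    var context = listener.GetContext();
    var png = File.ReadAllBytes("sample.png");      // any genuine PNG file
    context.Response.ContentType = "image/jpeg";    // deliberately wrong
    context.Response.ContentLength64 = png.Length;
    context.Response.OutputStream.Write(png, 0, png.Length);
    context.Response.Close();
}
Browsers will typically sniff the real format and render the PNG anyway, which is exactly the recovery behaviour described below.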
In short, it all relates to the ability of browsers to recover from error conditions, something that arises from the fundamental design philosophy of the World Wide Web, which has more or less evolved organically.

Cloudinary image transformation parameters not working in Rails app

Here's the code:
= link_to (cl_image_tag(post.image_url, width:640, quality:30, class: "img-responsive")), post_path(post)
As mentioned here, this should give me an image with quality set to 30, but I'm not seeing any change in the quality of the images on the site. I've tried different values for quality ranging from 10 to 100, but I'm not seeing even a slight difference. I also tried other parameters, for example format: "jpg", which is supposed to force-convert all non-JPG files to JPG, but it isn't working either. The width param works fine, by the way.
The cl_image_tag method accepts the image's public ID and doesn't support a URL parameter. The image tag you are getting is a fallback, which ignores all of Cloudinary's parameters (except width/height, which are used for the HTML tag). Make sure you save the public IDs in your DB. I recommend using CarrierWave, which handles the DB maintenance for you.

How can I load a .webarchive into a UIWebView from an in-memory NSData?

I have an NSData containing a .webarchive blob that I'd like to load into a UIWebView. I know that this is possible (see this question), and I have it working if I first serialize it to disk and then load it with UIWebView's -loadRequest: method.
However, I'd prefer not to serialize to disk first since I already have the data in memory. I've tried to use -loadData:MIMEType:textEncodingName:baseURL: with the data and various base URLs (nil, @"http://", the actual root path the web archive contains, etc.), but it always fails to load.
Again, the same archive loads correctly if I bounce it to disk first and load via -loadRequest:, so I feel like something about the MIMEType (I'm using application/octet-stream) and/or the base URL is wrong. Anyone know what the incantation is?
Using -loadData:... will work. The MIME type must be application/x-webarchive (not the generic application/octet-stream). If that is set correctly, both the text encoding and the base URL can simply be passed as nil.

How to validate a file as image on the server before uploading to S3?

The flow is:
The user selects an image on the client.
Only the filename, content type, and size are sent to the server (e.g. "file.png", "image/png", "123123").
The response contains the fields and policy for uploading directly to S3 (e.g. "key": xxx, "acl": ...).
The catch is that if I change the extension of "file.pdf" to "file.png" and then upload it, the data sent to the server before the upload to S3 is:
"file.png"
"image/png"
The server says "ok" and returns the S3 fields for the upload.
But the content type that was sent is not the real content type. How can I validate this on the server?
Thanks!
Example:
Testing the Redactorjs server-side code (https://github.com/dybskiy/redactor-js/blob/master/demo/scripts/image_upload.php), it checks the file content type. But when I try to upload a fake image (test here: http://imperavi.com/redactor/), it does not allow the fake image. Exactly what I want!
But how is that possible? Look at the request params: it sends image/jpeg, which should pass as valid.
When I was dealing with this question at work I found a solution using Mechanize.
Say you have an image url, url = "http://my.image.com"
Then you can use img = Mechanize.new.get(url)[:body]
The way to test whether img is really an image is by issuing the following test:
img.is_a?(Mechanize::Image)
If the image is not legitimate, this will return false.
There may be a way to load the image from a file instead of a URL (I am not sure), but I recommend looking at the Mechanize docs to check.
With older browsers there's nothing you can do, since there is no way for you to access the file contents or any metadata beyond its name.
With the HTML5 File API you can do better. For example,
document.getElementById("uploadInput").files[0].type
returns the MIME type of the first file. I don't believe the method used to perform this identification is mandated by the standard.
If this is insufficient, you could read the file locally with the FileReader APIs and run whatever tests you require. This could range from simply checking for the magic bytes present at the start of various file formats to fully validating that the file conforms to the relevant specification. MDN has a great article that shows how to use various bits of these APIs.
Ultimately none of this would stop a malicious attempt.
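If the server ever gets to see the actual bytes (for example by fetching the object back from S3 after the upload, or by proxying the upload through your own endpoint), the same magic-byte idea applies there too, instead of trusting the client-supplied content type. A rough C# sketch of such a check, covering only the PNG and JPEG signatures:
using System.IO;

// Sniff the leading bytes rather than trusting the declared content type.
static class ImageSignature
{
    static readonly byte[] Png  = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };
    static readonly byte[] Jpeg = { 0xFF, 0xD8, 0xFF };

    public static bool LooksLikeImage(Stream upload)
    {
        var header = new byte[8];
        int read = upload.Read(header, 0, header.Length);
        return Matches(header, read, Png) || Matches(header, read, Jpeg);
    }

    static bool Matches(byte[] header, int read, byte[] magic)
    {
        if (read < magic.Length) return false;
        for (int i = 0; i < magic.Length; i++)
            if (header[i] != magic[i]) return false;
        return true;
    }
}
As noted above, a signature check alone won't stop a determined attacker, since valid magic bytes can be followed by arbitrary data.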

Translate binary characters to a human readable string?

So let's say we have a string that is like this:
‰û]M§Äq¸ºþe Ø·¦ŸßÛµÖ˜eÆÈym™ÎB+KºªXv©+Å+óS—¶ê'å‚4ŒBFJF󒉚Ү}Fó†ŽxöÒ&‹¢ T†^¤( OêIº ò|<)ð
How do I turn it into a human-readable string of characters? It looks like weird output of HTML from a web server; I think it's text, because half of the web page loaded correctly. Do I need to read it with something like C or Python? That's only a snippet of the string.
If that is in fact supposed to be a human-readable string, you'll need to figure out what character encoding it uses and translate. It's also possible that the string is compressed, encrypted, or represents binary data. It would be helpful to know where you got your string from.
I'm guessing your web server isn't sending the correct MIME type. I'd suggest taking a look at the HTTP headers using Firefox's Live HTTP Headers extension. If a web server decides to send you a PDF but doesn't set the MIME type, you'll just see garbage on your screen. Alternatively, save the page to a file, and then run these commands from Cygwin or a Unix shell:
file mypage.htm
strings mypage.htm
The first will tell you if the header bytes follow any recognizable pattern. The second will strip out and display all the human-readable text.
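If those commands suggest the file really is text but in an unexpected encoding, one way to narrow it down is to decode the saved page with a few candidate encodings and see which one reads sensibly. A small C# sketch (the file name mypage.htm comes from the commands above; the candidate list is just a guess):
using System;
using System.IO;
using System.Text;

// Decode the same bytes with several encodings and eyeball the output.
var bytes = File.ReadAllBytes("mypage.htm");
var candidates = new[] { Encoding.UTF8, Encoding.GetEncoding("ISO-8859-1"), Encoding.Unicode };
foreach (var enc in candidates)
{
    var text = enc.GetString(bytes);
    Console.WriteLine($"--- {enc.WebName} ---");
    Console.WriteLine(text.Substring(0, Math.Min(200, text.Length)));
}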
