Can anyone comment on the decision to use sprites for images or not? I see the following benefits/trade-offs (some of which can be mitigated):
Sprites over individual images
Pros:
Fewer images to manage
Easier to implement themed images
Image swaps (JS/CSS) happen faster (because they do not require additional image loads)
Faster image loads due to fewer HTTP requests
Fewer images to cache (although virtually no difference in overall KB)
Cons:
More background positions to manage
Image payload may be over-inflated (the sprite may contain unused images), which can make the page load slower
Slower image display in some cases, because no part of the sprite can be shown until the whole file has downloaded (separate images can download and render independently)
I don't think there's one definitive answer to this. Opinions will differ according to need and individual preference.
My guideline is to always evaluate the benefit for the end user vs. the benefit for the developers, i.e. what is the real value of the work you're doing as a developer.
Reducing the number of HTTP requests is always one of the first things to fix when optimizing a web page. Proper usage of caching can achieve much of the same thing as using sprites does. After all, very often graphics can be cached for a really long time.
There might be more benefit from minimizing scripts and stylesheets rather than adding graphics into a sprite.
Your code for managing sprites might increase complexity and developer overhead, especially as the number of developers increases.
Learning the proper use of cache headers and configuring your web server or code correctly is often, in my opinion, a more robust way of improving performance.
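To make the cache-header point concrete, here is a minimal sketch, assuming a Node/Express static server (the paths and lifetimes are made up; the same idea applies to any web server's configuration):

```typescript
// Minimal sketch (assumes a Node/Express static server; adjust to your stack).
// Long-lived caching of images/CSS/JS achieves much of what sprites buy you.
import express from "express";

const app = express();

// Serve static assets with a far-future cache lifetime; the browser will not
// re-request them until the cache entry expires or the URL changes.
app.use("/static", express.static("public", {
  maxAge: "365d",   // sets Cache-Control: max-age=31536000
  immutable: true,  // hint that the file never changes under this URL
}));

app.listen(8080);
```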
If you've got a decent number of menu entries for which you want roll-over images, I'd recommend a sprite system rather than multiple images that each need to be downloaded separately. My reasons are pretty much in line with what you mention in your post, with a couple of modifications:
The image swaps wouldn't be done with JavaScript; most of the sprites I've seen just use :hover on the link itself within an unordered list.
Depending on the file type and compression, the size overhead of the combined image is usually negligible, and downloading one image instead of several is generally faster in overall download and load time.
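To illustrate the "more background positions to manage" point from the question, here is a tiny TypeScript sketch (all names and sizes hypothetical) that generates the :hover roll-over rules for a two-column vertical sprite, so the positions live in one place:

```typescript
// Hypothetical helper: emit CSS rules for a vertical sprite where each icon
// occupies a fixed-size slot and the hover state sits in a second column.
const ICON_SIZE = 32; // px per slot in the sprite image (assumed)

function spriteCss(icons: string[]): string {
  return icons
    .map((name, i) => [
      `.icon-${name} { background: url("sprite.png") 0 ${-i * ICON_SIZE}px no-repeat; }`,
      // The hover state points at the second column of the same sprite,
      // so no extra image request happens on roll-over.
      `.icon-${name}:hover { background-position: ${-ICON_SIZE}px ${-i * ICON_SIZE}px; }`,
    ].join("\n"))
    .join("\n");
}

console.log(spriteCss(["home", "search", "settings"]));
```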
I have been doing research on the best way to achieve undo/redo functionality for a painting app. I am using OpenGL ES 2.0 on iOS. The most popular approach seems to be to save a list of commands and VBOs to regenerate the painting in its previous state (the Memento design pattern). The other approach is to take graphical snapshots after each drawing action and revert to these snapshots on undo.
I have a problem with both approaches:
1) Memento - after a long list of actions, especially computationally intensive flood-fill operations, the undo/redo functionality will get very slow and resource-intensive.
2) Snapshots - after a long list of actions these snapshots will start to take up a lot of memory, especially if kept in a raw state.
I was wondering if anybody has found a solution that works well for this situation, or perhaps somebody here has an idea how to optimize the above approaches.
Thanks.
I don't think there's a way around limiting the number of steps that are undoable. You will always need some amount of memory to capture either the previous state, or the state change, for each undoable operation.
The Command pattern actually seems like a much more natural fit than the Memento for handling undo/redo. Using it, you only store information about the specific changes made by each operation. That can still be substantial depending on the operation, but I think it can be much more targeted than blindly saving entire object states with a Memento.
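A minimal sketch of that shape (TypeScript purely for illustration; the same structure maps to Objective-C/C++ on iOS, and the names are made up, not from the asker's app):

```typescript
// Minimal Command-pattern sketch for undo/redo. Each operation knows how to
// apply itself and how to revert itself.
interface Command {
  apply(): void;
  revert(): void;
}

class BrushStroke implements Command {
  // In a real painting app this would hold the stroke geometry and the
  // pixels (or tiles) it overwrote, so revert() can restore them.
  constructor(private draw: () => void, private erase: () => void) {}
  apply() { this.draw(); }
  revert() { this.erase(); }
}

class History {
  private undoStack: Command[] = [];
  private redoStack: Command[] = [];

  perform(cmd: Command) {
    cmd.apply();
    this.undoStack.push(cmd);
    this.redoStack = []; // a new action invalidates the redo chain
  }
  undo() {
    const cmd = this.undoStack.pop();
    if (cmd) { cmd.revert(); this.redoStack.push(cmd); }
  }
  redo() {
    const cmd = this.redoStack.pop();
    if (cmd) { cmd.apply(); this.undoStack.push(cmd); }
  }
}
```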
I have decided to try a hybrid approach, where I save a bitmap snapshot every 10-15 actions and keep a command list to restore the individual actions performed since the last snapshot. A more in-depth answer is offered here: https://stackoverflow.com/a/3944758/2303367
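Roughly, the keyframe-plus-replay idea looks like this (a sketch; all types are hypothetical, the snapshot would be a bitmap/texture copy and the commands your recorded stroke data):

```typescript
// Sketch of the hybrid idea: keep a snapshot every N actions and a command
// list; undo = restore the nearest earlier snapshot, then replay the
// remaining commands up to (but not including) the undone one.
const SNAPSHOT_INTERVAL = 12; // e.g. every 10-15 actions

interface Snapshot { restore(): void; }
interface Command { apply(): void; }

class HybridHistory {
  private snapshots: { index: number; snap: Snapshot }[] = [];
  private commands: Command[] = [];

  constructor(private takeSnapshot: () => Snapshot,
              private clearCanvas: () => void) {}

  perform(cmd: Command) {
    cmd.apply();
    this.commands.push(cmd);
    if (this.commands.length % SNAPSHOT_INTERVAL === 0) {
      this.snapshots.push({ index: this.commands.length, snap: this.takeSnapshot() });
    }
  }

  undo() {
    if (this.commands.length === 0) return;
    const target = this.commands.length - 1;
    // Drop snapshots that already contain the command being undone.
    while (this.snapshots.length && this.snapshots[this.snapshots.length - 1].index > target) {
      this.snapshots.pop();
    }
    const base = this.snapshots[this.snapshots.length - 1];
    if (base) base.snap.restore(); else this.clearCanvas();
    for (let i = base ? base.index : 0; i < target; i++) this.commands[i].apply();
    this.commands.length = target; // discard the undone command
  }
}
```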
First post on SO; hopefully I am doing it right :-)
Have a situation where users need to upload and view very high resolution files (they need to pan, tilt, zoom, and annotate images). A single file sometimes exceeds 1 GB, so loading the complete file on the client side is not an option.
We are thinking about letting the users upload files to the server (like everyone does), then applying some processing on the server side to create multiple, relatively small, lower-resolution images at varying sizes. We then give users thumbnails with a canvas-size option on the webpage for them to pick and start their work.
Let's assume a user opens a low-grade image at a 1280 x 1028 canvas size. The image will be broken into tiles before display, and when the user clicks on a tile it will be like zooming in to that specific tile. The client will send a request to the server asking for a higher-resolution image for that tile. The server will send the image, which will be broken into tiles again for the user to click and get another, higher-resolution image from the server, and so on. Having multiple images at varying resolutions will help us break the images into tiles and serve the user's needs (keep zooming in or out using tiles).
Has anyone dealt with humongous image files? Is there a preferred technical design you can suggest? How to handle areas that are split across tiles is bothering me a lot, and I am not sure how the above approach can be modified to address this issue.
We need to plan for 100 to 200 users connected to the website simultaneously, and ours is a .NET environment, if it matters.
Thanks!
The question is a little vague. I assume you are looking for hints, so here are a few:
I see uploading the images as a problem in the first place. Where I come from, upload speeds are way slower than download speeds. (But there is little you can do if you need your users to upload gigabytes...) Perhaps offer a more robust upload channel than a plain web form; FTP if you must.
Converting into smaller pieces should be no big problem. Use one of the available tools, for example ImageMagick. I see there is a .NET wrapper out: https://magick.codeplex.com/
More important than the conversion itself, I think, is not doing it every time on the fly (you would need a really big machine) but only once, when the image is uploaded. If you want to scale, you can offload this work to another box on the network.
For the viewer: this is the interesting part. There are some ready-to-use ones. Google has one; it's called 'Maps' :). But there is a free alternative: OpenLayers from the OpenStreetMap project: http://wiki.openstreetmap.org/wiki/OpenLayers All you have to do is name your generated files in the right way and do a little configuration.
Even if you must, for some reason, create the tiles on the fly or can't use something like OpenLayers, I would try to stick to its naming scheme (sketched below). Having something working to start with is never a bad idea.
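For reference, the slippy-map naming that OpenLayers/OpenStreetMap tooling expects is very simple. A small sketch follows; the 256-pixel tile size and the "zoom 0 fits in one tile" convention are assumptions on my part:

```typescript
// One folder per zoom level, one folder per column, one file per row.
const TILE_SIZE = 256;

function tilePath(zoom: number, x: number, y: number): string {
  return `${zoom}/${x}/${y}.png`;
}

// Which tile covers a given pixel of the full-resolution image at a zoom
// level? Assumes the deepest zoom shows the image 1:1 and every level above
// halves it in each dimension.
function tileForPixel(px: number, py: number, zoom: number, maxZoom: number): string {
  const scale = Math.pow(2, maxZoom - zoom); // downscale factor at this zoom
  const x = Math.floor(px / scale / TILE_SIZE);
  const y = Math.floor(py / scale / TILE_SIZE);
  return tilePath(zoom, x, y);
}

console.log(tileForPixel(51200, 20480, 4, 6)); // -> "4/50/20.png"
```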
I am working on a security problem, where I am trying to identify malicious images. I have to mine for attributes from images (most likely from the metadata) that can be fed in to Weka to run various machine learning algorithms, in order to detect malicious images.
Since the image metadata can be corrupted in various different ways, I am finding it difficult to identify the features to look at in the image metadata, which I can quantify for the learning algorithms.
I had earlier used information such as pixel data, with tools like ImageJ, to help me classify images; however, I am now looking for a better way (with regard to security) to identify and quantify features from the image or its metadata.
Any suggestion on the tools and the features?
As mentioned before this is not a learning problem.
The problem is that one exploit is not *similar* to another exploit. They exploit individual, separate bugs in individual, different (!) libraries, things such as missing bounds checking. It's not so much a property of the file, but more of the library that uses it. 9 out of 10 libraries will not care. One will misbehave because of a programming error.
The best you can do to detect such files is to write the most pedantic and at the same time most robust format verifier you can come up with, and reject any image that doesn't 1000% fit the specifications. Assuming that the libraries do not have errors in processing images that are actually valid.
I strongly would recommend you start with investigating how the exploits actually work. Understanding what you are trying to "learn" may guide you to some way of detecting them in general (or understanding why there is no general detection possible ...).
Here is a simple example of the ideas of how one or two of these exploits might work:
Assume we have a very simple file format, like BMP. For compression, it has support for a simple run length encoding, so that identical pixels can be efficiently stored as (count x color pairs). Does not work well with photos, but is quite compact for line art. Consider the following image data:
Width: 4
Height: 4
Colors: 1 = red, 2 = blue, 3 = green, 4 = black
Pixel data: 2x1 (red), 4x2 (blue), 2x3, 5x1, 1x0, 4x1
How many errors in the file do you spot? They may cause some trusting library code to fail, but any modern library (written with awareness of this kind of attack and of the fact that files may be corrupted by transmission and hard-disk errors) should just skip over them and maybe even produce a partial image. See, maybe it was not an attack, but just a programming error in the program that produced the image...
Heck, not every out-of-bounds value has to be an attack. Think of CDs. Everybody used "overburning" at some point to put more data on a CD than the specifications intended. Yes, some drive might crash because you overburned a CD. But I wouldn't consider every CD with more than 650 MB an attack just because it broke the Yellow Book specification of what a CD is.
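Coming back to the "most pedantic verifier" idea, here is a minimal TypeScript sketch of such a check for the toy RLE format above (the format and all names exist only for this example):

```typescript
// Pedantic verifier for the toy RLE format: every run must use a declared
// colour, and the runs must cover exactly width*height pixels - no more,
// no less. Reject on any violation instead of trying to "repair" the file.
interface RleImage {
  width: number;
  height: number;
  palette: Set<number>;                           // declared colour indices
  runs: Array<[count: number, color: number]>;    // run-length encoded pixels
}

function verify(img: RleImage): string[] {
  const errors: string[] = [];
  let pixels = 0;
  for (const [count, color] of img.runs) {
    if (count <= 0) errors.push(`non-positive run length ${count}`);
    if (!img.palette.has(color)) errors.push(`undeclared colour index ${color}`);
    pixels += count;
  }
  if (pixels !== img.width * img.height) {
    errors.push(`runs cover ${pixels} pixels, expected ${img.width * img.height}`);
  }
  return errors;
}

// The 4x4 example above: colour 0 is undeclared and the runs cover 18 pixels,
// not 16, so a pedantic verifier rejects the file outright.
const example: RleImage = {
  width: 4, height: 4,
  palette: new Set([1, 2, 3, 4]),
  runs: [[2, 1], [4, 2], [2, 3], [5, 1], [1, 0], [4, 1]],
};
console.log(verify(example));
```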
I am working on an image processing app for the iOS, and one of the various stages of my application is a vector based image posterization/color detection.
Now, I've written code that can determine the posterized color per pixel, but going through each and every pixel in an image would, I imagine, be quite taxing for the processor of an iOS device. As such, I was wondering if it is possible to use the graphics processor instead.
I'd like to create a sort of "pixel shader" which uses OpenGL ES, or some other rendering technology, to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
Can anyone point me in the correct direction?
I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and I assume you'll need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. It doesn't sound like you're otherwise particularly interested in OpenGL, so there's no real reason to dig into raw GL yourself.
I will add one consideration: under iOS 4 I often found it faster to do this kind of work on the CPU (using GCD to distribute it among cores) than on the GPU, whenever I needed to read the results back at the end for any sort of serial access. That's because OpenGL is generally designed so that you upload an image and it converts it into whatever format it wants; if you then want to read it back, it converts it to the format you expect and copies it to where you want it, so what you save on the GPU you pay for in the GL driver shunting and rearranging memory. As of iOS 5, Apple introduced a mechanism that effectively gives you direct CPU access to OpenGL's texture store, so that's probably no longer a concern.
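For what it's worth, the posterization itself is just quantizing each channel to a small number of levels; the same per-pixel arithmetic ports directly to a fragment shader. A CPU-side sketch (TypeScript used purely to show the math; the level count is an arbitrary choice):

```typescript
// Posterize one channel by snapping it to the nearest of `levels` evenly
// spaced values in [0, 255]. A fragment shader applies the same floor/scale
// arithmetic to each sampled texel.
function posterizeChannel(value: number, levels: number): number {
  const step = 255 / (levels - 1);
  return Math.round(Math.round(value / step) * step);
}

// Posterize a whole RGBA buffer (e.g. canvas ImageData) in place.
function posterize(rgba: Uint8ClampedArray, levels = 4): void {
  for (let i = 0; i < rgba.length; i += 4) {
    rgba[i]     = posterizeChannel(rgba[i], levels);     // R
    rgba[i + 1] = posterizeChannel(rgba[i + 1], levels); // G
    rgba[i + 2] = posterizeChannel(rgba[i + 2], levels); // B
    // alpha (i + 3) left untouched
  }
}
```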
Our company develops a big corporate business solution (a web site). After agreeing to support the Apple iPad, we saw that our site is very slow on it. So I was tasked with optimizing performance for the iPad by optimizing the GUI (HTML, JS, ...), because the server part of the application is pretty fast.
I've already applied some tricks, with the customer's approval:
* Reduce the number of grid columns and leave only the most useful ones.
* Turn off all animations.
* Reduce resizing as much as possible.
And of course, I minified all the scripts and stylesheets.
Can you give me some additional advice on how to improve performance?
There are several things, many of which apply to the desktop web as well, since they are just part of good practice.
In no particular order:
Remove extra whitespace and HTML/CSS/JS comments
GZip all text-based content (HTML, CSS, JS)
Optimize all your images (e.g. use a service like http://Smush.it )
Move your images/static content to a separate server (increased HTTP pipelining), and don't serve cookies on that server
Don't serve up anything to "mobile" that they don't need (where possible)
Don't scale down images on the client; serve scaled-down versions from the server
Since most mobile browsers handle CSS well, convert "lists" of data that render in tables, to use unordered lists etc.
Serve common scripts like jQuery from a CDN like Google
Most mobile browsers support some kind of offline caching or a local lightweight database; if there is anything you can cache to reduce future loads, consider doing so
If you want to get really fancy, you can avoid loading images up front, check which images are currently in view (or after a slight delay), and load them as needed (a small sketch follows this list); how helpful this is will depend highly on the content
Consider delaying the load of search results where applicable (e.g. a Twitter-style stream that loads only the first 20 items, then loads additional items on demand)
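Following up on the lazy image loading point above, a rough TypeScript sketch (the data-src markup convention is my own assumption, not something from the page in question):

```typescript
// Rough sketch of "only load images when they scroll into view".
// Assumed markup: <img data-src="real.jpg" src="placeholder.gif">
function loadVisibleImages(): void {
  const pending = document.querySelectorAll<HTMLImageElement>("img[data-src]");
  pending.forEach((img) => {
    const rect = img.getBoundingClientRect();
    const inView = rect.top < window.innerHeight && rect.bottom > 0;
    if (inView) {
      img.src = img.dataset.src!;      // swap in the real image
      img.removeAttribute("data-src"); // so we never load it twice
    }
  });
}

// Check on load and whenever the user scrolls or rotates the device.
window.addEventListener("load", loadVisibleImages);
window.addEventListener("scroll", loadVisibleImages);
window.addEventListener("resize", loadVisibleImages);
```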
Some practical things I would like to add:
Avoid iframes on your pages; they do not work well on the iPad.
Use a library like Sencha Touch, which is highly optimized for the iPad.
Make links and buttons have large touch areas, as users can get frustrated by incorrect link clicks.
Avoid absolutely positioned CSS elements.
And a few more points:
It is best to set the meta viewport width to width=device-width. This ensures that your viewport is based on the device and not hard-coded.
Avoid CSS :hover rules in your iPad stylesheet. They can cause unnecessary issues (false hovers).