We have a web application that is supposed to work offline on iPads. It uses a cache manifest to define which files need to be cached. The problem is that we have reached the 10 MB limit the iPad imposes for those files, and we need to add even more files to the list.
Is there any workaround to increase this limit, or to store the files in some other way? Note that going native is not an option at the moment.
You could take a look at this.
Try growing the manifest repeatedly in chunks of less than 5 MB, calling window.applicationCache.update() after each step, and you should be able to get past the 10 MB limit.
As I understand it, you can use a trick like this:
cache.manifest is generated by PHP. On the first request it returns a file list of less than 5 MB and sets a cookie flag. A client-side script listening for the 'cached' event checks whether the cookie is set and calls window.applicationCache.update(); the PHP script then runs again, sees the cookie, returns the full cache.manifest, and updates the cookie flag.
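A minimal client-side sketch of that flow, assuming the PHP side sets a cookie (the cookie name "manifestStage" and the hasCookie helper are illustrative, not part of the original answer):

// Runs on the page that references cache.manifest.
function hasCookie(name) {
  return document.cookie.split('; ').some(function (c) {
    return c.indexOf(name + '=') === 0;
  });
}

window.applicationCache.addEventListener('cached', function () {
  // The first chunk (< 5 MB) is cached; if the server has flagged that more
  // is available, request the manifest again so PHP can return the full list.
  if (hasCookie('manifestStage')) {
    window.applicationCache.update();
  }
}, false);

window.applicationCache.addEventListener('updateready', function () {
  // Swap in the newly downloaded cache once the larger manifest has been fetched.
  window.applicationCache.swapCache();
}, false);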
I am working on a web site project (PHP/Apache) with no JavaScript so far.
I have found various ways to set the upload size limit for an image on the server.
They work, but when I upload a very large file, the delay before the "your file is too big" message appears is far too long. If a user doesn't understand what "max 2.4 MB" means, he is likely to wait a minute or two before seeing the message.
My question is: do you know of any way to have the upload cancelled automatically if the image the user tries to transfer exceeds the limit?
Thanks a lot,
SunnyOne.
Basically, there are two ways to do this: with Flash/Java, or with fancy HTML5 JavaScript that only works in some browsers (and only the most recent versions of those, as well).
Check these other SO questions for pointers:
Client Checking file size using HTML5? and Detecting file upload size on the client side?.
Also, check out these tools: YUI2 Uploader, FancyUpload, SWFUpload
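For the HTML5 route, a minimal sketch of a client-side size check (the 2.4 MB limit is taken from the question; the input id "image" is an assumption):

// Assumes <input type="file" id="image"> somewhere in the form.
var MAX_BYTES = 2.4 * 1024 * 1024;

document.getElementById('image').addEventListener('change', function () {
  var file = this.files && this.files[0];
  if (file && file.size > MAX_BYTES) {
    alert('Your file is too big (max 2.4 MB).');
    this.value = ''; // clear the selection so the oversized file is never sent
  }
}, false);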
Is it possible to create a UIWebView that has an HTML5 offline appcache pre-populated so that it will work offline even if it is the first time the user is accessing the UIWebView?
If so, how?
I know I can achieve this through other mechanisms, but the above is my first choice. And I'm just plain curious if it's possible.
I'm seeing nothing about it in the documentation.
What you are looking for is two files in your app's Caches folder:
ApplicationCache.db and Cache.db.
They both reside in the Library/Caches/[your bundle identifier] folder of your application, to which you have full access. You can add pre-populated cache data to your bundle and simply copy it to the Caches folder when your app launches.
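A minimal sketch of that copy step, assuming the two .db files have been added to the app bundle (error handling omitted for brevity):

// Run early in app launch, e.g. in application:didFinishLaunchingWithOptions:
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                           NSUserDomainMask, YES) objectAtIndex:0];
NSString *targetDir = [cachesDir stringByAppendingPathComponent:
                          [[NSBundle mainBundle] bundleIdentifier]];
NSFileManager *fm = [NSFileManager defaultManager];
[fm createDirectoryAtPath:targetDir withIntermediateDirectories:YES attributes:nil error:NULL];

for (NSString *name in [NSArray arrayWithObjects:@"ApplicationCache.db", @"Cache.db", nil]) {
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:nil];
    NSString *target = [targetDir stringByAppendingPathComponent:name];
    // Copy only if not already present, so a cache the web view has since
    // written is not overwritten on later launches.
    if (source && ![fm fileExistsAtPath:target]) {
        [fm copyItemAtPath:source toPath:target error:NULL];
    }
}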
By the way, you can inspect them easily, as they are plain SQLite databases.
I hope this helps
I think this may reduce to a question of whether you can programmatically write to the UIWebView's cache file (which is to say, whether the UIWebView cache resides within your app's sandbox) – if you can't, then game over. If you can, then it becomes a question of what happens after you drop a pre-populated cache file into place, and whether the UIWebView is "fooled" into thinking that it's already downloaded and cached your HTML5 content.
If you're using the iPhone Simulator to test your app, look in ~/Library/Application Support/iPhone Simulator/5.0/Applications (replace "5.0" with your iOS version, if necessary). You should see a long string of hex digits for each app you've compiled in the simulator; find the one that corresponds to your app, and then look in the Library/Caches/[your app's identifier] subfolder for a file named Cache.db.
This may be the place where UIWebView stores its cache data. If it isn't, game over and the answer to your question is "no, that's not possible". If it is where UIWebView caches data, then it may be possible to populate this Cache.db file in the simulator, grab the file, store it in your app bundle, and then write the cache to the appropriate location when it's time to pre-populate the cache.
At any rate, that's the line of attack I'd use to determine whether it's possible – I'm pretty confident the answer is going to be "no, not possible" unless it turns out the UIWebView cache does reside in your app's sandbox, is writable by you, and you can fool UIWebView by replacing its cache file.
What I mean by force-loading is this: when a web page is visited, its elements are cached so it loads faster the next time. Force-loading, or warming up the cache, means issuing requests to the UIWebView in the background so the data is pulled into the cache ahead of time; when the user actually taps something, the information is already cached.
It's similar to the strategy Chrome uses to make browsing a little faster: when a page loads, it immediately finds all the links on the page and resolves their hostnames, so if the user clicks a link they don't have to wait for the DNS lookup before seeing the page; it has already been done for them.
I hope this made a little more sense.
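A rough sketch of that idea, assuming you keep an off-screen UIWebView around purely to warm the cache (the property name and URL are illustrative):

// self.warmupWebView is a hypothetical retained UIWebView property on your controller.
self.warmupWebView = [[UIWebView alloc] initWithFrame:CGRectZero];
self.warmupWebView.hidden = YES;

// Request a page the user is likely to open next; its resources end up in the
// cache, so the visible web view can render it faster when it is actually tapped.
NSURL *nextPage = [NSURL URLWithString:@"http://example.com/likely-next-page.html"];
[self.warmupWebView loadRequest:[NSURLRequest requestWithURL:nextPage]];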
QUOTE:
This is a pretty cool question. Consider looking only at what is provided in the developer reference, as Apple will reject your app otherwise. You can 'force-load' whatever you want in a hidden view to warm up the cache. That way you have the ability to add elements to the cache, but you don't have the power to remove items from it unless you know the internal caching algorithms... I'd say this is less of a hack and more of a technique! – vinnybad Nov 23 at 17:26
@vinnybad: I'm not entirely sure what you mean by "force-loading". Can you elaborate on that? (Sounds like it might be worth putting in an answer rather than a comment!) – Trott Nov 23 at 17:40
I mentioned Amazon CDN and iOS devices because I am not sure which part is the culprit.
I host JPG and PDF files on Amazon CDN.
I have an iOS application that downloads a large number of JPG and PDF files in a queue. I have tried using dataWithContentsOfURL and ASIHTTPRequest, but I get the same result. ASIHTTPRequest at least gives a callback to indicate that there is a problem with the download, so I can force a retry.
But this happens very often: out of 100 files, usually 1-5 have to be re-downloaded. If I check a failed file, its size is smaller than the original and it can't be opened.
The corrupted files are usually different every time.
I've tried this on different ISPs and networks; the result is the same.
Is there a configuration I missed in Amazon CDN, or something else I missed in the iOS download code? Is it not recommended to queue a large number of files for download?
I wouldn't download more than 3 or 4 items at once on an iPhone. Regardless of implementation limitations (ASIHTTPRequest is decent anyway) or the potential for disk thrashing, you have to code for the 3G use case unless your app is explicitly marked (as in, with an Info.plist setting) that it requires Wi-Fi.
A solution exists within ASIHTTPRequest itself to make these operations sequential rather than concurrent. Add your ASIHTTPRequest objects to an ASINetworkQueue (the one returned by [ASINetworkQueue queue] will do fine). They will be executed one after the other.
Note that if you encounter any errors, all the requests on the queue will by default be cancelled unless you set the queue's shouldCancelAllRequestsOnFailure to NO.
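A minimal sketch of that setup (URLs, paths, and the selector name are placeholders; maxConcurrentOperationCount is set explicitly here just to make the one-at-a-time behaviour unambiguous):

ASINetworkQueue *queue = [ASINetworkQueue queue];
queue.maxConcurrentOperationCount = 1;           // download strictly one file at a time
queue.shouldCancelAllRequestsOnFailure = NO;     // one failed file won't cancel the rest
[queue setDelegate:self];
[queue setQueueDidFinishSelector:@selector(allDownloadsFinished:)];

for (NSURL *url in self.fileURLs) {              // self.fileURLs: your list of CDN URLs
    ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
    [request setDownloadDestinationPath:
        [self.downloadFolder stringByAppendingPathComponent:[url lastPathComponent]]];
    [queue addOperation:request];
}
[queue go];                                      // ASINetworkQueue must be started explicitly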
EDIT: I just noticed you already mentioned using a download queue. I would therefore suspect it's more of an issue at the other end than on your device. Connections could be dropping for various reasons: keep-alive setting is too short, resources too low on the server so it's timing out, some part of the physical link between the server and the Internet backbone is failing intermittently. To be sure, though, you probably need to test your app on multiple devices to make sure it's consistently failing across all of them to really be able to say that.
You could possibly try reducing the number of concurrent downloads:
ASIHTTPRequest.sharedQueue.maxConcurrentOperationCount = 2;
This changes the default ASIHTTPRequest queue; if you're using your own queue, set the value on that instead.
The default is 4, which is above the limit recommended by the HTTP 1.1 RFC when using persistent connections and when all the content is on the same server.
I'm writing an MVC application which serves transformed versions of user-uploaded images (rotated, cropped, and watermarked). Once the transformations are specified, they're not likely to change, so I'd like to cache the generated output aggressively and serve it as efficiently as possible.
The transformation options are stored in the database and used by the image servers to create the images on demand; only the original uploads are stored permanently. Caching the generated images in a local directory lets IIS 7 serve them without touching the ASP.NET process, since static images in images/ take precedence over the dynamic MVC route /images/{id}.jpg.
My concern at this point is when the user actually changes the transformation options -- the images need to be re-generated, but I'd like to avoid manually deleting them. I'm storing a last-modified field in the database, so I could append that timestamp to the URL, e.g. http://images.example.com/images/153453543.jpg?m=123542345453. This would work if the caching was handled by the ASP.NET process, which could vary the output cache by the parameter m, but seeing as I need to serve large quantities of images I'd rather avoid that.
Is there an intelligent way to get the IIS to discard static files if some condition is met?
If you don't want your ASP.NET code to be invoked every time someone requests an image, then I would recommend deleting the cached images when the transformations are updated. It is a relatively "free" operation, since it is just a cache and the images will be regenerated when needed.
You might be concerned about tracking whether the transformation has actually changed when the user updates image properties, but how often will the user make changes at all? Does it matter if you regenerate an image a bit too often?
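A minimal sketch of that approach, assuming the cached files are written as ~/images/{id}.jpg to match the static route described in the question (the method name is illustrative):

// Call this whenever the user saves new transformation options for an image.
// The next request for the image will fall through to the MVC route and regenerate it.
public void InvalidateCachedImage(int imageId)
{
    var path = System.Web.Hosting.HostingEnvironment.MapPath(
        string.Format("~/images/{0}.jpg", imageId));

    if (path != null && System.IO.File.Exists(path))
    {
        System.IO.File.Delete(path);
    }
}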
You could include the timestamp in the filename itself, e.g.
http://images.example.com/images/153453543_20091124120059.jpg.
That way you could avoid deleting the images when updating. However, you would leave a trail of outdated files...
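For example, a sketch of building such a URL from the last-modified value stored in the database (the host name and parameter names are illustrative):

// id and lastModified are assumed to come from the image's database record.
public string GetCachedImageUrl(int id, System.DateTime lastModified)
{
    // e.g. http://images.example.com/images/153453543_20091124120059.jpg
    return string.Format(
        "http://images.example.com/images/{0}_{1:yyyyMMddHHmmss}.jpg",
        id, lastModified);
}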
Why not run the process to generate the physical image whenever those settings are changed, rather than on each request?
We collect text/CSV-like data over long periods (~days) from costly experiments, so file corruption is to be avoided at all costs.
Recently, a file was copied in Explorer on XP while an experiment was in progress, and the data was partially lost, presumably due to a multiple-access conflict.
What are some good techniques to avoid such loss? We are using Delphi on Windows XP systems.
Some ideas we came up with are listed below - we'd welcome comments as well as your own input.
Use a database as a secondary data storage mechanism and take advantage of the atomic transaction mechanisms
How about splitting the large file into separate files, one for each day?
If these machines are on a network, send an HTTP POST with the logging data to a web server.
(sending UDP packets would be even simpler).
Make sure you only copy old data. If the filename carries a timestamp with one-hour resolution, you can safely copy any data older than one hour.
If a write fails, cache the result for a later write; that way, if the file is opened externally, the data is still held internally, or could even be stored to disk elsewhere.
I think what you're looking for is the Win32 CreateFile API, with these flags:
FILE_FLAG_WRITE_THROUGH: Write operations will not go through any intermediate cache; they will go directly to disk.
FILE_FLAG_NO_BUFFERING: The file or device is being opened with no system caching for data reads and writes. This flag does not affect hard disk caching or memory-mapped files.
There are strict requirements for successfully working with files opened with CreateFile using the FILE_FLAG_NO_BUFFERING flag; for details, see File Buffering.
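Since the question mentions Delphi, a minimal sketch of appending a data point with write-through enabled might look like this (the routine name and line format are illustrative):

uses Windows, SysUtils;

procedure AppendDataPoint(const FileName, Line: string);
var
  H: THandle;
  Written: DWORD;
  Buf: AnsiString;
begin
  // Open (or create) the file with write-through so each write reaches the disk,
  // and deny other writers while the handle is held.
  H := CreateFile(PChar(FileName), GENERIC_WRITE, FILE_SHARE_READ, nil,
                  OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL or FILE_FLAG_WRITE_THROUGH, 0);
  if H = INVALID_HANDLE_VALUE then
    RaiseLastOSError;
  try
    SetFilePointer(H, 0, nil, FILE_END);              // append to the end of the file
    Buf := AnsiString(Line) + #13#10;
    WriteFile(H, Buf[1], Length(Buf), Written, nil);
  finally
    CloseHandle(H);
  end;
end;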
Each experiment should use a 'work' file and a 'done' file. The work file is opened exclusively, and the done file is copied to a place on the network. An application on the receiving machine would feed those files into a database. If Explorer tries to move or copy the work file, it will receive an 'Access denied' error.
The 'work' file would become a 'done' file after a certain period (say 6, 12, or 24 hours, or whatever period suits you). The program then creates a new work file (the name must contain the timestamp) and sends the 'done' file over the network (or a human can do that, which is what you are doing now, if I understand your text correctly).
Copying a file while it is in use is asking for corruption.
Write data to a buffer file in an obscure directory and copy the data to the 'public' data file periodically (every 10 points, for instance), thereby reducing writes and also providing a backup.
Write data points discretely, i.e. open and close the file handle for every data point written; this keeps the time the file is held open to a minimum, provided the data rate is low (i.e. there is plenty of time between data points).