I'm trying to handle large files with the Dropbox Sync API for iOS.
My program encrypts files as a stream, and I write each encrypted chunk (~320,000 bytes) with the method
[file appendData:data error:nil];
file is a DBFile variable
data is an NSData variable of ~320,000 bytes
When the file size reaches about 80 MB, the program crashes without reporting an error.
While it runs, the program receives memory warnings, even though no additional variables are being used.
What are some ways to manage large files (uploads and downloads) with the Dropbox API for iOS? Or what is my problem here?
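For what it's worth, the symptoms described (steady memory warnings and an eventual silent crash around 80 MB) are consistent with autoreleased NSData chunks piling up faster than they are released. Below is a minimal sketch of the chunk-by-chunk pattern with each iteration wrapped in its own autoreleasepool; it uses a plain FileHandle in place of the Dropbox DBFile, and encrypt(_:) is a placeholder for whatever the stream cipher actually does, so treat the names as assumptions of the sketch, not the Dropbox API.

import Foundation

// Sketch only: FileHandle stands in for the Dropbox DBFile, and encrypt(_:) is a
// placeholder cipher step. The chunk size mirrors the ~320,000 bytes in the question.
func encryptAndAppend(source: URL, destination: URL, chunkSize: Int = 320_000) throws {
    FileManager.default.createFile(atPath: destination.path, contents: nil)
    let reader = try FileHandle(forReadingFrom: source)
    let writer = try FileHandle(forWritingTo: destination)
    defer {
        reader.closeFile()
        writer.closeFile()
    }

    while true {
        // Each pass gets its own pool, so the autoreleased chunk data is freed
        // immediately instead of accumulating until the loop finishes.
        let finished: Bool = autoreleasepool {
            let chunk = reader.readData(ofLength: chunkSize)
            guard !chunk.isEmpty else { return true }
            writer.write(encrypt(chunk))
            return false
        }
        if finished { break }
    }
}

// Placeholder so the sketch compiles; the real code would run the stream cipher here.
func encrypt(_ data: Data) -> Data { data }

The same idea applies to downloads: pull the remote data in ranges or chunks and append each piece to disk rather than holding the whole file in a single NSData.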
Related
I am trying to use ThingsBoard to allow users to request log files from devices. When requested, the devices send the log files to my TB server as telemetry. First the logs are compressed with gzip and then base64 encoded. I want the rule chain to decompress these logs and email them to the requester. I've found code to convert the base64 string to a byte array, but I haven't found a way to decompress the resulting byte array. I tried to invoke zlib using:
var zlib = require('zlib');
but it results in a message that 'require' is not defined.
Any hints? What language exactly is the TB rule node environment?
We send it to S3 and then have a link to it available in the TB GUI: the user can request log file(s) in the UI and, a few minutes later, click the log file to download it to their computer/device as a zip file. The device is Linux based.
We store images in encrypted form at a local path. Once all documents have been captured and the user taps the submit button, we decrypt all the images using RNCryptor (https://github.com/RNCryptor/RNCryptor) and save them as a Zip using https://github.com/marmelroy/Zip.
But we need to keep the decrypted data in memory instead of on disk.
How would I zip a file purely in memory, so I can send it without ever writing to the hard drive?
Update
Another alternative is the ZIPFoundation library on Github (MIT/Thomas Zoechling). It appears to be Swift compatible and is apparently "effortless." BTW - I learned about this library while reading an interesting blog article where the author (Max Desiatov) walks through how he unzips in memory using the library (see the section - Unzipping an archive in memory and parsing the contents).
Original
Have you taken a close look at the Single-Step Compression article? There is a section that talks about writing the compressed data to a file (but it has already been compressed in memory at that point). Once you have the data generated, you can do with it as you will; a rough sketch of the in-memory steps follows the list below.
Article Steps
Create the Source Data
Create the Destination Buffer
Select a Compression Algorithm
Compress the Data
Write the Encoded Data to a File
Read the Encoded Data from a File
Decompress the Data
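To make those steps concrete, here is a minimal in-memory sketch of the compress/decompress round trip using the Compression framework's single-step buffer API. Note that COMPRESSION_ZLIB emits raw deflate data, not a .zip container, so if you specifically need a Zip archive you still want one of the libraries mentioned above; the 64 KB headroom and the caller-supplied expected size are choices of this sketch, not requirements of the API.

import Foundation
import Compression

// Single-step, in-memory compression. The +64 KB is an arbitrary cushion so the
// destination buffer can hold incompressible input.
func compress(_ input: Data) -> Data? {
    input.withUnsafeBytes { rawBuffer -> Data? in
        guard let source = rawBuffer.bindMemory(to: UInt8.self).baseAddress else { return nil }
        let capacity = input.count + 64 * 1024
        let destination = UnsafeMutablePointer<UInt8>.allocate(capacity: capacity)
        defer { destination.deallocate() }
        let written = compression_encode_buffer(destination, capacity,
                                                source, input.count,
                                                nil, COMPRESSION_ZLIB)
        return written > 0 ? Data(bytes: destination, count: written) : nil
    }
}

// The single-step decoder needs to know (or overestimate) the decompressed size,
// because raw deflate data does not record it, so the caller supplies it.
func decompress(_ input: Data, expectedSize: Int) -> Data? {
    input.withUnsafeBytes { rawBuffer -> Data? in
        guard let source = rawBuffer.bindMemory(to: UInt8.self).baseAddress else { return nil }
        let destination = UnsafeMutablePointer<UInt8>.allocate(capacity: expectedSize)
        defer { destination.deallocate() }
        let written = compression_decode_buffer(destination, expectedSize,
                                                source, input.count,
                                                nil, COMPRESSION_ZLIB)
        return written > 0 ? Data(bytes: destination, count: written) : nil
    }
}

From there, the compressed Data can be attached, uploaded, or fed to a Zip library without ever touching disk.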
My app uses the NSStream classes to FTP a file from the iPhone to a server. The file is transferred, but its contents are corrupted. The FTP server logs indicate that the transfer type used was "image" (binary), whereas the original file is ASCII. I searched the kCFStreamProperty constants, but there seems to be no applicable property for controlling the transfer type.
I experimented with the Apple-provided SimpleFTP sample application. It uploads images fine; changing it to transfer an ASCII text file causes the same corruption issue.
Any suggestions?
Thanks
I'm developing a download manager using Indy and Delphi XE (the application uses multithreading to make several connections to the server). Everything works fine, but sometimes the final downloaded file is broken, and when I check the downloaded temp files I see that two or three of them are filled with zeros at the end. (Each temp file is the download result of one connection.)
The larger the file, the more broken temp files I get.
For example, in one temp file of 65,536,000 bytes, only the range 0-34,359,426 was valid, and from 34,359,427 to 65,535,999 it was full of zeros. If I delete those zeros, the application automatically downloads the missing segments, and (provided the problem doesn't happen again) the result is a healthy downloaded file.
I want to get rid of those zeros at the end of the temp files without losing download speed.
P.S. I'm using TFileStream, passing it directly to TIdHTTP, and downloading the files with the GET method.
Additional info: I handle the OnWork event, which assigns AWorkCount to a public Int64 variable. Each time a file is downloaded, that Int64 value is logged to a text file, and according to the log the file was downloaded completely (including those zero bytes).
Make sure the server actually supports downloading byte ranges before you request a range to download. If the server does not support ranges, a requested range will be ignored and the entire file will be sent instead. If you are not already doing so, you should be using TIdHTTP.Head() to test for range support before calling TIdHTTP.Get(). You need to do this anyway to detect whether the remote file has been altered since the last time you downloaded it; any decent download manager needs to handle that.
Also keep in mind that if TIdHTTP knows up front how many bytes are being transferred, it will pre-allocate the size of the destination TStream before downloading data into it. This is done to speed up the transfer and optimize disc I/O when using a TFileStream. So you should NOT use TFileStream to access the same file as the destination for multiple simultaneous downloads, even if they are writing to different areas of the file: multiple TFileStream objects pre-allocating the same file will likely trample over each other as each tries to set the file size to a different value. If you need to download a file in multiple pieces simultaneously, then either:
1) download each piece to a separate file and copy them into the final file as needed once you have all of the pieces that you need.
2) use a custom TStream class, or Indy's TIdEventStream class, to manage the file I/O yourself, so you can ignore TIdHTTP's pre-allocation attempts and ensure that multiple file I/O operations do not overlap each other incorrectly.
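The HEAD-before-GET check itself is not Indy-specific; purely as an illustration of the HTTP mechanics, here is a sketch using URLSession (chosen only because that is the stack used elsewhere on this page; the helper name is hypothetical):

import Foundation

// Illustration of the HEAD-for-range-support check described above; not Indy code.
func checkRangeSupport(of url: URL, completion: @escaping (Bool) -> Void) {
    var request = URLRequest(url: url)
    request.httpMethod = "HEAD"
    URLSession.shared.dataTask(with: request) { _, response, _ in
        guard let http = response as? HTTPURLResponse else {
            completion(false)
            return
        }
        // A server that supports partial downloads normally advertises
        // "Accept-Ranges: bytes". value(forHTTPHeaderField:) requires iOS 13+.
        // The ETag / Last-Modified headers from this same response are worth
        // storing to detect whether the remote file changes later.
        let acceptRanges = http.value(forHTTPHeaderField: "Accept-Ranges") ?? ""
        completion(acceptRanges.lowercased().contains("bytes"))
    }.resume()
}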
I use NSURLConnection to download large files from the web on iPhone. I use the "didReceiveData" method to append data to a file in the Documents folder. It works fine.
If the download is interrupted (for instance, because the user pressed the home button), I would like to be able to continue the download the next time the user launches my application, not start from scratch.
Can anyone help me?
ASIHTTPRequest has easy to use support for resuming downloads:
http://allseeing-i.com/ASIHTTPRequest/How-to-use#resume
Alternatively, find out how much data you have downloaded already by looking at the size of the existing data, and set the 'Range' header on your NSMutableURLRequest:
[request addValue:@"bytes=x-" forHTTPHeaderField:@"Range"];
...where x is the size in bytes of the data you already have. This will download data starting from byte x. You can then append this data to your file as you receive it.
Note that the web server you are downloading from must support partial downloads for the resource you are trying to download - it sends the Accept-Ranges header if so. You can't generally resume downloads for dynamically generated content (e.g. a page generated by a script on the server).
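For completeness, here is a rough sketch of the same Range-header approach with URLSession (the helper name and file handling are assumptions of the sketch; with NSURLConnection the header goes on the NSMutableURLRequest exactly as shown above):

import Foundation

// Sketch of resuming a partial download via the Range header. Assumes the server
// honours ranges (i.e. it sends Accept-Ranges, as noted above).
func resumeDownload(from remoteURL: URL, to localURL: URL) {
    let attributes = try? FileManager.default.attributesOfItem(atPath: localURL.path)
    let existingBytes = (attributes?[.size] as? NSNumber)?.uint64Value ?? 0

    var request = URLRequest(url: remoteURL)
    if existingBytes > 0 {
        // "bytes=x-" asks for everything from byte x to the end of the resource.
        request.setValue("bytes=\(existingBytes)-", forHTTPHeaderField: "Range")
    }

    URLSession.shared.dataTask(with: request) { data, response, error in
        guard error == nil, let data = data,
              let http = response as? HTTPURLResponse else { return }

        if http.statusCode == 206, let handle = FileHandle(forWritingAtPath: localURL.path) {
            // 206 Partial Content: the range was honoured, so append to what we have.
            handle.seekToEndOfFile()
            handle.write(data)
            handle.closeFile()
        } else if http.statusCode == 200 {
            // The server ignored the range and resent the whole file; overwrite.
            try? data.write(to: localURL, options: .atomic)
        }
    }.resume()
}

For very large files, a delegate-based URLSessionDownloadTask (which also has its own resume-data mechanism) avoids buffering the entire response body in memory the way a completion-handler data task does.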
The only method I can think of is to break up the file you're downloading into smaller files. You can use the
- (void)connectionDidFinishLoading:(NSURLConnection *)connection
method to determine when each portion of the file is finished, and restart the download at the last portion that didn't get downloaded.
Look at the HTTP standard to see how you would form a request to start a transfer of a resource from a specific offset. I can't find any code offhand, sorry.