My project is written in the latest version of Swift/SwiftUI. Right now the user can export any file to the default folder (Files).
My question: Is it possible to encrypt/protect the exported file in a way so that only the app itself can read/write it again?
Does Swift support this kind of feature, or do I have to implement a workaround?
Thanks for helping out!
If you don't want anyone else to see the file, don't export to Documents. Put it in Application Support or similar. You are sandboxed, so you are protected from anyone else ever seeing the file.
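For example, here is a minimal sketch of exporting into Application Support instead of Documents (the file name and contents are hypothetical):

import Foundation

// A minimal sketch: write the export into Application Support,
// where the Files app and other apps cannot see it.
let fm = FileManager.default
let support = try fm.url(for: .applicationSupportDirectory,
                         in: .userDomainMask,
                         appropriateFor: nil,
                         create: true)
// Declared as var so we can set resource values on it below.
var myFileURL = support.appendingPathComponent("export.dat") // hypothetical name
try Data("secret contents".utf8).write(to: myFileURL)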
If the goal is also to prevent the file from ever being backed up, then you can add instructions to that effect. Suppose you have the file's URL as myFileURL. Then (supplying mentally the surrounding do...catch blocks):
var rv = URLResourceValues()
rv.isExcludedFromBackup = true // ask the system never to back this file up
try myFileURL.setResourceValues(rv) // note: the URL must be a var, not a let
Finally, if the goal is to encrypt a piece of data, not really a file per se, then put it in the user's keychain.
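For example, a minimal sketch using the Security framework (the service and account names are hypothetical):

import Foundation
import Security

// A minimal sketch: store a small secret as a generic password item.
let secret = Data("token-123".utf8) // hypothetical piece of data
let query: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "com.example.myapp", // hypothetical service name
    kSecAttrAccount as String: "exportSecret",      // hypothetical account name
    kSecValueData as String: secret
]
SecItemDelete(query as CFDictionary) // remove any existing item first
let status = SecItemAdd(query as CFDictionary, nil)
assert(status == errSecSuccess)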
You certainly can encrypt your file; it's a fair bit of work, but if done correctly it gives you a pretty good guarantee that only your app can read it again. Generally, @matt's answer will be your best choice: put it where no other app can access it. The only use case I can think of where encrypting it and storing it in Documents would be worthwhile is if you need to be able to move (though not read) the file with other apps, say to e-mail it to yourself or to somebody else who is also using your app. In that case, the Files app will let you see the files in Documents, and then Share them.
Unless that ability is important, go with @matt's answer. If you want to go the encryption route, something like Swift-Sodium is a good place to start.
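If you'd rather stay with Apple's frameworks than take on Swift-Sodium, CryptoKit (iOS 13+) can do the same job. A minimal sketch, assuming the key is persisted somewhere safe such as the keychain (the function names are made up):

import CryptoKit
import Foundation

// A freshly generated key is shown only for illustration;
// a real app must persist the key (e.g. in the keychain).
let key = SymmetricKey(size: .bits256)

func encryptFile(at source: URL, to destination: URL, using key: SymmetricKey) throws {
    let plaintext = try Data(contentsOf: source)
    let sealed = try AES.GCM.seal(plaintext, using: key)
    try sealed.combined!.write(to: destination) // nonce + ciphertext + auth tag
}

func decryptFile(at source: URL, using key: SymmetricKey) throws -> Data {
    let box = try AES.GCM.SealedBox(combined: Data(contentsOf: source))
    return try AES.GCM.open(box, using: key)
}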
Do they use some algorithm such as the ones used in the GDIFF program?
Do they ship intermediate code to the user instead of an iOS binary?
Why is it not possible to do a delta update while updating iOS on Windows?
A delta update is defined as 'an update that only requires the user to download the code that has changed, not the whole program.'
iOS does, in fact, perform delta updates, and contrary to Ralph's comment, we have a good idea of how it works.
Applications
For iOS applications, delta updates are used to minimize the download size of new versions and save bandwidth. Usually, the things that give a program most of its size are static images and other media. During a simple update such as a bug fix, these static images usually stay the same, so there is no reason to send them over the network again. The only thing sent is the actual Mach-O binary containing the buggy code, along with whatever other files have changed.
So delta updates most likely work by checksumming. When you submit an app update, Apple computes a checksum of every file submitted, using some algorithm (most likely SHA-1 or MD5), to scan for changes. If the executable has changed but a certain image or other file has not, the image isn't packed with the update, since users have already downloaded it and don't need to download it again. While this method may not match the definition of a delta update exactly, since it ships whole executables along with other changed files, the concept is very similar.
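To illustrate the idea, here is a minimal sketch of manifest-based change detection (the function names are made up; Apple's actual pipeline is not public):

import CryptoKit
import Foundation

// Checksum every file in a build directory into a name -> digest manifest.
func checksumManifest(of directory: URL) throws -> [String: String] {
    let files = try FileManager.default.contentsOfDirectory(at: directory,
                                                            includingPropertiesForKeys: nil)
    var manifest: [String: String] = [:]
    for file in files {
        let digest = SHA256.hash(data: try Data(contentsOf: file))
        manifest[file.lastPathComponent] = digest.map { String(format: "%02x", $0) }.joined()
    }
    return manifest
}

// Only files whose checksum changed (or that are new) go into the update package.
func changedFiles(old: [String: String], new: [String: String]) -> [String] {
    new.filter { old[$0.key] != $0.value }.map { $0.key }
}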
OTA Updates
Apple uses delta updates for over-the-air iOS updates as well. This is visible on any jailbroken iOS device. Updates are downloaded to /var/MobileSoftwareUpdate/softwareupdate.xxxx, where xxxx is presumably the build/release number. Each software update contains an image of the root filesystem, but not the entire version of iOS is included. Only the files that have changed from the version the user is currently on need to be replaced, so only those files are included in the update package. The method for finding these changes is likely the same as with iOS apps: checksumming finds the changed files.
Algorithm
Basically, to answer your question, Apple's algorithm doesn't send the differences between two individual files (similar to what you would see in a git commit); it sends each entire updated file. The 'algorithm' just looks for whether a file has changed at all since the last version; it doesn't look at the actual change itself. This is supported by the fact that OTA update packages contain complete files, not just a log of changes.
I am wondering if it is possible to make a completely new file format. Macs use .txt files for text and .mov for movies, so is it possible to make a new format that an iOS application can use for import and export? For example, the Pages application exports a .pages file. If so, what is a good tutorial site or sample to work from?
A few thoughts:
File extensions
If this is just a file that your app uses internally, it doesn't matter too much what extension you use. Use whatever you want.
If you're going to be sharing files, however, you should pick an extension that you're confident is unique. For example, it looks like the RCB extension is used by Easy Resume Creator Pro (I don't know it, but that's what a quick Google of that extension reports).
How to do it.
I'd simply advise you to refer to How to Import and Export App Data on Ray Wenderlich's site, which describes how you can define a UTI for your app's data files. Also refer to this Stack Overflow answer.
How to store the information.
In terms of how to store the data in the file, you can obviously do whatever you want, but if you're dealing with text data, I'd encourage you to consider an established format (even if you're using your own custom extension). For example, if dealing with simple Cocoa objects like arrays or dictionaries of strings, numbers, dates, etc., I might suggest a property list (see Apple's Property List Programming Guide), which writes data in an XML format.
Alternatively, if using your own NSObject subclasses, you can use the binary property list format enabled by NSKeyedArchiver and NSKeyedUnarchiver, which are discussed in Apple's Archives and Serializations Programming Guide.
There are lots of other formats open to you, but these two approaches take advantage of well-established interfaces for reading and writing data and can be done with a minimum of effort. A sketch of the property list approach follows below.
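As a sketch in modern Swift (Codable and PropertyListEncoder rather than the older NSObject APIs; the model and the .mydoc extension are made up):

import Foundation

// A hypothetical Codable model saved as a binary property list
// under a custom ".mydoc" extension.
struct Document: Codable {
    var title: String
    var entries: [String]
}

func save(_ document: Document, to url: URL) throws {
    let encoder = PropertyListEncoder()
    encoder.outputFormat = .binary // or .xml for a human-readable file
    try encoder.encode(document).write(to: url)
}

func load(from url: URL) throws -> Document {
    try PropertyListDecoder().decode(Document.self, from: Data(contentsOf: url))
}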
Alternatives to new file format.
Depending upon precisely what you're trying to do, if you're exchanging trivial amounts of information, a custom URL scheme might be sufficient. This bypasses the issue of dealing with custom file formats and enables a few other workflows.
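For example, a minimal SwiftUI sketch of receiving data through a custom scheme (the "myapp" scheme is hypothetical and would have to be registered under CFBundleURLTypes in Info.plist):

import SwiftUI

// Handle an incoming custom-scheme URL, e.g. myapp://note?text=hello
@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Waiting for a URL…")
                .onOpenURL { url in
                    print("Received:", url.query ?? "")
                }
        }
    }
}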
Hopefully this is enough to let you start researching the different alternatives.
I'm working on a piece of software that creates a contract, captures the client's signature as a GIF, and applies it to the contract (then spits out a PDF). We're storing the data from the contract, but when it comes to the signature, I'm not sure if I should.
Store it:
Pros: If the PDF document is lost, I can reconstruct it instantly and easily for whoever needs it (us or the client). (I checked with the lawyers: reconstructing the document from data is legal and applicable as long as no data is or has been changed.)
Cons: Although I will never do anything with the stored signature, I can't be certain that, if I ever leave the company, my coworkers or replacements will honor that.
Don't Store it:
Pros: Ethical high ground, there's no option for anyone now or in the future to use that image and do anything with it. It keeps everyone honest.
Cons: Now there is no way to reconstruct the original document if the PDF is lost, which is a real possibility.
Talk to a lawyer.
If it's grey enough an area, I'd vote for don't-store-it.
Lawyers are the most important people to talk to in this case.
But I'd still say don't store it.
If necessary, I'd suggest storing the contract in a format from which you CANNOT extract the original GIF, such as a PNG of the whole document, or some other flattened form.
However, if you are storing the whole document with the signature embedded (and not extractable), then you keep the ability to resend the signed document, and you have no reason to store the unattached GIFs.
Ultimately, having the unattached GIFs is just providing an enormous opening to getting sued.
Storing them that way also opens you up to problems relating to 'pasting the wrong GIF' into a contract.
I would say that having the gif files in a way that does not EXPLICITLY bind them to the ONE contract they apply to is VERY dangerous.
EDIT
After reading your post again I would say that there isn't a point in storing the GIFs or the PDFs. You should have a hard copy somewhere of the signed document (and if you are losing hard copies of contracts, then there are SERIOUS organizational issues) and after that, you don't need the signed version anymore, you just need to know the terms of the contract. So as long as you can reconstruct the terms for reading over, then I don't see why you'd need the literal signature again. If you need to prove they signed it, go back to the hard copy.
Do you need to keep the signature for anything else? If not, I'd store it only for as long as is required to produce the PDF, as there is no reason to keep it around after that.
In New Zealand the collection of personal data is governed by the Privacy Act, and one of its requirements is that data is only stored for the length of time required for the purpose for which it was collected.
The signature can still be extracted from the PDF. So whether you store the original GIF does not seem to make a difference, security wise.
How about this for a thought:
If it were an electronic signature, you would probably not be able/allowed to store it at all. You could store the signature+document (i.e. the crypto-signed hash of the initial document) and verify it with the public key, but storing a lot of clients' private keys just to be able to re-sign documents would be a bad idea.
Imagine someone breaking into the database and stealing those private keys (GIFs or RSA/DSA keys). That store would be very useful/profitable to a criminal organization.
Do you want to expose yourself to that?
I don't know; a GIF can be re-created by anybody with a copy of the document and a scanner. By not storing it, you lose the benefit of having it without any real security value being added.
I would get legal advice on some text to place near the signature in the final document. Maybe something like:
John Hancock http://rightzinger.com/LibraryofProgress/FoundingFathers/John_Hancock_signature.gif
(electronically added signature)
I am looking for a text editor that will be able to load a 4+ gigabyte file into it. TextPad doesn't work: I own a copy of it and have been to its support site; it just doesn't do it. Maybe I need new hardware, but that's a different question. The editor needs to be free OR, if it's going to cost me, then no more than $30. For Windows.
glogg could also be considered, for a different usage:
Caveat (reported by Simon Tewsi in the comments, Feb. 2013)
One caveat: glogg has two search functions, Main Search and Quick Find. The lower one, which I assume is Quick Find, is at least an order of magnitude slower than the upper one, which is fast.
I've had to look at monster (runaway) log files (20+ GB). I used the FREE version of HexEdit, which can work with files of any size. It is also open source, and it is a Windows executable.
Jeff Atwood has a post on this here: http://www.codinghorror.com/blog/archives/000229.html
He eventually went with Edit Pad Pro, because "Based on my prior usage history, I felt that EditPad Pro was the best fit: it's quite fast on large text files, has best-of-breed regex support, and it doesn't pretend to be an IDE."
Instead of loading a gigantic log file into an editor, I use Unix command-line tools like grep, tail, gawk, etc. to filter the interesting parts into a much smaller file, and then I open that.
On Windows, try Cygwin.
Have you tried the ConTEXT editor? It is small and fast.
I stumbled on this post many times, as I often need to handle huge files (10+ GB).
After getting tired of buggy and pretty limited freeware, and not being willing to pay for costly editors once their trials expired (not worth the money after all), I just used VIM for Windows with great success and satisfaction.
It is simply PERFECT for this need: fully customizable, with ALL the features one can think of when dealing with text files (searching, replacing, reading, etc., you name it).
I am very surprised nobody suggested it (except a previous answer, but for MacOS)...
For the record, I stumbled on it in this blog post, which wisely advised it.
It's really tough to handle a 4 GB file as such. I used to handle larger text files, but I never loaded them into my editor. I mostly used UltraEdit in my previous company; now I use Notepad++, but I would get just those parts which I needed to edit. (In most cases, the files never needed an edit.)
Why do you want to load such a big file into an editor? When I handled files of this size, I used GNU Core Utils. The most common operations I performed on those files were head (to get the top 250k lines, etc.), tail, split, sort, shuf, uniq, etc. It's really powerful.
There's a lot you can do with GNU Core Utils. I would definitely recommend those instead of a new editor.
Sorry to post on such an old thread, but I tried several of the tips here, and none of them worked for me.
It's slightly different than a text editor, but I found that Beyond Compare could handle an extremely large (3.6 Gig) file on my Vista 32-bit machine.
This is a file that Emacs, Large Text File Viewer, HexEdit, and Notepad++ all choked on.
-Eric
My favourite after trying a few to read a 6 GB mysqldump file:
PilotEdit Lite http://www.pilotedit.com/
Because:
Memory usage (somehow?!) never went above 25 MB, so there was basically no impact on the rest of my system, though it took several minutes to open.
There was an accurate progress bar during that time so I knew how it was getting on.
Once open, simple searching and browsing through the file worked just as they would in a small Notepad file.
It's free.
Others I tried...
EmEditor Pro trial was very impressive, the file opened almost instantly, but unfortunately too expensive for my requirements.
EditPad Pro loaded the whole 6 GB file into memory and slowed everything to a crawl.
For Windows, Unix, or Mac? On the Mac or *nix you can use command-line or GUI versions of Emacs or Vim.
For the Mac: TextWrangler handles big files well. I'm not versed enough in the Windows landscape to help out there.
If you just want to view a large file rather than edit it, there are a couple of freeware programs that read files a chunk at a time rather than trying to load the entire file into memory. I use these when I need to read through large (> 5 GB) files.
Large Text File Viewer by swiftgear http://www.swiftgear.com/ltfviewer/features.html
Big File Viewer by Team Walrus.
You'll have to find the link yourself for that last one, because I can only post a maximum of one hyperlink, being a newbie.
When I'm faced with an enormous log file, I don't try to look at the whole thing; I use Free File Splitter.
Admittedly this is a workaround rather than a solution, and there are times when you would need the whole file. But often I only need to see a few lines from a larger file, and that seems to be your problem too. If not, maybe others would find that utility useful.
A viewer that lets you see enormous text files isn't much help if you are trying to get the file loaded into Excel to use the Autofilter, for example. Since we all spend the day breaking problems down into smaller parts in order to solve them, applying the same principle to a large file didn't strike me as contentious.
HxD -- it's a hex editor, but it allows in-place edits and doesn't barf on large files.
Tweak is a hex editor which can handle edits to very large files, including inserts and deletes.
EmEditor should handle this. As their site claims:
EmEditor is now able to open files even larger than 248 GB (or 2.1 billion lines) by opening a portion of the file with the new custom bar, the Large File Controller. The Large File Controller allows you to specify the beginning point, end point, and range of the file to be opened. It also allows you to stop the opening of the file and monitor the real size of the file and the size of the temporary disk available.
Not free, though.
I found that FAR Manager could open large files (I tried a 4.2 GB XML file). It does not load the entire file into memory, and it works fast.
I opened a 5 GB file (quickly) with:
1) Hex Editor Neo
2) 010 Editor
TextPad also works well at opening files that size. I have done it many times when having to deal with extremely large log files in the 3-5 GB range. Also, using grep to pull out the worthwhile lines and then looking at just those works great.
The question needs more details.
Do you just want to look at the file (e.g. a log file), or do you want to edit it?
Do you have more memory than the size of the file you want to load, or less?
For example, TheGun, a very small text editor written in assembly language, claims to "not have an effective file size limit and the maximum size that can be loaded into it is determined by available memory and loading speed of the file. [...] It has been speed optimised for both file load and save."
To get around the memory limit, I suppose one could use memory-mapped files. But then, if you need to edit the file, some clever method would be required, like storing the local changes in memory and applying them chunk by chunk when saving. That might be ineffective in some cases (a big search/replace, for example).
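As a minimal sketch of the memory-mapped idea in Swift (the language used elsewhere on this page; the path is hypothetical):

import Foundation

// Map the file instead of reading it into RAM, so the OS pages in
// only the parts that are actually touched.
let url = URL(fileURLWithPath: "/logs/huge.log") // hypothetical path
let data = try Data(contentsOf: url, options: .alwaysMapped)

// Reading only the first 4 KB faults in only the first few pages.
let firstChunk = data.prefix(4096)
print(String(decoding: firstChunk, as: UTF8.self))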
I have had problems with TextPad on 4 GB files too. Notepad++ works nicely.
Emacs can handle huge file sizes and you can use it on Windows or *nix.
What OS and CPU are you using? If you are using a 32-bit OS, then a process on your system physically cannot address more than 4 GB of memory. Since most text editors try to load the entire file into memory, I doubt you'll find one that will do what you want. It would have to be a very fancy text editor that can do out-of-core processing, i.e. load a chunk of the file at a time.
You may be able to load such a huge file if you use a 64-bit text editor on a computer with a 64-bit CPU and a 64-bit operating system. And you have to make sure that you have enough space in your swap partition or your swap file.
Why do you want to load a 4+ GB file into memory anyway? Even if you find a text editor that can do that, does your machine have 4 GB of memory? And unless it has a lot more than 4 GB of physical memory, your machine will slow down a lot and go swap-file crazy.
So why do you want to load a 4+ GB file? If you want to transform it, or do a search and replace, you may be better off writing a small, quick program to do it.
I also like Notepad++.