Is there any possibility to use this.copy in a yo generator in sync mode?
I need my sub-generators to be invoked only after all the files from the parent generator have been copied.
You can use
this.fs.commit(function(){});
to write the in-memory files to disk.
Related
I have an app that is using NSFileWrapper to create a backup of the user's data. This backup file contains text and media files (compression is not relevant here). Sometimes these backup files get quite large, over 200 MB in size. When I call NSFileWrapper -writeToURL... it appears to load the entire contents into memory as part of the writing process. On older devices, this causes my app to be terminated by the system due to memory constraints.
Is there a simple way to avoid having NSFileWrapper load everything into memory? I've read through every NSFileWrapper question on here that I could find. Any suggestions on how to tackle this?
Here is the current file structure of the backup file:
BackupContents.backupxyz
    user.txt
    folder1
        audio files
            asdf.caf
            asdf2.caf
    folder2
        audio files
            asdf3.caf
Again, please don't tell me to compress my audio files. That would only be a band-aid to a flawed design.
It seems like I could just move/copy all of the files into a directory using NSFileManager and then make that directory a package. Should I go down that path?
When an NSFileWrapper tree gets written out to disk, it will attempt to hard-link the original file to the new location, but only if you supply a parameter for the originalContentsURL.
It sounds like you're constructing the file wrapper programmatically (for the backup scenario), so your files are probably scattered all over the filesystem. This means that when you call writeToURL, you don't have an originalContentsURL, so the hard-link logic gets skipped and each file is loaded into memory so it can be rewritten.
So, if you want the hard-linking behavior, you need to find a way to provide an originalContentsURL. This is most easily done by supplying an appropriate URL to the initial writeToURL call.
Alternatively, you could try subclassing NSFileWrapper for regular files and giving them an NSURL that they internally hang on to. You'd need to override writeToURL to pass this new URL up to super, but that URL should be enough to trigger the hard-link code. You'd then use this subclass of NSFileWrapper for the large files you want hard-linked into place.
How does one create an empty directory via a yeoman generator?
I've looked at mem-fs-editor, but as far as I can tell, directories are only created when a child file is created. I've tried creating a file in a subdirectory and then deleting the file, but this doesn't work (I assume because mem-fs is built entirely in memory, so when it comes time to write to disk, empty directories are not written).
mem-fs-editor (the file library used by Yeoman) does not support empty folders. This is similar to how git works: internally, it only keeps track of files.
One option is to add a .gitkeep or other empty file in those directories. That's my recommended solution, as it also fixes the same issue you'll run into anyway once the project is under git.
Another option is to use mkdirp:
var mkdirp = require('mkdirp');
// In your generator
mkdirp.sync('/some/path/to/dir/');
I set the WinSCP temporary directory on my hard-drive, but after quitting WinSCP the files stored there get deleted.
Is it possible to prevent them from being deleted? So I can edit them or copy them later.
And if it's possible, can WinSCP automatically load those files, if they are newer than the ones on the server? This is optional, but it would be good.
Is it possible to prevent them from being deleted?
Yes, the option is named Keep temporary copies of remote files in deterministic paths.
Can WinSCP automatically load those files, if they are newer than the ones on the server?
WinSCP has a function, Keep remote directory up to date, that monitors a local folder and automatically uploads any changes to the server.
If you run two instances of WinSCP, you can combine these two features, but it's quite an unusual setup.
I've created an ISAPI Extension in Delphi which works just fine, but I am wondering if there is a best practice on how to store configuration settings? I can think of a number of ways to do this, but I would of course like to do it the best way. I might have looked in all the wrong places, but I can't find anything that helps me out...
Is an INI or XML file in the same directory as the DLL a good way? Or should I use the Windows registry? Or is it possible (and sensible) to put ISAPI Extension-specific configuration in web.config and thereby use the IIS Manager to configure it? Or something else?
I generally use GetModuleFileName(HInstance); to find out where the DLL is stored, and keep a file there (ini or xml). It's advisable to keep it, and the DLL, out of reach of IIS so it's not accessible over a URL.
You can use an INI file (make sure it's outside the \InetPub\ folder). Make sure you cache the INI file using TMemIniFile.
You can use an XML file. Make sure you read the XML file into a cache and release the file handle afterwards, to prevent locked files during concurrent reads. To check for changes while the ISAPI extension is loaded, check the date/time stamp of the XML file before re-reading it.
Another suggestion is to use an in-memory database and load/save the data file to and from disk.
How can I display an image from a file object? The file object holds the location of the image in the temporary upload directory.
I don't want to use any models.
It's for previewing a form that has a FileField.
The problem with most temporary files is that they don't exist as ordinary files: they're in a deleted state and will disappear entirely once the file handle is closed. It's your responsibility to move or copy the data out of them and into another file, a database, or a cache, whatever works best, in order to preserve it.
You don't need to use any models to make this work, but you will need to be able to write to a directory your web server will be able to access. Typically you can make a /uploads directory and copy the file there, removing it later on when it is no longer required.
That clean-up step is easily done by a cron job that deletes all files with an mtime older than a day or so, depending on your preference.