I have an R package for internal use. One of its files queries our database, cleans the data, then saves it with devtools::use_data(sql_data).
The problem is, this file runs every time I load the library. I would like to hide this file so it does not run on load; instead, I am thinking of writing a function that sources the file only when I want the data updated.
How can I prevent this file from running every time I run library(my_package)?
I am using fuseki-server to query a large chunk of data. The data is in .nt format, and I have stored it in a folder using tdbloader (the folder contains .dat and .idt files). Now when I try to load the data using:
./fuseki-server --loc="/path/to/database_folder/" /ds
I get an error message saying that the database I'm trying to load is locked by another process with PID = xxxxx and therefore can't be loaded. I have checked that no other instance of fuseki-server is running on my system. Is there a way to see which process is accessing that database folder and free it, so that I can load the folder into fuseki-server?
I also tried loading another .nt file the same way, by running tdbloader and then pointing --loc at the resulting folder, and it works fine. The lock error appears only on the above dataset.
I’m writing a program which continuously looks for new files in a directory. After it extracts data from each file and processes it, the files are moved to another directory containing all scanned files.
Imagine I’m copying a new file into the scanned directory while my program is running. Can a file which has not finished copying be processed (and then produce unforeseen results), or is it locked by the system?
Now, imagine two instances of the program are running on two different computers, continuously scanning the same folder. What can happen if both instances try to move the same file at once?
Thank you for your help.
I have a project that does much the same thing. Another application is receiving data from a feed and writing files to a folder. My application is processing those files by opening them, acting on them in some way, writing them to another folder, then deleting them.
The strategy I used in the application that does the processing and deleting is to simply open them like this:
TFileStream.Create(AFileName, fmOpenRead OR fmShareDenyWrite);
If the file that is being opened is still being written by another process, the above will fail, and can likely be opened successfully on a subsequent iteration.
I have an app that generates some static data by importing a JSON file into an SQLite DB. When running the app, the DB file has data in it and is loaded properly. I usually go to the build folder, usually under
/Library/Application Support/iPhone Simulator/7.0.3-64/Applications/
and inspect the SQLite file to verify that it does have data.
Now I copy that same folder, paste it on the desktop, and open it in the same SQLite browser, and the data is gone. Why, I don't know!
I notice that for every SQLite file, a -shm and a -wal file are generated.
Why is this happening?
OK, so I did a little more investigating. I have two scenarios:
1: I put a breakpoint right after I finish generating the SQLite file, then go to the build folder. In the build folder the DB has data in it, but if I copy that file to the desktop, the DB loses its data.
2: I don't put a breakpoint and let the app finish normally (gracefully), then go to the build folder. The DB file has data, and when I copy and paste it to the desktop it still retains the data.
So I assume there's something that happens when Xcode exits (or the app closes normally) that I am missing out on when I pause at a breakpoint!
Starting with iOS 7, the SQLite database is used in journaling (WAL) mode by default, which means that all changes to the database are written to separate journal files (the -wal and -shm files you noticed), not to the database file directly. You can change the behaviour back to the "old" way; have a look here for a complete explanation:
Core Data and iOS 7: Different behavior of persistent store
I have an app I'm working on that uses static data from an SQLite database to do various things. I only need read-only access to the database, but depending on the episode the user picks on the first screen, I want it to use a different database file, and I want the list of available episodes to be updateable on the fly. I've already gotten help getting the episode list updated and the proper content downloaded and stored in separate folders. I know that when an episode is selected, I could delete the SQL file in the Documents folder and copy in the new one each time, and that would work well enough for what I'm trying to do. But it seems like a lot of extra work to check for the file, delete it, copy in the new one, and then open it from there every time the user picks a different episode. I also don't want to merge all the SQL files into one, as that would be a bigger hassle than the first route, especially if this app stays around long enough to accumulate a long list of episodes.
So my question here is: can I get at least read-only access to an SQL file that I've downloaded (or one in the bundle, for testing) without having to first copy it to Documents? And if so, how would I open the file?
Can I get at least read-only access to an SQL file that I've downloaded (or one in the bundle for testing) without having to first copy it to the documents directory?
Yes. Files in the app bundle are readable (if they weren't, there would be no point in storing files in the bundle).
And if so, how would I open the file?
It's not clear what you're asking here. If you want to perform SQL queries on the file, you should use the sqlite3 library, which is available on iOS.
How can I display an image from a file object? The file object holds the location of the image in the temporary upload directory.
I don't want to use any models.
It's for previewing a form that has a FileField.
The problem with most temporary files is that they don't persist. They're in a deleted state and will disappear entirely once the file handle is closed. It's your responsibility to copy the data out of them and into another file, a database, or a cache, whatever works best, in order to preserve it.
You don't need any models to make this work, but you will need to be able to write to a directory your web server can access. Typically you can make an /uploads directory and copy the file there, removing it later when it is no longer required.
That clean-up step is easily handled by a cron job that deletes all files with an mtime older than a day or so, depending on your preference.