I am using fuseki-server to query a large chunk of data. The data is in .nt format, and I have loaded it into a folder with tdbloader (the folder contains .dat and .idn files). Now, when I try to serve the data using:
./fuseki-server --loc="/path/to/database_folder/" /ds
I get an error message saying that the database I'm trying to load is locked by another process with PID = xxxxx and therefore can't be loaded. I have checked that no other instance of fuseki-server is running on my system. Is there a way to see which process is accessing that database folder and to free it so that I can load the database into fuseki-server?
I also tried loading another .nt file the same way, using tdbloader and then pointing --loc at its folder, and it works fine. The lock error appears only with the dataset above.
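In case it helps, a hedged sketch of how one might track down the holder, assuming a Linux host with the standard fuser/lsof tools installed (the path is the placeholder from above):

fuser -v /path/to/database_folder/*
lsof +D /path/to/database_folder

If neither command shows a live process, the lock is probably stale, e.g. left behind by a crashed JVM; recent Jena versions record the owning PID in a lock file inside the database folder, which can be removed once you are certain nothing is actually using the database.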
I'm currently doing Invoicing and Printing setup on an SAP demo system. I've managed to create Smart Forms based on the standard ones. The problem starts with printing using the FPCOPARA transaction and LP01 as the output device: I was able to generate a spool (and view it as well), but nothing was printed (no actual file).
I just want to have a file from that Smart Form stored in AL11 and be able to archive it later on. Do you have any idea how I can proceed with this?
Thanks
We actually have an in-house-developed program for this exact task. I don't have permission to publish its source code, but it involves (a sketch follows the list):
Reading the list of spool requests from database table TSP01.
Using the function module RSTS_GET_ATTRIBUTES to obtain the type of the spool request.
Calling the function modules CONVERT_OTFSPOOLJOB_2_PDF or CONVERT_ABAPSPOOLJOB_2_PDF, depending on the type determined by the previous function module. They return a table containing the content of the spool request in the PDF format.
Writing the table returned by the previous function modules to a file using the ABAP statements OPEN DATASET and TRANSFER.
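For illustration only, a minimal, untested sketch of those four steps. The function-module parameter names are written from memory (compare the standard download program RSTXPDFT4) and should be verified in SE37; the report name, output path and 134-byte TLINE width are assumptions:

REPORT zspool_to_pdf.

PARAMETERS: p_spool TYPE tsp01-rqident OBLIGATORY,
            p_file(128) TYPE c LOWER CASE DEFAULT '/tmp/spool.pdf'.

DATA: ls_tsp01    TYPE tsp01,
      lv_type(10) TYPE c,
      lt_pdf      TYPE TABLE OF tline,
      ls_pdf      TYPE tline,
      lv_bytes    TYPE i,
      lv_left     TYPE i,
      lv_len      TYPE i.

* Step 1: read the spool request from TSP01.
SELECT SINGLE * FROM tsp01 INTO ls_tsp01 WHERE rqident = p_spool.

* Step 2: ask TemSe for the spool type (OTF vs. ABAP list).
CALL FUNCTION 'RSTS_GET_ATTRIBUTES'
  EXPORTING
    authority = 'SP01'
    client    = ls_tsp01-rqclient
    name      = ls_tsp01-rqo1name
    part      = 1
  IMPORTING
    objtype   = lv_type.

* Step 3: convert to PDF with the function module matching the type.
IF lv_type(3) = 'OTF'.
  CALL FUNCTION 'CONVERT_OTFSPOOLJOB_2_PDF'
    EXPORTING  src_spoolid   = p_spool
    IMPORTING  pdf_bytecount = lv_bytes
    TABLES     pdf           = lt_pdf.
ELSE.
  CALL FUNCTION 'CONVERT_ABAPSPOOLJOB_2_PDF'
    EXPORTING  src_spoolid   = p_spool
    IMPORTING  pdf_bytecount = lv_bytes
    TABLES     pdf           = lt_pdf.
ENDIF.

* Step 4: write the PDF content to an AL11 file, truncating the final
* 134-byte TLINE row to the real byte count. On Unicode systems the
* TRANSFER of character data may need LEGACY BINARY MODE instead.
OPEN DATASET p_file FOR OUTPUT IN BINARY MODE.
lv_left = lv_bytes.
LOOP AT lt_pdf INTO ls_pdf.
  IF lv_left > 134.
    lv_len = 134.
  ELSE.
    lv_len = lv_left.
  ENDIF.
  TRANSFER ls_pdf TO p_file LENGTH lv_len.
  lv_left = lv_left - lv_len.
ENDLOOP.
CLOSE DATASET p_file.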
I have a package for internal use. One of the files queries our database, cleans the data, and then saves it with devtools::use_data(sql_data).
The problem is, this file is run every time I load the library. I would like to hide this file so it does not run on load; instead, I am thinking of writing a function to source the file whenever I want the data updated (see the sketch below).
How can I prevent this file from running every time I run library(my_package)?
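One hedged way to do this (the file and package names below are hypothetical): top-level code in files under R/ is evaluated when the package is built or loaded, so move the query script somewhere that is shipped but never sourced automatically, such as inst/scripts/, and wrap it in a function:

# In R/update_data.R of the package.
update_sql_data <- function() {
  # inst/ contents are installed verbatim but not evaluated at load time.
  script <- system.file("scripts", "build_sql_data.R", package = "my_package")
  source(script)  # runs the query/clean/use_data steps on demand
}

Another common convention is a data-raw/ directory listed in .Rbuildignore (recent devtools versions set this up via use_data_raw()), which keeps the script in the source repository but out of the installed package entirely.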
I am using a SQL-based store in one of my applications with the Core Data framework. I have not enabled any file protection for the persistent store (using NSFileProtectionKey), but I am unable to open my database file, which is stored in a directory under the 'Caches' folder in Library.
Have you ever come across such an issue? Below is the image I get when I try to open the sql file. One difference from iOS 6 I can see in that folder, however, is that there are two additional files (-shm and -wal) with the same name as the sql store file.
Could anyone please help me find a solution to open the file?
The -shm and -wal files are journal files created by write-ahead logging. You need all three files for a complete database. I haven't seen an encryption error falsely triggered by not obeying this rule, but it doesn't seem out of the question.
See https://developer.apple.com/library/mac/qa/qa1809/_index.html and http://asciiwwdc.com/2013/sessions/207.
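If you need the store back to a single file that external SQLite tools can open, QA1809 above describes turning write-ahead logging off via a pragma when the store is added. A sketch, where coordinator and storeURL are whatever your existing Core Data stack uses:

// Force rollback journaling so no -shm/-wal companion files are used.
NSDictionary *options = @{ NSSQLitePragmasOption : @{ @"journal_mode" : @"DELETE" } };
NSError *error = nil;
[coordinator addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:nil
                                    URL:storeURL
                                options:options
                                  error:&error];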
I have a D2007 application that uses Windows.CopyFile to copy MS Word and PowerPoint files from one network folder to another. Our organization is migrating from Vista to Windows 7. One of my migrated users got an error message that displayed a partial local folder path (C:\Users\(username)\...\A100203.doc) during the copy. Does the CopyFile function cache a local copy of the document when copying between network folders, or is it a direct write? I have never seen this error before, and the application has been running for years on Win95, Win98, Win2000, WinXP and Vista.
Windows.CopyFile does NOT cache the file on your hard drive... instead, it instructs Windows to handle the copying of the file itself (rather than you managing the streams in your own program). The output (destination) file buffer is opened, and the input buffer is simply read and written. Essentially, the source file is spooled into system memory and then offloaded onto the destination; at no point is an additional cache file created (that would slow file copying down).
You need to provide more specific information about your error, such as the text or an actual screenshot of the offending error message. This will allow people to provide more useful answers.
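If the error can be reproduced, one way to capture the exact failure is to check GetLastError when CopyFile returns False; a sketch, with hypothetical wrapper and parameter names:

uses Windows, SysUtils;

procedure CopyWithDiagnostics(const Src, Dst: string);
var
  Err: DWORD;
begin
  if not Windows.CopyFile(PChar(Src), PChar(Dst), False) then
  begin
    Err := GetLastError;
    // Surface the underlying Windows error text instead of a partial path.
    raise Exception.CreateFmt('CopyFile failed (%d): %s',
      [Err, SysErrorMessage(Err)]);
  end;
end;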
The user that launches the copy will require read access to the source and write access to the target, regardless of caching (if the user has read access to the file, the file could be written to a local cache anyway, so caching vs. no caching is irrelevant).
It's basic security to disallow copying files/directories between machines merely because the security attributes of the two machines happen to be compatible.
There's little else to say without the complete text of the error message.
I wanted to run (using Cassini) two copies of my web application from the same computer, not unreasonable (or so I thought!): one using port 80, the other using port 81. So I did the following:
Stopped Cassini and SQL Express
Copy and paste of the site root folder (and renamed it)
Opened Cassini Explorer, set up a new site on port 81 and pointed it at the copied location
Changed the web.config of the copied site so that the connection string used "Database=NewAlias", because SQL Express can't attach two databases with the same alias (a reconstruction of the string follows this list)
Started Cassini and SQL Express again
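(For illustration, the changed connection string in the copied site's web.config would be something of this shape; the name and exact attributes below are reconstructions, not the actual file:)

<connectionStrings>
  <add name="MyDb"
       connectionString="Data Source=.\SQLEXPRESS;Database=NewAlias;AttachDbFilename=C:\site2\App_Data\db.mdf;Integrated Security=True;User Instance=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>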
When I browsed to the NEW site, the first thing that came up was:
Unable to open the physical file "C:\site1\App_Data\db_log.ldf".
Operating system error 32: "32(The process cannot access the file because it is being used by another process.)".
Cannot create file 'C:\site2\App_Data\db_log.LDF' because it already exists. Change the file path or the file name, and retry the operation.
Cannot open database "NewAlias" requested by the login. The login failed.
Login failed for user 'NT AUTHORITY\SYSTEM'.
File activation failure. The physical file name "C:\site1\App_Data\db_log.ldf" may be incorrect.
It's trying to open the mdf from the OLD location (even though the web.config specifies the exact mdf path at the new location) but trying to create the log in the NEW location. Then, to top it all off, it drops the hint that it cannot access the ldf in the OLD location, or maybe can't log into it.
Well done Microsoft and your team once again for some truly intuitive errors! Can anyone help?
I don't think you can just copy a live database via its files. If you detach it first (with sp_detach_db), you can copy the files and then attach them as a new database:
sp_detach_db OldDb
Then copy the folder and reattach (with sp_attach_db) the db files as a new database:
sp_attach_db 'NewDb', 'C:\site2\App_Data\db_data.mdf', 'C:\site2\App_Data\db_log.ldf'
Another big problem that sometimes occurs when doing this kind of thing (and did in the scenario above, as vaguely alluded to by the error message) is that although the copied MDF file is being used, it's still linked to the original LDF (log file). You can run this command to list which files the database you are connected to is using:
sp_helpfile
Which will give you something like this as a response:
name    fileid  filename                      filegroup  size      maxsize        growth   usage
======  ======  ============================  =========  ========  =============  =======  =========
db      1       C:\site2\App_Data\db.mdf      PRIMARY    24192 KB  Unlimited      1024 KB  data only
db_log  2       C:\site1\App_Data\db_log.ldf  NULL       78080 KB  2147483648 KB  10%      log only
You can see from the output that the log file is still the old site's, which will obviously cause issues, so you can change it to point at the copied log file as follows:
ALTER DATABASE NewAlias MODIFY FILE (NAME = db_log, FILENAME='c:\site2\App_Data\db_log.ldf')
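Note that MODIFY FILE only updates SQL Server's catalog; the new path takes effect once the database is restarted, and the physical file has to exist there. A sketch of completing the move, assuming you can take the database offline briefly:

ALTER DATABASE NewAlias SET OFFLINE;
-- copy or move the physical log file to C:\site2\App_Data\db_log.ldf here
ALTER DATABASE NewAlias SET ONLINE;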