How do I test if a file is writable (CLI) - dart

I am taking my first steps in Dart today, and the first thing I'm unsure how to do is test whether a file passed as an argument to a CLI tool I'm writing is writable.
So the idea is that I have a tool that accepts an input directory and an output filename. It parses some files in the input directory, compiles the data into a meaningful JSON config, and saves it to the output file.
However, before doing anything, I want to run a sanity check to see that the given output file argument can actually be used as a writable file.
The way I decided to solve this is by opening the file for appending in a try-catch block:
try {
  File(output).writeAsStringSync('', mode: FileMode.append, flush: true);
} on FileSystemException catch (e) {
  // do something
}
However, I don't like this solution. Mainly, it creates the file if it doesn't already exist. Also, I don't see why I should write anything to a file when I just want to know whether it is writable.
What is the right way to do it in Dart?

You can use file.statSync().mode or file.statSync().modeString(). See FileStat.
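To illustrate what a mode check gives you (and why, as discussed below, it is only half the story), here is a language-agnostic sketch in Python; the helper names are my own:

```python
import os
import stat

def describe_mode(path):
    """Return the permission string, e.g. '-rw-r--r--'.
    Note: this says nothing about the *current* user's effective
    rights, and the answer can be stale by the time you write."""
    return stat.filemode(os.stat(path).st_mode)

def owner_writable(path):
    """Check only the owner-write bit of the mode."""
    return bool(os.stat(path).st_mode & stat.S_IWUSR)
```

Asking the OS directly (e.g. `os.access(path, os.W_OK)` in Python) is closer to the truth, but is still subject to the race conditions described in the next answer.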

This is actually quite hard to do reliably in any language. As Eiko points out, knowing the file permissions is only half the story, since the current user, group and process determines how those permissions apply.
Some edge cases that can occur are:
The file is writable when checked, but becomes unwritable by the time the writing needs to happen (e.g. another process changed the permissions).
The file is not writable when checked, but becomes writable by the time the writing needs to happen.
The file is write-only: it exists and is not readable, but can be written to.
The file doesn't exist and the user/process is not permitted to create a new file in that directory.
The file system has been mounted in read-only mode.
The parent directory/directories don't exist.
So anything you write may produce false positives or false negatives.
Your method of appending nothing is a good simple test. It can be made more complicated to address some of the issues, but there will always be cases where the answer is not what you might want.
For example, if you don't like creating the file before the actual writing, test if it exists first:
bool isWritable;
final f = File(filename);
if (f.existsSync()) {
  try {
    // try appending nothing
    f.writeAsStringSync('', mode: FileMode.append, flush: true);
    isWritable = true;
  } on FileSystemException {
    isWritable = false;
  }
} else {
  isWritable = ???; // do you prefer a false positive or a false negative?
  // check if the parent directory exists?
}
// isWritable now, but might not be by the time writing happens
Or, if the file didn't already exist, delete it after testing:
bool isWritable;
final f = File(filename);
final didExist = f.existsSync();
try {
  // try appending nothing
  f.writeAsStringSync('', mode: FileMode.append, flush: true);
  isWritable = true;
  if (!didExist) {
    f.deleteSync(); // remove the file the test created
  }
} on FileSystemException {
  isWritable = false;
}
// isWritable now, but might not be by the time writing happens
Dart introduces an extra complication: asynchronous code.
If you use the openWrite method, it opens a stream, so any problems writing to the file are not raised when the file is opened. They occur later, when using the stream or closing it, which can be far away from the file-opening code where you want them detected. Or worse, they occur in a different zone and cannot be caught.
One useful trick there is to open it twice. The first is used to detect if the file is writable when it is closed. The second is to obtain the stream that will be used for writing.
try {
  final f = File(filename);
  f.parent.createSync(recursive: true); // create parent(s) if they don't exist
  final tmp = f.openWrite(mode: FileMode.append);
  await tmp.flush();
  await tmp.close(); // errors from opening will be thrown at this point
  // Open it again
  sinkForWritingToTheFile = f.openWrite(mode: FileMode.append);
} on FileSystemException catch (e) {
  // exception from `close` will be caught here
  // exception from the second `openWrite` cannot be caught here
  ...
}

Related

Jena read hook not invoked upon duplicate import read

My problem will probably be explained better with code.
Consider the snippet below:
// First read
OntModel m1 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m1,uri0);
m1.loadImports();
// Second read (from the same URI)
OntModel m2 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m2,uri0);
m2.loadImports();
where uri0 points to a valid RDF file describing an ontology model with n imports.
and the following custom ReadHook (which has been set in advance):
@Override
public String beforeRead(Model model, String source, OntDocumentManager odm) {
    System.out.println("BEFORE READ CALLED: " + source);
    return source;
}
Global FileManager and OntDocumentManager are used with the following settings:
processImports = true;
caching = true;
If I run the snippet above, the model will be read from uri0 and beforeRead will be invoked exactly n times (once for each import).
However, in the second read, beforeRead won't be invoked even once.
What should I reset in order for Jena to invoke beforeRead in the second read as well?
What I have tried so far:
At first I thought it was due to caching being on, but turning it off or clearing it between the first and second read didn't do anything.
I have also tried removing all ignoredImport records from m1. Nothing changed.
Finally got to solve this. The problem was in ModelFactory.createOntologyModel(). Ultimately, this translates to ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RDFS_INF, null).
All ontology models created with the static OntModelSpec.OWL_MEM_RDFS_INF share their ImportsModelMaker and some other objects, which results in shared state. Apparently, this state prevented the read hook from being invoked again for the same imports.
This can be prevented by creating a custom, independent and non-static OntModelSpec instance and using it when creating an OntModel, for example:
new OntModelSpec( ModelFactory.createMemModelMaker(), new OntDocumentManager(), RDFSRuleReasonerFactory.theInstance(), ProfileRegistry.OWL_LANG );

Caching streams in Functional Reactive Programming

I have an application which is written entirely using the FRP paradigm and I think I am having performance issues due to the way that I am creating the streams. It is written in Haxe but the problem is not language specific.
For example, I have this function which returns a stream that resolves every time a config file is updated for that specific section like the following:
function getConfigSection(section:String) : Stream<Map<String, String>> {
  return configFileUpdated()
    .then(filterForSectionChanged(section))
    .then(readFile)
    .then(parseYaml);
}
In promhx, the reactive programming library I am using, each step of the chain should remember its last resolved value, but I think every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using it rather than with the library itself.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance: it takes up over 50% of the CPU time according to profiling.
As a fix I have done something like the following using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
  var cachedStream = this._streamCache.get(section);
  if (cachedStream != null) {
    return cachedStream;
  }
  var stream = configFileUpdated()
    .filter(sectionFilter(section))
    .then(readFile)
    .then(parseYaml);
  this._streamCache.set(section, stream);
  return stream;
}
This might be a good solution to the problem but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream like a cache function.
Another way I could do it is to create the streams before hand and store them in fields that can be accessed by consumers. I don't like this approach because I don't want to make a field for every config section, I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
  static function main() {
    var sideeffects = 0;
    var cached = memoize(function (x) return x + sideeffects++);
    cached(1);
    trace(sideeffects); // 1
    cached(1);
    trace(sideeffects); // 1
    cached(3);
    trace(sideeffects); // 2
    cached(3);
    trace(sideeffects); // 2
  }
  @:generic static function memoize<In, Out>(f:In->Out):In->Out {
    var m = new Map<In, Out>();
    return function (input:In)
      return switch m[input] {
        case null: m[input] = f(input);
        case output: output;
      }
  }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.
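One possible cache-management tweak, sketched here in Python for illustration (the size cap and LRU eviction strategy are my own choices, not part of the answer above):

```python
from collections import OrderedDict

def memoize(f, max_size=None):
    """Memoize a single-argument function, optionally capping the
    cache and evicting the least recently used entry when full."""
    cache = OrderedDict()

    def wrapper(x):
        if x in cache:
            cache.move_to_end(x)  # mark as most recently used
            return cache[x]
        result = cache[x] = f(x)
        if max_size is not None and len(cache) > max_size:
            cache.popitem(last=False)  # drop least recently used
        return result

    return wrapper
```

With `max_size=None` this behaves like the Haxe version above; with a cap it bounds memory at the cost of occasional recomputation.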

How to handle errors while using Glib.Settings in Vala?

I am using Glib.Settings in my Vala application, and I want to make sure that my program will work even when the schema or key is not available. So I've added a try/catch block, but if I use a key that doesn't exist, the program segfaults. As I understand it, it doesn't even reach the catch statement.
Here is the function that uses settings:
GLib.Settings settings;
string token = "";
try
{
    settings = new GLib.Settings (my_scheme);
    token = settings.get_string("token1");
}
catch (Error e)
{
    print("error");
    token = "";
}
return token;
And the program output is:
(main:27194): GLib-GIO-ERROR **: Settings schema 'my_scheme' does not contain a key named 'token1'
Trace/breakpoint trap (core dumped)
(of course I'm using my real scheme string instead of my_scheme)
So can you suggest where I'm going wrong?
I know this is super late, but I was looking for the same solution, so I thought I'd share one. As @apmasell said, the GLib.Settings methods don't throw exceptions; they just abort instead.
However, you can do a SettingsSchemaSource.lookup to make sure the key exists first. You can then also use has_key for specific keys. For example,
var settings_schema = SettingsSchemaSource.get_default ().lookup ("my_scheme", false);
if (settings_schema != null) {
    if (settings_schema.has_key ("token1")) {
        var settings = new GLib.Settings ("my_scheme");
        token = settings.get_string("token1");
    } else {
        critical ("Key does not exist");
    }
} else {
    critical ("Schema does not exist");
}
The methods in GLib.Settings, including get_string, do not throw exceptions; they call abort inside the library. This is not an ideal design, but there isn't anything you can do about it.
In this case, the correct thing to do is fix your schema, install into /usr/share/glib-2.0/schemas and run glib-compile-schemas on that directory (as root).
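The install step mentioned above typically looks something like this (the schema file name here is hypothetical, and the schema directory can vary by distribution):

```shell
# copy the schema definition into the system schema directory
sudo cp my.scheme.gschema.xml /usr/share/glib-2.0/schemas/
# recompile the schema cache so GLib.Settings can find the schema
sudo glib-compile-schemas /usr/share/glib-2.0/schemas/
```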
Vala only has checked exceptions, so, unlike in C#, a method must declare that it throws or it cannot do so. You can always double-check the Valadoc or the VAPI to see.

Adobe Air null object reference only on iOS

I am building a fairly large Adobe AIR application that targets iOS
The following exception is appearing in my file loading class, but only on iOS (PC works fine):
[Fault] exception, information=TypeError: Error #1009: Cannot access a property or method of a null object reference.
This occurs during the "onComplete" call in the following function. It happened suddenly and there were no recent code changes that would have affected that area of the application.
Files are loaded at the beginning of each "scene". Each scene uses the loader to load several files (asset lists, layouts, etc.). This exception only happens on the second file of the third scene to be loaded. The invocation of file loading is handled by a manager and is identical for all scenes. The data for the erroneous load is not corrupt and is successfully loaded before the exception on the onComplete call.
private function onFileLoaded( evt:Event ):void
{
    trace( "onFileLoaded" );
    var fs:FileStream = evt.currentTarget as FileStream;
    var id:uint = m_fileStreamToId[ fs ];
    var onComplete:Function = m_fileStreamToCallback[ fs ];
    var retVal:Object = null;
    var success:Boolean = false;
    try
    {
        retVal = JSON.parse( fs.readUTFBytes( fs.bytesAvailable ) );
        success = true;
    }
    catch ( error:Error )
    {
        trace( error );
        retVal = null;
    }
    fs.close();
    delete m_fileStreamToId[ fs ];
    delete m_fileStreamToCallback[ fs ];
    onComplete( id, retVal, success );
}
onFileLoaded is only called by:
private function internalAsyncLoadFileFromDisk( id:uint, filePath:File, onComplete:Function ):void
{
    var fs:FileStream = new FileStream();
    fs.addEventListener( Event.COMPLETE, onFileLoaded );
    fs.addEventListener( IOErrorEvent.IO_ERROR, onIoError );
    m_fileStreamToId[ fs ] = id;
    m_fileStreamToCallback[ fs ] = onComplete;
    fs.openAsync( filePath, FileMode.READ );
}
and the onComplete function argument is always a local private function.
When the debugger announces the null object reference and points to onComplete, it should be noted that onComplete is not null and the class encapsulating the functions has not been disposed of. Also, I do not see the "onFileLoaded" printed in the trace.
The m_fileStreamToCallback and m_fileStreamToId were created to remove the use of nested functions during file loading. I had experienced null object exceptions when attempting to access member variables as well as cross-scope local variables from within nested anonymous functions on iOS (even though it always works fine on PC).
Lastly, when I try to step into the file loading class with the debugger before the erroneous call, the debugger will always throw an internal exception and disconnect from the application. It only throws this exception before the erroneous call. The debugger is able to enter it successfully for all previous loads. It is also able to break inside the erroneous function when the null object error triggers. It simply cannot enter the class by stepping into it.
Environment details:
Adobe AIR: 14
Apache Flex SDK: 4.12.1
Editor: FlashDevelop
Build system: Ant
Build system OS: Windows 7 x64
Target Device: iPad 4
Target OS: iOS 7
Update 1
The following is the public interface and the onComplete function. So cleanupAndFinalCallback is the function that is supposedly null. I will also add that I am able to successfully enter this scene from another path through the application. If I enter via multiplayer, it crashes when loading the layout. When I enter from single player it does not. Both paths are loading the same file from the disk.
//! Async load the JSON file from the disk.
//! @param onComplete function( functorArgs:*, retVal:Object, success:Boolean )
public function asyncLoadFileFromDisk( filePath:File, onComplete:CallbackFunctor ):void
{
    var newId:uint = m_idGenerator.obtainId();
    m_idToCallback[ newId ] = onComplete;
    internalAsyncLoadFileFromDisk( newId, filePath, cleanupAndFinalCallback );
}
private function cleanupAndFinalCallback( id:uint, retVal:Object, success:Boolean ):void
{
    var onComplete:CallbackFunctor = m_idToCallback[ id ];
    delete m_idToCallback[ id ];
    m_idGenerator.releaseId( id );
    onComplete.executeWithArgs( retVal, success );
}
Update 2
Stepping through the app near the error causes the debugger to crash. However, if I set breakpoints along the execution path, I can jump (F5) through execution near the error. So, as I stated above, onComplete is not null, despite what the error reports. I was able to execute it and pass further along in execution. At some point, the debugger throws the null reference error and snaps back to that point in the code. I feel there may be something funny going on with the stack.
I suspected that there may have been some issue with the stack. So, immediately after the file was loaded and the scene transition occurred, I used a delayed call of 1s to make sure that the load call was a real asynchronous call.
It turned out that the problem was with the debugger. I still received the "null object reference" error. However, this time, it reported it in a swc that was used in my app. It appears that this report is correct.
When I remove the delayed call, the program reverts to reporting the incorrect error.

Is there a better way to "lock" a Port as a semaphore in Dart than this example?

Is it possible in Dart to "lock" a Port other than by starting a server on that Port? In other words, the Port is acting as a semaphore. Alternatively, is there another way to achieve the same result?
I posted a question asking for a solution to this problem, and Fox32 suggested starting a server on a specific Port and determining in that way whether another instance of the program is already running. I need to determine the first instance in order to start actual processing, rather than merely whether the program is running, and that solution works.
While that solution works well, it appears to me that there should be a more tailored solution. Example code is below:
/*
 * Attempt to connect to a specific port to determine if this is the first process.
 */
async.Future<bool> fTestIfFirstInstance() {
  async.Completer<bool> oCompleter = new async.Completer<bool>();
  const String S_HOST = "127.0.0.1"; // ie: localhost
  const int I_PORT = 8087;
  HttpServer.bind(S_HOST, I_PORT).then((oHtServer) {
    ogHtServer = oHtServer; // copy to global
    oCompleter.complete(true); // this is the first process
    return;
  }).catchError((oError) {
    oCompleter.complete(false); // this is NOT the first process
    return;
  });
  return oCompleter.future;
}
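The same port-as-semaphore idea, sketched outside Dart for comparison (Python; the port number is arbitrary):

```python
import socket

def try_acquire_port_lock(port):
    """Bind a loopback socket as a single-instance guard.
    Returns the socket (keep a reference to it!) if this is the
    first instance, or None if the port is already taken."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        s.listen(1)
        return s  # closing this socket releases the "lock"
    except OSError:  # EADDRINUSE: another instance holds the port
        s.close()
        return None
```

Unlike a lock file, the OS releases the port automatically when the process dies, which is the main attraction of this approach.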
This is often done by using a lock file, e.g. '/tmp/my_program.lock' or '~/.my_program.lock' depending on whether you want a global or a per-user lock.
In Dart it would be as simple as:
bool isRunning() {
  return new File(lockPath).existsSync();
}
Starting:
void createLock() {
  if (isRunning()) throw "Lock file '$lockPath' exists, program may be running";
  new File(lockPath).createSync();
}
And when closing the program:
void deleteLock() {
  new File(lockPath).deleteSync();
}
Something to remember is that while the HttpServer will be closed when the program exits, the lock file won't be deleted. This can be worked around by writing the program's PID to the lock file when creating it, and checking in isRunning whether that PID is still alive. If it's not, delete the file and return false.
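That PID workaround might look like the following sketch (Python, POSIX-style liveness check; the lock path and error handling are my own choices):

```python
import os

LOCK_PATH = "/tmp/my_program.lock"  # hypothetical location

def is_running():
    """True if the lock file exists and names a live process."""
    try:
        with open(LOCK_PATH) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return False
    try:
        os.kill(pid, 0)  # signal 0: existence test, sends nothing
        return True
    except ProcessLookupError:
        os.remove(LOCK_PATH)  # stale lock left by a crashed process
        return False
    except PermissionError:
        return True  # process exists but belongs to another user

def create_lock():
    if is_running():
        raise RuntimeError("another instance appears to be running")
    with open(LOCK_PATH, "w") as f:
        f.write(str(os.getpid()))

def delete_lock():
    os.remove(LOCK_PATH)
```

Note the PID check itself is racy (the PID could be reused by an unrelated process), so this is best-effort, like everything else discussed here.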
I'm unsure whether this (RawServerSocket) is any "better" than the HttpServer solution, but perhaps it makes more sense.
I think a means to simply "lock" and unlock a Port would be worthwhile. It provides a good way of using a semaphore, IMHO.
/*
 * Attempt to connect to a specific port to determine if this is the first process.
 */
async.Future<bool> fTestIfFirstInstance() {
  async.Completer<bool> oCompleter = new async.Completer<bool>();
  RawServerSocket.bind("127.0.0.1", 8087).then((oSocket) {
    ogSocket = oSocket; // assign to global
    oCompleter.complete(true);
  }).catchError((oError) {
    oCompleter.complete(false);
  });
  return oCompleter.future;
}
