mnesia_fatal, "Cannot open log file" - erlang

I'm getting the following error when I try to put ~1000 records into one table. I've put >1000 records in a table before. There's plenty of space on the disk. Any ideas what would cause this write to fail? It usually fails trying to write PREVIOUS.LOG, but I have also seen it fail writing to one of the table's .DCL files.
Got event: {mnesia_system_event,
               {mnesia_fatal,"Cannot open log file ~p: ~p~n",
                   ["/opt/notifier/rel/notifier/Mnesia.notifier#ql1erldev1/PREVIOUS.LOG",
                    {file_error,
                        "/opt/notifier/rel/notifier/Mnesia.notifier#ql1erldev1/PREVIOUS.LOG",
                        emfile}],

Related

How to extend tAssertCatcher to catch more errors than its default in Talend?

I have a job which, when I run it, gives me this:
[ERROR] 11:47:54 org.talend.components.snowflake.runtime.SnowflakeRowStandalone- Query execution has
failed. Please validate your query.
net.snowflake.client.jdbc.SnowflakeSQLException: Execution error in store procedure
SP_GENERAL:
Numeric value '' is not recognized
But when I try to catch this error, I can't.
I tried tAssertCatcher, tLogCatcher, and tStatCatcher, and nothing has worked.
Could anybody help, please?
OK, I finally found a solution.
tAssertCatcher does not catch errors from stored procedures, so I created a joblet which contains the tAssertCatcher and, in addition, an input with a schema almost identical to tAssertCatcher's schema:
moment, pid, project, job, language, origin, status, substatus, errorCode, errorMessage.
Now I connected the tDBrow component whose errors I want to catch to the joblet with a Reject row, passing errorCode and errorMessage; these become the exception and its description.
In the joblet I used a tMap to fill most of the columns from variables (pid, status, jobName, projectName) and passed the rest hardcoded.
So every time there is an error in a stored procedure I get two detailed records: one that I created manually, and another one with UNEXPECTED-EXCEPTION.
(Screenshot: the additional part in the joblet.)

Detect a parameter change after reboot on ESP WiFi (Lua)

I want to change the behaviour of my ESP module if one of my parameters was changed before the module was restarted. I mean something like this:
if (????) then
    print("default value")
else
    print("modified value")
end
First I thought of writing a flag into a file, but that causes an error during boot if the file does not exist yet.
Any better ideas?
If you want to store values beyond a reboot, you have to keep them in some non-volatile memory, so using a file, as you already suggested, is a good way.
Unfortunately you did not provide the error message you get when the file does not exist yet, and you did not say whether it is the flag or the file that is missing.
What you have to do is handle the error: if the file does not exist, ask the user to create a new one, or create a file with default content from your program.
The same goes for the flag: if the file does not contain a flag yet, use a default value or ask the user to provide one.
It's not bad or wrong to get errors, as long as you learn from them or handle them properly.
io.open(filename[,mode]) returns nil plus an error message in case of an error.
So simply do something like:
local fileName = "C:\\superfile.txt"
local fileHandle, errorMsg = io.open(fileName)
if not fileHandle then
    print("File access error: ", errorMsg)
    -- add some error handling here
end
So in case you don't have that file, you'll get:
File access error: C:\superfile.txt: No such file or directory

SQLite occasionally fails to create :memory: database

In our unit testing suite, we create and destroy a large number of SQLite databases that use the path of ":memory:". Occasionally, and only when running on the iOS simulator, the creation of those databases fails with the rather enigmatic message:
Database ":memory:": unable to open database file
99% of the time, these requests succeed. (Subsequent tests within the same test run typically succeed after this failure occurs.) But when you're using this in an automated build-acceptance test, you want 100%.
We've instrumented for memory consumption (it's within normal limits) and disk-space availability (20GB+ available).
Any ideas?
UPDATE: Captured this happening with extra logging per Richard's suggestion below. Here's the log output:
SQLITE ERROR: (28) attempt to open "/Users/xxx/Library/Developer/CoreSimulator/Devices/CF762060-7D23-4C79-A466-7F20AB6233E7/data/Containers/Data/Application/582E1ED0-81E0-4CC7-A6F6-DBEBC101BBE8/tmp/etilqs_1ghbf1MSTa8ilSj" as
SQLITE ERROR: (14) cannot open file at line 30595 of [f66f7a17b7]
SQLITE ERROR: (14) os_unix.c:30595: (17) open(/Users/xxx/Library/Developer/CoreSimulator/Devices/CF762060-7D23-4C79-A466-7F20AB6233E7/data/Containers/Data/Application/582E1ED0-81E0-4CC7-A6F6-DBEBC101BBE8/tmp/etilqs_1ghbf1MST
I've noticed that even a :memory: database will create files on disk if you create a temporary table. The temporary file names on unix systems are generated by a PRNG, so there is a non-zero chance of a name collision if lots and lots of temporary files are created simultaneously. Or, if the disk is full, the create could fail. Or the unix temp directory might not be accessible, either because it has been deleted or because its permissions are invalid.
For example, I turned on several loggers in the sqlite3 command-line shell by adding these command-line arguments to llvm-gcc: -DSQLITE_DEBUG_OS_TRACE=1 -DSQLITE_TEST=1 -DSQLITE_DEBUG=1. Then I observed a temp file being created from the command line using this SQL:
$ ./sqlite3
SQLite version 3.8.8.2 2015-01-30 14:30:45
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create temporary table t( x );
OPENX 3 /var/folders/nf/l1cw8sn1707b73zy5nqycrpw0000gn/T//etilqs_fvwR6KbMm518S4w 01002
OPEN 3
WRITE 3 512 0 0
OPENX 4 /var/folders/nf/l1cw8sn1707b73zy5nqycrpw0000gn/T//etilqs_OJJJ1lrTtQIFnUO 05402
OPEN 4
WRITE 4 1024 0 0
WRITE 4 1024 1024 0
WRITE 3 28 0 0
sqlite>
No firm ideas, but perhaps if you turn on the error and warning log it will give some clues.
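For reference, a minimal sketch (not the poster's code) of enabling that log in application code via SQLite's documented sqlite3_config(SQLITE_CONFIG_LOG, ...) interface; the callback name and the trivial main() are illustrative only:
#include <cstdio>
#include <sqlite3.h>

// SQLITE_CONFIG_LOG expects a callback of type void (*)(void*, int, const char*).
static void errorLogCallback(void *ctx, int errCode, const char *msg)
{
    (void)ctx;
    std::fprintf(stderr, "SQLITE ERROR: (%d) %s\n", errCode, msg);
}

int main()
{
    // Must be configured before SQLite is initialized / the first connection is opened.
    sqlite3_config(SQLITE_CONFIG_LOG, errorLogCallback, nullptr);

    sqlite3 *db = nullptr;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
        std::fprintf(stderr, "unable to open database file\n");
    }
    sqlite3_close(db);
    return 0;
}
Once registered, every internal error or warning is passed to the callback, which is how log lines like the "SQLITE ERROR: (28) ..." output above are produced.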

ZipForge Native Errors

The archives I'm having trouble with were all created by merging a working archive with a non-existent archive, thereby effectively copying the contents of one into the other. It's part of a merging process we do. Like this...
ZipDestination := TZipForge.Create(nil);
if FileExists(DestinationZipFileName) then
  ZipDestination.OpenArchive(fmOpenReadWrite + fmShareDenyWrite)
else
  ZipDestination.OpenArchive(fmCreate);
ZipDestination.Zip64Mode := zmAuto;
ZipDestination.MergeWith(SourceZipFileName);
ZipDestination.CloseArchive;
And this is the code that gets a blob from the archive, uncompresses it, and makes it ready for the viewer:
CompressedStream := TMemoryStream.Create;
UnCompressedStream := TMemoryStream.Create;
GetCompressedStream(CompressedStream); // this fetches the blob from the zipfile
ZipForge.InMemory := True;
// Native Error 00035 on next line (sometimes)
ZipForge.OpenArchive(CompressedStream, False);
ZipForge.FindFirst('*.*', ArchiveItem, faAnyFile - faDirectory);
sZipFileName := ArchiveItem.FileName;
sZipPath := ArchiveItem.StoredPath;
ZipForge.ExtractToStream(sZipPath + sZipFileName, UnCompressedStream);
ZipForge.CloseArchive;
But I'm encountering "Native error 00035" sometimes.
Now the strange thing is that I'm getting these errors when I try to view the first blob within the merged archive (i.e. viewing other blobs within the merged archive doesn't raise any exception).
It could be something about ZipForge.MergeWith that I haven't catered for, or it could be a bug in my GetCompressedStream (but if I switch the order of blobs within the archive, it always happens to the first one only). Looks like it's time for a test project to see what's really going on.
EDIT
Original question was simply asking for guidance on these Native Errors, for which I'm satisfied with the answer I've chosen. As for my problem, well I'm convinced it's an issue with the CompressedStream I'm passing into OpenArchive.
Native error 00035 is "Invalid archive file". It occurs when ZipForge can't find either the local or central directory headers (that is, when you try to open a file that isn't a zip).
I don't think they're documented in the help, but the translation table that maps native errors to error codes is in ZFConst.pas. There's a NativeToErrorCode table that converts the "native" error into an index into the error string array. If that isn't enough to tell you what the problem is, just look through ZipForge.pas for the error code in a raise statement. They consistently use the full 5-digit code, so you can search for 00035 instead of just 35 to avoid spurious results.
Free support is offered by the ZipForge vendor: http://componentace.com/help/zf_guide/gettinghelpfromtechnicalsu.htm

XMLLite parser hangs

I'm parsing an XML file using XMLLite. I notice that when it's a relatively large file, the reader fails to locate the next element (tag) of the file. When I reduced the contents of the file, it parsed successfully.
The reader continually shows node type "XmlNodeType_None" and fails to complete parsing, getting stuck in an infinite while loop.
Does it have something to do with the file size, or with how the IStream is initialized? My file has only around 9000 bytes of data.
Thanks
Do not use the SUCCEEDED macro to decide whether to continue processing the value returned by IXmlReader::Read. Instead, make the loop condition check that the return value of IXmlReader::Read is equal to S_OK. Read returns S_FALSE once the end of the input is reached, and SUCCEEDED still treats S_FALSE as success, so a loop guarded by SUCCEEDED never terminates and keeps reporting XmlNodeType_None.
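A minimal sketch of that loop shape (assuming an IXmlReader* named pReader that has already been created with CreateXmlReader and fed its IStream via SetInput; the function name is illustrative):
#include <xmllite.h>

HRESULT ReadAllNodes(IXmlReader *pReader)
{
    XmlNodeType nodeType = XmlNodeType_None;
    HRESULT hr;
    // Read returns S_OK while nodes remain, S_FALSE at end of input,
    // and a failure HRESULT on error. Looping on SUCCEEDED(hr) would
    // never stop, because S_FALSE also passes the SUCCEEDED check.
    while (S_OK == (hr = pReader->Read(&nodeType)))
    {
        // Handle nodeType here (XmlNodeType_Element, XmlNodeType_Text, ...).
    }
    return hr; // S_FALSE means the whole document was consumed.
}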
