typeorm SQLITE_BUSY: database is locked

I'm using multiple processes to write and read to an SQLite DB and am running into a busy error: Error: SQLITE_BUSY: database is locked.
I then set the enableWAL parameter to true and the busyErrorRetry parameter to 300 milliseconds. Now, when the problem occurs again, the SQLITE_BUSY error is no longer reported, but the SQLite database still seems to be locked. I don't know whether the retries are too frequent and are aggravating the problem.

I don't think the busyErrorRetry retry parameter is the right behavior. I want to use the PRAGMA busy_timeout setting instead, but TypeORM does not support it.
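For reference, here is what PRAGMA busy_timeout does, sketched with Python's standard sqlite3 module outside TypeORM (the database file and table are made up for illustration; the pragmas themselves are standard SQLite):

import sqlite3

# Open the database and switch to WAL so readers don't block the writer.
conn = sqlite3.connect("app.db")  # hypothetical database file
conn.execute("PRAGMA journal_mode = WAL")

# Let SQLite itself retry a locked database for up to 5000 ms before
# raising SQLITE_BUSY, instead of retrying in application code.
conn.execute("PRAGMA busy_timeout = 5000")

conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT)")  # hypothetical table
with conn:  # commits on success, rolls back on error
    conn.execute("INSERT INTO items (name) VALUES (?)", ("example",))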

Related

Redis - monitoring maximum memory before inserts fail?

While this Q/A does not address the actual issue of how to detect, with a client (e.g. redis-py), that Redis is running out of memory, constrained not by the machine but by the maxmemory configuration: before inserts fail, which command should the program use to detect that memory is about to be full?
My first guess is: run INFO and check whether used_memory_peak < the maxmemory setting. Is this correct?
(As for running out of machine memory: given defragmentation, which setting should be used? None of the returned INFO fields help here.)
Well, should I just try an insert and see if it fails? (But that would be after the fact.)
Trial and error, good enough. Tested by running
while true; do redis-cli lpush mm longstringhere; done
which, once maxmemory - used_memory < 0.1 MB, results in insert failures:
(error) OOM command not allowed when used memory > 'maxmemory'.
So I poll it via the redis-py client, and once the difference drops below a 1 MB threshold I raise an error. Make sure the used_memory footprint of your largest command is below the threshold too, of course, otherwise you will still run into it on insert.
I was trying to figure out how to calculate the approximate percentage of used memory so that I get notified much earlier, e.g. at 90% of maxmemory; this solution handles that fine.
Info dump:
# Memory
used_memory:3126272
used_memory_human:2.98M
used_memory_rss:5292032
used_memory_rss_human:5.05M
used_memory_peak:4914296
used_memory_peak_human:4.69M
used_memory_peak_perc:63.62%
used_memory_overhead:696654...
Furthermore, maxmemory is not a hard cap: when pushing it further, e.g. by adding members to an existing set, usage can still grow:
used_memory:3162584
used_memory_human:3.02M
Code to get the percentage (0-100):
import math

# 'pipe' is a redis-py client (or pipeline) instance
rmem_info = pipe.info(section='memory')
{'redis_mem_percent': math.ceil(rmem_info['used_memory'] / rmem_info['maxmemory'] * 100)}
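A minimal sketch of how that check could guard inserts, assuming a plain redis-py client named r and a 90% threshold (both of which are my assumptions, not from the question):

import math
import redis

r = redis.Redis()

def redis_mem_percent(client):
    # Percentage of maxmemory currently used, 0-100.
    info = client.info(section="memory")
    if info["maxmemory"] == 0:  # maxmemory unset means no limit
        return 0
    return math.ceil(info["used_memory"] / info["maxmemory"] * 100)

def safe_lpush(client, key, value, threshold=90):
    # Refuse the insert early instead of waiting for the OOM error.
    if redis_mem_percent(client) >= threshold:
        raise MemoryError("Redis is above the configured memory threshold")
    client.lpush(key, value)

safe_lpush(r, "mm", "longstringhere")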

LabVIEW and Keithley 2635A - Unable to read data

I'm using LabVIEW and its VISA capabilities to control a Keithley 2635A source meter. Whenever I try to identify the device, it works just fine, both in reading and writing.
viWRITE(*IDN?) /* VISA subVI to send the command to the machine */
viREAD /* VISA subVI to read output */
However, as soon as I set the voltage (or current), the instrument does so. Then I send the command to perform a measurement, but I'm not able to read that data; I get the error
VISA: (Hex 0xBFFF0015) Timeout expired before operation completed.
After that, I cannot read the *IDN? output anymore either.
The source meter is connected to the PC via a National Instruments GPIB-USB-HS adapter.
EDIT: I forgot to add, this happens in the VISA Interactive Control program as well.
OK, apparently the documentation is not very clear. What the smua.measure.X() command (where X is the needed parameter) does is, of course, write the measurement outcome to a buffer. In order to read that buffer, however, a simple viREAD[] is not sufficient.
So basically the answer was to simply add a print command: this way I have
viWRITE[print(smua.measure.X())];
viREAD[]
And I don't have the error anymore. Not sure why such a command is needed, but that's that. Thank you all for your time answering me.
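For anyone reproducing this outside LabVIEW, here is a minimal PyVISA sketch of the same pattern, assuming a made-up GPIB address and smua.measure.v() as the voltage-measurement call (the print(...) wrapper is the part that matters):

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::26::INSTR")  # hypothetical address; use yours from NI MAX

inst.write("smua.source.levelv = 1.0")           # TSP command: set a source voltage
reading = inst.query("print(smua.measure.v())")  # print() pushes the result into the output buffer
print(reading)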
As @Tom Blodget mentions in the comments, the machine may not have any response to read after you set the voltage. The *IDN? string is both command and query: you write the command *IDN? and read the result. Some commands do not have any response to read. Here's a quick test to see if you should be reading from the instrument. The following code is in Python; I made up the GPIB command to set voltage.
# SourceMonitor is a stand-in wrapper for the VISA instrument connection
sm = SourceMonitor()
# Prints out IDN
sm.query('*IDN?')
# Prints out current voltage (change this to your actual command)
sm.query('SOUR:VOLT?')
# Set a new voltage
sm.write('SOUR:VOLT 1V')
# Read the new voltage
sm.query('SOUR:VOLT?')
Note that question-marked GPIB commands and query are used when you expect to get a response from the instrument. The instrument won't give a response to a write command. query is a combination of write(...) and read(...). If you're using LabVIEW, you may have to do the write and the read separately.
If you need verification that the machine received your instruction and acted on it, most instruments have the following common commands:
*OPC? query to see if the operation is complete
SYST:ERR? query to see if any error was generated
Add a question mark ? to the end of the GPIB command used to set the voltage
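Continuing the hypothetical sm wrapper from the example above, a verification sequence might look like this:

sm.write('SOUR:VOLT 1V')      # set the voltage (made-up command, as above)
print(sm.query('*OPC?'))      # returns "1" once the operation is complete
print(sm.query('SYST:ERR?'))  # e.g. 0,"No error" if nothing went wrong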

Unable to read from Agilent 53131A by GPIB in the simple way

Hi, I am using LabVIEW 2012, Delphi XE7, GPIB (I think 488.2), Win7 SP1 and an Agilent 53131A.
I used the given NI examples.
NI Labview example - Found in LabVIEW's help - GPIB.vi.
I tried writing and reading to query frequencies from 2 channels, and they are successful.
They are sent and read in succession:
*IDN?
:FUNC 'FREQ 1'
:READ:FREQ?
If they are successful, that means GPIB for the Agilent, NI MAX and the driver are successfully installed and configured.
I am also able to use Keysight Connection Expert to write and read; again, it is successful.
However, things went wrong when I used the given NI example in Delphi (originally it was saved as Delphi 3 or 4).
I used the Scope Simple example for the universal counter, mostly for writing and reading in the simple way. All it needs is initialization, read/write and cleanup.
I changed the following code, as shown below, in SimpleForm.pas.
The detected device is at GPIB0::3::INSTR, so at line 32:
PRIMARY_ADDR_OF_COUNTER = 3;
The string to write and read, so at line 132:
CommandBox.Text := '*IDN?';
It then compiled with no errors and ran.
The string was written successfully, but the read was not.
The string output is supposed to be 'HEWLETT-PACKARD,53131A,0,4806'.
The error at the end of the program is as follows:
Unable to read from device
ibsta = SC000 <ERR TMO>
iberr = 6 <EABO>
ibcntl = 0
From these readings, I figured out that EABO means abort.
I am not familiar with the workings of GPIB. Kindly advise.
You are correct that EABO is the identifier for an abort. In addition, we can see from ibsta = SC000 <ERR TMO> that the cause of the abort was a GPIB timeout error. I am not familiar with Keysight Connection Expert or your instrument, but since the error was from GPIB timeout, the most likely causes are:
The query was improperly formatted and the instrument thought it was just a write statement with no response needed. (That's probably why the write function had no error, but the read function timed out.)
The query was improperly formatted and the instrument returned an error.
Instrument needs to have 'Talker' capability enabled to send data. (Most instruments do this automatically with queries.)
For more information on generic GPIB commands, see this reference from the folks at National Instruments.
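If a cross-check outside Delphi is useful, a minimal PyVISA sketch against the address from the question (the timeout and termination values are my assumptions) would be:

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::3::INSTR")  # address from the question
inst.timeout = 5000               # ms; be generous while debugging
inst.read_termination = "\n"      # a missing terminator is a common cause of TMO/EABO

print(inst.query("*IDN?"))        # expected: HEWLETT-PACKARD,53131A,0,4806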

Timeout issue using codefirststoredprocs in MVC5/EF 6. Error reading from stored proc dbo.[...]: Timeout expired

In order to process nearly 70K records at a time, I use codefirststoredprocs 2.5.0 with my application. With a few records everything works fine, but with a large set of data I receive a "The wait operation timed out" exception.
I tried modifying the default command timeout value from 30 seconds to 600 seconds in the following manner:
//Previous approach
((System.Data.Entity.Infrastructure.IObjectContextAdapter)this.db).ObjectContext.CommandTimeout = 600;
//New approach for EF 6
this.db.Database.CommandTimeout = 600;
but I still receive the connection timeout message after 30 seconds. I also modified the web.config, setting the Connection Timeout value to 600 seconds (I know it is a different thing than the command timeout value, but I gave it a go).
I feel like the issue is that the codefirststoredprocs library resets the command timeout value to the default while executing the stored procedure. Is there any way to fix this issue, or should I switch to an alternate approach for using stored procedures with my application?
Thanks in advance.
First, I would like to thank the CodeFirstStoredProcs team for their effort and collaboration in solving this issue.
As I guessed, and as mentioned earlier, the command timeout value might have been defaulted to 30 seconds inside the CodeFirstStoredProcs library.
With their new release (version 2.6), they have added a 'command timeout' parameter to the CallStoredProc<> method, which helped me set a value for the command timeout and finally solved my issue.
To process nearly 70K records in my case, I passed CommandTimeout = 0 to the CallStoredProc<> method. This gives an infinite wait time for executing the stored procedure.
Thanks once again to the CodeFirstStoredProcs team. :)
I got the same problem when changing to EF6.
You might be setting the CommandTimeout in the wrong place. Try setting the CommandTimeout where you create the context, like this:
var context = new Entities();
context.CommandTimeout = 600;
Also check whether the CommandTimeout was actually changed by your approach after creating the context. If it was, then there might be some other issue.

rails activerecord transaction blocks don't seem to roll back

I have a Rails app that I'm crashing on purpose: it's local, and I'm just hitting Ctrl+C and killing it midway through processing records.
To my mind, the records in the block shouldn't have been committed. Is this a Postgres "error", a Rails "error", or a Dave ERROR?
ActiveRecord::Base.transaction do
  UploadStage.where("id in (#{ids.join(',')})").update_all(:status => 2)

  records.each do |record|
    record.success = process_line(record.id, klas, record.hash_value).to_s[0..250]
    record.status = 1000
    record.save
  end
end
I generate my ids by reading out all the records where the status is 1.
Nothing but this function sets the status to 1000.
If the action crashes for whatever reason, I'd expect there to be no records in the database with status = 2.
This is not what I'm seeing, though: half the records have status 1000, the other half have status 2.
Am I missing something?
How can I make sure there are no 2's if the app crashes?
EDIT:
I found this link http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/
As I suspected, and as confirmed by Dave's update, it looks like ActiveRecord will commit a half-finished transaction under some circumstances when you kill a thread. Woo, safe! See Dave's link for a detailed explanation and mitigation options.
If you're simulating a hard crash (host OS crash or plug-pull), Ctrl+C is absolutely not the right approach. Use Ctrl+\ to send a SIGQUIT, which is generally not handled, or use kill -KILL to hard-kill the process with no opportunity to do cleanup. Ctrl+C sends SIGINT, which is a gentle signal that's usually attached to a clean shutdown handler.
In general, if you're debugging issues like this, you should enable detailed query logging and see what Rails is doing. Use log_statement = 'all' in postgresql.conf, then examine the PostgreSQL logs.
