Read uncommitted using Firebird FIBPlus components - delphi

Using a TpFIBTransaction component, I'm trying to start a READ UNCOMMITTED transaction.
First of all, the TPBMode property has 3 possible values:
tpbDefault
tpbReadCommitted
tpbRepeatableRead
In TpFIBTransaction.StartTransaction I saw that setting tpbReadCommitted forces the following parameters:
write
isc_tpb_nowait
read_committed
rec_version
Using tpbRepeatableRead forces the following parameters instead:
write
isc_tpb_nowait
concurrency
So, it seems the only way to have "custom" transaction parameters is to set the tpbDefault value.
The values allowed for the TrParams property are the following (from the fib.pas unit):
TPBConstantNames: array[1..isc_tpb_last_tpb_constant] of String = (
'consistency',
'concurrency',
'shared',
'protected',
'exclusive',
'wait',
'nowait',
'read',
'write',
'lock_read',
'lock_write',
'verb_time',
'commit_time',
'ignore_limbo',
'read_committed',
'autocommit',
'rec_version',
'no_rec_version',
'restart_requests',
'no_auto_undo',
'no_savepoint'
);
I've tried adding only the 'read' value, but it still seems unable to read uncommitted data, even though 'read_committed' is not in the TrParams property.
MyTransaction.TrParams.Clear();
MyTransaction.TrParams.Add('read');
Is there some value missing from TPBConstantNames (something like 'read_uncommitted', if it exists...), or is there another way to set up a Firebird "read uncommitted" transaction?

It is not possible, because Firebird does not support the READ UNCOMMITTED isolation level.
You can find the following information in the documentation:
Note
The READ UNCOMMITTED isolation level is a synonym for READ
COMMITTED, and provided only for syntax compatibility. It provides the
exact same semantics as READ COMMITTED, and does not allow you to view
uncommitted changes of other transactions.
and:
The three isolation levels supported in Firebird are:
SNAPSHOT
SNAPSHOT TABLE STABILITY
READ COMMITTED with two specifications (NO RECORD_VERSION and
RECORD_VERSION)
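In practice the closest you can get is READ COMMITTED with rec_version, which is exactly what tpbReadCommitted already sends (write, nowait, read_committed, rec_version), so there is nothing lower to put into TrParams. As a minimal illustration from outside FIBPlus, here is a sketch in Python with the fdb driver requesting that same lowest isolation level; treat the exact fdb call signatures (connect, trans, and passing the TPB as a list of isc_tpb_* constants) as assumptions to verify against your driver version:
import fdb

# Placeholders: adjust the DSN and credentials for your server.
con = fdb.connect(dsn="localhost:/data/mydb.fdb", user="SYSDBA", password="masterkey")

# The same parameters FIBPlus sends for tpbReadCommitted:
# write, nowait, read_committed, rec_version.
read_committed_tpb = [
    fdb.isc_tpb_version3,
    fdb.isc_tpb_write,
    fdb.isc_tpb_nowait,
    fdb.isc_tpb_read_committed,
    fdb.isc_tpb_rec_version,
]

tra = con.trans(default_tpb=read_committed_tpb)
cur = tra.cursor()
cur.execute("select count(*) from some_table")  # sees committed rows only
print(cur.fetchone())
tra.commit()
con.close()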

Related

How to Call a Pkg/Procedure Executing an API From a Pkg/Procedure. The API name needs to be Dynamic and Has In and Out Parms

We use APIs, baninst1.PP_DEDUCTION.p_update and baninst1.PP_DEDUCTION.p_create, to maintain our payroll tables of benefit/deduction data. Numerous packages utilize the APIs. We would like to create a package containing the API call that all the existing packages can use and remove the code that is repeated in each package. I tried EXECUTE IMMEDIATE for the purpose of having a dynamic API name. However, I have not been able to get the syntax correct. I’m hoping you will help me.
create or replace PACKAGE BODY "ORBIT"."MM_BENEFITS_COMMON" AS

PROCEDURE PAY_P_EMPLOYEE_BENEFIT_ACTION(pi_benefit_action   IN VARCHAR2,
                                        pi_pidm             IN pdrbded.pdrbded_pidm%TYPE,
                                        pi_status           IN pdrdedn.pdrdedn_status%TYPE,
                                        pi_bdca_code        IN pdrbded.pdrbded_bdca_code%TYPE,
                                        pi_effective_date   IN pdrdedn.pdrdedn_effective_date%TYPE DEFAULT NULL,
                                        pi_user_id          IN pdrdedn.pdrdedn_user_id%TYPE DEFAULT NULL,
                                        pi_data_origin      IN pdrdedn.pdrdedn_data_origin%TYPE DEFAULT NULL,
                                        po_base_rowid_out   OUT gb_common.internal_record_id_type,
                                        po_detail_rowid_out OUT gb_common.internal_record_id_type,
                                        pi_amount1          IN pdrdedn.pdrdedn_amount1%TYPE DEFAULT NULL,
                                        pi_opt_code1        IN pdrdedn.pdrdedn_opt_code1%TYPE DEFAULT NULL) IS
BEGIN
  -- Call the API for p_create or p_update.
  baninst1.PP_DEDUCTION.pi_benefit_action(p_pidm             => pi_pidm,
                                          p_status           => pi_status,
                                          p_bdca_code        => pi_bdca_code,
                                          p_effective_date   => CASE
                                                                  WHEN pi_benefit_action = 'p_create' THEN
                                                                    TRUNC(pi_begin_date)
                                                                  ELSE
                                                                    TRUNC(pi_effective_date)
                                                                END,
                                          p_user_id          => pi_user_id,
                                          p_data_origin      => pi_data_origin,
                                          p_base_rowid_out   => po_base_rowid_out,
                                          p_detail_rowid_out => po_detail_rowid_out,
                                          p_amount1          => pi_amount1,
                                          p_opt_code1        => CASE
                                                                  WHEN LENGTH(pi_opt_code1) = 1 THEN
                                                                    '0' || pi_opt_code1
                                                                  ELSE
                                                                    pi_opt_code1
                                                                END);
END PAY_P_EMPLOYEE_BENEFIT_ACTION;

END MM_BENEFITS_COMMON;
create or replace PACKAGE BODY "ORBIT"."MM_BENEFIT_TEST" AS

PROCEDURE PAY_P_MM_BENEFIT_TEST IS
  lv_base_rowid_out   gb_common.internal_record_id_type;
  lv_detail_rowid_out gb_common.internal_record_id_type;
BEGIN
  -- Pass data to the common benefits package for the API call.
  MM_BENEFITS_COMMON.PAY_P_EMPLOYEE_BENEFIT_ACTION('p_update', 9999999, 'A', 'VI1', '01-JAN-2022', 'MM_Test',
                                                   'MM_TEST', lv_base_rowid_out, lv_detail_rowid_out, 25.82, NULL);
END PAY_P_MM_BENEFIT_TEST;

END MM_BENEFIT_TEST;
I'm not sure what's bothering you, actually. You did post some code, but I don't know what it represents.
Let's see what you said:
"We use APIs, baninst1.PP_DEDUCTION.p_update and baninst1.PP_DEDUCTION.p_create, to maintain our payroll tables of benefit/deduction data."
OK
"Numerous packages utilize the APIs."
it means that there are many packages and they call those p_update and p_create procedures; that's also OK
"We would like to create a package containing the API call that all the existing packages can use and remove the code that is repeated in each package."
that would be a new package; you'd cut that piece of code from all of your packages and paste it into a new one.
OK, makes sense. Instead of all that code (in every package), you'd put a call to the newly created procedure (which resides in the newly created package).
"I tried EXECUTE IMMEDIATE for the purpose of having a dynamic API name. However, I have not been able to get the syntax correct."
why dynamic SQL? There's nothing dynamic here. Instead of the dozens of lines of code you currently have (that do something), you'd put one line: the one that calls the newly created procedure (and passes the parameters).
From my point of view, there's nothing unusual in what you want to do, and I can't imagine what problems you could have in doing it; it's pretty straightforward.
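To make that concrete, here is the idea sketched in Python, purely as a language-neutral illustration; the real code stays in PL/SQL, where the shared procedure simply branches on pi_benefit_action with IF/ELSIF and calls baninst1.PP_DEDUCTION.p_create or p_update. The point is a single shared wrapper with an ordinary branch on the action flag, and every caller reduced to one call:
# Hypothetical stand-ins for the two real APIs (p_create / p_update).
def p_create(**params):
    print("p_create called with", params)

def p_update(**params):
    print("p_update called with", params)

def employee_benefit_action(benefit_action, **params):
    """Shared wrapper: an ordinary branch, no dynamic call needed."""
    if benefit_action == "p_create":
        p_create(**params)
    elif benefit_action == "p_update":
        p_update(**params)
    else:
        raise ValueError("unknown action: " + benefit_action)

# Each existing package replaces its repeated block with a single call:
employee_benefit_action("p_update", pidm=9999999, status="A", bdca_code="VI1")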

F# SqlProvider - how to access stored procedure results?

I am using SQLProvider from NuGet (https://www.nuget.org/packages/SQLProvider/ v1.1.42) in an F# project to access our MSSQL database.
I am referring to the sample code from here, https://fsprojects.github.io/SQLProvider/core/programmability.html and also the source code tests on GitHub, https://github.com/fsprojects/SQLProvider/blob/master/tests/SqlProvider.Tests/scripts/MySqlTests.fsx.
#r #"....\packages\SQLProvider.1.1.42\lib\net451\FSharp.Data.SqlProvider.dll"
#r #"....\System.Data.Linq.dll"
open System
open FSharp.Data.Sql
open FSharp.Data.Sql.Common
open System.Data.Linq
type SeriesResult = { .. fields .. }
[<Literal>]
let ConnectionString = @"connStr"
type Sql = SqlDataProvider<
              ConnectionString = ConnectionString,
              DatabaseVendor = Common.DatabaseProviderTypes.MSSQLSERVER>

let db = Sql.GetDataContext()

let test =
    [
        for f in db.Procedures.MyStoredProcedure.Invoke("param").ResultSet do
            yield f.MapTo<SeriesResult>()
    ]
I need to access results from the call to MyStoredProcedure, but ResultSet fails with error FS0039: "The field, constructor or member 'ResultSet' is not defined". I also get this for ColumnValues and for MapTo (presumably because the type is unknown).
Is there an additional library I should be referencing?
I have: FSharp.Core, FSharp.Data, FSharp.Data.SqlProvider, mscorelib, System, System.Core, System.Data, System.Data.Linq, System.Xml.Linq
Thanks!
(wanted to tag with SQLProvider - but can't!)
One possible reason for this is that the IntelliSense thread probably timed out waiting for a response from the SQL provider.
The stored procs, and the set of types that carry the ResultSet, are computed lazily (when you type the dot). This is good in one way, as it means the provider doesn't introspect the entire database on instantiation, pulling in lots of stuff you're probably not going to use. However, it does have the side effect of needing to do a non-trivial amount of work in the dot completion on the first request; we cache the result after that. I believe Microsoft have a metric that says any IntelliSense work should complete in 250 ms, but what the actual thread timeout is I'm not sure. With a language like C# or F#, hitting a 250 ms response target can be a big ask on large solutions; throw a database into the mix (even a small local database) and it becomes a very hard target to hit.
Quite why it didn't recover and try again until you added the references will only be known to Visual Studio; usually, however, just closing and re-opening the file is enough. In rare cases, unload the project from the solution and reload it.

Stored procedures in gremlin CosmosDB

I'm new to Gremlin and Cosmos DB and was trying to use stored procedures with the Cosmos DB Gremlin API.
I started with the Quick-start-nodejs doc for creating a Node.js app connected to the Cosmos DB Gremlin API. Now I want to use stored procedures in that app.
I found only a single doc for stored procedures in Cosmos DB, and it covers only DocumentDB (SQL). I didn't find any documentation related to stored procedures in Gremlin.
Can anyone guide me on how to do that?
Thanks in advance.
I had the same problem as you, and found that Cosmos DB in Gremlin (graph) mode does not support stored procedures. You can create them from the UI, but because you cannot use any Gremlin query or have triggers, they are useless. Also, there is no documentation on it.
I have found a post from March 2019 saying that stored procedures are on the roadmap for Gremlin:
https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/20115355-gremlin-queries-from-stored-procedures
Personally, for my use case I am considering using Neo4j instead of Cosmos because of the lack of stored procedures.
What's your use case?
Gremlin is a language for traversing graphs. Gremlin has no knowledge of Cosmos DB stored procedures, and hence you can't really execute a stored procedure via Gremlin.
However, Cosmos DB is multi-model. You can talk to it via Gremlin as well as the native DocumentDB API.
You should look into how to execute stored procedures via the DocumentDB API.
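For illustration, here is a minimal sketch of that suggestion using the azure-cosmos 4.x Python package (the Node.js SDK exposes an equivalent scripts API). All names (account URL, key, database, container, sproc id) are placeholders, and the exact execute_stored_procedure signature should be checked against the SDK version you use:
from azure.cosmos import CosmosClient

# Placeholders: point these at your own account, database and graph container.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
database = client.get_database_client("graphdb")
container = database.get_container_client("graph")

# Execute a stored procedure that was created on the container through the
# SQL (DocumentDB) API; the partition key must match the documents it touches.
result = container.scripts.execute_stored_procedure(
    sproc="addEdgeOnNewVertex",        # hypothetical sproc id
    partition_key="some-partition-value",
    params=["EMPLOYEE", "COMPANY"],    # whatever arguments the sproc expects
)
print(result)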
Based on your comment on the first answer to the question ("Actually I want to add some edges every time a new vertex is created. For example, whenever a vertex with the label EMPLOYEE is created, an edge to the vertex COMPANY must be automatically created."), you can look into TinkerPop's EventStrategy.
EDIT:
Adding essential parts from the link above in case the link changes:
The purpose of the EventStrategy is to raise events to one or more MutationListener objects as changes to the underlying Graph occur within a Traversal. Such a strategy is useful for logging changes, triggering certain actions based on change, or any application that needs notification of some mutating operation during a Traversal. If the transaction is rolled back, the event queue is reset.
The following events are raised to the MutationListener:
New vertex
New edge
Vertex property changed
Edge property changed
Vertex property removed
Edge property removed
Vertex removed
Edge removed
To start processing events from a Traversal first implement the MutationListener interface. An example of this implementation is the ConsoleMutationListener which writes output to the console for each event. The following console session displays the basic usage:
gremlin> import org.apache.tinkerpop.gremlin.process.traversal.step.util.event.*
==>org.apache.tinkerpop.gremlin.structure.*, org.apache.tinkerpop.gremlin.process.traversal.*, ... (long list of default console imports elided)
gremlin> graph = TinkerFactory.createModern()
==>tinkergraph[vertices:6 edges:6]
gremlin> l = new ConsoleMutationListener(graph)
==>MutationListener[tinkergraph[vertices:6 edges:6]]
gremlin> strategy = EventStrategy.build().addListener(l).create()
==>EventStrategy
gremlin> g = graph.traversal().withStrategies(strategy)
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.addV().property('name','stephen')
Vertex [v[13]] added to graph [tinkergraph[vertices:7 edges:6]]
==>v[13]
gremlin> g.E().drop()
Edge [e[7][1-knows->2]] removed from graph [tinkergraph[vertices:7 edges:6]]
Edge [e[8][1-knows->4]] removed from graph [tinkergraph[vertices:7 edges:5]]
Edge [e[9][1-created->3]] removed from graph [tinkergraph[vertices:7 edges:4]]
Edge [e[10][4-created->5]] removed from graph [tinkergraph[vertices:7 edges:3]]
Edge [e[11][4-created->3]] removed from graph [tinkergraph[vertices:7 edges:2]]
Edge [e[12][6-created->3]] removed from graph [tinkergraph[vertices:7 edges:1]]
By default, the EventStrategy is configured with an EventQueue that raises events as they occur within execution of a Step. As such, the final line of Gremlin execution that drops all edges shows a bit of an inconsistent count, where the removed edge count is accounted for after the event is raised. The strategy can also be configured with a TransactionalEventQueue that captures the changes within a transaction and does not allow them to fire until the transaction is committed.
WARNING
EventStrategy is not meant for usage in tracking global mutations across separate processes. In other words, a mutation in one JVM process is not raised as an event in a different JVM process. In addition, events are not raised when mutations occur outside of the Traversal context.

LabVIEW and Keithley 2635A - Unable to read data

I'm using LabVIEW and its VISA capabilities to control a Keithley 2635A source meter. Whenever I try to identify the device, it works just fine, both in reading and writing.
viWRITE(*IDN?) /* VISA subVI to send the command to the machine */
viREAD /* VISA subVI to read output */
However, as soon as I set the voltage (or current), it does so. Then I send the command to perform a measurement, but I'm not able to read that data, and I get the error
VISA: (Hex 0xBFFF0015) Timeout expired before operation completed.
After that, I cannot read the *IDN? output anymore either.
The source meter is connected to the PC via a National Instruments GPIB-USB-HS adapter.
EDIT: I forgot to add, this happens in the VISA Interactive Control program as well.
OK, apparently the documentation is not very clear. What the smua.measure.X() command (where X is the needed parameter) does is, of course, write the measurement outcome to a buffer. In order to read that buffer, however, a simple viREAD[] is not sufficient.
So basically the answer was to simply add a print command: this way I have
viWRITE[print(smua.measure.X())];
viREAD[]
And I don't have the error anymore. Not sure why such a command is needed, but that's that. Thank you all for your time answering me.
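For anyone reproducing this outside LabVIEW: the 2635A runs TSP (Lua) commands, so smua.measure.v() on its own only produces a value inside the instrument, and print(...) is what places text in the output queue for VISA to read. Here is a minimal sketch of the same write/read sequence in Python with PyVISA; the GPIB address and the source level are placeholders:
import pyvisa

rm = pyvisa.ResourceManager()
smu = rm.open_resource("GPIB0::26::INSTR")   # placeholder address for the GPIB-USB-HS
smu.timeout = 5000                           # ms

smu.write("*IDN?")
print(smu.read())                            # identification works as before

smu.write("smua.source.levelv = 1.0")        # set the source level (TSP syntax)
smu.write("smua.source.output = smua.OUTPUT_ON")

# Without print() the reading stays inside the instrument and viREAD times out.
smu.write("print(smua.measure.v())")
print(smu.read())                            # now there is data to read

# Or combine the write and read in one call:
print(smu.query("print(smua.measure.i())"))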
As @Tom Blodget mentions in the comments, the machine may not have any response to read after you set the voltage. The *IDN? string is both command and query. That is, you will write the command *IDN? and read the result. Some commands do not have any response to read. Here's a quick test to see if you should be reading from the instrument. The following code is in Python; I made up the GPIB command to set voltage.
sm = SourceMonitor()
# Prints out IDN
sm.query('*IDN?')
# Prints out current voltage (change this to your actual command)
sm.query('SOUR:VOLT?')
# Set a new voltage
sm.write('SOUR:VOLT 1V')
# Read the new voltage
sm.query('SOUR:VOLT?')
Note that question-marked GPIB commands and query are used when you expect to get a response from the instrument. The instrument won't give a response for the write command. Query is a combination of write(...) and read(...). If you're using LabVIEW, you may have to do the write and the read separately.
If you need verification that the machine received your instruction and acted on it, most instruments have the following common commands:
*OPC? query to see if the operation is complete
SYST:ERR? query to see if any error was generated
Add a question mark ? to the end of the GPIB command used to set the voltage

How to read the current machine NTFS settings?

Before inserting filestream data I'd like to check the following NTFS settings:
1) 8.3 naming status (this is disabled by using fsutil behavior set disable8dot3 1)
2) last access status (this is disabled by using fsutil behavior set disablelastaccess 1)
3) cluster size (this is set with format F: /FS:NTFS /V:MyFILESTREAMContainer /A:64K)
The FILESTREAM recommendation is to disable (1) and (2) and to set (3) to 64 KB.
But before setting these I'd like to know the existing settings. How do I check them? The answer can be in Delphi, but not necessarily.
The GetDiskFreeSpace Windows API call returns the sectors_per_cluster and bytes_per_sector values. I think this function is declared in the Windows unit.
You can read the registry for points 1 and 2 (using xp_regread in SQL)
Number 3 is not essential but helps and has been SQL Server best practice for a decade or more. You'd have to use sp_OA% or a CLR function to read this in SQL.
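For completeness, a minimal sketch of checking all three settings from Python on Windows; the registry values are the ones the fsutil behavior commands toggle, the drive letter is a placeholder, and the same registry key and GetDiskFreeSpaceW call can be used from Delphi, or via xp_regread / a CLR function inside SQL Server:
import ctypes
import winreg

# 1) and 2): the values written by "fsutil behavior set disable8dot3 1" and
# "fsutil behavior set disablelastaccess 1" (a value of 1 means disabled).
key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\Control\FileSystem")
disable_8dot3, _ = winreg.QueryValueEx(key, "NtfsDisable8dot3NameCreation")
disable_last_access, _ = winreg.QueryValueEx(key, "NtfsDisableLastAccessUpdate")
print("8.3 name creation disabled:", disable_8dot3)
print("Last-access updates disabled:", disable_last_access)

# 3): cluster size = sectors_per_cluster * bytes_per_sector.
sectors_per_cluster = ctypes.c_ulong(0)
bytes_per_sector = ctypes.c_ulong(0)
free_clusters = ctypes.c_ulong(0)
total_clusters = ctypes.c_ulong(0)
ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
    "F:\\",                              # the FILESTREAM volume (placeholder)
    ctypes.byref(sectors_per_cluster),
    ctypes.byref(bytes_per_sector),
    ctypes.byref(free_clusters),
    ctypes.byref(total_clusters),
)
if ok:
    print("Cluster size:", sectors_per_cluster.value * bytes_per_sector.value, "bytes")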
