Exception when running raw query for the creation of a trigger on TypeORM

I am having an issue creating triggers on a DB created using TypeORM. My entities create the tables without a problem; I then run a few queries to create virtual tables and triggers on the database, like this:
AppDataSource.initialize()
    .then(async () => {
        try {
            console.log("DB ready");
            AppDataSource.manager.query(queries.parent_company);
            AppDataSource.manager.query(queries.sales_rep);
            AppDataSource.manager.query(queries.__parent_company___after_insert);
        }
        catch (e) {
        }
    })
    .catch((error) => console.log(error))
This is not the only trigger I want to create, but it serves as an example; all of them throw exceptions.
Here are some of the queries I am running: the one that creates the virtual table, and then the trigger that is giving me the issue.
export const parent_company =
"CREATE VIRTUAL TABLE IF NOT EXISTS parent_company USING FTS4(\
id,\
name,\
content='__parent_company'\
);"
export const __parent_company___after_insert =
"CREATE TRIGGER IF NOT EXISTS __parent_company___after_insert\
AFTER INSERT ON __parent_company\
BEGIN\
INSERT INTO parent_company (docid, id, name)\
VALUES (new.rowid, new.id, new.name);\
END;"

Related

Snowflake, Tasks and Session variables problem

I have a problem in Snowflake with a Task that executes a Stored Procedure, and that SP uses a session variable QUERY_TAG that I want to use for logging purposes.
When the Task executes the SP, I get the error:
"Session variable '$QUERY_TAG' does not exist"
EXECUTE AS CALLER is in place.
It doesn't matter where I try to set the QUERY_TAG (in the precondition Task's code or in the Task definition).
The Tasks and the SP are created by me as SYSADMIN.
When I execute the SP in a query editor (Snowflake worksheet, DBeaver, etc.) it runs fine, so there are no coding errors in the SP:
SET QUERY_TAG = 'A nice query tag'
CALL TASK_SCHEMA.SP_TASK_ONE()
This runs fine when I call it from a Worksheet, DBeaver, or similar.
Both approaches in the SP work (inline SQL or the getQueryTag function).
Here is the code for the Tasks and the SP:
CREATE OR REPLACE TASK TASK_SCHEMA.TASK_ONE_PRECOND
    WAREHOUSE = TASK_WH
    SCHEDULE = '2 minute'
    QUERY_TAG = 'My Query Tag'
AS
    SET QUERY_TAG = 'My Query Tag 2';

CREATE OR REPLACE TASK TASK_SCHEMA.TASK_ONE
    WAREHOUSE = TASK_WH
    AFTER TASK_SCHEMA.TASK_ONE_PRECOND
AS
    CALL TASK_SCHEMA.SP_TASK_ONE();
create or replace procedure TASK_SCHEMA.SP_TASK_ONE()
RETURNS VARCHAR(50)
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
as $$
    function getQueryTag()
    {
        var QueryTag;
        var rs_QT = snowflake.execute( { sqlText: `SELECT $QUERY_TAG;` } );
        if (rs_QT.next())
        {
            QueryTag = rs_QT.getColumnValue(1); // get the QueryTag
        }
        return QueryTag;
    }

    var qtag = getQueryTag();

    //rs = snowflake.execute( { sqlText:
    //    `INSERT INTO "LOG"."TESTSESSIONLOG"
    //        ("SESSION_NAME")
    //     SELECT $QUERY_TAG
    //    ` } );

    snowflake.execute({
        sqlText: `INSERT INTO LOG.TESTSESSIONLOG
                      (SESSION_NAME)
                  VALUES (?)`,
        binds: [qtag]
    });

    return "SESSION_OK";
$$;
Edit 4 Nov 2019: My answer below is not entirely correct; there is a way to pass values between a task and its successor. See the documentation on SYSTEM$SET_RETURN_VALUE.
Even if you define dependencies between tasks, that doesn't mean a task inherits anything from the predecessor in the task tree.
So if you set a variable in one task, that variable is lost when the task finishes.
This is different from a normal session (like in the GUI) where the session state is preserved between the commands you execute within the session.
Between tasks, the only thing related is the end time of the predecessor and the start time of the successor(s).
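As a rough sketch of what the SYSTEM$SET_RETURN_VALUE route mentioned in the edit looks like (the object names SP_SET_TAG, TAG_SETTER_TASK and LOG_TAG_TASK are illustrative, and the details should be checked against the Snowflake docs): the predecessor's stored procedure stores a value, and the successor task reads it with SYSTEM$GET_PREDECESSOR_RETURN_VALUE() instead of relying on a session variable.

-- Sketch only: illustrative names, not the exact objects from the question.
CREATE OR REPLACE PROCEDURE TASK_SCHEMA.SP_SET_TAG()
RETURNS VARCHAR
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS $$
    // store a value for the successor task(s) in the task tree
    snowflake.execute({ sqlText: "CALL SYSTEM$SET_RETURN_VALUE('My Query Tag 2')" });
    return 'TAG_SET';
$$;

CREATE OR REPLACE TASK TASK_SCHEMA.TAG_SETTER_TASK
    WAREHOUSE = TASK_WH
    SCHEDULE = '2 minute'
AS
    CALL TASK_SCHEMA.SP_SET_TAG();

-- The successor reads the predecessor's return value instead of a session variable:
CREATE OR REPLACE TASK TASK_SCHEMA.LOG_TAG_TASK
    WAREHOUSE = TASK_WH
    AFTER TASK_SCHEMA.TAG_SETTER_TASK
AS
    INSERT INTO LOG.TESTSESSIONLOG (SESSION_NAME)
    SELECT SYSTEM$GET_PREDECESSOR_RETURN_VALUE();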
When it comes to extracting the query tag, you should preferably ask the system for it:
function getQueryTag()
{
    var rs_QT = snowflake.execute( { sqlText: `SHOW PARAMETERS LIKE 'QUERY_TAG'` } );
    return rs_QT.next() && rs_QT.getColumnValue("value"); // get the QueryTag
}

Grails 3.3 execute H2 script command

I'm running a small, trivial Grails 3.3.0 application using an H2 file-based database. For simple backup reasons I would like to dump the current database state to a file using the H2-specific SCRIPT command:
SCRIPT TO /path/to/backup/dir/tempoDb.sql;
Currently I am trying to execute the native SQL command like this:
User.withSession { session ->
    NativeSQLQuerySpecification nativeSQLQuerySpecification =
            new NativeSQLQuerySpecification("SCRIPT TO /path/to/backup/dir/tempoDb.sql;", null, null)
    session.executeNativeUpdate(nativeSQLQuerySpecification, new QueryParameters())
}
but this does not work.
You can autowire the dataSource and run your SQL query using the connection obtained from the dataSource, without going through Hibernate. The dataSource bean is registered in the Grails Spring context and is an instance of javax.sql.DataSource.
Here is an example of a Grails service that backs up the current H2 database to the file system.
import grails.gorm.transactions.ReadOnly

import javax.sql.DataSource
import java.sql.Statement

@ReadOnly
class BackupService {

    DataSource dataSource

    def backup() {
        def sql = "SCRIPT DROP TO '${System.properties['java.io.tmpdir']}/backup.sql'"
        Statement statement = dataSource.connection.createStatement()
        boolean result = statement.execute(sql)
    }
}
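A hypothetical usage sketch (the controller name and action are illustrative); Grails injects the service by name, so calling it from anywhere in the application triggers the dump:

// Sketch only: AdminController is not part of the original answer.
class AdminController {

    BackupService backupService

    def backup() {
        backupService.backup()
        render "Backup written to ${System.properties['java.io.tmpdir']}/backup.sql"
    }
}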

Neo4j: Groovy script is not inserting anything

I am using Neo4j in embedded mode. For some operations on the database on the server, I am trying to execute a Groovy script. The script runs successfully without any error, but it is not creating any new records when I check with the neo4j-community tool.
Script
/**
 * Created by prabjot on 7/1/17.
 */
@Grab(group='org.neo4j', module='neo4j-kernel', version='2.3.6')
@Grab(group='org.neo4j', module='neo4j-lucene-index', version='2.3.6')
@Grab(group='org.neo4j', module='neo4j-shell', version='2.3.6')
@Grab(group='org.neo4j', module='neo4j-cypher', version='2.3.6')
import org.neo4j.graphdb.factory.GraphDatabaseFactory
import org.neo4j.graphdb.Node
import org.neo4j.graphdb.Result
import org.neo4j.graphdb.Transaction

class Neo4jEmbeddedAccess {
    public static void main(String[] args) {
        def map = [:]
        map.put("allow_store_upgrade", "true")
        map.put("remote_shell_enabled", "true")

        def db = new GraphDatabaseFactory()
                .newEmbeddedDatabaseBuilder("/opt/neo4j-community-3.0.4/data/databases/graph.db")
                .setConfig(map)
                .newGraphDatabase()

        Transaction tx = db.beginTx()
        Node person = db.createNode()
        person.setProperty("name", "prabjot")
        print("id---->" + person.id)

        Result result = db.execute("Match (country:Country) where id(country)=73 SET country.modified=true return country")
        print(result)
        tx.success()

        println """starting embedded graph db
use bin/neo4j-shell from a new distribution to connect
we're keeping the graphdb open for 120 secs"""

        db.shutdown()
    }
}
Please help me understand what I am doing wrong here; I have checked my db location and it is the same one I am using in both the script and the tool.
Thanks
You forgot tx.close(), which is what actually commits the transaction.
tx.success() only marks it as successful.
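A minimal sketch of the same transaction block with the close added (in these Neo4j versions tx.close() commits if success() was called, and rolls back otherwise):

Transaction tx = db.beginTx()
try {
    Node person = db.createNode()
    person.setProperty("name", "prabjot")
    tx.success()   // mark the transaction as successful...
} finally {
    tx.close()     // ...but only close() actually commits (or rolls back)
}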

Neo4j BatchInserter initializing Db on restart

I am using Neo4j BatchInserters to insert nodes into the db, and LuceneBatchInserterIndexProvider for indexes. I have multiple files from which I am importing the data. If my process breaks, I want to be able to restart it from the next file. But whenever I restart the process, it creates a new db in the graph folder and new indexes. My initialization code looks like this:
Map<String, String> config = new HashMap<String, String>();
config.put("neostore.nodestore.db.mapped_memory", "2G");
config.put("batch_import.keep_db", "true");

BatchInserter db = BatchInserters.inserter("ttl.db", config);
BatchInserterIndexProvider indexProvider = new LuceneBatchInserterIndexProvider(db);
index = indexProvider.nodeIndex("ttlIndex", MapUtil.stringMap("type", "exact"));
index.setCacheCapacity(URI_PROPERTY, indexCache + 1);
Can somebody please help here?
To provide more details: I have around 400 files which I want to import into Neo4j.
I want to divide my process into batches, and after every batch restart the process.
I used the Neo4j batch inserter config batch_import.keep_db = "true". This does not clear the graph, but after a restart the indexer has lost its information. I have this method to check for node existence, and I am sure the node was created before the restart:
private Long getNode(String nodeUrl)
{
    IndexHits<Long> hits = index.get(URI_PROPERTY, nodeUrl);
    if (hits.hasNext()) { // node exists
        return hits.next();
    }
    return null;
}
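One likely reason the index appears empty after a restart is that the Lucene batch index is only persisted when it is flushed and the provider and inserter are shut down in the right order. A minimal sketch of the end-of-batch shutdown (the method name finishBatch is illustrative; db, indexProvider and index are the objects from the initialization code above):

// Sketch: persist the batch index before stopping, so it can be reopened on the next run.
void finishBatch(BatchInserter db,
                 BatchInserterIndexProvider indexProvider,
                 BatchInserterIndex index) {
    index.flush();             // make pending index entries durable
    indexProvider.shutdown();  // shut down the Lucene index provider first
    db.shutdown();             // then shut down the batch inserter itself
}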

How to configure tasks in symfony 1.4?

I've used symfony 1.4 and Doctrine to build a sort of mini-CMS. It uses a database to store the different pages and categories.
Now I've added another connection to databases.yml, in order to retrieve client info.
I would like to indicate to symfony that this database is read-only and that it should never, ever write to it.
For the moment, I've created a user on the second database that can only read; this seems to do the trick, even though commands like doctrine:build --all still try to write to it.
EDIT: Thanks to Pascal's answer, I use events to tell Doctrine to use only one db for the sfDoctrineDropDbTask task.
public function setup()
{
    (...)
    $this->dispatcher->connect('command.filter_options', array($this, 'filterCommandOptions'));
}

public function filterCommandOptions(sfEvent $event, $options)
{
    if ('sfDoctrineDropDbTask' === get_class($event->getSubject()))
        $options = array('base1');
    elseif ('sfDoctrineBuildDbTask' === get_class($event->getSubject()))
        $options = array('base1');
    elseif ('sfDoctrineDataDumpTask' === get_class($event->getSubject()))
        $options = array('base1');
    //elseif ('sfDoctrineInsertSqlTask' === get_class($event->getSubject()))
    //    $options = array('base1');
    elseif ('sfDoctrineCreateModelTables' === get_class($event->getSubject()))
        $options = array('base1');

    return $options;
}
This trick does not seem to work for the other tasks:
doctrine:data-dump still reads both databases,
doctrine still tries to write during a doctrine:build --all task
sfDoctrineInsertSqlTask complains that I gave too many options.
You could force the name of the database that doctrine:build --all can drop by using events.
Check this: https://gist.github.com/582306
