Is it possible to alter a table to include a reference to an upstream table in DataJoint for Python?

We want to alter a table to include a non-primary key reference to a new table. The old definition is:
#schema
class SpikeSortingParameters(dj.Manual):
definition = """
# Table for holding parameters for each spike sorting run
-> SortGroup
-> SpikeSorterParameters
-> SortInterval
---
-> SpikeSortingMetrics
-> IntervalList
import_path = '': varchar(200) # optional path to previous curated sorting output
"""
We'd like to add
-> SpikeSortingArtifactParameters
as a non-primary key (roughly as sketched below), and before spending time trying to get this to work, we wanted to know whether it is possible, given that we don't know of a way to assign a default value here.
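For concreteness, the altered definition we are after would look roughly like this (a sketch of the intent only; we have not gotten this to work):

@schema
class SpikeSortingParameters(dj.Manual):
    definition = """
    # Table for holding parameters for each spike sorting run
    -> SortGroup
    -> SpikeSorterParameters
    -> SortInterval
    ---
    -> SpikeSortingMetrics
    -> IntervalList
    -> SpikeSortingArtifactParameters   # the new non-primary reference we would like to add
    import_path = '': varchar(200)  # optional path to previous curated sorting output
    """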
thanks in advance...
Loren

Unfortunately, foreign key updates are not currently supported with table.alter(). There is a workaround, but it involves a few steps that should be taken carefully. It would be best to file this as a feature request in the issue tracker.
Using Alter
For instance, suppose you have two tables defined as follows:
import datajoint as dj

schema = dj.Schema('rguzman_alter_example')

@schema
class Student(dj.Lookup):
    definition = """
    student_first_name: varchar(30)
    student_last_name: varchar(30)
    ---
    student_location: varchar(30)
    """
    contents = [('Joe', 'Schmoe', 'Los Angeles, CA'), ('Suzie', 'Queue', 'Miami, FL')]
@schema
class Assignment(dj.Lookup):
    definition = """
    assignment_id: int
    ---
    assignment_due_date: date
    # -> [nullable] Student   # Standard way to define a foreign key on secondary attributes with NULL as default
    """
    contents = [dict(assignment_id=100, assignment_due_date='2021-04-21')]
Now suppose you'd like to have a foreign key on secondary attributes with NULL as the default. You can pass options to foreign keys in secondary attributes (see the comment above, where we allow it to default to NULL), and initializing a table from scratch this way works just fine. For the case where we want to add a foreign key on a secondary attribute after a table has already been populated, Assignment.alter() would be the natural way to achieve this, so long as we establish a default value to fill existing records with. Let's see what happens when we uncomment the foreign key on secondary attributes, redefine the Assignment table class, and try to alter.
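Concretely, the redefinition and the call that triggers the error look like this (a sketch; the only change from the definition above is uncommenting the nullable foreign key):

@schema
class Assignment(dj.Lookup):
    definition = """
    assignment_id: int
    ---
    assignment_due_date: date
    -> [nullable] Student   # previously commented out
    """
    contents = [dict(assignment_id=100, assignment_due_date='2021-04-21')]

Assignment.alter()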
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-23-09997168281c> in <module>
----> 1 Assignment.alter()
~/.local/lib/python3.7/site-packages/datajoint/table.py in alter(self, prompt, context)
84 del frame
85 old_definition = self.describe(context=context, printout=False)
---> 86 sql, external_stores = alter(self.definition, old_definition, context)
87 if not sql:
88 if prompt:
~/.local/lib/python3.7/site-packages/datajoint/declare.py in alter(definition, old_definition, context)
367 raise NotImplementedError('table.alter cannot alter the primary key (yet).')
368 if foreign_key_sql != foreign_key_sql_:
--> 369 raise NotImplementedError('table.alter cannot alter foreign keys (yet).')
370 if index_sql != index_sql_:
371 raise NotImplementedError('table.alter cannot alter indexes (yet)')
NotImplementedError: table.alter cannot alter foreign keys (yet).
Oh, there's an exception... So it turns out this is not implemented yet for this use case. However, there is a manual workaround that can be leveraged, so let me detail the steps until there is support.
Workaround
First, instead of defining the foreign key on secondary attributes as above, define it this way:
newline = '\n'  # f-string expressions cannot contain backslashes, so define the separator here

@schema
class Assignment(dj.Lookup):
    definition = f"""
    assignment_id: int
    ---
    assignment_due_date: date
    {newline.join([a.name + '=null: ' + a.type for a in Student.heading.attributes.values() if a.in_key])}
    """
    contents = [dict(assignment_id=100, assignment_due_date='2021-04-21')]
This 'copies' the primary attributes of the parent table into the secondary attributes of your child table, each defaulting to NULL.
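Given the Student table above, the f-string expands to the equivalent of:

assignment_id: int
---
assignment_due_date: date
student_first_name=null: varchar(30)
student_last_name=null: varchar(30)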
Next, perform the alter as normal: Assignment.alter()
Then manually add the foreign key with a direct SQL query, like this:
q = """
ALTER TABLE {table}
ADD FOREIGN KEY ({fk})
REFERENCES {ref} ({pk})
ON UPDATE CASCADE
ON DELETE RESTRICT
""".format(table=Assignment.full_table_name,
           fk=','.join([f'`{k}`' for k in Student.primary_key]),
           ref=f'`{Student.table_name}`',
           pk=','.join([f'`{k}`' for k in Student.primary_key]))
dj.conn().query(q)
Make sure to remove the copied columns that were added in the first step and replace them with the proper specification, i.e. -> [nullable] Student.
Restart your kernel
To verify that it has been set properly, check Assignment.describe(). If everything worked, the result should be:
assignment_id : int
---
assignment_due_date : date
-> [nullable] Student
Additionally, any pre-existing records should now be prefilled with NULL.
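As a quick sanity check (a sketch; the attribute names come from the Student example above), fetching the pre-existing record should show the copied attributes filled with NULL:

# the row inserted before the alter now carries NULLs for the Student attributes
print(Assignment.fetch(as_dict=True))
# e.g. [{'assignment_id': 100, 'assignment_due_date': ...,
#        'student_first_name': None, 'student_last_name': None}]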

Related

Doctrine Association Mapping Join Not Executing In ZF3

I am creating my first Association Mapping for a Join. This is also the first time I've used a Foreign Key in pgSQL.
I am working with ZF3. The error I am receiving is:
An exception occurred while executing 'SELECT p0_.reference AS reference_0, p0_.meta_keyword_reference AS meta_keyword_reference_1, p0_.add_date AS add_date_2, p0_.add_membership_reference AS add_membership_reference_3, p0_.remove_date AS remove_date_4, p0_.remove_membership_reference AS remove_membership_reference_5 FROM page_about_meta_keyword_link p0_ INNER JOIN meta_keyword m1_':
SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at end of input LINE 1: ...page_about_meta_keyword_link p0_ INNER JOIN meta_keyword m1_
The query I am trying to create is
SELECT MetaKeywords.Keyword FROM PageAboutMetaKeywordLink INNER JOIN MetaKeywords ON PageAboutMetaKeywordLink.MetaKeywordReference = MetaKeywords.Reference WHERE PageAboutMetaKeywordLink.RemoveDate IS NULL ORDER BY MetaKeywords.Keyword ASC
From my database experience I expect it is creating the error due to the missing
ON p0_.meta_keyword_reference = m1_.reference
I don't understand how to communicate the Join. Based on the documentation I had expected this was automatic. Maybe I misunderstood.
The tables I am trying to join are page_about_meta_keyword_link.meta_keyword_reference ON meta_keyword.reference. This is the first time I've created a foreign key in pgSQL.
This is the table structure for page_about_meta_keyword_link
CREATE TABLE public.page_about_meta_keyword_link
(
    reference bigint NOT NULL DEFAULT nextval('page_about_meta_keyword_link_reference_seq'::regclass),
    meta_keyword_reference bigint,
    add_date timestamp with time zone DEFAULT now(), -- UTC
    add_membership_reference bigint,
    remove_date timestamp with time zone, -- UTC
    remove_membership_reference bigint,
    CONSTRAINT page_about_meta_keyword_link_pkey PRIMARY KEY (reference),
    CONSTRAINT page_about_meta_keyword_link_fk FOREIGN KEY (meta_keyword_reference)
        REFERENCES public.meta_keyword (reference) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION,
    CONSTRAINT page_about_meta_keyword_link_reference_unique UNIQUE (reference)
)
This is the meta_keyword table
CREATE TABLE public.meta_keyword
(
    reference bigint NOT NULL DEFAULT nextval('meta_keyword_reference_seq'::regclass),
    keyword text,
    effective_date timestamp with time zone DEFAULT now(), -- UTC
    membership_reference bigint,
    CONSTRAINT meta_keyword_pkey PRIMARY KEY (reference),
    CONSTRAINT meta_keyword_reference_unique UNIQUE (reference)
)
This is the query I've created in the Service; the complete Service is found here.
$repository = $this->entityManager->getRepository(PageAboutMetaKeywordLink::class);
$keywords = $this->entityManager->getRepository(MetaKeyword::class);
$qb = $repository->createQueryBuilder('l');
$qb ->join('\Application\Entity\MetaKeyword' , 'k')
->expr()->isNull('l.removeDate');
return $qb->getQuery()->getResult();
The Association Mapping I created is for meta_keyword_reference; the complete Entity is found here.
/**
 * @var int|null
 *
 * @ORM\ManyToOne(targetEntity="MetaKeyword")
 * @ORM\JoinColumn(name="meta_keyword_reference", referencedColumnName="reference")
 * @ORM\Column(name="meta_keyword_reference", type="bigint", nullable=true)
 */
private $metaKeywordReference;
I have not made any changes to the MetaKeywords Entity. It is found here.
Overall the various sections of the web site will share the meta_keywords. If I understand correctly the connection I am trying to make is ManyToOne.
I want to leave a good reference for other newbies as they start their journey with Zend Framework 3 - Doctrine. Please advise of any edits I should make to this post so it is clear, understandable, and concise, so that I receive the help I need and others will benefit from it in the future.
You double-declared a column (meta_keyword_reference). Looking at the docs (the same page you linked in the question), you've made a mistake in your annotation. Remove the ORM\Column line (the definition is already in JoinColumn). If you need it to be nullable (not required), add nullable=true to the JoinColumn; use either, not both:
/**
 * @var int|null
 *
 * @ORM\ManyToOne(targetEntity="MetaKeyword")
 * @ORM\JoinColumn(name="meta_keyword_id", referencedColumnName="id", nullable=true)
 */
private $metaKeywordReference;
Do not worry about declaring a "type"; Doctrine will automatically match it to the column you're referencing. Also, you should be referencing primary keys. I've assumed reference is not the PK, so I've changed it to id; change it to whatever it actually is.
Next, I think you're also using the DBAL QueryBuilder instead of the ORM QueryBuilder.
The query you need would look like this:
use Doctrine\ORM\Query\Expr\Join;
use Doctrine\ORM\QueryBuilder;

/** @var QueryBuilder $qb */
$qb = $this->entityManager->createQueryBuilder();
$qb->select('l')
   ->from(PageAboutMetaKeywordLink::class, 'l')
   ->join(MetaKeyword::class, 'k', Join::ON, 'l.reference = k.id') // check these property names (NOT DB COLUMNS!)
   ->where('l.removeDate is null');
There might be a few small errors in there, but that should be about it.

Numeric sort in Manual (Legacy) index in Neo4j 3 is not working correctly

I'm using Legacy indexing (now called Manual indexing). After migrating from Neo4j 2 to version 3, I have some problems with numeric sorting.
Example of a correct statement in Neo4j 2:
queryContext.sort(new Sort(new SortField(AGE, SortField.INT, false)));
This statement should be changed for Neo4j 3 (Lucene 5):
queryContext.sort(new Sort(new SortField(AGE, SortField.Type.INT, false)));
But if you use this sort statement you will get an exception:
java.lang.IllegalStateException: unexpected docvalues type SORTED_SET for field 'firstName' (expected=SORTED). Use UninvertingReader or index with docvalues.
at org.apache.lucene.index.DocValues.checkField(DocValues.java:208)
at org.apache.lucene.index.DocValues.getSorted(DocValues.java:264)
at org.apache.lucene.search.FieldComparator$TermOrdValComparator.getSortedDocValues(FieldComparator.java:762)
at org.apache.lucene.search.FieldComparator$TermOrdValComparator.getLeafComparator(FieldComparator.java:767)
at org.apache.lucene.search.FieldValueHitQueue.getComparators(FieldValueHitQueue.java:183)
at org.apache.lucene.search.TopFieldCollector$SimpleFieldCollector.getLeafCollector(TopFieldCollector.java:164)
at org.neo4j.kernel.api.impl.index.collector.DocValuesCollector.replayTo(DocValuesCollector.java:297)
at org.neo4j.kernel.api.impl.index.collector.DocValuesCollector.getTopDocs(DocValuesCollector.java:275)
at org.neo4j.kernel.api.impl.index.collector.DocValuesCollector.getIndexHits(DocValuesCollector.java:150)
at org.neo4j.index.impl.lucene.legacy.LuceneLegacyIndex.search(LuceneLegacyIndex.java:346)
at org.neo4j.index.impl.lucene.legacy.LuceneLegacyIndex.query(LuceneLegacyIndex.java:261)
at org.neo4j.index.impl.lucene.legacy.LuceneLegacyIndex.query(LuceneLegacyIndex.java:205)
at org.neo4j.index.impl.lucene.legacy.LuceneLegacyIndex.query(LuceneLegacyIndex.java:217)
at org.neo4j.kernel.impl.api.StateHandlingStatementOperations.nodeLegacyIndexQuery(StateHandlingStatementOperations.java:1440)
at org.neo4j.kernel.impl.api.OperationsFacade.nodeLegacyIndexQuery(OperationsFacade.java:1162)
at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy$Type$1.query(LegacyIndexProxy.java:83)
at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy.query(LegacyIndexProxy.java:365)
I think this is caused by a newly added statement in the Neo4j indexer class (Neo4j now indexes the field for sorting automatically?). See:
org.neo4j.index.impl.lucene.legacy.IndexType CustomType addToDocument( Document document, String key, Object value )
the new line:
document.add( instantiateSortField( key, value ) );
and the instantiateSortField method creates a SortedSetDocValuesField.
So I changed my code to:
queryContext.sort(new Sort(new SortedSetSortField(AGE, false)));
This runs OK, but sorting does not work because numbers are sorted as strings. I see that the "value" parameter is always a String in the "addToDocument" method. I think the root cause is explained in this old comment:
see comment in class org.neo4j.index.impl.lucene.legacy.IndexType CustomType
// TODO We should honor ValueContext instead of doing value.toString() here.
// if changing it, also change #get to honor ValueContext.
Am I missing some new way to index, search, and sort data in Neo4j 3, or is this really a problem with values being indexed as strings in Neo4j?
A simple unit test for Neo4j 2 and Neo4j 3 can be downloaded.
The solution was added by MishaDemianenko at the GH issue.

How to create an update query with Open Office Base?

Basically, I want to create an update query in Open Office Base (the same way as in MS Access).
Base does not typically use update queries (but see below). Instead, the easiest way to do an update command is to go to Tools -> SQL. Enter something similar to the following, then press Execute:
UPDATE "Table1" SET "Value" = 'BBB' WHERE ID = 0
The other way is to run the command with a macro. Here is an example using Basic:
Sub UpdateSQL
    REM Run an SQL command on a table in LibreOffice Base
    Context = CreateUnoService("com.sun.star.sdb.DatabaseContext")
    databaseURLOrRegisteredName = "file:///C:/Users/JimStandard/Desktop/New Database.odb"
    Db = Context.getByName(databaseURLOrRegisteredName)
    Conn = Db.getConnection("","") 'username & password pair - HSQL default blank
    Stmt = Conn.createStatement()
    'strSQL = "INSERT INTO ""Table1"" (ID,""Value"") VALUES (3,'DDD')"
    strSQL = "UPDATE ""Table1"" SET ""Value"" = 'CCC' WHERE ID = 0"
    Stmt.executeUpdate(strSQL)
    Conn.close()
End Sub
Note that the data can also be modified with a form or by editing the table directly.
Under some circumstances it is possible to create an update query. I couldn't get this to work with the default built-in HSQLDB 1.8 engine, but it worked with MySQL.
In the Queries section, Create Query in SQL View
Click the toolbar button to Run SQL Command directly.
Enter a command like the following:
update mytable set mycolumn = 'This is some text.' where ID = 59;
Hit F5 to run the query.
It gives an error that "The data content could not be loaded," but it still performs the update and changes the data. To get rid of the error, the command needs to return a value. For example, I created this stored procedure in MySQL:
DELIMITER $$
CREATE PROCEDURE update_val
(
    IN id_in INT,
    IN newval_in VARCHAR(100)
)
BEGIN
    UPDATE test_table SET value = newval_in WHERE id = id_in;
    SELECT id, value FROM test_table WHERE id = id_in;
END
$$
DELIMITER ;
Then this query in LibreOffice Base modifies the data without giving any errors:
CALL update_val(2,'HHH')
See also:
https://forum.openoffice.org/en/forum/viewtopic.php?f=5&t=75763
https://forum.openoffice.org/en/forum/viewtopic.php?f=61&t=6655
https://ask.libreoffice.org/en/question/32700/how-to-create-an-update-query-in-base-sql/
Modifying table entries from LibreOffice Base, possible?

PXDatabase should accept PXDbType.Udt in Acumatica ERP

How can I call a stored procedure in Acumatica via PXDatabase that has a user-defined type as an input parameter?
For example, I have the following type:
CREATE TYPE [dbo].[string_list_tblType] AS TABLE(
    [RefNbr] [nvarchar](10) NOT NULL,
    PRIMARY KEY CLUSTERED
    (
        [RefNbr] ASC
    ) WITH (IGNORE_DUP_KEY = OFF)
)
GO
I have the following stored procedure:
CREATE PROCEDURE [dbo].[GetListOfAPInvoices]
    @APInvoices as string_list_tblType readonly
AS
BEGIN
    select * from APInvoice a where a.RefNbr in (select RefNbr from @APInvoices)
END
and the following fragment of C# code:
var par = new SqlParameter("APInvoices", dt);
par.SqlDbType = SqlDbType.Structured;
par.TypeName = "dbo.string_list_tblType";
par.UdtTypeName = "dbo.string_list_tblType";
par.ParameterName = "APInvoices";
PXSPParameter p1 = new PXSPInParameter("@APInvoices", PXDbType.Udt, par);
var pars = new List<PXSPParameter> { p1};
var results = PXDatabase.Execute(sqlCommand, pars.ToArray());
but when I execute my C# code I'm receiving error message:
UdtTypeName property must be set for UDT parameters
When I debugged the PXSqlDatabaseProvider class with a reflector, in the method
public override object[] Execute(string procedureName, params PXSPParameter[] pars)
I noticed that
using (new PXLongOperation.PXUntouchedScope(Thread.CurrentThread))
{
command.ExecuteNonQuery();
}
command.Parameters.Items has my method parameters, but the item related to the Udt type is null. I need to know how to pass a user-defined table type. Has anybody tried this approach?
Unfortunately UDT parameters are not supported in Acumatica's PXDatabase.Execute(..) method and there is no way to pass one to a stored procedure using the built-in functionality of the platform.
Besides, when writing data-retrieval procedures like the one in your example, you should keep in mind that BQL-based data-retrieval facilities do a lot of work to match company masks, filter out records marked as DeletedDatabaseRecord, and apply some other internal logic. If you choose to fetch data with a plain select wrapped in a stored procedure, you bypass all of this functionality, which is hardly something you want to achieve.
If you absolutely want to use a stored procedure to get some records from the database but don't want the above side effects, one option is to create an auxiliary table in the DB and select records into it using a procedure. Then, in the application, you add a DAC mapped to this new table and use it to get data from the table by means of PXSelect or something similar.
Coming back to your particular example of fetching some APInvoices by a list of their numbers, you could try using dynamic BQL composition to achieve something like this with Acumatica data access facilities.

Autogenerated keys for entities with multipart keys

I was trying to create a new object with a multicolumn primary key formed by a foreign key and a self-generated field, and I got this error:
Ids can not be autogenerated for entities with multipart keys.
For now, although it is not the most appropriate solution, I will change the key, but the question is:
Are you planning to support autogenerated multicolumn primary keys soon?
I will add the request to UserVoice too.
Regards.
Edit to explain the use case:
Hello,
True, it may not make sense to have a primary key composed of a foreign key and a self-generated field.
My idea was to build a table like this:
ParentID  ChildID  Data
1         1        Some Data...
1         2        Some Data...
2         1        Some Data...
2         2        Some Data...
As a first step I did a table like this:
ParentID  ChildID  Data
1         1        Some Data...
1         2        Some Data...
2         3        Some Data...
2         4        Some Data...
Where ChildID was a self-generated field.
So you can ignore my question.
Regards.
I've had the same problem, but I successfully solved it without changing the primary key (PK).
My PK had 2 columns
ProductId (identity)
TenantId (multitenant application)
so the first problem was that Breeze couldn't generate a new PK when a new item was added to the EntityManager. So I defined a custom entity initialization function:
var manager = new breeze.EntityManager(breezeServiceUrl);
var store = manager.metadataStore;

function itemInitializer(item) {
    if (item.ProductId() == 0) {
        item.ProductId(/*Some counter value*/);
    }
    if (item.TenantId() == 0) {
        item.TenantId(TenantId);
    }
    if (item.isBeingEdited === undefined) {
        item.isBeingEdited = ko.observable(false);
    }
}

store.registerEntityTypeCtor("ProductItem", function () {/*Leave empty to use default constructor*/}, itemInitializer);
The second problem is that Breeze couldn't update the ProductId with the real value from Entity Framework, because it looks the entity up by PK and the returned value had only the ProductId. So you need to override this function:
var entityType = store.getEntityType("ProductItem");
var entityGroup = manager.findEntityGroup(entityType);
entityGroup._fixupKey = function (tempValue, realValue) {
    var ix = this._indexMap[tempValue + ":::" + TenantId]; // Changed line
    if (ix === undefined) {
        throw new Error("Internal Error in key fixup - unable to locate entity");
    }
    var entity = this._entities[ix];
    var keyPropName = entity.entityType.keyProperties[0].name;
    entity.setProperty(keyPropName, realValue);
    delete entity.entityAspect.hasTempKey;
    delete this._indexMap[tempValue];
    this._indexMap[realValue] = ix;
};
If you have any other questions about multipart keys, you can read about them in the blog.
This feature is not currently on the roadmap, but I want to better understand your use case for it. As you already know, Breeze supports auto-generated keys and Breeze supports multi-part keys, but in what scenarios do you find using both at the same time helpful?
Thanks!
Having a multipart key where part of the key is autogenerated is actually pretty common. This usually occurs with legacy databases where the primary key consists of a foreign key property and an 'autogenerated' sequence-number property. Usually this autogenerated key is not globally unique by itself, but only in conjunction with the foreign key property. Think of an orderDetail with a key of "OrderId, SequenceNumber".
What doesn't seem to make as much sense is when the primary key consists of more than one autogenerated property.
In Breeze, autogenerated keys are intended to be globally unique, whereas in the multipart key mentioned above the SequenceNumber would not be globally unique (and if it were, then why not make it the primary key all by itself).
Does this make sense?
