OKR Board for Jira: error when linking an issue to a KR: maximum number of expressions in a list is 1000

When I try to link an issue to a KR, I get the following error:
There was a SQL exception thrown by the Active Objects library:
Database:
- name: Oracle
- version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
- minor version: 0
- major version: 19
Driver:
- name: Oracle JDBC driver
- version: 12.2.0.1.0
java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000
By reviewing the log file, I found that the OKR plugin generates a SELECT statement whose WHERE clause contains an IN condition with more than 1,000 values:
SELECT ....... FROM ........ ....... WHERE OBJECTIVE."ISS" = :1 AND OBJECTIVE."DELETED" = 0 AND (OBJECTIVE."ID" in (20,71,79,92,93,105, ...........,1683,1684,1687)
I am using Jira 8.5.7, OKR Board plugin version [f.3.0.0, b.6.1.2], and Oracle as the database.
How can I solve my problem?
Thanks.
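For context, ORA-01795 is a hard Oracle limit: an IN list may contain at most 1,000 expressions, so the fix has to come from the code that generates the SQL splitting the lookup into chunks. As a rough illustration of that workaround (this is not the plugin's actual code; the OBJECTIVE table and column names are taken from the log excerpt above, everything else is hypothetical), a minimal JDBC sketch might look like this:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ChunkedInQuery {
    // Oracle rejects IN lists with more than 1,000 expressions (ORA-01795).
    private static final int ORACLE_IN_LIST_LIMIT = 1000;

    // Query the OBJECTIVE table in chunks of at most 1,000 ids and merge the results.
    static List<Long> loadObjectiveIds(Connection conn, List<Long> ids) throws SQLException {
        List<Long> found = new ArrayList<>();
        for (int from = 0; from < ids.size(); from += ORACLE_IN_LIST_LIMIT) {
            List<Long> chunk = ids.subList(from, Math.min(from + ORACLE_IN_LIST_LIMIT, ids.size()));
            String placeholders = String.join(",", Collections.nCopies(chunk.size(), "?"));
            String sql = "SELECT ID FROM OBJECTIVE WHERE DELETED = 0 AND ID IN (" + placeholders + ")";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < chunk.size(); i++) {
                    ps.setLong(i + 1, chunk.get(i));
                }
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        found.add(rs.getLong(1));
                    }
                }
            }
        }
        return found;
    }
}

Since the failing SQL is generated inside the plugin, the practical path is to report this to the plugin vendor or check whether a newer plugin release already batches these lookups.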

Related

ArangoDB container reaches memory limit and crashes while filtering using 'path' for graph traversal

My Environment
ArangoDB Version: 3.6.2
Storage Engine: RocksDB
Deployment Mode: Single Server
Deployment Strategy: Manual Start in Docker
Infrastructure: Own
Operating System: Linux version 4.4.0-154-generic (gcc version 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) )
Total RAM in your machine: 4GB
Disks in use: HDD
Used Package: Docker-Official Docker library
My Problem:
I have a graph with 60k nodes and 4*60k edges. Whenever I use 'path' in a FILTER or RETURN, the memory limit is reached, the ArangoDB container crashes, and it gets restarted. However, if I don't use 'path' and filter or return only on 'vertex' or 'edge', the query executes and produces the expected result. This issue is seen in version 3.6.2.
In ArangoDB 3.1.18, however, this issue is not seen and everything works fine.
Sample Query:
FOR v, e, p IN 6 OUTBOUND "root_node" GRAPH "my_graph_db"
  FILTER (
    LENGTH(p.edges) == 6 &&
    LIKE(p.edges[3]._from, "Data_level_3%", true) &&
    (LIKE(p.edges[3]._to, "Data_level_4%") || LIKE(p.edges[3]._to, "Data_4%")) &&
    ...................................................................
  )
  LIMIT 0, 10
  RETURN {
    result: MERGE(
      {data: v},
      {parent_id: p.edges[5]._id}
    ),
    .................
  }
Expected result:
The ArangoDB container should not hit the memory limit and crash. The 'path' attributes need to be accessible while running queries.
Please refer to https://github.com/arangodb/arangodb/issues/11277
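One way to keep a runaway traversal from taking the whole container down, as a sketch under assumptions rather than a fix for the underlying 3.6.2 behaviour: ArangoDB's HTTP cursor API accepts an optional memoryLimit attribute (in bytes), so an over-large query is aborted with an error instead of exhausting the 4 GB host. The endpoint, credentials, database and limit below are placeholders, and the query is a trimmed-down version of the one above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BoundedTraversal {
    public static void main(String[] args) throws Exception {
        // Simplified version of the traversal from the question.
        String aql = "FOR v, e, p IN 6 OUTBOUND 'root_node' GRAPH 'my_graph_db' "
                   + "FILTER LENGTH(p.edges) == 6 LIMIT 0, 10 RETURN v";
        // memoryLimit caps this one query at ~1 GiB; beyond that ArangoDB aborts the
        // query with an error instead of the container being OOM-killed.
        String body = "{\"query\": \"" + aql + "\", \"batchSize\": 10, \"memoryLimit\": 1073741824}";
        String auth = Base64.getEncoder()
                .encodeToString("root:password".getBytes(StandardCharsets.UTF_8)); // placeholder credentials
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8529/_db/_system/_api/cursor")) // placeholder host/database
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

This does not make the path-heavy query cheaper; it only turns a container crash into a query error while the linked GitHub issue is investigated.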

MySQL Cluster using Docker: Error 708 'No more attribute metadata records (increase MaxNoOfAttributes)'

I'm setting up a MySQL Cluster using Docker. I have 1 management node, 2 data nodes, and 2 SQL nodes. When I create a database on one SQL node, it gets replicated to the other SQL node, which is perfectly fine.
The problem is that when I import an SQL file containing many tables into one SQL node, I encounter the error 'No more attribute metadata records (increase MaxNoOfAttributes)'. I tried increasing MaxNoOfAttributes to its maximum (4294967039) and MaxNoOfTables to its maximum (20320), restarted the management node container, and tried again, but I'm still getting the same error. Here's my config.ini file:
[ndbd default]
NoOfReplicas=2
DataMemory=5G
IndexMemory=64M
MaxNoOfTables = 20320
MaxNoOfAttributes = 4294967039
MaxNoOfOrderedIndexes=5242
[mysqld default]
[ndb_mgmd default]
[tcp default]
[ndb_mgmd]
NodeId=2
hostname=180.168.0.2
[ndbd]
NodeId=3
hostname=180.168.0.3
DataDir= /var/lib/mysql-cluster
[ndbd]
NodeId=4
HostName=180.168.0.4
DataDir=/var/lib/mysql-cluster
[mysqld]
NodeId=5
hostname=180.168.0.10
[mysqld]
NodeId=6
hostname=180.168.0.11
The SQL file contains more than 90 tables.
I've been searching for quite a while now and can't seem to find a working solution. Any help would be greatly appreciated.
root@swrcmsdbm:/# /usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial
MySQL Cluster Management Server mysql-5.7.32 ndb-7.6.16
root@swrcmsdbm:/# usr/bin/ndb_config -q MaxNoOfAttributes
2560000 2560000

Error: Request to Elasticsearch failed

Kibana version: 5.4.2
Elasticsearch version: 5.4.2
Logstash version: 5.4.2
Server OS version: Red Hat Linux; Kibana, Logstash and Elasticsearch run in Docker containers.
Logs do not display in ELK. What is the problem, and how can I fix the error?
Discover: Request to Elasticsearch failed:
Error: Request to Elasticsearch failed: {"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}
at ip-address:5601/bundles/kibana.bundle.js?v=15117:28:10760
at Function.Promise.try (http://:5601/bundles/commons.bundle.js?v=15117:82:22203)
at ip-address:5601/bundles/commons.bundle.js?v=15117:82:21573
at Array.map (native)
at Function.Promise.map (http://ip-address:5601/bundles/commons.bundle.js?v=15117:82:21528)
at callResponseHandlers (http://:5601/bundles/kibana.bundle.js?v=15117:28:10376)
at ip-address:5601/bundles/kibana.bundle.js?v=15117:27:29944
at processQueue (ip-address:5601/bundles/commons.bundle.js?v=15117:38:23621)
at ip-address:5601/bundles/commons.bundle.js?v=15117:38:23888
at Scope.$eval (ip-address:5601/bundles/commons.bundle.js?v=15117:39:4619)
If you have multiple websites:
Go to Stores > Configuration > Catalog > Catalog > Catalog Search > Elasticsearch Index Prefix.
Set a different prefix for each website (any distinct value will do). Then reindex and flush the cache on each store.
Another official solution:
Based on the bug filed in my previous reply, I modified the following file to fix the search problem:
./vendor/magento/module-elasticsearch/Model/Adapter/FieldMapper/Product/FieldProvider/FieldType/Converter.php
private const ES_DATA_TYPE_DOUBLE = 'double';
--> private const ES_DATA_TYPE_FLOAT = 'float';
Are the indices being created? What does this show?
GET /_cat/indices/
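If you don't have a Dev Tools console handy, the same check can be run from any JVM host; a minimal sketch, assuming Elasticsearch is reachable at localhost:9200 (adjust the host, port and any authentication for your setup):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CatIndices {
    public static void main(String[] args) throws Exception {
        // Same request as GET /_cat/indices/ above; ?v adds a header row.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_cat/indices?v"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}

If the expected Logstash indices are missing, or their health is red, that would line up with the 'all shards failed' / 503 response Kibana is reporting.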

How to use the "recovery tool" to repair a Perfino H2 database?

Our Perfino server crashed recently and has been logging the ERROR shown below ever since. (There are some clues hinting at an OutOfMemoryError resulting in a corrupt database.)
The message suggests: 'Possible solution: use the recovery tool'. But neither the official Perfino documentation nor the logs offer any further instructions on how to proceed.
So here is the question: how do I use the recovery tool?
Stacktrace:
ERROR [collector] server: could not load transaction data
org.h2.jdbc.JdbcSQLException: File corrupted while reading record: "[495834] stream data key:64898 pos:11 remaining:0". Possible solution: use the recovery tool; SQL statement:
SELECT value FROM transaction_names WHERE id=? [90030-176]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:344)
at org.h2.message.DbException.get(DbException.java:178)
at org.h2.message.DbException.get(DbException.java:154)
at org.h2.index.PageDataIndex.getPage(PageDataIndex.java:242)
at org.h2.index.PageDataNode.getNextPage(PageDataNode.java:233)
at org.h2.index.PageDataLeaf.getNextPage(PageDataLeaf.java:400)
at org.h2.index.PageDataCursor.nextRow(PageDataCursor.java:95)
at org.h2.index.PageDataCursor.next(PageDataCursor.java:53)
at org.h2.index.IndexCursor.next(IndexCursor.java:278)
at org.h2.table.TableFilter.next(TableFilter.java:361)
at org.h2.command.dml.Select.queryFlat(Select.java:533)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:646)
at org.h2.command.dml.Query.query(Query.java:323)
at org.h2.command.dml.Query.query(Query.java:291)
at org.h2.command.dml.Query.query(Query.java:37)
at org.h2.command.CommandContainer.query(CommandContainer.java:91)
at org.h2.command.Command.executeQuery(Command.java:197)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:109)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353)
at com.perfino.a.f.b.a.a(ejt:70)
at com.perfino.a.f.o.a(ejt:880)
at com.perfino.a.f.o.a(ejt:928)
at com.perfino.a.f.o.a(ejt:60)
at com.perfino.a.f.aa.a(ejt:783)
at com.perfino.a.f.o.a(ejt:847)
at com.perfino.a.f.o.a(ejt:792)
at com.perfino.a.f.o.a(ejt:787)
at com.perfino.a.f.o.a(ejt:60)
at com.perfino.a.f.ac.a(ejt:1011)
at com.perfino.b.a.b(ejt:68)
at com.perfino.b.a.c(ejt:82)
at com.perfino.a.f.o.a(ejt:1006)
at com.perfino.a.i.b.d.a(ejt:168)
at com.perfino.a.i.b.d.b(ejt:155)
at com.perfino.a.i.b.d.b(ejt:52)
at com.perfino.a.i.b.d.a(ejt:45)
at com.perfino.a.i.a.b.a(ejt:94)
at com.perfino.a.c.a.b(ejt:105)
at com.perfino.a.c.a.a(ejt:37)
at com.perfino.a.c.c.run(ejt:57)
at java.lang.Thread.run(Thread.java:745)
Note: I couldn't recover my database with the procedure described below. I'm still keeping this post as a reference, since the probability of a successful recovery depends on how badly the database is broken, and there is no evidence that this procedure is invalid.
Perfino uses the H2 Database Engine as its persistence storage by default. H2 ships a recovery tool, and a RunScript tool to import SQL statements:
# 1. Create a dump of the current database using the tool [1]
# This tool creates a 'config.h2.sql' and a 'perfino.h2.sql' db dump
cd ${PERFINO_DATA_DIR}
java -cp ${PATH_TO_H2_LIB}/h2*.jar org.h2.tools.Recover
# 2. Rename the corrupt database file to e.g. *bkp
mv perfino.h2.db perfino.h2.db.bkp
# 3. Import the dump from step 1, ignoring errors
java -cp ${PATH_TO_H2_LIB}/h2*.jar \
org.h2.tools.RunScript \
-url jdbc:h2:${PERFINO_DATA_DIR}/db/perfino \
-script perfino.h2.sql -checkResults
[1]: Perfino includes a version of the h2.jar under ${PERFINO_INSTALL_DIR}/lib/common/h2.jar. You could of course download the official jar and try with it, but in my case, I could only restore the database with the jar supplied with perfino.
This failed for me with the following exception:
Exception in thread "main" org.h2.jdbc.JdbcSQLException: Feature not supported: "Restore page store recovery SQL script can only be restored to a PageStore file".
If this happens to you, try:
# 1. Delete database and mv files
cd ${PERFINO_DATA_DIR}
rm perfino.h2.db perfino.mv.db
# 2. Create a PageStore database manually
touch perfino.h2.db
# 3. Try again with MV_STORE=FALSE on the url [2]
#    (the url is quoted so the shell does not treat ';' as a command separator)
java -cp ${PATH_TO_H2_LIB}/h2*.jar \
org.h2.tools.RunScript \
-url "jdbc:h2:${PERFINO_DATA_DIR}/db/perfino;MV_STORE=FALSE" \
-script perfino.h2.sql \
-checkResults \
-continueOnError
[2]: This forces H2 to recreate a PageStore database instead of using the new MVStore engine (see this thread in Metabase).
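For completeness, the same two tools can also be driven from a small Java program via H2's Tool classes on the classpath. This is only a rough programmatic sketch of the shell steps above; the data directory path is an assumption (your ${PERFINO_DATA_DIR}) and the database name "perfino" comes from the file names in this post.

import org.h2.tools.Recover;
import org.h2.tools.RunScript;

public class RecoverPerfinoDb {
    public static void main(String[] args) throws Exception {
        String dataDir = "/opt/perfino/data";  // assumption: adjust to your ${PERFINO_DATA_DIR}
        // Step 1: dump the (possibly corrupt) database to perfino.h2.sql in dataDir.
        Recover.execute(dataDir, "perfino");
        // Step 3: replay the dump into a fresh PageStore database
        // (no shell quoting issues with the ';' here).
        RunScript.main(
                "-url", "jdbc:h2:" + dataDir + "/db/perfino;MV_STORE=FALSE",
                "-script", dataDir + "/perfino.h2.sql",
                "-continueOnError");
    }
}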
I found this article while trying to repair a Confluence internal H2 database, and this worked for me. Here's a shell script as a gist on my GitHub with what I did; you'll have to adjust it for your environment.

Neo4j 2.1.2 incremental backup fails but full backup succeeds

We recently upgraded our database from 2.0.1 to 2.1.2 (Enterprise) using the explicit upgrade procedure.
When trying to take a backup post-upgrade, full backups succeed, but incremental backups fail.
When running this command the first time, it succeeds:
~/neo4j-enterprise-2.1.2/bin/neo4j-backup -from single://127.0.0.1 -to /mnt/backups/neo4j-test-backup
Running it a second time gives the following error:
Performing backup from '127.0.0.1'
00:18:44.907 [main] INFO o.n.k.InternalAbstractGraphDatabase - No locking implementation specified, defaulting to 'forseti'
Transactions applied
Exception in thread "main" org.neo4j.consistency.ConsistencyCheckingError: Inconsistencies in transaction:
Start[3,xid=GlobalId[NEOKERNL|2772027681176372421|40044|-1], BranchId[ 52 49 52 49 52 49 ],master=-1,me=-1,time=2014-06-23 23:56:53.637+0000/1403567813637,lastCommittedTxWhenTransactionStarted=752027]
1PC[3, txId=752028, 2014-06-23 23:56:53.647+0000/1403567813647]
ConsistencySummaryStatistics{
Number of errors: 2
Number of warnings: 0
Number of inconsistent RELATIONSHIP records: 2
}
at org.neo4j.consistency.checking.incremental.intercept.CheckingTransactionInterceptor.complete(CheckingTransactionInterceptor.java:181)
at org.neo4j.kernel.impl.transaction.xaframework.LogEntryVisitorAdapter.apply(LogEntryVisitorAdapter.java:62)
at org.neo4j.kernel.impl.transaction.xaframework.LogEntryVisitorAdapter.apply(LogEntryVisitorAdapter.java:28)
at org.neo4j.kernel.impl.nioneo.xa.command.LogFilter.endLog(LogFilter.java:87)
at org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.applyTransaction(XaLogicalLog.java:1120)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.applyCommittedTransaction(XaResourceManager.java:856)
at org.neo4j.kernel.impl.transaction.xaframework.XaDataSource.applyCommittedTransaction(XaDataSource.java:246)
at org.neo4j.com.ServerUtil.applyReceivedTransactions(ServerUtil.java:461)
at org.neo4j.backup.BackupService.unpackResponse(BackupService.java:401)
at org.neo4j.backup.BackupService.incrementalWithContext(BackupService.java:315)
at org.neo4j.backup.BackupService.doIncrementalBackup(BackupService.java:257)
at org.neo4j.backup.BackupService.doIncrementalBackup(BackupService.java:210)
at org.neo4j.backup.BackupService.doIncrementalBackupOrFallbackToFull(BackupService.java:231)
at org.neo4j.backup.BackupTool.doBackup(BackupTool.java:240)
at org.neo4j.backup.BackupTool.run(BackupTool.java:168)
at org.neo4j.backup.BackupTool.main(BackupTool.java:71)
Any help/workarounds are appreciated.
Update: The same behavior persists after upgrading to 2.1.3
Could you please check again whether the issue is resolved with 2.1.4? I vaguely remember a resolved issue regarding incremental backups.
