Rails: wrong number of arguments (given 2, expected 1) with Mongoid

I'm trying to open my MongoDB models, but I'm getting the following error:
MONGODB | xxx.xx.x.xxx:27017 | db.find | FAILED | wrong number of arguments (given 2, expected 1) | 0.013306s
My Mongo credentials are correct, and I can connect to the database's collections outside of Rails.
The first few lines of the error are:
Started GET "/admin/xsl_sheet" for xxx.xxx.xxx.xxx at 2020-03-03 13:49:54 UTC
Processing by RailsAdmin::MainController#index as HTML
Parameters: {"model_name"=>"xsl_sheet"}
(5.0ms) SELECT `companies`.`name` FROM `companies` WHERE `companies`.`id` = 4
CACHE (0.1ms) SELECT `companies`.`name` FROM `companies` WHERE `companies`.`id` = 4 [["id", "4"]]
CACHE (0.2ms) SELECT `companies`.`name` FROM `companies` WHERE `companies`.`id` = 4 [["id", "4"]]
MONGODB | xxx.xx.x.xxx:27017 | db.saslStart | STARTED | {}
MONGODB | xxx.xx.x.xxx:27017 | db.saslStart | SUCCEEDED | 0.007s
MONGODB | xxx.xx.x.xxx:27017 | db.saslContinue | STARTED | {}
MONGODB | xxx.xx.x.xxx:27017 | db.saslContinue | SUCCEEDED | 0.006s
MONGODB | xxx.xx.x.xxx:27017 | db.saslContinue | STARTED | {}
MONGODB | xxx.xx.x.xxx:27017 | db.saslContinue | SUCCEEDED | 0.006s
MONGODB | xxx.xx.x.xxx:27017 | db.find | STARTED | {"find"=>"TestCompanyNumber2_xsl_sheets", "filter"=>{"assetable_id"=>4}, "limit"=>1, "skip"=>0, "sort"=>{"_id"=>-1}, "projection"=>{"_id"=>1}}
MONGODB | xxx.xx.x.xxx:27017 | db.find | FAILED | wrong number of arguments (given 2, expected 1) | 0.013306s
Rendered /Project/app/views/rails_admin/main/index.html.haml within
layouts/rails_admin/application (349.7ms)
Rendered public/500.html (64.4ms)
wrong number of arguments (given 2, expected 1)
/GEMS/gems/bson-4.8.0-java/lib/bson/hash.rb:115:in `from_bson'
Edit:
Here is the code inside rails_admin.rb which (I believe) is in charge of pulling objects from MongoDB:
c.model XslSheet do
  label Proc.new { "Xsl Sheet" }
  navigation_label Proc.new { I18n.t('navigation.actions') }
  weight 303
  navigation_icon 'fa fa-file-excel-o'
  list do
    scopes [:applicationId]
    field :data_file_name
    field :updated_at
  end
end

This is https://jira.mongodb.org/browse/RUBY-2146. Downgrade to bson 4.7.0 until 4.8.2 is released.
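A minimal sketch of that workaround, assuming the app's gems are managed with Bundler: pin bson below the affected release in the Gemfile and re-resolve the bundle.
# Gemfile -- pin bson to the last release before the JRuby regression (RUBY-2146)
gem 'bson', '4.7.0'
Then run bundle update bson so Gemfile.lock picks up the downgrade, and restart the app.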
To help people answer your question, include the versions of the software you are using (in this case, the mongoid, mongo and bson versions are relevant), as well as the fact that you are using JRuby.

Related

Neo4j Cypher: How to optimize a NOT EXISTS Query when cardinality is high

The query below takes over 1 second and consumes about 7 MB when the cardinality between users and posts is about 8000 (one user views about 8000 posts). It is difficult to scale this due to high and linearly growing latency and memory consumption. Is there a way to model this differently and/or optimise the query?
Query
PROFILE MATCH (u:User)-[:CREATED]->(p:Post) WHERE NOT (:User{ID: 2})-[:VIEWED]->(p) RETURN p.ID
Plan
| Plan | Statement | Version | Planner | Runtime | Time | DbHits | Rows | Memory (Bytes) |
+-----------------------------------------------------------------------------------------------------------+
| "PROFILE" | "READ_ONLY" | "CYPHER 4.1" | "COST" | "INTERPRETED" | 1033 | 3721750 | 10 | 6696240 |
+-----------------------------------------------------------------------------------------------------------+
+------------------------------+-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| Operator | Details | Estimated Rows | Rows | DB Hits | Cache H/M | Memory (Bytes) | Ordered by |
+------------------------------+-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| +ProduceResults#neo4j | `p.ID` | 2158 | 10 | 0 | 0/0 | | |
| | +-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| +Projection#neo4j | p.ID AS `p.ID` | 2158 | 10 | 10 | 0/0 | | |
| | +-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| +Filter#neo4j | u:User | 2158 | 10 | 10 | 0/0 | | |
| | +-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| +Expand(All)#neo4j | (p)<-[anon_15:CREATED]-(u) | 2158 | 10 | 20 | 0/0 | | |
| | +-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| +AntiSemiApply#neo4j | | 2158 | 10 | 0 | 0/0 | | |
| |\ +-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| | +Expand(Into)#neo4j | (anon_47)-[anon_61:VIEWED]->(p) | 233 | 0 | 3695819 | 0/0 | 6696240 | anon_47.ID ASC |
| | | +-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| | +NodeUniqueIndexSeek#neo4j | UNIQUE anon_47:User(ID) WHERE ID = $autoint_0 | 8630 | 8630 | 17260 | 0/0 | | anon_47.ID ASC |
| | +-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
| +NodeByLabelScan#neo4j | p:Post | 8630 | 8630 | 8631 | 0/0 | | |
+------------------------------+-----------------------------------------------+----------------+------+---------+-----------+----------------+----------------+
Yes, this can be improved.
First, let's understand what this is doing.
It starts with a NodeByLabelScan. That makes sense; there's no avoiding that.
But then, for every node of the label (the following executes PER ROW!), it matches to user 2, and expands all :VIEWED relationships from user 2 to see if any of them is the post for that particular row.
Can you see why this is inefficient? There are 8630 post nodes according to the PROFILE plan, so user 2 is looked up by index 8630 times, and their :VIEWED relationships are expanded 8630 times. Why 8630 times? Because this is happening per :Post node.
Instead, try this:
MATCH (:User{ID: 2})-[:VIEWED]->(viewedPost)
WITH collect(viewedPost) as viewedPosts
MATCH (:User)-[:CREATED]->(p:Post)
WHERE NOT p IN viewedPosts
RETURN p.ID
This changes things up a bit.
First it matches user 2's viewed posts (the lookup and expansion are performed only once), then those viewed posts are collected.
Then it will do a label scan, and filter such that the post isn't in the collection of viewed posts.
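If you want to confirm the difference on your own data, you can PROFILE the rewritten query and compare its DB hits and memory against the original plan (the actual numbers will depend on your dataset):
PROFILE
MATCH (:User{ID: 2})-[:VIEWED]->(viewedPost)
WITH collect(viewedPost) as viewedPosts
MATCH (:User)-[:CREATED]->(p:Post)
WHERE NOT p IN viewedPosts
RETURN p.ID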

Jenkins build failure: Internal Error: Unhandled kpi type with the Performance Plugin

I'm running Jenkins with the Performance Plugin.
I have multiple JMeter .jmx scripts which run on Jenkins, and I'm trying to add this one. But the build is always failing, with this message:
Internal Error: Unhandled kpi type: <type 'long'>
bzt is also installed.
I can't seem to find much info about this on Google. Any help?
After the shutdown I'm getting this:
19:42:27 INFO: Shutting down...
19:42:27 INFO: Post-processing...
19:42:29 INFO: Test duration: 0:47:04
19:42:29 INFO: Samples count: 3200, 3.25% failures
19:42:29 INFO: Average times: total 4.369, latency 0.000, connect 0.000
19:42:29 INFO: Percentiles:
+---------------+---------------+
| Percentile, % | Resp. Time, s |
+---------------+---------------+
| 0.0 | 0.258 |
| 50.0 | 3.251 |
| 90.0 | 8.799 |
| 95.0 | 14.375 |
| 99.0 | 24.239 |
| 99.9 | 30.031 |
| 100.0 | 35.743 |
+---------------+---------------+
19:42:29 INFO: Request label stats:
+----------------------+--------+--------+--------+---------------------------------------------------------------------+
| label | status | succ | avg_rt | error |
+----------------------+--------+--------+--------+---------------------------------------------------------------------+
| Click_ToonSelectie | FAIL | 94.00% | 4.984 | Number of samples in transaction : 4, number of failing samples : 1 |
| | | | | Non HTTP response message: Connection timed out: connect |
| | | | | Not Modified |
| FilterBrand | FAIL | 99.00% | 0.832 | Number of samples in transaction : 1, number of failing samples : 1 |
| LoadFilterPage | FAIL | 96.00% | 6.306 | Non HTTP response message: Connection timed out: connect |
| | | | | Number of samples in transaction : 3, number of failing samples : 1 |
| OpenRandomCarDetails | FAIL | 96.00% | 5.355 | Number of samples in transaction : 1, number of failing samples : 1 |
| | | | | Moved Permanently |
+----------------------+--------+--------+--------+---------------------------------------------------------------------+
19:42:29 INFO: Dumping final status as XML: aggregate-results.xml
19:42:29 ERROR: Internal Error: Unhandled kpi type: <type 'long'>
19:42:29 INFO: Artifacts dir: C:\Users\Kristof\.jenkins\workspace\ACC-Tweedehands\2018-07-19_18-55-13.298000
19:42:29 WARNING: Done performing with code: 1
Build step 'Run Performance Test' changed build result to FAILURE
Finished: FAILURE
First 2 lines of JTL:
<?xml version="1.0" encoding="UTF-8"?>
<testResults version="1.2">
If you're using the Taurus tool, my expectation is that you should be providing the kpi.jtl CSV file instead of a .jtl results file in XML format.
I cannot reproduce your issue using a normal kpi.jtl results file from Taurus and the latest Performance Plugin version 3.10.
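If your JMeter script has its own listener writing XML results, one way to get CSV output is to force JMeter's result format via a property; with Taurus that can be set in the YAML config. This is only a sketch under that assumption, and the exact keys depend on your setup:
modules:
  jmeter:
    properties:
      jmeter.save.saveservice.output_format: csv
The Performance Plugin would then be pointed at the kpi.jtl CSV that Taurus writes (as in the output shown below), rather than at the XML .jtl.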
I cannot reproduce your issue either; my output is totally different:
Started by user anonymous
Building on master in workspace /Users/jenkins/Projects/temp/Jenkins/.jenkins/workspace/Taurus
Performance: Recording JMeterCsv reports '/Users/jenkins/Projects/temp/Jenkins/.jenkins/jobs/Taurus/builds/2/temp/kpi.jtl'
Performance: Parsing JMeter report file '/Users/jenkins/Projects/temp/Jenkins/.jenkins/jobs/Taurus/builds/2/performance-reports/JMeterCSV/kpi.jtl'.
Performance: No threshold configured for making the test unstable
Performance: No threshold configured for making the test failure
Performance: File kpi.jtl reported 0.0% of errors [SUCCESS]. Build status is: SUCCESS
Finished: SUCCESS
See the How to Run Taurus with the Jenkins Performance Plugin article for comprehensive information and instructions, just in case.

Why autoextend on Oracle XE did not work

We had a problem in our prod environment. Suddenly this exception began to appear:
ORA-01654: unable to extend index EMA.TRANSFERI2 by 128 in tablespace SYSTEM
As a solution to the problem my colleague added a new datafile. But the question is: why didn't the autoextend mechanism work? I'm not a DBA, but I checked the configuration and it looks OK to me. It occurs only in the prod environment, so I would rather avoid experimenting.
We have the table in the SYSTEM tablespace, which I already know should be moved to the USERS tablespace. But anyway, autoextend should also work on the SYSTEM tablespace. Here is my configuration of the table, datafiles and tablespace:
TABLESPACE_NAME | PCT_FREE | PCT_USED | INITIAL_EXTENT | NEXT_EXTENT | MIN_EXTENTS | MAX_EXTENTS | PCT_INCREASE
SYSTEM | 10 | 40 | 65536 | 1048576 | 1 | 2147483645 | null
FILE_NAME | FILE_ID | TABLESPACE_NAME | BYTES | BLOCKS | STATUS | RELATIVE_FNO | AUTOEXTENSIBLE | MAXBYTES | MAXBLOCKS | INCREMENT_BY | USER_BYTES | USER_BLOCKS | ONLINE_STATUS
/u01/app/oracle/oradata/XE/system.dbf | 1 | SYSTEM | 629145600 | 76800 | AVAILABLE | 1 | YES | 629145600 | 76800 | 1280 | 628097024 | 76672 | SYSTEM
/u01/app/oracle/oradata/XE/system2.dbf | 5 | SYSTEM | 1048576000 | 128000 | AVAILABLE | 5 | YES | 2147483648 | 262144 | 25600 | 1047527424 | 127872 | SYSTEM
TABLESPACE_NAME | BLOCK_SIZE | INITIAL_EXTENT | NEXT_EXTENT | MIN_EXTENTS | MAX_EXTENTS | MAX_SIZE | PCT_INCREASE | MIN_EXTLEN | STATUS | CONTENTS | ALLOCATION_TYPE | SEGMENT_SPACE_MANAGEMENT | BIGFILE
SYSTEM | 8192 | 65536 | null | 1 | 2147483645 | 2147483645 | 65536 | ONLINE | PERMANENT | LOCAL | SYSTEM | MANUAL | NO
The MAXBYTES value for your system.dbf file is set to 629145600, so when your file size reached that limit, it couldn't be extended any further. It had autoextended up to that point, but wouldn't extend beyond the soft limit that had been specified for the file. That was set when the tablespace was created, using the autoextend MAXSIZE clause.
The limit may have been set because of the size of the underlying file system, to cause an error in case of runaway/unexpected growth, unintentionally, or for some other reason now known only to whoever set the database up.
As an alternative to adding a second data file, your DBA could have increased the soft limit on the existing file with alter database. But neither should be done lightly; the reason for the original restriction should be understood (especially if the filesystem could run out of space as a result of an increase) and the reason for growth should be examined too.
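For illustration only, that alternative might look roughly like the following; the datafile path is taken from the question and the sizes are placeholders, so check the filesystem's free space before raising the limit:
-- Compare the current size with the autoextend ceiling
SELECT file_name, bytes, maxbytes, autoextensible
  FROM dba_data_files
 WHERE tablespace_name = 'SYSTEM';
-- Raise the soft limit on the existing datafile (placeholder sizes)
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/XE/system.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;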

Heroku not listing pg_search_documents table

This is pretty weird, but for some reason Heroku doesn't seem to show the pg_search_documents table when I list tables using the heroku-sql-console.
>> heroku sql
SQL> show tables
+------------------------+
| table_name |
+------------------------+
| activity_notifications |
| attachments |
| businesses |
| color_modes |
| comments |
| counties |
| customer_employees |
| customers |
| delayed_jobs |
| file_imports |
| invitations |
| invoices |
| jobs |
| paper_stocks |
| paper_weights |
| quotes |
| rails_admin_histories |
| schema_migrations |
| tax_rates |
| users |
+------------------------+
As you can see, no mention of pg_search.
Then, in the same session,
SQL> select * from pg_search_documents;
+---------------------------------------------------------------------------------------------------------------------+
| id | content | searchable_id | searchable_type | created_at | updated_at |
+---------------------------------------------------------------------------------------------------------------------+
| 3 | Energy Centre | 3 | Customer | 2012-12-03 19:33:55 -0800 | 2012-12-03 19:33:55 -0800 |
+---------------------------------------------------------------------------------------------------------------------+
It's also interesting that the show tables command lists only 20 tables whereas heroku pg:info says there are 21.
The reason this is a problem rather than a curiosity is that I can't get heroku db:pull to pull down the pg_search_documents table (everything else pulls fine), and I can't test migrations on production data.
I'm using PG version 9.1.6 on Heroku and PostgreSQL 9.2.1 locally. Also PgSearch 0.5.7.
Any ideas what the issue is?
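One way to narrow this down, assuming you can open a psql session against the Heroku database (for example with heroku pg:psql), is to check which schema the table actually lives in, since a tool that only lists the default schema would miss it:
SELECT table_schema, table_name
  FROM information_schema.tables
 WHERE table_name = 'pg_search_documents';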
