Randomly lost data on activeadmin - ruby-on-rails

I have a weird problem with ActiveAdmin. Randomly and unexpectedly, some data on a resource is not shown (as if it were empty in the DB). When I check the DB, all the data is there and correct.
The data is also empty when I fetch the resource as .json.
The only solution I have found is to restart nginx, but that is a problem because until then the client does not see the information in the system.
I have checked the logs, but I do not see anything relevant. Is there any way to get more information so I can debug this kind of problem?
System versions:
Rails 5.1.3
ruby 2.2.5p319 (2016-04-26 revision 54774) [x86_64-linux]
nginx/1.10.3 Phusion_Passenger/5.1.4
Finally I found the problem, but I do not understand what is wrong with it...
These are the files involved:
https://gist.github.com/cpfarher/bfde79dd9c3772575b03712c0a397110
The problem occurs when you open the lines/1/edit route. After opening it, the data in https://gist.github.com/cpfarher/bfde79dd9c3772575b03712c0a397110#file-admin_recipe-rb is not shown.
If you change the "select distinct" statement on this line:
https://gist.github.com/cpfarher/bfde79dd9c3772575b03712c0a397110#file-recipe_fail-rb-L10
everything works fine and the field in https://gist.github.com/cpfarher/bfde79dd9c3772575b03712c0a397110#file-admin_recipe-rb-L13 is shown correctly. But if the scope uses select distinct, the field is not shown...
On the other hand, if you use https://gist.github.com/cpfarher/bfde79dd9c3772575b03712c0a397110#file-recipe_ok-rb instead of https://gist.github.com/cpfarher/bfde79dd9c3772575b03712c0a397110#file-recipe_fail-rb, everything works fine... :|
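A guess at the mechanism, since the gist contents are not inlined here: select with an explicit DISTINCT column list tells ActiveRecord to load only the listed columns, while .distinct keeps the full SELECT "recipes".* and merely adds the keyword. A minimal sketch of that difference (the Recipe model, scope names and column are assumed, not taken from the gist):

class Recipe < ApplicationRecord
  # Assumed failing variant: a raw SELECT DISTINCT with an explicit column
  # list makes ActiveRecord load only that column, so every other attribute
  # of the returned records is simply not loaded.
  scope :distinct_names_raw, -> { select("DISTINCT recipes.name") }

  # Assumed working variant: .distinct only adds the DISTINCT keyword and
  # keeps SELECT "recipes".*, so all attributes stay available to ActiveAdmin.
  scope :distinct_names, -> { distinct }
end

If recipe_fail.rb does something like the first scope and recipe_ok.rb something like the second, that would explain why the admin field only renders in the second case.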

Related

Ruby on Rails database column output is mangled

I'm connecting my Rails app to a SQL Server database that I do not control. One of the tables has a column of the type geometry. I can view it in BeeKeeper Studio just fine, but when I attempt to view the column/attribute in my Rails console, I get garbled output.
Here is some of the data as seen in BeeKeeper Studio:
{"srid":4248,"version":1,"points":[{"x":-60.161088,"y":4.53526,"z":null,"m":null},{"x":-60.160342,"y":...
And here is some of the output from the Rails console:
\x98\u0010\u0000\u0000\u0001\u0004\xE5\b\u0000\u0000\x97\xE3\u0015\x88\x9E\u0014N\xC0\x87m\
Now, Rails tells me that that string is encoded as "UTF-8" -- I find that hard to believe.
Can someone help me figure out what's wrong here? I get the feeling that either Rails is doing something bad to the string when it's taken from the database, or that I need to specify some encoding/collation in my database settings, but I don't know what.
If it helps, both the database's and the table's collation are "SQL_Latin1_General_CP1_CI_AS".
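No answer is quoted for this one, but as a hedged sanity check (the Parcel model and geom column are made-up names) you can ask SQL Server itself to hand the value over as well-known text, which at least confirms the stored bytes are fine and only the client-side parsing is missing:

class Parcel < ApplicationRecord
  # Sketch only: Parcel and the geom column are illustrative names.
end

# STAsText() is SQL Server's built-in geometry method; it returns WKT as
# nvarchar, so the value reaches Ruby as an ordinary readable string instead
# of the internal binary serialization shown above.
wkt = Parcel.select("geom.STAsText() AS geom_wkt").take.geom_wkt
# e.g. "LINESTRING (-60.161088 4.53526, -60.160342 ...)"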

neo4jrb DeprecatedSchemaDefinitionError

I have a problem with Neo4jrb 8.1.1, Rails 5.1.1, and Neo4j 3.2.0 CE.
I have a City model with an Int id; the DB is read-only, with data imported from CSV files.
What should I declare to get rid of the error?
So far, I thought that declaring
id_property :id
property :name
would be fine but it doesn't work.
Overall, I'm annoyed with these new migration files because the Neo4j DB is already set up; I'm not supposed to write or modify indexes or constraints.
What's the error message you're seeing? I imagine you can solve the issue by creating an initializer and manually adding the relevant constraint(s) to the ModelSchema. Something like Neo4j::ModelSchema.add_defined_constraint(City, :id). It's also possible that this could be done inside the Model itself. Some experimentation should solve the problem.
See the source code for more info:
https://github.com/neo4jrb/neo4j/blob/8.1.x/lib/neo4j/model_schema.rb
http://www.rubydoc.info/gems/neo4j/Neo4j/ModelSchema
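A concrete, hedged version of that suggestion (the initializer filename is arbitrary; the add_defined_constraint call is the one mentioned above):

# config/initializers/neo4j_schema.rb (filename is arbitrary)
# Register the :id constraint on City by hand so neo4jrb stops raising
# DeprecatedSchemaDefinitionError for a database you are not allowed to migrate.
Rails.application.config.to_prepare do
  Neo4j::ModelSchema.add_defined_constraint(City, :id)
end

Wrapping it in to_prepare just avoids referencing the City constant before autoloading has set it up.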

What is the source of "unknown OID" errors in Rails?

When replicating an app to production, my POSTGIS table columns started misbehaving, with Rails informing me there was an "unknown OID 26865" and that the fields would be treated as String.
Instead of current_pos yielding e.g.
#<RGeo::Geographic::SphericalPointImpl:0x22fabdc "POINT (13.39318248760133 52.52908798020595)">, I would get 0101000020E6100000FFDD958664C92A403619DEE6B2434A40. It looked like activerecord-postgis-adapter was not installed, or was installed incorrectly, but I eliminated that possibility by testing for the existence of the data type RGeo::Feature::Point and by test-assigning
current_pos = "POINT (13.39318248760133 52.52908798020595)"
to the field, which proceeded without error but then yielded another incomprehensible hex string like the one above.
Also, strangely enough, POSTGIS was working correctly within the database, e.g. giving correct results for an ST_DISTANCE query. So it was a very limited problem: writing, parsing on write (from Point to hex format), manipulating via SQL, and reading all worked; only the parsing on read didn't.
When I tried to use migrations to ensure the database column would have the correct type, the migrations failed, giving
undefined method `st_point' for #<ActiveRecord::ConnectionAdapters::PostgreSQL::TableDefinition:0x00000005cb80b8>
I spent several hours trying all kinds of solutions, even re-installing the server from scratch, double-checking version numbers of everything, installing a slightly newer version of Ruby and a slightly older version of POSTGIS (to match my other environment), exporting the database and starting with a clean one, and so on. After I had done migrations and arrived at the "undefined method st_point" error, I was finally able to find the solution via Google, way down in a Github issue, and it's really simple:
In config/database.yml, swap out postgres:// for postgis:// in the database url. If you're using Heroku, this may require some ugly manipulation:
production:
  url: <%= ENV.fetch('DATABASE_URL', '').sub(/^postgres/, "postgis") %>
So silly...
Do not forget to add activerecord-postgis-adapter to your Gemfile so #Sprachprofi's solution can run.
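Putting the two pieces together as a hedged sketch (the vehicles table and the 5.1 migration version are assumptions; current_pos is the column from the question):

# Gemfile
gem 'activerecord-postgis-adapter'

# db/migrate/..._add_current_pos_to_vehicles.rb (hypothetical table)
# With the gem installed and the database URL switched to postgis://, the
# st_point column type is available and the earlier
# "undefined method `st_point'" error goes away.
class AddCurrentPosToVehicles < ActiveRecord::Migration[5.1]
  def change
    add_column :vehicles, :current_pos, :st_point, geographic: true
  end
end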

Updating a Rails model's attributes through mongoid using normal persistence methods

I have been chasing an issue down for a while now, and still cannot figure out what's happening. I am unable to edit documents made from my gem through normal persistence methods, like update or even just editing attributes and calling save.
For example, calling:
Scram::Policy.where(id: a.id).first.update!(priority: 12345)
will not work at all (there are no errors, but the document is not updated), yet the following works fine:
Scram::Policy.collection.find( { "_id" => a.id } ).update_one( { "$set" => {"priority" => 12345}})
I am not sure what I'm doing wrong. Calling update and save on any other model works fine. The document in question is from my gem: https://github.com/skreem/scram/blob/master/lib/scram/app/models/policy.rb
I cannot edit its embedded documents either (targets). I have tried removing the store_in macro and specifying exactly which class to use via inverse_of and class_name in a fake app that reimplements these classes: https://github.com/skreem/scram-implementation/blob/master/lib/scram/lib/scram/app/models/policy.rb
I've tried reimplementing the entire gem into a clean fake rails application: https://github.com/skreem/scram-implementation
Running these in rails console demonstrates how updating does not work:
https://gist.github.com/skreem/c70f9ddcc269e78015dd31c92917fafa
Is this an issue with mongoid concerning embedded documents, or is there some small intricacy I am missing in my code?
EDIT:
The issue continues if you run irb from the root of my gem (scram) and then run the following:
require "scram.rb"
Mongoid.load!('./spec/config/mongoid.yml', :test)
Scram::Policy.first.update!(priority: 32) #=> doesn't update the document at all
Scram::Policy.where(id: "58af256f366a3536f0d54a61").update(priority: 322) #=> works just fine
Oddly enough, the following doesn't work:
Scram::Policy.where(id: "58af256f366a3536f0d54a61").first.update(priority: 322)
It seems like first isn't retrieving what I want. Doing an equality comparison shows that the first document is equal to the first returned by the where query.
Well. As it turns out, you cannot call a field collection_name or else mongoid will ensure bad things happen to you. Just renaming the field solved all my issues. Here's the code within mongoid that was responsible for the collision: https://github.com/mongodb/mongoid/blob/master/lib/mongoid/persistence_context.rb#L82
Here's the commit within my gem that fixed my issue: https://github.com/skreem/scram/commit/25995e955c235b24ac86d389dca59996fc60d822
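A hedged sketch of what the rename amounts to (the replacement field name is illustrative; see the linked commit for the real one):

module Scram
  class Policy
    include Mongoid::Document

    # Broken: a field named collection_name defines a collection_name accessor
    # that shadows the method Mongoid's PersistenceContext relies on, so
    # update!/save silently misbehave (no error, but no visible update).
    # field :collection_name, type: String

    # Working: any non-reserved field name avoids the collision.
    field :target_collection, type: String
  end
end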
Edit:
Make sure to update your Mongoid version if you have dealt with this issue and did not get any warnings! After creating an issue on the mongoid issue tracker, PersistenceContext was added to a list of prohibited methods. Now, attempting to use collection_name or collection as a field will cause mongoid to spit out a couple of warnings.
Fix commit: https://github.com/mongodb/mongoid/commit/6831518193321d2cb1642512432a19ec91f4b56d

Using mongomapper to execute server runCommand geoNear

I would very much like to use the mongo geoNear command as discussed here.
Here is the command I entered in my rails console with the accompanying error message.
MongoMapper.database.command({ 'geoNear' => "trips", 'near' => [45,45]})
Mongo::OperationFailure: Database command 'geoNear' failed:
(errmsg: 'more than 1 geo indexes :('; ok: '0.0').
I cannot make sense of the error message: it is supposedly impossible to have more than one geo index, and I am certain that I have only created one.
Based on this stackoverflow question I believe I am wording the query correctly. Does anyone understand that error message? How would I go about destroying and recreating my indexes?
I am using Rails 3.1 with MongoDB v2.0 and the mongo Ruby gem v1.5.1.
I really asked this too soon; maybe I should delete it? Somehow there were in fact too many geo indexes, because deleting the index and recreating it fixed the problem.
MongoMapper.database.collection('trips').drop_indexes
Trip.ensure_index [[:route, '2d']]
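If you hit the same error, a hedged way to confirm the duplicate before dropping everything (index_information is the legacy 1.x driver call that should be available with the mongo gem v1.5.1):

# Inspect the indexes MongoDB actually has on the collection; the 1.x driver
# returns a Hash keyed by index name.
MongoMapper.database.collection('trips').index_information.each do |name, spec|
  puts "#{name}: #{spec['key'].inspect}"
end
# Any extra entry whose key contains '2d', besides the expected route index,
# is the second geo index the geoNear error is complaining about.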
