Let's say I have a database table called 'options' with a corresponding model called Option. The structure of this table is simple, as follows ...
id -> primary key, auto increment
name -> key
value -> value for the key
Sample data rows could be as follows ...
id   name                       value
---  -------------------------  ----------
1    default_view               DAILY
2    show_registration_number   0
3    notification_method        IMMEDIATE
What I want is for all the options (keys) to be accessible to me as method names.
For example, if I do the following ...
@options = Option.find(:all)
is it possible to access the data like @options.default_view, which should return the value 'DAILY', and similarly @options.show_registration_number, which should return the value 0?
Also, if that is possible, would modification be permitted as well, e.g. @options.default_view = 'MONTHLY', which should update the corresponding record in the database?
This will get you almost the answer you were looking for: http://code.dblock.org/how-to-define-enums-in-ruby
It relies on const_missing and assumes that elements of your "enum" are defined as constants, in your case Option::default_view.
However, it is easy to see how to adapt this code to use method_missing so that you can do Option.default_view.
Another example of this same approach is contained in the rails-settings gem, so you can browse its code for the answer you are looking for.
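For illustration, here is a minimal sketch of that method_missing adaptation (this is not the dblock or rails-settings code; it assumes a reasonably recent Rails version and an Option model with name and value columns):

class Option < ActiveRecord::Base
  # Treat unknown class methods as lookups/updates against the options table.
  def self.method_missing(method, *args)
    key = method.to_s
    if key.end_with?("=")                        # writer: Option.default_view = 'MONTHLY'
      record = find_or_initialize_by(name: key.chomp("="))
      record.update(value: args.first)
    elsif (record = find_by(name: key))          # reader: Option.default_view
      record.value
    else
      super
    end
  end

  def self.respond_to_missing?(method, include_private = false)
    where(name: method.to_s.chomp("=")).exists? || super
  end
end

Option.default_view             #=> "DAILY"
Option.default_view = "MONTHLY" # updates the row whose name is 'default_view'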
I have a table that looks like the following:
ID          City        Code
"1005AE"    "Oakland"   "Value1"
"1006BR"    "St.Louis"  "Value2"
"102AC"     "Miami"     "Value1"
"103AE"     "Denver"    "Value3"
And I want to transpose/pivot the Code examples/values into column attributes like this:
ID        City        Value1  Value2  Value3
"1005"    "Oakland"   1       0       0
"1006"    "St.Louis"  0       1       0
"1012"    "Miami"     1       0       0
"1030"    "Denver"    0       0       1
Note that the ID field contains numeric values encoded as strings, because RapidMiner had trouble importing bigint datatypes. So that is a separate issue I need to fix--but my focus here is the pivoting or transposing of the data.
I read through a few Stack Overflow posts, listed below, which suggested the Pivot or Transpose operations. I tried both of these, but for some reason I get either a huge table that also turns City into a dummy variable, or just a subset of the attribute columns.
How can I set the rows to be the attributes and columns the samples in rapidminer?
Rapidminer data transpose equivalent to melt in R
Any suggestions would be appreciated.
In pivoting, the group attribute parameter dictates how many rows there will be, and the index attribute parameter dictates the last part of the name of each new attribute. The first part of each new attribute's name is driven by any other regular attribute that is neither group nor index, and the value within the cell is the value found in the original example set.
This means you have to:
- create a new attribute with a constant value of 1 (use Generate Attributes for this);
- set the role of the ID attribute to ID so that it is no longer a regular attribute (use Set Role for this);
- in the Pivot operator, set the group attribute to City and the index attribute to Code.
The end result is close to what you want. The final steps are, firstly, to set missing values to 0 (use Replace Missing Values for this) and, secondly, to rename the attributes to match what you want (use Rename for this).
You will have to join the result back to the original since the pivot operation loses the ID.
You can find a worked example here http://rapidminernotes.blogspot.co.uk/2011/05/worked-example-using-pivot-operator.html
My postgres json data looks like this:
changes: {"data"=>[nil, {"margin"=>0.0, "target"=>0.77777}]}
The field name is changes and it's a postgres json data type.
How do I write an active record query to check for rows where data has a key named margin?
For example, the first record would be included while the second would not:
# record 1
changes: {"data"=>[nil, {"margin"=>0.0, "target"=>0.77777}]}
# record 2
changes: {"data"=>[nil, {"foo"=>0.0, "target"=>0.77777}]}
I've tried something like the following but it's not working:
ModelName.where("changes -> 'data' ?| array[:keys]", keys: ['margin'])
Okay, this is what I found
Query based on JSON document
The -> operator returns the original JSON type (which might be an object), whereas ->> returns text.
Also, this post is probably what you're looking for.
Maybe try something like
ModelName.where("changes->>'margin' > -1")
The answer:
ModelName.where("changes -> 'data' -> 1 ? 'margin'")
The query logic is roughly: in the changes field, move to the data object, get the second element (zero-based index), and see if it contains a key named margin.
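If you need this condition in more than one place, wrapping it in a scope keeps the raw SQL fragment in one spot. A sketch (with_margin is just a name I made up):

class ModelName < ActiveRecord::Base
  # Rows whose changes->'data' array has a second element containing a "margin" key.
  # The ? here is Postgres's key-existence operator, not an ActiveRecord bind
  # placeholder; no bind arguments are passed along with the string.
  scope :with_margin, -> { where("changes -> 'data' -> 1 ? 'margin'") }
end

ModelName.with_margin.count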
I'm building a rails app for managing a queue of work items. I have several types of users ("access levels") to whom I want to auto-assign these work items.
The end goal is an "Auto-assign" button on one of my views that will automatically grab the next work item based on a priority, which is defined by the user's access level.
I'm trying to set up a class method in my work_item model to automatically sort work items by type based on the user's access level. I am looking at something like this:
def self.auto_assign_next(access_level)
  case
  when access_level == 2
    where("completed = 'f'").order("requested_time ASC").limit(1)
  when access_level > 2
    where("completed = 'f'").order("CASE WHEN form='supervisor' THEN 1 WHEN form='installer' THEN 2 WHEN form='repair' THEN 3 WHEN form='mail' THEN 4 WHEN form='hp' THEN 5 ELSE 6 END").limit(1)
  end
end
This isn't very DRY, though. Ideally I'd like the sort order to be configurable by administrators, so maybe setting up a separate table on which the sort order is kept would be best. The problem with that idea is that I have no idea how to pass the priority order on that table to the [postgre]SQL query. I'm new to SQL in general and somewhat lost with this one. Does anybody have any suggestions as to how this should be handled?
One fairly simple approach starts with turning your case statement into a new table, listing form values versus what precedence value they should be sorted by:
id | form | precedence
-----------------------------------
1 | supervisor | 1
2 | installer | 2
(etc)
Create a model for this, say, FormPrecedences (not a great name, but I don't totally grok your data model so pick one that better describes it). Then, your query can look like this (note: I'm assuming your current model is called WorkItems):
when access_level > 2
  joins("LEFT JOIN form_precedences ON form_precedences.form = work_items.form")
    .where("completed = 'f'")
    .order("COALESCE(form_precedences.precedence, 6)")
    .limit(1)
The way this works isn't as complicated as it looks. A "left join" in SQL simply takes all the rows of the table on the left (in this case, work_items) and, for each row, finds all the matching rows from the table on the right (form_precedences, where "matching" is defined by the bit after the "ON" keyword: form_precedences.form = work_items.form), and emits one combined row. If no match is found, a LEFT JOIN will still emit a row, but with all the right-hand values being NULL. A normal join would skip any rows with no right-hand match found.
Anyway, with the precedence data joined on to our work items, we can just sort by the precedence value. But, in case no match was found during the join above, that value will be NULL -- so, I use COALESCE (which returns the first of its arguments that's not NULL) to default to a precedence of 6.
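If it helps, here is a rough sketch of a migration for that lookup table (the table and column names follow the join above; the rest is an assumption, so adjust to taste):

class CreateFormPrecedences < ActiveRecord::Migration
  # (on Rails 5+ you would subclass ActiveRecord::Migration[5.2] or similar)
  def change
    create_table :form_precedences do |t|
      t.string  :form,       null: false  # matches work_items.form
      t.integer :precedence, null: false  # lower numbers sort first
    end
    add_index :form_precedences, :form, unique: true
  end
end

Administrators can then change the processing order simply by editing rows in this table, which covers the configurability you mentioned.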
Hope that helps!
I have three rails objects: User, DemoUser and Stats. Both the User and the DemoUser have many stats associated with them. The User and Stats tables are stored on Postgresql (using ActiveRecord). The DemoUser is stored in redis. The id for the DemoUser is a (random) string. The id for the User is a (standard-rails) incrementing integer.
The stats table has a user_id column that can contain either the User id or the DemoUser id. For that reason, the user_id column is a string, rather than an integer.
There isn't an easy way to translate from the random string to an integer, but there's a very easy way to translate the integer id to a string (42 -> "42"). The ids are guaranteed not to overlap (there won't be a User instance with the same id as a DemoUser, ever).
I have some code that manages those stats. I'd like to be able to pass around a some_user instance (which can be either a DemoUser or a User) and then use its id to fetch Stats, update them, etc. It would also be nice to be able to define a has_many on the User model, so I can do things like user.stats
However, operations like user.stats would create a query like
SELECT "stats".* FROM "stats" WHERE "stats"."user_id" = 42
which then breaks with PG::UndefinedFunction: ERROR: operator does not exist: character varying = integer
Is there a way to either let the database (Postgresql), or Rails do auto-translation of the ids on JOIN? (the translation from integer to string should be simple, e.g. 42 -> "42")
EDIT: updated the question to try to make things as clear as possible. Happy to accept edits or answer questions to clarify anything.
You can't define a foreign key between two types that don't have built-in equality operators.
The correct solution is to change the string column to be an integer.
In your case you could create a user-defined = operator for varchar = integer, but that would have messy side effects elsewhere in the database; for example, it would allow bogus code like:
SELECT 2014-01-02 = '2014-01-02'
to run without an error. So I'm not going to give you the code to do that. If you truly feel it's the only solution (which I don't think is likely to be correct) then see CREATE OPERATOR and CREATE FUNCTION.
One option would be to have separate user_id and demo_user_id columns in your stats table. The user_id would be an integer that you could use as a foreign key to the users table in PostgreSQL and the demo_user_id would be a string that would link to your Redis database. If you wanted to treat the database properly, you'd use a real FK to link stats.user_id to users.id to ensure referential integrity and you'd include a CHECK constraint to ensure that exactly one of stats.user_id and stats.demo_user_id was NULL:
check ((user_id is null) <> (demo_user_id is null))
You'll have to fight ActiveRecord a bit to properly constrain your database, of course; AR doesn't believe in fancy things like FKs and CHECKs, even though they are necessary for data integrity. You'd have to keep demo_user_id under control by hand, though; some sort of periodic scan to make sure the values still link up with entries in Redis would be a good idea.
Now your User can look up stats using a standard association to the stats.user_id column and your DemoUser can use stats.demo_user_id.
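A rough sketch of what the Rails side could look like with that two-column layout (the DemoUser details are assumptions, since it is Redis-backed):

class User < ActiveRecord::Base
  # class_name is needed because the model is named Stats rather than Stat
  has_many :stats, class_name: "Stats"   # joins on the integer stats.user_id
end

class Stats < ActiveRecord::Base
  belongs_to :user   # nil for DemoUser-owned rows; mark it optional on Rails 5+
end

class DemoUser
  # Redis-backed; `id` is assumed to return the random string key
  def stats
    Stats.where(demo_user_id: id)
  end
end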
For the time being, my 'solution' is not to use a has_many in Rails, but I can define some helper functions in the models if necessary, e.g.
class User < ActiveRecord::Base
  # ...
  def stats
    Stats.where(user_id: self.id.to_s)
  end
  # ...
end
Also, I would define some helper scopes to help enforce the to_s translation:
class Stats < ActiveRecord::Base
  scope :for_user_id, -> (id) { where(user_id: id.to_s) }
  # ...
end
This should allow calls like
user.stats and Stats.for_user_id(user.id)
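The DemoUser side can expose the same interface (a sketch; it assumes DemoUser#id already returns the random string key):

class DemoUser
  # ...
  def stats
    Stats.for_user_id(id)   # id is already a string, so the to_s in the scope is a no-op
  end
  # ...
end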
I think I misunderstood a detail of your issue before because it was buried in the comments.
(I strongly suggest editing your question to clarify points when comments show that there's something confusing/incomplete in the question).
You seem to want a foreign key from an integer column to a string column because the string column might be an integer, or might be some unrelated string. That's why you can't make it an integer column - it's not necessarily a valid number value, it might be a textual key from a different system.
The typical solution in this case would be to have a synthetic primary key and two UNIQUE constraints instead, one for keys from each system, plus a CHECK constraint preventing both from being set. E.g.
CREATE TABLE my_referenced_table (
    id serial,
    system1_key integer,
    system2_key varchar,
    CONSTRAINT exactly_one_key_must_be_set
        CHECK ((system1_key IS NULL) != (system2_key IS NULL)),
    UNIQUE(system1_key),
    UNIQUE(system2_key),
    PRIMARY KEY (id),
    ... other values ...
);
You can then have a foreign key referencing system1_key from your integer-keyed table.
It's not perfect, as it doesn't prevent the same value appearing in two different rows, one for system1_key and one for system2_key.
So an alternative might be:
CREATE TABLE my_referenced_table (
    the_key varchar primary key,
    the_key_ifinteger integer,
    CONSTRAINT integerkey_must_equal_key_if_set
        CHECK (the_key_ifinteger IS NULL OR (the_key_ifinteger::varchar = the_key)),
    UNIQUE(the_key_ifinteger),
    ... other values ...
);

CREATE OR REPLACE FUNCTION my_referenced_table_copy_int_key()
RETURNS trigger LANGUAGE plpgsql STRICT
AS $$
BEGIN
    IF NEW.the_key ~ '^[\d]+$' THEN
        NEW.the_key_ifinteger := CAST(NEW.the_key AS integer);
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER copy_int_key
BEFORE INSERT OR UPDATE ON my_referenced_table
FOR EACH ROW EXECUTE PROCEDURE my_referenced_table_copy_int_key();
which copies the integer value if it's an integer, so you can reference it.
All in all though I think the whole idea is a bit iffy.
I think I may have a solution for your problem, but maybe not a massively better one:
class User < ActiveRecord::Base
  has_many :stats, primary_key: "id_s"
  def id_s
    read_attribute(:id).to_s
  end
end
This still uses a second, virtual column, but it may be handier to use with Rails associations, and it is database agnostic.
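Usage then looks like a normal association (a sketch; the SQL shown is approximate, and since the model is named Stats rather than Stat you may also need class_name: "Stats" on the has_many):

user = User.find(42)
user.stats.to_a
# Roughly: SELECT "stats".* FROM "stats" WHERE "stats"."user_id" = '42'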
I have heard that specifying records through tuples in the code is a bad practice: I should always use record fields (#record_name{record_field = something}) instead of plain tuples {record_name, value1, value2, something}.
But how do I match the record against an ETS table? If I have a table with records, I can only match with the following:
ets:match(Table, {'$1','$2','$3',something})
It is obvious that once I add some new fields to the record definition this pattern match will stop working.
Instead, I would like to use something like this:
ets:match(Table, #record_name{record_field=something})
Unfortunately, it returns an empty list.
The cause of your problem is what the unspecified fields are set to when you do a #record_name{record_field=something}. This is the syntax for creating a record; here you are creating a record/tuple that ETS will interpret as a pattern. When you create a record, all the unspecified fields get their default values: either the ones defined in the record definition, or the default default value undefined.
So if you want to give fields specific values then you must explicitly do this in the record, for example #record_name{f1='$1',f2='$2',record_field=something}. Often when using records and ets you want to set all the unspecified fields to '_', the "don't care variable" for ets matching. There is a special syntax for this using the special, and otherwise illegal, field name _. For example #record_name{record_field=something,_='_'}.
Note that in your example you have set the record name element in the tuple to '$1'. The tuple representing a record always has the record name as the first element. This means that when you create the ets table you should set the key position with {keypos,Pos} to something other than the default 1; otherwise there won't be any indexing and, worse, if you have a table of type 'set' or 'ordered_set' you will only get one element in the table. To get the index of a record field you can use the syntax #Record.Field, in your example #record_name.record_field.
Try using
ets:match(Table, #record_name{record_field=something, _='_'})
See this for explanation.
The format you are looking for is #record_name{record_field=something, _ = '_'}
http://www.erlang.org/doc/man/ets.html#match-2
http://www.erlang.org/doc/programming_examples/records.html (see 1.3 Creating a record)