See the following output:
1.9.3p194 :001 > player = Player.randomize_for_market
=> #<Player id: nil, name: "Gale Bridges", age: 19, energy: 100, attack: 6, defense: 4, stamina: 5, goal_keeping: 3, power: 4, accuracy: 5, speed: 5, short_pass: 5, ball_controll: 4, long_pass: 6, regain_ball: 5, contract_id: nil, created_at: nil, updated_at: nil>
1.9.3p194 :002 > player.save!
(0.2ms) BEGIN
SQL (20.5ms) INSERT INTO "players" ("accuracy", "age", "attack", "ball_controll", "contract_id", "created_at", "defense", "energy", "goal_keeping", "long_pass", "name", "power", "regain_ball", "short_pass", "speed", "stamina", "updated_at") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17) RETURNING "id" [["accuracy", 5], ["age", 19], ["attack", 6], ["ball_controll", 4], ["contract_id", nil], ["created_at", Fri, 29 Jun 2012 04:02:34 UTC +00:00], ["defense", 4], ["energy", 100], ["goal_keeping", 3], ["long_pass", 6], ["name", "Gale Bridges"], ["power", 4], ["regain_ball", 5], ["short_pass", 5], ["speed", 5], ["stamina", 5], ["updated_at", Fri, 29 Jun 2012 04:02:34 UTC +00:00]]
(16.6ms) COMMIT
=> true
1.9.3p194 :003 > YAML::load(YAML::dump(Player.randomize_for_market)).save!
(0.2ms) BEGIN
(0.2ms) COMMIT
=> true
Why does this happen, and how can I avoid it?
There is no before/after save/create/commit callback on the model. I'm using Rails 3.2.
Table "public.players"
Column | Type | Modifiers
--------------+-----------------------------+------------------------------------------------------
id | integer | not null default nextval('players_id_seq'::regclass)
name | character varying(255) | not null
age | integer | not null
energy | integer | not null
attack | integer | not null
defense | integer | not null
stamina | integer | not null
goal_keeping | integer | not null
power | integer | not null
accuracy | integer | not null
speed | integer | not null
short_pass | integer | not null
ball_controll | integer | not null
long_pass | integer | not null
regain_ball | integer | not null
contract_id | integer |
created_at | timestamp without time zone | not null
updated_at | timestamp without time zone | not null
Indexes:
"players_pkey" PRIMARY KEY, btree (id)
Edit: Answering "Why do you expect YAML::load(YAML::dump(Player.randomize_for_market)).save! to do anything?"
Because it serializes an object and recovers it?
example:
1.9.3p194 :006 > p = Player.randomize_for_market
=> #<Player id: nil, name: "Vincenzo Allen", age: 23, energy: 100, attack: 2, defense: 8, stamina: 6, goal_keeping: 3, power: 5, accuracy: 6, speed: 5, short_pass: 6, ball_controll: 5, long_pass: 6, regain_ball: 5, contract_id: nil, created_at: nil, updated_at: nil>
1.9.3p194 :007 > p
=> #<Player id: nil, name: "Vincenzo Allen", age: 23, energy: 100, attack: 2, defense: 8, stamina: 6, goal_keeping: 3, power: 5, accuracy: 6, speed: 5, short_pass: 6, ball_controll: 5, long_pass: 6, regain_ball: 5, contract_id: nil, created_at: nil, updated_at: nil>
1.9.3p194 :008 > YAML::load(YAML::dump(p))
=> #<Player id: nil, name: "Vincenzo Allen", age: 23, energy: 100, attack: 2, defense: 8, stamina: 6, goal_keeping: 3, power: 5, accuracy: 6, speed: 5, short_pass: 6, ball_controll: 5, long_pass: 6, regain_ball: 5, contract_id: nil, created_at: nil, updated_at: nil>
Note that the return value of p is the same as the return value of YAML::load.
This may help to answer your question:
:001 > article = Article.new
#<Article:0x102d16b10> { ... }
:002 > article.persisted?
false
:003 > dumped = YAML::dump(article)
"--- !ruby/object:Article ... "
:004 > loaded = YAML::load(dumped)
#<Article:0x102cf5500> { ... }
:005 > loaded.persisted?
true
Looking into the Rails source code for ActiveRecord::Base#persisted?:
def persisted?
  !(new_record? || destroyed?)
end
And for ActiveRecord::Base#new_record?:
def new_record?
  @new_record
end
The @new_record instance variable is not saved when you dump the object to YAML, and therefore it's nil when you load the object back from YAML. So ActiveRecord thinks the record has already been persisted to the database and doesn't attempt to save it.
Brandan's answer is very relevant: the object deserialized from YAML thinks it has already been persisted. Assuming loaded_obj is the object you loaded from YAML (the object you want to save), try loaded_obj.dup.save.
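A minimal sketch of that approach, using the Player model from the question (the return values in the comments mirror the behavior shown above):
loaded = YAML::load(YAML::dump(Player.randomize_for_market))
loaded.persisted?   # => true, even though no row was ever inserted

copy = loaded.dup   # dup yields a fresh, unsaved record with the same attributes
copy.new_record?    # => true
copy.save!          # now the INSERT is actually issued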
Related
I am working on a Rails app with Vue on the front end.
I have the following data:
records = [["2020-07-01", "Jamie", 66],
["2020-07-01", "Rob", 12],
["2020-08-01", "Jamie", 31],
["2020-09-01", "Jamie", 46],
["2020-09-01", "Rob", 10],
["2020-10-01", "Rob", 4]]
I want to organise this data as
final_result = [{:name=>"Jamie", :data=>[66, 31, 46, 0]},
{:name=>"Rob", :data=>[12, 0, 10, 4]}]
I was thinking of using .map, something like records.map, but I am not able to figure out how to combine the values into the data field.
As in final_result, I want to combine the values 66, 31, 46, 0 for Jamie and 12, 0, 10, 4 for Rob based on the dates: July, August, September, October.
If a value is not present for a month, it is set to 0. Please help me resolve this issue.
Update 1
The answer is not working for the data set below:
records = [
["2020-01-01", nil, nil, 0],
["2020-02-01", nil, nil, 0],
["2020-03-01", nil, nil, 0],
["2020-04-01", nil, nil, 0],
["2020-05-01", nil, nil, 0],
["2020-06-01", nil, nil, 0],
["2020-07-01", 2, "Jamie", 66],
["2020-07-01", 7, "Rob", 1],
["2020-08-01", 2, "Jamie", 29],
["2020-08-01", 7, "Rob", 2]]
In the case above, the output should be:
[{:name=>"Jamie", :data=>[0, 0, 0, 0, 0, 0, 66, 29]},
{:name=>"Rob", :data=>[0, 0, 0, 0, 0, 0, 1, 2]}]
But according to the answer, the output is:
[{:name=>"Jamie", :data=>[0, 0, 0, 0, nil, nil, 66, 29]},
{:name=>"Rob", :data=>[0, 0, 0, 0, nil, nil, 1, 2]}]
I am not able to figure out why it is showing nil values instead of zero.
records = [["2020-07-01", "Jamie", 66],
["2020-07-01", "Rob", 12],
["2020-08-01", "Jamie", 31],
["2020-09-01", "Jamie", 46],
["2020-09-01", "Rob", 10],
["2020-10-01", "Rob", 4]]
months = records.map { |record| Date.parse(record.first).month }.uniq.sort

result = []
result_hash = records.inject({}) do |memo, record|
  memo[record.second] ||= Array.new(4, 0)
  memo[record.second][Date.parse(record.first).month - months.first] = record.last
  memo
end
result_hash.each { |key, value| result << { name: key, data: value } }
p result
=>
[{:name=>"Jamie", :data=>[66, 31, 46, 0]}, {:name=>"Rob", :data=>[12, 0, 10, 4]}]
Here's my stat model.
Stat(id: integer, points: float, user_id: integer, match_id: integer, team_id: integer)
And here's my team model:
Team(id: integer, name: string)
I'm getting an error on the teams.name part; here's the error:
#<ActiveRecord::StatementInvalid: Mysql2::Error: Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'db-name.teams.name' which is not functionally dependent on columns in GROUP BY clause;
Sample stat:
{id: 1, points: 2, user_id: 1, match_id: 1, team_id: 1}
{id: 2, points: 3, user_id: 3, match_id: 1, team_id: 2}
{id: 3, points: 4, user_id: 1, match_id: 2, team_id: 1}
My current code:
sample = Stat
.joins(:user)
.joins(:team)
.select('teams.name as team_name, users.id as user_id, match_id, SUM(points) as points')
.where(user_id: params[:user_id])
.group(:match_id)
.where.not(match_id: nil)
.order("match_id DESC")
.limit(10)
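With ONLY_FULL_GROUP_BY enabled, every selected non-aggregated column has to appear in the GROUP BY clause. One possible fix is to group by those columns as well; a sketch against the code above (untested here, and not the only way to resolve it):
sample = Stat
  .joins(:user)
  .joins(:team)
  .select('teams.name AS team_name, users.id AS user_id, stats.match_id, SUM(stats.points) AS points')
  .where(user_id: params[:user_id])
  .where.not(match_id: nil)
  .group('stats.match_id, users.id, teams.name')  # every selected non-aggregated column
  .order('stats.match_id DESC')
  .limit(10)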
I am trying to order my listing objects by two columns: the created_at column and a popularity column. The popularity column is either 1 (popular listing) or 0 (normal listing).
I'd like listings to be ordered so that any popular listings come first, while still taking created_at into account.
I have a search variable which contains a collection of Listing objects (here is one of them):
#<Listing:0x007fe4031f2540
id: 7,
title: "",
description: "test description",
full_address: nil,
neighborhood_id: nil,
created_at: Fri, 09 Oct 2015 18:51:47 EDT -04:00,
updated_at: Fri, 09 Oct 2015 18:53:07 EDT -04:00,
slug: "252-7th-ave-6e",
bedrooms: 3,
bathrooms: 3,
sqft: 1200,
lat: 40.7453,
lng: -73.9955,
price: 2500,
available_date: Tue, 13 Oct 2015,
status: "available",
apartment: "6e",
street_address: "252 7th Ave",
zip_code: "10001",
user_id: 41,
rooms: 7,
building_type: nil,
showing: "Monday around 6pm",
landlord_phone: "91780322202",
landlord_email: "cghazanfar10#gmail.com",
landlord_detail: "",
landlord_name: "Cyrus",
internal_notes: "",
active_at: Fri, 09 Oct 2015 18:53:07 EDT -04:00,
landlord_id: 4,
building_id: 7,
video_viewings: nil,
deposit: 3000,
normalized_address: "252 7th Ave",
contact_phone: nil,
min_income: nil,
min_credit: nil,
lease_id: nil,
tenant_fee: 1200,
security: 2500,
other_fees: nil,
fee_descriptions: "",
listing_type: "full",
popularity: 0>
I've tried search.order("popularity DESC, created_at DESC"), but that doesn't seem to meet both criteria, which are: order the popular listings first, disregarding the fact that their created_at date might be earlier than that of non-popular listings; then just sort by created_at in descending order.
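For what it's worth, a compound order clause is the usual way to express exactly that; if an earlier .order or a default scope on search is taking precedence, reorder replaces any existing ordering. A sketch, assuming popularity never contains NULL:
# Popular listings (popularity = 1) first; within each group, newest first.
sorted = search.reorder(popularity: :desc, created_at: :desc)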
I have a table with the columns id, user_id, message_id, and message_type; for example:
id: 1, user_id: 1, message_id: 4, message_type: 'Warning'
id: 2, user_id: 1, message_id: 5, message_type: 'Warning'
id: 3, user_id: 1, message_id: 6, message_type: 'Warning'
id: 4, user_id: 2, message_id: 4, message_type: 'Error'
id: 5, user_id: 2, message_id: 1, message_type: 'Exception'
id: 6, user_id: 1, message_id: 2, message_type: 'Exception'
id: 7, user_id: 1, message_id: 3, message_type: 'Exception'
id: 8, user_id: 2, message_id: 4, message_type: 'Exception'
I want to get a grouped result, like a news feed in a social network: group on the columns user_id and message_type wherever the same pair repeats in consecutive rows. And I need LIMIT 20 ORDER BY id DESC.
Example:
id: 8, user_id: 2, message_id: 4, message_type: 'Exception'
id: {6,7} user_id: 1, message_id: {2,3}, message_type: 'Exception'
id: 5, user_id: 2, message_id: 1, message_type: 'Exception'
id: 4, user_id: 2, message_id: 4, message_type: 'Error'
id: {1, 2, 3}, user_id: 1, message_id: {4, 5, 6}, message_type: 'Warning'
How can I do this with the best performance?
I found only one way:
Use the window function lag() to find where the (user_id, message_type) pair changes.
Use the window function sum() to assign a sequence number to each new pair.
Group by that sequence number and select what you need:
Checking:
create table test (
id serial primary key,
user_id integer,
message_id integer,
message_type varchar
);
insert into test (user_id, message_id, message_type)
values
(1, 4, 'Warning'),
(1, 5, 'Warning'),
(1, 6, 'Warning'),
(2, 4, 'Error'),
(2, 1, 'Exception'),
(1, 2, 'Exception'),
(1, 3, 'Exception'),
(2, 4, 'Exception')
;
select
array_agg(grouped.id) as record_ids,
grouped.user_id,
array_agg(grouped.message_id) as message_ids,
grouped.message_type
from (
select changed.*,
sum(changed.changed) over (order by changed.id desc) as group_n
from (
select tt.*,
case when lag((user_id, message_type)) over (order by tt.id desc) is distinct from (user_id, message_type) then 1 else 0 end as changed
from test tt
) changed
order by id desc
) grouped
group by grouped.group_n, grouped.user_id, grouped.message_type
order by grouped.group_n
;
Result:
record_ids | user_id | message_ids | message_type
------------+---------+-------------+--------------
{8} | 2 | {4} | Exception
{7,6} | 1 | {3,2} | Exception
{5} | 2 | {1} | Exception
{4} | 2 | {4} | Error
{3,2,1} | 1 | {6,5,4} | Warning
(5 rows)
The array_agg function should do the trick:
SELECT user_id,
message_type,
ARRAY_AGG (DISTINCT id),
ARRAY_AGG (DISTINCT message_id)
FROM mytable
GROUP BY user_id, message_type
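To also apply the LIMIT 20 ORDER BY id DESC requirement from the question, the groups can be ordered by their newest id. A sketch run from Rails against the mytable name used above (note that, unlike the window-function answer, a plain GROUP BY merges every row with the same user_id and message_type, not just consecutive runs):
# Groups ordered by their most recent row, limited to 20.
sql = <<-SQL
  SELECT user_id,
         message_type,
         ARRAY_AGG(id ORDER BY id DESC)         AS record_ids,
         ARRAY_AGG(message_id ORDER BY id DESC) AS message_ids
  FROM mytable
  GROUP BY user_id, message_type
  ORDER BY MAX(id) DESC
  LIMIT 20
SQL

rows = ActiveRecord::Base.connection.select_all(sql).to_a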
I'm using:
ranking = Ranking.create()
ranking.send("#{month}=", rank)
ranking.save!
I'd like to add to whatever value is already in the #{month} column, not replace it. For example, if I perform:
month = 'january'
ranking.send("#{month}=", 500)
ranking.save!
And then again later on:
month = 'january'
ranking.send("#{month}=", 250)
ranking.save!
The value for the column january for that particular ranking should be 750.
Is this possible with the current ActiveRecord API?
You could do this with the increment! method:
month = 'january'
ranking.increment!(month, 250)
Updated, to address the question from the comments (e.g. month = 'jan'):
irb(main):011:0> p.increment!(month, 70)
(0.0ms) begin transaction
SQL (0.0ms) UPDATE "products" SET "jan" = ?, "updated_at" = ? WHERE "products"."id" = 1 [["jan", 171], ["updated_at", Sun, 06 Oct 2013 04:23:54 UTC +00:00]]
(0.0ms) commit transaction
=> true
irb(main):012:0> p
=> #<Product id: 1, name: nil, description: nil, jan: 171, created_at: "2013-10-06 04:22:50", updated_at: "2013-10-06 04:23:54">
And another case:
irb(main):013:0> p.increment!("#{month}", 70)
(0.0ms) begin transaction
SQL (0.0ms) UPDATE "products" SET "jan" = ?, "updated_at" = ? WHERE "products"."id" = 1 [["jan", 241], ["updated_at", Sun, 06 Oct 2013 04:24:10 UTC +00:00]]
(0.0ms) commit transaction
=> true
irb(main):014:0> p
=> #<Product id: 1, name: nil, description: nil, jan: 241, created_at: "2013-10-06 04:22:50", updated_at: "2013-10-06 04:24:10">
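Applied to the Ranking example from the question, a sketch (increment! treats a nil column value as 0, so the two calls accumulate):
month = 'january'
ranking = Ranking.create!
ranking.increment!(month, 500)   # january => 500
ranking.increment!(month, 250)   # january => 750
ranking.reload.send(month)       # => 750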