Does "first" mean first in this run of the app (until the app terminates and restarts), or first across runs?
I thought these fields would have only one value per user, but they often have two. When I run this query:
SELECT
  user_pseudo_id,
  COUNT(*) AS the_count
FROM (
  SELECT DISTINCT
    user_pseudo_id,
    user_first_touch_timestamp
  FROM
    `noctacam.<my project>.events*`
  WHERE
    app_info.id = "<my bundle ID>")
GROUP BY
  user_pseudo_id
ORDER BY
  the_count DESC
I find that 0.6% of my users have two different values for user_first_touch_timestamp. Is this a bug in Firebase?
Likewise for first_open_time:
SELECT
  user_pseudo_id,
  COUNT(*) AS the_count
FROM (
  SELECT DISTINCT
    user_pseudo_id,
    user_properties.value.int_value AS first_open_time
  FROM
    `noctacam.<my project>.events*`,
    UNNEST(user_properties) AS user_properties
  WHERE
    app_info.id = "<my bundle ID>"
    AND user_properties.key = "first_open_time")
GROUP BY
  user_pseudo_id
ORDER BY
  the_count DESC
Exactly the same 0.6% of users have two different values for this field, too.
References:
https://support.google.com/firebase/answer/7029846?hl=en
https://support.google.com/firebase/answer/6317486?hl=en
I started wondering about the difference between these two parameters as well, and found this difference.
From User Properties:
First Open Time - The time (in milliseconds, UTC) at which the user first opened the app, rounded up to the next hour.
From BigQuery Export Schema:
user_first_touch_timestamp - The time (in microseconds) at which the user first opened the app.
In my case, the rounding was the difference. I suspect Firebase needed first_open_time as a user property for some reason, so they simply rounded and copied user_first_touch_timestamp.
I know this still doesn't answer your whole question, and it doesn't explain why 0.6% of your users have two different values, but I thought it might help someone here.
There is also a difference in the documented meaning of the two parameters:
first_open = "the first time a user launches an app after installing or re-installing it"
whereas the description of user_first_touch_timestamp makes no mention of the value updating on re-installs. It is likely that your 0.6% are users who re-installed the app.
The difference is in the precision of the data: user_first_touch_timestamp gives the exact time, while first_open_time gives the time rounded up to the next hour.
Take a look at the following examples:
User 1:
user_first_touch_timestamp: 1670263710266000 (Mon Dec 05 2022 20:08:30 GMT+0200)
first_open_time: 1670266800000 (Mon Dec 05 2022 21:00:00 GMT+0200)
User 2:
user_first_touch_timestamp: 1670248060903000 (Mon Dec 05 2022 15:47:40 GMT+0200)
first_open_time: 1670248800000 (Mon Dec 05 2022 16:00:00 GMT+0200)
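You can check the relationship yourself. A minimal Ruby sketch using User 1's values from above (the only assumption is the "rounded up to the next hour" rule quoted from the docs):
usec = 1670263710266000                     # user_first_touch_timestamp, microseconds
Time.at(usec / 1_000_000.0).utc             # => 2022-12-05 18:08:30 UTC (exact first open)
# Round up to the next full hour and express in milliseconds:
rounded_sec = (usec / 1_000_000.0 / 3600).ceil * 3600
rounded_sec * 1000                          # => 1670266800000, i.e. first_open_time
Time.at(rounded_sec).utc                    # => 2022-12-05 19:00:00 UTC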
Related
Hi, I am using the code below
Companion.where(companion_type: 1).joins(:tasks).where(tasks: {status: 3}).group_by_month(:created_at).size
and I am getting data that looks like this:
Jan, 2017 => 89
.
.
.
Aug, 2021 => 300
But I only need the data from Jan 2019 onwards.
Is there a nice way to solve this?
If you only need the data from Jan 2019 onwards, you don't need to pull the whole Companion table and then group it by month. Instead, filter down to the rows of interest in SQL:
Companion.
  where(companion_type: 1).
  joins(:tasks).
  where(tasks: { status: 3 }).
  where("tasks.created_at >= ?", Date.new(2019, 1, 1)).
  count
I assume it's the created_at of tasks that you want to filter on, rather than the created_at of companions; is that right?
Also, it's better to use count than size, because count is executed directly in SQL, whereas size instantiates all the objects from Jan 2019 onwards and then counts them in Ruby.
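And if you still want the per-month breakdown from your original query, filter first and keep the grouping; a sketch assuming the groupdate gem your code already uses:
Companion.
  where(companion_type: 1).
  joins(:tasks).
  where(tasks: { status: 3 }).
  where("tasks.created_at >= ?", Date.new(2019, 1, 1)).
  group_by_month(:created_at).
  count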
I have a Ruby on Rails app (Ruby 2.3.0, Rails 4.2.5.1, Mongoid 5.0), and in one of my models I have:
field :statement_month, default: 1.month.ago.strftime('%m')
but on the 1st of March it saves the wrong result: "01" instead of "02".
I have no problem on the first day of any other month.
I also added some logging in before_create and after_create, printing:
"-------1_month_ago_month------------------------" + 1.month.ago.strftime('%m')
In the logs it shows "02", but the object in the DB has "01". Is this a Mongoid issue, or maybe a time zone issue?
A non-Proc default is evaluated once, when the class is loaded, not each time a document is created. The correct syntax for dynamic defaults uses Procs. See https://docs.mongodb.com/mongoid/master/tutorials/mongoid-documents/#defaults.
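A minimal sketch of the Proc form (the Statement class name and the String field type are assumptions):
class Statement
  include Mongoid::Document
  # Wrong: evaluated once, when the class is first loaded:
  # field :statement_month, type: String, default: 1.month.ago.strftime('%m')
  # Right: the lambda is evaluated for every new document; .utc makes the
  # conversion explicit (see the point about UTC below):
  field :statement_month, type: String, default: -> { 1.month.ago.utc.strftime('%m') }
end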
MongoDB stores times as UTC timestamps, and your program does not explicitly convert to UTC, so it is (1) potentially misbehaving with respect to time zones and (2) potentially misbehaving with respect to daylight saving time. Date math generally must be performed explicitly in either local time (knowing which time zone you are operating in) or in UTC; mixing the two eventually causes problems.
To troubleshoot the wrong month, set your system time to March 1 and debug the program. In particular, try March 1 01:00 and March 1 23:00. Those are often different dates in UTC for the same local date.
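For example, in a Rails console with a zone east of UTC (the zone here is just an assumption for illustration), the local date and the UTC date disagree shortly after local midnight, and so does 1.month.ago:
Time.zone = "Asia/Jerusalem"              # UTC+2 in winter
t = Time.zone.local(2016, 3, 1, 1, 0)     # Tue, 01 Mar 2016 01:00:00 +0200
t.strftime('%m')                          # => "03"
t.utc.strftime('%m')                      # => "02" (Mon, 29 Feb 2016 23:00:00 UTC)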
I am reading a list of Google Sheets cells containing date-time values, using Google Apps Script.
The values in the cells are:
A1: Jul 26 13:00
A2: Jul 27 0:00
var dateValues = SpreadsheetApp.getActiveSheet().getRange("A1:A2").getValues();
However, the values read are one hour behind. This is what I see in the debugger:
dateValues[0] = Wed Jul 26 2017 12:00:00 GMT+0200 (EET)
dateValues[1] = Wed Jul 26 2017 23:00:00 GMT+0200 (EET)
I guess this is a time zone issue, but I don't really understand the concept.
My time zone is currently GMT+3 (due to DST); outside the DST period it is GMT+2. The spreadsheet's time zone is Jerusalem (GMT+2).
As for EET, I don't understand why it is being used.
Basically, I would expect the values I read in code to match the values in the sheet.
What is the concept here?
There are two ways to solve this:
1. Use getDisplayValues() rather than getValues(). This will force you to do some conversions, as getDisplayValues() returns strings, not dates.
2. Make your script's time zone match the sheet's time zone.
In my case the sheet was set to Jerusalem (GMT+2), but the script was set to a different zone (Moscow) for some reason.
Setting the script's time zone solved the problem.
When I run:
psql great_dev -c 'show timezone'
The following is returned:
TimeZone
------------
US/Eastern
(1 row)
I want the timezone to be EST, so I think that is correct. The problem is that whenever I run some kind of database operation (e.g. update_attributes, save, etc.), the timestamps are in UTC, like this:
SQL (0.6ms) UPDATE "schedules" SET "accepted" = $1, "updated_at" = $2 WHERE
"schedules"."id" = 46 [["accepted", "only_freelancer"],
["updated_at", Mon, 02 Dec 2013 16:51:07 UTC +00:00]]
I am starting to implement several background jobs and it is essential that I understand what timezone Postgres is in.
Is Postgres in EST or UTC?
If it is in UTC, how do I convert it to EST, and is it dangerous to do so?
I am going to allow a user to enter in timestamps from a select menu - should the menu list UTC times or EST times?
The default timestamp type in Postgres is timestamp without time zone.
To deal with this, employ the data type timestamp with time zone.
Or run your database with the time zone set to UTC and use timestamp [without time zone].
Be aware that the time zone setting of the session is relevant to what Postgres actually returns for your queries.
This related answer provides all the details you might need:
Ignoring timezones altogether in Rails and PostgreSQL
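On the Rails side: Active Record's convention is to store UTC in the database and convert to config.time_zone when you read an attribute, so UTC timestamps in the SQL log are expected and safe. A rough sketch using the Schedule record from the log above:
# config/application.rb
config.time_zone = "Eastern Time (US & Canada)"   # display zone; storage stays UTC
# In a console:
s = Schedule.find(46)
s.updated_at        # => Mon, 02 Dec 2013 11:51:07 EST -05:00 (converted on read)
s.updated_at.utc    # => 2013-12-02 16:51:07 UTC (the value in the log above)
Given that, a select menu can list Eastern times, and Rails will convert them to UTC on save.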
I create multiple scheduled objects with different scheduled_on attributes. For example, each object might be scheduled for 4:00 PM on the first of every month.
Once one of those dates crosses a DST change, the app should intelligently shift it an hour ahead or behind so that it stays correct relative to its parent's time zone.
The problem is that the app saves an object as 4:00 PM Pacific Standard Time even for dates that will eventually be displayed in PDT (an hour ahead, i.e. 5:00 PM). This means I need to save the UTC value an hour off, so that when the time comes around it will display as 4:00 PM regardless of which offset we are in.
What's the best technique for ensuring this in Rails?
I'm going to answer this by pointing out some good things to know about adding time in Rails in relation to time zones.
When you add a duration to a Time, Active Support adjusts the underlying UTC instant so that the local wall-clock time stays the same across a DST change:
t = Time.now
-> 2012-08-10 13:17:01 +0200
t + 90.days
-> 2012-11-08 13:17:01 +0100
A DateTime will not do this. A DateTime keeps the UTC offset it began with, so after a DST change it ends up an hour ahead of or behind local time:
dt = DateTime.now
=> Fri, 10 Aug 2012 13:16:54 +0200
dt + 90.days
=> Thu, 08 Nov 2012 13:16:54 +0200
But subtracting two DateTimes is the way to get a whole number of days between two dates. You can't do this with a Time: when you subtract two Times across a DST switch, the difference doesn't divide into perfect 24-hour days, so you get a non-integer number of days because of the extra (or missing) hour.
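To make that concrete, a quick sketch with fixed offsets on either side of the switch (values chosen to mirror the example above):
d1 = DateTime.new(2012, 8, 10, 13, 0, 0, '+02:00')
d2 = DateTime.new(2012, 11, 8, 13, 0, 0, '+02:00')   # DateTime keeps the +02:00 offset
(d2 - d1).to_i                                       # => 90, a whole number of days
t1 = Time.new(2012, 8, 10, 13, 0, 0, "+02:00")
t2 = Time.new(2012, 11, 8, 13, 0, 0, "+01:00")       # after the DST switch
(t2 - t1) / 86_400.0                                 # => 90.0416..., the extra hour shows up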
This is specific to my issue, but I solved my problem by converting my Times to DateTimes to find the distance in days, and then converting back to Time to find the later time in UTC relative to the time zone change:
original_job.to_time + ( new_date.to_datetime - original_job.to_datetime ).to_i.days