I've searched a lot of examples but still can't get this to work.
It seems pretty straightforward: I need to create a record and then travel an hour ahead in time to make sure it has expired.
Records are being written to Redis and set to expire after 1 hour; I have verified this is happening by testing manually.
#redis.expire("#{params[:key]}", 3600) # Expire inserts after 1 hour
But the test just keeps adding up the values:
it 'does not get results that are older than one hour' do
  key = 'active_books'
  post("/inventory/#{key}?value=30")

  travel_to(1.hour.from_now) do
    get("/inventory/#{key}/sum")
    parsed_body = JSON.parse(response.body)
    expect(parsed_body).to eql(0)
  end
end
So when getting the records here it should return 0, because they have expired. I have tested this in real time with the app; at the moment the expiry is actually set to 60 seconds, and that works. But this test returns the value 30 that was just created, even though when testing manually it expires after a minute. If I run the test again after a minute it still returns only 30, not 60, so the record is expiring as expected; the test is just not travelling forward in time (which is what I assumed it was supposed to do).
parsed_body returns a sum of all the records, but none should be found. Again, the app itself actually behaves this way; I just want this test to reflect that accurately.
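One workaround I'm considering: travel_to only moves the clock Rails sees, while Redis keeps its own clock in a separate process, so the 1-hour TTL never actually elapses inside the block. A minimal sketch, assuming the spec can reach the same Redis connection the app writes to (the redis helper and the plain key name are assumptions on my part):

it 'does not get results that are older than one hour' do
  key = 'active_books'
  post("/inventory/#{key}?value=30")

  # travel_to cannot age a TTL that lives in the Redis process, so force the
  # key to expire on the Redis side instead (a non-positive TTL deletes it).
  redis.expire(key, 0)

  get("/inventory/#{key}/sum")
  expect(JSON.parse(response.body)).to eql(0)
end

redis.del(key) would work just as well; either way the expiry is driven from Redis rather than from Rails' travelled clock.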
I'm using the GoogleFinance() functions on a Google spreadsheet to keep track of my stocks. With the "datadelay" attribute I can check how long ago the data was last updated. But it only returns a raw number, like "54000" for one ticker and "15" for another. What time unit is that supposed to be? Minutes? Seconds? Milliseconds?
When I checked the documentation for Google Finance, I saw there is a page explaining that there may be a delay of up to 20 minutes. They also mention that they use different exchanges to retrieve market data, and those exchanges may have different data delays. That could explain the differences in the "datadelay" column.
As for the unit of this column, my assumption is that it should be something shorter than seconds, since 54000 seconds = 900 minutes, which is far higher than the maximum delay stated on the help page. But I am not sure what the value of this column would be when you query on non-trading days.
The page shows delays for each exchange.
The GoogleFinance() function updates every minute (if it is set to do so), but keep in mind that results may be delayed by up to 20 minutes, so the answer is between 1 and 20 minutes.
We have CreditCard-related rake tasks that are supposed to be run on the 1st of every month, to remind our clients to update their payment method if it expired at some point during the previous month.
class SomeJob < ApplicationJob
  def perform
    ::CreditCard
      .where(expiration_date: Date.yesterday.all_month)
      .find_each do |credit_card|
        Email::CreditCard::SendExpiredReminderJob.perform_later(credit_card.id)
      end
  end
end
I'm concerned that this particular job, as we currently have it, might not be time-zone safe because of the Date.yesterday.all_month we use to get last month's date range (remember the rake task runs on the 1st of every month).
For example, if for some reason the job were run past midnight (on the 2nd), it would incorrectly notify clients whose cards expire this month when it should have notified about last month's expired cards.
The safest bet would be to subtract more than 1 day from Date, but I'm not sure that's the cleanest way to go (and later on someone would not understand why we are subtracting 5, 7 days or whatever).
Are there safer ways to do it?
Date.current.prev_month.all_month
.current may be better than .today because it uses the configured time zone (Time.zone) when one is set; see answer.
.prev_month or .months_ago(1); see https://apidock.com/rails/v3.2.13/Date/prev_month
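A minimal sketch of the job with that change applied (class and job names are taken from the question; this assumes the app has Time.zone configured, which is what makes Date.current time-zone aware):

class SomeJob < ApplicationJob
  def perform
    # Date.current is anchored to the configured Rails time zone, and
    # prev_month keeps pointing at the previous calendar month even if the
    # job runs a day or two late, unlike Date.yesterday on the 2nd.
    ::CreditCard
      .where(expiration_date: Date.current.prev_month.all_month)
      .find_each do |credit_card|
        Email::CreditCard::SendExpiredReminderJob.perform_later(credit_card.id)
      end
  end
end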
I use spring-session-jdbc with spring-security. At the moment I have 20 logged-in users (with a correct session id and principal_name) and about 11k rows with a session id and an empty principal_name. Is this normal behavior? My settings:
security.sessions= (Default)
@EnableJdbcHttpSession(maxInactiveIntervalInSeconds = 86400)
There isn't anything abnormal in itself about having a big number of session records in the database, especially since you've verified that cleanup of expired sessions works OK.
You have configured a fairly big maxInactiveIntervalInSeconds of 86400 seconds (1 day) so it isn't unreasonable to have that number of anonymous sessions (i.e. unauthenticated sessions, or session records with no principal_name set) initiated over the period of one day.
My goal is to add +1 every day to a global variable in Firebase to track how many days have passed. I'm building an app that gives new facts every day, and at the 19:00 UTC time marker I want the case statement number (the global day variable) to increment by +1.
Some have suggested that I compare two dates and get the days that have passed that way. If I were to do that, I could hard-code the initial time at which I first want the app to start, at 19:00 on some day. Then when the function reached1900UTC() is called every day thereafter, it would be compared to a Firebase timestamp of the current time, which should be 19:00. In theory, it should show that 1 day or more has passed.
This is the best solution so far, thanks to #DavidSeek and #Jay, but I would still like to figure out the concurrent-writes part if anyone has a solution on that front. Until then, I'm marking David's answer as the correct one.
How would I make it so it can't increase by more than +1 if multiple people call this? My fear is that when, say, 100 people call this function, it increases by +1 for every person that has called it.
My app works on a global time, and this function is called every day at 19:00 UTC. So when that function is called I want the day count to increase by one.
You should use transactions to handle concurrent writes:
https://firebase.google.com/docs/database/ios/read-and-write#save_data_as_transactions
You may know this but Firebase doesn't have a way to auto-increment a counter as there's no server side logic, so having a counter increment at 19:00 UTC isn't going to be possible without interaction from a client that happens to be logged on at that time.
That being said, it's fairly straightforward to have the first user that logs in increment that counter - then any other clients logging in after that would not increment it and would have access to that day's new content.
Take a look at Zapier.com - that's a service that can fire time based triggers for your app which may do the trick.
As of this writing, Zapier and Firebase don't play nice together, however, there are a number of other trigger options that Zapier can do with your app while continuing to use Firebase for storage.
One other thought...
Instead of dealing with counters and counting days, why not just store each day's content within a node for that day, so that when each user logs on, the app gets that day's content:
2016-10-10
  fact: "The Earth is an Oblate Spheroid"
2016-10-11
  fact: "Milli Vanilli is neither a Milli or a Vanilli. Discuss."
2016-10-12
  fact: "George Washington did not have a middle name"
This would eliminate a number of issues such as counters, updates, concurrent writing to Firebase, triggers etc.
It's also dynamic and expandable, and a user could easily see that day's fact or the fact for any prior day(s).
I'm trying to split your question into different sections.
1) If you want to use a global variable to count the days from, let's say, today, then I would hardcode a timestamp into the app as the starting NSDate.
Then in my app, when I need to know how many days have passed, I would call a function that counts the days from that timestamp to NSDate().
2) If you have a function in your app that adds +1 to a value in Firebase, then your fear is correct: it would add +1 for every person that uses the app.
3) If you want every user to have a count starting from when they first use the app, then I would handle user registration. So I have a "UserID", and then I would set up a Firebase tree like this:
UserID
  -------> FirstOpen
             -------> Date
That way you could handle each User's first open.
Then you are able to set a timestamp AND apply the +1 for every user independently, because you write the +1 for each user into their own UserID child.
I have a RoR application that runs reports on a database of phone calls logged from a help desk. I've been asked to provide a report that shows the percentage of time, each hour, that more than one technician is on the phone. The database logs the call id, technician name, and call created at and end time in Y-M-D-H-M-S. Can anyone suggest a way I can do this? Thank you.
I don't see any problem at all from what I understand.
for each technician
  look up the entries in the database and get the call created_at and end time
  for each entry above
    total_time += end_time.to_seconds - created_at_time.to_seconds
Find the difference between the starting time and the end time; that is how you track the activity of a technician.
You get the percentage of time this way. To get it for each hour, simply convert the time into hours and you are set.
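Beyond that, here is a minimal sketch of how the per-hour "more than one technician on the phone" percentage could be computed in Ruby, assuming a hypothetical Call model with technician_name, created_at, and end_time columns (the model and method names are illustrative, not from the question):

# For a given hour, return the percentage of that hour during which more than
# one technician was on an active call. Resolution is one minute to keep the
# sketch simple; use step = 1 for per-second accuracy.
def multi_tech_percentage(hour_start)
  hour_end = hour_start + 1.hour
  calls = Call.where('created_at < ? AND end_time > ?', hour_end, hour_start).to_a

  step = 60 # seconds per sampling slot
  busy_slots = (hour_start.to_i...hour_end.to_i).step(step).count do |t|
    active_techs = calls
      .select { |c| c.created_at.to_i <= t && c.end_time.to_i > t }
      .map(&:technician_name).uniq.size
    active_techs > 1
  end

  (busy_slots * 100.0 / (3600 / step)).round(2)
end

For example, multi_tech_percentage(Time.zone.parse('2016-10-10 09:00')) would give the figure for the 9-10 am hour.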