YouTube Player IFrame API, hooking an event for currentTime

I'm aware of the player.getCurrentTime() method. For my context, it would be better to hook this changing value as an event, as opposed to having to poll for it.
This is an Angular 2/TypeScript app. I'm hooking three other events that get hooked correctly. In the following, the first three subscriptions work, while the currentTime-change one does not (I've also tried currentTime-changed):
this._ytPlayer.addEventListener("onReady", e=> this.onPlayerReady(e)) ;
this._ytPlayer.addEventListener('onStateChange', e => this.onStateChanged(e));
this._ytPlayer.addEventListener('onError', e => this.onError(e));
this._ytPlayer.addEventListener('currentTime-change', e => this.onCurrentTimeChanged(e));
Is it possible to hook the property change on the IFrame API, and if so, what am I doing wrong?

Currently, changes in the current time of a video are not exposed via the IFrame API. Your only option is to set up your own little polling interval (using setInterval or requestAnimationFrame) that queries the API for currentTime every so often (maybe 10 times a second, as the API will not update the time more often than that) and updates a running variable. You can then watch that variable for changes to get your hooks to run.
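A minimal sketch of that polling in TypeScript, assuming the _ytPlayer field and onCurrentTimeChanged handler from the question (here the handler receives the new time rather than an event object):

private _lastTime = -1;
private _timePollHandle?: number;

private startTimePolling(): void {
  // Poll roughly 10x per second; getCurrentTime() is the only way to read
  // the playback position from the IFrame API.
  this._timePollHandle = window.setInterval(() => {
    const t = this._ytPlayer.getCurrentTime();
    if (t !== this._lastTime) {
      this._lastTime = t;
      this.onCurrentTimeChanged(t); // your own "event" fires only on change
    }
  }, 100);
}

private stopTimePolling(): void {
  // Call this from onStateChanged() when playback stops, and on teardown.
  if (this._timePollHandle !== undefined) {
    window.clearInterval(this._timePollHandle);
    this._timePollHandle = undefined;
  }
}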

Related

Synchronizing new calendar events always yields the @removed field

I'm syncing calendar events using the @microsoft/microsoft-graph-client npm package with the base URL /me/calendarview/delta. It had been working fine until a few days ago. For some reason, whenever I create a new calendar event in outlook.office.com and my app syncs, the newly created calendar event has the @removed: {reason: "deleted"} field set.
However, when I look up that same calendar event using the Microsoft Graph Explorer, the event does NOT have the @removed field set. Is there any reason a newly created calendar event would look like it's being deleted during a sync?
I'm using @microsoft/microsoft-graph-client v1.3.0.
Steps to recreate:
Create an event using the Node Graph client by POSTing to /me/calendar/events.
Grab a delta of calendar events using /me/calendarview/delta with the appropriate deltaLink and access token.
I receive one calendar event that has three fields: @odata.type, id, and @removed. The id field matches the id of the event created in step 1.
If you need more information, let me know. This is affecting some of our users.
Update: I tried a workaround for this issue by calling /me/events/<id> for each @removed calendar entry I receive on a delta sync, to verify whether the event was truly deleted. However, when I call that API via the microsoft-graph-client it returns null. If I make the same GET call via the MSFT Graph Explorer, the event is returned.
I left an answer on another question here: https://stackoverflow.com/a/65348721/6806302
In short, I went off yesterday on a hunch, inspired by @mattlaabs's comment on the question above, that the startDateTime..endDateTime range of the events delta was to blame.
And in practice, that is exactly the problem. The answer is two-part:
Changes to events not in the window always show up in the delta stream as @removed.
The events delta parameters are captured in a "closure", meaning subsequent requests (with a $deltatoken) ignore the startDateTime..endDateTime query parameters.
Understanding both of the above, the answer is to:
Create a wide enough initial startDateTime..endDateTime window to suit your application's needs.
Start new events delta streams (by not providing a $deltatoken) at some defined interval, instead of reusing the same one indefinitely, as sketched below.
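For illustration, a sketch of one fresh-stream round using the @microsoft/microsoft-graph-client package from the question; the window bounds and the @removed handling are assumptions to adapt:

import { Client } from '@microsoft/microsoft-graph-client';

// Begin a fresh delta stream (no $deltatoken) over a wide, explicit window.
async function startCalendarDeltaStream(client: Client): Promise<string> {
  // Illustrative window; make it wide enough for your application's needs.
  let url = '/me/calendarview/delta'
    + '?startDateTime=2020-01-01T00:00:00Z'
    + '&endDateTime=2022-01-01T00:00:00Z';

  let deltaLink = '';
  while (url) {
    const page = await client.api(url).get();
    for (const event of page.value) {
      if (event['@removed']) {
        // Truly deleted -- or merely outside the window this stream captured.
      }
    }
    url = page['@odata.nextLink'];                      // more pages this round
    deltaLink = page['@odata.deltaLink'] || deltaLink;  // end of this round
  }
  return deltaLink; // persist, but discard periodically to start a new stream
}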

Session Windows behavior with Kafka Streams is not as expected

I am a bit of a newbie working with Kafka Streams, but what I have noticed is behavior I was not expecting. I have developed an app which consumes from six topics. My goal is to group (or join) an event on every topic by an internal field. That is working fine. But my issue is with the window time: it looks like the end time of every cycle affects all the aggregations taking place at that time. Is there only one timer for all the aggregations running at the same time? I was expecting each stream to get out of the aggregation process as soon as its configured 30 seconds had passed. I thought that was possible because I have seen, in the Windowed windowedRegion variable, that the windowedRegion.window().start() and windowedRegion.window().end() values are different for every stream.
This is my code:
streamsBuilder
    .stream(topicList, Consumed.with(Serdes.String(), Serdes.String()))
    .groupBy(new MyGroupByKeyValueMapper(), Serialized.with(Serdes.String(), Serdes.String()))
    // 30s inactivity gap per session, retained until windowDuration
    .windowedBy(SessionWindows.with(windowInactivity).until(windowDuration))
    .aggregate(
        new MyInitializer(),
        new MyAggregator(),
        new MyMerger(),
        Materialized.with(new Serdes.StringSerde(), new PaymentListSerde()))
    .mapValues(new MyMapper())
    .toStream(new MyKeyValueMapper())
    .to(consolidationTopic, Produced.with(Serdes.String(), Serdes.String()));
I'm not sure if this is what you're asking, but every aggregation (that is, every per-key session window) may indeed be updated multiple times. You will not generally get just one message per session window, carrying its final result, on your "consolidation" topic. This is explained in more detail here:
https://stackoverflow.com/a/38945277/7897191

Umbraco7 - ContentService.SaveAndPublishWithStatus vs ContentService.SendToPublication

I have an application that uses a combination of ContentService.Saved & ContentService.Saving to extend Umbraco to manage content.
I have two websites in one Umbraco installation, and I am using those methods to keep content up to date in different parts of the tree.
So far I have got everything working the way I wanted to.
Now I want to add a feature that, depending on which Umbraco user is logged in, will either publish the content or simply send it for approval.
So I have changed some lines of code from:
cs.SaveAndPublishWithStatus(savedNode, 0, false)
To this:
cs.SendToPublication(savedNode);
The problem I am finding is that, unlike the SaveAndPublishWithStatus() method, cs.SendToPublication() doesn't have the option of passing false so that a save event is not raised, so I get into an infinite loop.
When I attach the debugger and manually stop the infinite loop the first time it calls cs.SendToPublication(savedNode), I get exactly the behavior I want.
Any ideas about how I can get round this problem? Is there a different method that I should be using?
You are correct in saying that it currently isn't possible to set raiseEvents to false when sending an item to publication - that's a problem.
I've added that overload in v. 7.6 (http://issues.umbraco.org/issue/U4-9490).
However, considering that you need this now, an interim solution could be to make sure your code only runs once when triggered by the .Saved / .Saving events.
One way to do this is to check the last saved date (UpdateDate) in your code. If the content was saved within the last second of the current save operation, you know that this save event was triggered by the save happening inside the SendToPublication action. Then you also know that the item has already been sent to publication and that this doesn't need to be done again, thereby preventing the endless loop. A sketch of that guard follows.
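The guard itself is just a timestamp comparison; here is a minimal sketch in TypeScript for illustration (in Umbraco this would be C# reading the content item's UpdateDate, and the one-second tolerance is an assumption to tune):

// Returns false when the item was saved less than a second ago, i.e. when
// this Save event was most likely triggered by our own SendToPublication call.
function shouldHandleSaveEvent(updateDate: Date, now: Date = new Date()): boolean {
  return now.getTime() - updateDate.getTime() >= 1000;
}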

Realtime timer that counts down time for a Ruby on Rails app

I'm in need of some advice on a realtime server-side timer that would count down from, say, 100 seconds. The reason for a server-side timer is to prevent tampering.
I currently use Delayed::Job, but the problem is that it's not realtime. I could mimic a timer by creating a job every second, but that's a dirty solution.
I need to display the time in the view by getting the timer value with an AJAX call to a method that returns the server time on the page (I know how to do this; this is just to give the idea). The timer should still count down correctly even when the page is reloaded.
Can anyone advise on how to get a realtime server-side counter in a Rails app? I want to create one or more independent timers I can get a value from in the Rails app.
Why don't you just create the record you need, but add a "don't open before" DateTime field you can check to see if it's okay to show it?
This sort of thing is trivial to do, and you can set a timer on the client to count down in JavaScript, then reload the page with the final data at the appropriate time. If someone is impatient and reloads early, you can compute the number of seconds remaining before it can be shown using simple math:
time_left_in_seconds = record.show_at.to_i - Time.now.to_i
Then all you have to do is show a JavaScript timer for that number of seconds and then trigger a page refresh.
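A minimal sketch of that client-side piece in TypeScript; the element and the reload-on-zero behavior are illustrative:

// Counts down from `secondsLeft`, updating `el`, then reloads the page so the
// server (which checks record.show_at authoritatively) can render the data.
function startCountdown(secondsLeft: number, el: HTMLElement): void {
  const timer = window.setInterval(() => {
    el.textContent = String(secondsLeft);
    if (secondsLeft-- <= 0) {
      window.clearInterval(timer);
      window.location.reload();
    }
  }, 1000);
}

// Seed it with the server-computed value, e.g.:
// startCountdown(timeLeftInSeconds, document.getElementById('countdown')!);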

RxSwift: Receive events immediately, unless the last event was processed within a certain interval

I'm new to RxSwift / ReactiveX. Basically, what I'm trying to do is make a server call whenever something happens, but make sure it's not done more often than every 10 seconds. Less often if possible.
For instance, whenever an event ("needs update") is generated I'd like to call the server immediately if more than 10 seconds have passed since my last call. If less time has passed I'd like to make the call on the 10 second mark from the last one. It doesn't matter how many events have been generated within these 10 seconds.
I looked at the description of throttle, but it appears to starve if events happen very quickly, which isn't desirable.
How can I achieve this?
There's a proposed new operator for RxSwiftExt that would give you something like what you're looking for, I think. It doesn't exist yet, but you might want to keep an eye on it:
https://github.com/RxSwiftCommunity/RxSwiftExt/issues/10
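In the meantime, it may help to see the behavior you describe expressed in RxJS (ReactiveX's TypeScript sibling), whose throttleTime supports both leading and trailing emits; callServer is a stand-in for your own request:

import { Subject, asyncScheduler } from 'rxjs';
import { throttleTime } from 'rxjs/operators';

const callServer = () => { /* your server request */ };
const needsUpdate$ = new Subject<void>();

// Fires immediately when more than 10s have passed since the last call;
// otherwise fires once at the 10-second mark, however many events arrived.
needsUpdate$
  .pipe(throttleTime(10000, asyncScheduler, { leading: true, trailing: true }))
  .subscribe(callServer);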
