Is there a way to calculate video views generated only by subscribers? In the YouTube Analytics user interface, in the Traffic Sources section, it is possible to get this count, as in the screenshot below:
There is the insightTrafficSourceDetail dimension, which allows you to specify the SUBSCRIBER traffic source. You can further break the numbers down into traffic from subscriber emails, the "My subscriptions" feed that subscribers have on their homepage, views that originated from a "new uploads" feed that subscribers see, etc. The only thing you WON'T get from this dimension is views from users who are subscribed to your channel but who navigated directly to your video (e.g. if a friend emailed them a link); in other words, the tracking is really on the source of the click (aggregating all sources that indicate a subscriber) rather than looking at a username and determining whether they've subscribed at some point.
Having said that, I wouldn't be surprised if the interface you've shown above uses the same numbers to calculate its report.
https://developers.google.com/youtube/analytics/v1/dimsmets/dims#Traffic_Source_Dimensions
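To make this concrete, here is a minimal sketch of an Analytics API v1 reports request that filters on the SUBSCRIBER traffic source type and breaks it down by insightTrafficSourceDetail. The channel ID, date range, and OAuth token handling are placeholders you would replace with your own; treat it as an illustration of the query parameters rather than production code.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SubscriberViewsReport {

    // Placeholders: supply your own channel ID and a valid OAuth 2.0 access token
    // with the yt-analytics.readonly scope.
    private static final String CHANNEL_ID = "YOUR_CHANNEL_ID";
    private static final String ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN";

    public static void main(String[] args) throws Exception {
        String query = "ids=" + URLEncoder.encode("channel==" + CHANNEL_ID, "UTF-8")
                + "&start-date=2014-01-01"
                + "&end-date=2014-01-31"
                + "&metrics=views"
                + "&dimensions=insightTrafficSourceDetail"
                + "&filters=" + URLEncoder.encode("insightTrafficSourceType==SUBSCRIBER", "UTF-8")
                + "&sort=-views"
                + "&max-results=10";

        URL url = new URL("https://www.googleapis.com/youtube/analytics/v1/reports?" + query);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + ACCESS_TOKEN);

        // Print the raw JSON response; each row is a (traffic source detail, views) pair.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```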
I am trying to get a subset of Teams messages created between two dates into a review set and then export them. The collection that commits data to the review set contains only the needed Teams messages, but the review set pulls in the whole channel history by default, and review set filters do not seem to be able to filter messages inside the HTML transcript.
The new case format (now the only one), which introduces the HTML transcript format for Teams conversations, makes 'collecting contextual messages around search results' mandatory, which results in the whole channel history being committed to the review set.
It then seems impossible to filter messages out of the HTML transcript file using a review set filter query in order to export only the needed ones. Filter queries treat the HTML transcript as a single file, so you get all or nothing, not a subset of messages.
Is there a way to avoid 'gathering contextual messages' when adding a collection to a review set? Is there a way to filter the HTML transcript of a Teams channel at the review set level before export? Could anything exposed through the (beta) Graph API help with that?
We are developing a chat system where users can be in many chat rooms, and I'd like to be able to show the most recent channels first.
This could be either by the time the last message was sent, or even by the number of unread messages, as long as there is some order and I don't need to go through all the pages of channels and get additional metadata to sort it manually.
I can't see any options in the docs and even though the response metadata has a "key" set to "channels", I haven't been able to figure out a query parameter that can change that.
It seems like channels will always be returned ordered by the random unique channel ID, so for pretty much every use case you'd need to get all channels and sort manually. Is that the case or am I missing something?
Twilio developer evangelist here.
I'm afraid you cannot order the channels within the API right now. This feature is on the roadmap, though I can't give any time estimate for it.
The solution for now is sorting manually. I will update once that changes though.
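As a minimal sketch of what "sorting manually" could look like, the example below assumes you track each channel's last-message time yourself (for instance in your own datastore or in the channel's attributes), fetch every page of channels, and then sort client-side. The ChannelSummary class and its field names are illustrative, not part of the Twilio API.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ChannelSorter {

    // Minimal stand-in for whatever channel representation you already have;
    // "lastMessageAt" is assumed to be tracked by your own application.
    static class ChannelSummary {
        final String sid;
        final Instant lastMessageAt;

        ChannelSummary(String sid, Instant lastMessageAt) {
            this.sid = sid;
            this.lastMessageAt = lastMessageAt;
        }
    }

    // Sort the most recently active channels first, after fetching every page
    // of channels from the API.
    static List<ChannelSummary> sortByRecentActivity(List<ChannelSummary> channels) {
        List<ChannelSummary> sorted = new ArrayList<>(channels);
        sorted.sort(Comparator.comparing((ChannelSummary c) -> c.lastMessageAt).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<ChannelSummary> channels = new ArrayList<>();
        channels.add(new ChannelSummary("CH_general", Instant.parse("2016-05-01T10:00:00Z")));
        channels.add(new ChannelSummary("CH_random", Instant.parse("2016-05-03T09:30:00Z")));

        for (ChannelSummary c : sortByRecentActivity(channels)) {
            System.out.println(c.sid + " " + c.lastMessageAt);
        }
    }
}
```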
Based on people who interact with an email campaign through Marketo, I would like to create a retargeting campaign in AdWords.
Is it even possible using RTP?
What kind of interactions would you like to capture?
If it's clicks, you can pass the leads through a page that sends an event to GA and then redirects them to the desired page.
If you're interested in capturing opens, it's more complicated: you will need to capture GA's client ID for each of your leads when they fill out a form.
There are lead data onboarding tools that might help with this challenge.
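For the click-tracking idea above, here is a minimal sketch of a redirect page implemented as a tiny Java HTTP handler that records the click via the GA Measurement Protocol and then redirects the lead. The property ID, destination URL, event category/action, and the use of a random client ID are all assumptions you would replace with your own setup (in particular, you'd normally reuse a client ID tied to the lead).

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class EmailClickRedirect {

    // Assumptions: your GA property ID and the page you ultimately want the lead to land on.
    private static final String GA_PROPERTY_ID = "UA-XXXXXXX-Y";
    private static final String DESTINATION_URL = "https://example.com/landing-page";

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/r", exchange -> {
            try {
                sendGaEvent();                       // record the click in GA
            } catch (Exception e) {
                e.printStackTrace();                 // don't block the redirect if GA is unreachable
            }
            exchange.getResponseHeaders().add("Location", DESTINATION_URL);
            exchange.sendResponseHeaders(302, -1);   // then redirect the lead
            exchange.close();
        });
        server.start();
    }

    // Fire a GA Measurement Protocol "event" hit. A random client ID is used here
    // purely for illustration.
    private static void sendGaEvent() throws Exception {
        String body = "v=1"
                + "&tid=" + URLEncoder.encode(GA_PROPERTY_ID, "UTF-8")
                + "&cid=" + UUID.randomUUID()
                + "&t=event"
                + "&ec=email"
                + "&ea=click";

        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://www.google-analytics.com/collect").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode();  // read the response to complete the request
    }
}
```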
I am developing an iOS app like Tinder. Users can only chat in private 1:1 conversations.
Do I have to open one channel for every single "match"? Is this the correct design pattern for this use case? What about performance if I have one channel per "match"?
A "match" is when a user matches with another user and they can start a private chat.
If one person can have multiple matches, you can have the PubNub client open a separate channel for each matching person. When two people match, you take a unique identifier from each of them and, using a known algorithm, create a unique channel name that both clients subscribe to in order to communicate (a sketch of this is below).
One channel for the whole application is a really bad idea, because of the potentially massive flow of data, most of which will be useless to any given subscriber, since each message is intended for only one of the other participants.
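Here is a minimal sketch of the "known algorithm" idea: derive the channel name deterministically from the two user IDs so both clients compute the same name independently. The particular scheme (sorting the IDs and hashing them) is just one possible choice, not something PubNub prescribes.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MatchChannelName {

    // Both clients call this with the same pair of user IDs (in any order)
    // and get the same channel name back, so they can subscribe to it independently.
    static String channelNameFor(String userA, String userB) throws Exception {
        // Order the IDs so (A, B) and (B, A) produce the same input.
        String first = userA.compareTo(userB) <= 0 ? userA : userB;
        String second = userA.compareTo(userB) <= 0 ? userB : userA;

        // Hash the pair to keep the channel name short and free of odd characters.
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha.digest((first + ":" + second).getBytes(StandardCharsets.UTF_8));

        StringBuilder hex = new StringBuilder("match-");
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Same channel name regardless of argument order.
        System.out.println(channelNameFor("user123", "user456"));
        System.out.println(channelNameFor("user456", "user123"));
    }
}
```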
Yes, the best approach is for every "match" to have its own channel on which both participants publish/subscribe to communicate. PubNub has no limit on channels (nor does it charge based on channels), so this shouldn't create a performance or cost issue.
To add access control to the "match" channel (if you want to ensure no one else can access that channel), use PubNub Access Manager, documented here: http://www.pubnub.com/docs/javascript/tutorial/access-manager.html (use dropdown to change programming language)
If you want to provide chat history, so that the two participants can see messages from previous chat sessions, enable PubNub Storage & Playback, and use the PubNub.History() API, documented here: http://www.pubnub.com/docs/javascript/overview/storage-playback.html
If you want to see when those two participants are connected to the Match channel, use PubNub Presence, documented in the same place.
I am using the Twitter public stream API to search for some keywords. I am writing my script in Java and therefore I use twitter4j. Now I stumbled over the information about status deletion notices:
Status deletion notices (delete)
These messages indicate that a given Tweet has been deleted. Client code must honor these messages by clearing the referenced Tweet from memory and any storage or archive, even in the rare case where a deletion message arrives earlier in the stream than the Tweet it references.
https://dev.twitter.com/docs/streaming-apis/messages#Status_deletion_notices_delete
So I created methods to remove records from my database when such a notice occurs. Unfortunately, such a notice never arrives. I searched to find out what I am doing wrong and found some posts in the Twitter developer section concerning the same problem:
https://dev.twitter.com/discussions/17393
https://dev.twitter.com/discussions/19943
https://dev.twitter.com/issues/1355
https://dev.twitter.com/discussions/12836
but unfortunately none of these discussions got an answer. So it seems to me that I made no mistake in my code, but twitter4j never sends me a deletion notice.
I want to respect the privacy of the Twitter users, at least for legal reasons. So my questions are:
What can I do to respect the privacy of the users?
What do I have to do to satisfy my legal obligations?
One alternative seems to be to periodically iterate through all Tweets saved in my database and request them from Twitter to see whether I get a result back or not (if not, they were deleted). But this doesn't seem to be a practical approach, because the data will keep growing and at some point I will hit limits (in time, allowed Twitter requests, ...). So what should I do?
Thanks in advance! Your help is greatly appreciated.
Ludwig
twitter4j v.3.0.6
Given the sheer volume of tweets, it's unreasonable to expect that you would check whether every tweet is still there. You should make sure that you properly act on a delete notice from Twitter; the onus is on them to actually send the delete notification.
That being said, I do receive delete notifications from Twitter. However, we aren't using the public stream; we are using Site Streams, which rely on authorizing specific social accounts and streaming all updates for those accounts (e.g. favorites, follows, blocks, tweets, retweets, etc.) to us in real time.
If you are consuming a filtered stream, for example, it's probably not feasible (or at least very taxing) for Twitter to run all deleted items through the same pipeline as new items, or to guess which tweets you were sent based on the times your filter was running.
As noted in the issue you linked to, the public streaming API will not necessarily send them out. I'd endeavor to handle them, and possibly provide a tool to manually remove tweets if a request comes in through another channel, but not worry too much about it, given that Twitter doesn't provide a reliable facility for being notified of such instances.
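For completeness, here is a minimal twitter4j 3.0.x sketch of a filtered stream listener that honors deletion notices when they do arrive. The keyword, the credentials (read from twitter4j.properties on the classpath), and the saveToDatabase/deleteFromDatabase methods are placeholders for your own persistence code.

```java
import twitter4j.FilterQuery;
import twitter4j.StallWarning;
import twitter4j.Status;
import twitter4j.StatusDeletionNotice;
import twitter4j.StatusListener;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;

public class DeletionAwareStream {

    public static void main(String[] args) {
        // Credentials are expected in twitter4j.properties on the classpath.
        TwitterStream twitterStream = new TwitterStreamFactory().getInstance();

        twitterStream.addListener(new StatusListener() {
            @Override
            public void onStatus(Status status) {
                saveToDatabase(status);  // your existing persistence code
            }

            @Override
            public void onDeletionNotice(StatusDeletionNotice notice) {
                // Honor the deletion notice: remove the tweet wherever it was stored.
                deleteFromDatabase(notice.getStatusId());
            }

            @Override
            public void onTrackLimitationNotice(int numberOfLimitedStatuses) { }

            @Override
            public void onScrubGeo(long userId, long upToStatusId) { }

            @Override
            public void onStallWarning(StallWarning warning) { }

            @Override
            public void onException(Exception ex) {
                ex.printStackTrace();
            }
        });

        // Placeholder keyword filter.
        twitterStream.filter(new FilterQuery().track(new String[]{"your-keyword"}));
    }

    private static void saveToDatabase(Status status) {
        System.out.println("saved " + status.getId());
    }

    private static void deleteFromDatabase(long statusId) {
        System.out.println("deleted " + statusId);
    }
}
```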