Ziggeo - what data is passed to webhook url? - ziggeo-api

I'd like to create a webhook that is notified when a video transcription occurs. I'm storing some of the video data on my end, and part of that is the transcription text of the video.
I have it set up to be passed as JSON.
The only problem is that I don't know what that data structure looks like; this is the only information in the docs about it:
What data gets passed when the event occurs? Is it just the Video Data that you can see in the Admin view of the video?

Ziggeo has a detailed page that shows how to use the webhooks here: https://support.ziggeo.com/hc/en-us/community/posts/115006656247-Using-Ziggeo-webhooks-aka-server-side-events
The site also has a sandbox that generates code to retrieve the webhook data: https://ziggeo.com/sandbox/webhooks (right now it's only for PHP and NodeJS, though).
Furthermore, you can use a service like https://webhook.site to receive the webhook and inspect the data it sends. Make sure you put the unique URL it creates for you into the Ziggeo dashboard.
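If you'd rather inspect the payload on your own machine, a minimal receiver that just logs whatever JSON Ziggeo posts is enough; here is a sketch in Python using Flask (my own choice of stack for illustration, not Ziggeo-generated code, and the route name is arbitrary):

from flask import Flask, request

app = Flask(__name__)

@app.route("/ziggeo-webhook", methods=["POST"])
def ziggeo_webhook():
    # Log the raw payload so you can see the exact structure Ziggeo sends for each event.
    payload = request.get_json(force=True)
    print(payload)
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)

Point the webhook URL in your Ziggeo dashboard at this endpoint (exposed publicly, e.g. through a tunnel) and trigger a transcription to see the JSON structure for that event.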
Ibnu (from Ziggeo)

Related

YouTube - URL source to channel banner WITHOUT API

Trying to write a tool to grab YT information, and I'm starting out with the banner image. Usually I use Inspect Element and it tells me the source; however, on a channel like this I see no such URL. I'd like to be able, in my script, to use something like wget and have it grab the image.
My hope is to NOT have to rely on the YT API, to save users from having to go through that.
I'm just trying to see if there is a URL scheme to retrieve a channel's banner. Thanks
In case it's better for me to add it all in one question, here it goes: I'd like to be able to grab all of the following w/o the API:
user avatar (highest quality)
video watermark (also in highest qual.)
channel description
related urls (at the top right, usually a channel's twitter account, main website, etc.)
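For what it's worth, here is a rough sketch of the no-API approach in Python, assuming the banner and avatar assets still show up as yt3.ggpht.com / yt3.googleusercontent.com URLs in the channel page source (this is undocumented and can change at any time; the example channel is hypothetical):

import re
import requests

# Hypothetical example channel; substitute the channel you care about.
html = requests.get(
    "https://www.youtube.com/user/bbc",
    headers={"User-Agent": "Mozilla/5.0"},
).text

# Collect every image URL served from YouTube's asset hosts (yt3.ggpht.com /
# yt3.googleusercontent.com) and pick out the banner/avatar/watermark by hand.
candidates = set(re.findall(r'https://yt3\.[^"\\\s]+', html))
for url in sorted(candidates):
    print(url)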

Limitations of Qualtrics API

With respect to the Qualtrics API (v3) documentation (https://api.qualtrics.com/docs/overview), there does not appear to be any means of sending a GET request through a REST client to get the individual survey responses for a specific survey (I suppose the developers figured that no one would be interested in decoupling the survey results from the admin panel).
The reason why I would like to be able to submit a GET request to get survey results is for real-time data visualization purposes that do not depend on me exporting the data every so often to re-update the visualization. If Qualtrics does not support such a GET request, which service (perhaps SurveyMonkey or its ilk) best facilitates what I'm trying to build? Or do I have to build an entire survey module from scratch? (shudders)
I agree that v3.0 has some big shortcomings. I have no idea what they are thinking. There should be a way to retrieve a specific response using its Response ID.
You can still use v2.5 of the API to do what you want.
SurveyMonkey has a REST API that allows you to fetch all your responses.
You can fetch all your responses by doing:
GET /v3/surveys/<survey_id>/responses
Which will give you a skinny payload (usually IDs only, and maybe a name or title but not in this case).
You can then get a specific response by doing:
GET /v3/responses/<response_id>
You can also fetch all responses as fatter payloads by doing:
GET /v3/surveys/<survey_id>/responses/bulk
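As a concrete sketch of those calls in Python (assuming the standard https://api.surveymonkey.com/v3 base URL and an OAuth bearer token; the survey id and the environment variable name are placeholders):

import os
import requests

BASE = "https://api.surveymonkey.com/v3"
HEADERS = {"Authorization": f"Bearer {os.environ['SM_ACCESS_TOKEN']}"}
SURVEY_ID = "<survey_id>"

# Skinny listing: response ids plus links to each full response.
listing = requests.get(f"{BASE}/surveys/{SURVEY_ID}/responses", headers=HEADERS)
listing.raise_for_status()
print([r["id"] for r in listing.json()["data"]])

# Bulk endpoint: the full answer data for every response in one (paginated) call.
bulk = requests.get(f"{BASE}/surveys/{SURVEY_ID}/responses/bulk", headers=HEADERS)
bulk.raise_for_status()
print(bulk.json()["data"][0])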
Or, depending on your use case (for example, if you have a visualization that you want to update in real time without polling for responses), you can set up a webhook.
POST /v3/webhooks
{
  "name": "My Response Webhook",
  "event_type": "response_completed",
  "object_type": "survey",
  "object_ids": ["<survey_id1>", "<survey_id2>", ...],
  "subscription_url": "https://mycallback.url"
}
Where subscription_url is your callback URL. Then, whenever any new responses for the defined surveys come in, a notification is sent to the subscription_url provided and you know to refresh your charts.
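Registering that webhook is a single POST; here is a sketch with the same placeholder token, survey id, and callback URL assumptions as above:

import os
import requests

BASE = "https://api.surveymonkey.com/v3"
HEADERS = {"Authorization": f"Bearer {os.environ['SM_ACCESS_TOKEN']}"}

resp = requests.post(
    f"{BASE}/webhooks",
    headers=HEADERS,
    json={
        "name": "My Response Webhook",
        "event_type": "response_completed",
        "object_type": "survey",
        "object_ids": ["<survey_id1>"],
        "subscription_url": "https://mycallback.url",
    },
)
resp.raise_for_status()
print(resp.json())  # the created webhook, including its id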
I have done that by getting contacts; every contact has a Response History object where you can get the information of the survey assigned to that respondent.

Downloading images directly from Parse.com, as opposed to using their API

I noticed that for a PFFile type Object stored on my Parse.com developer account, the link is open and accessible for anyone to view/download.
For example, a PFFile object named name.jpg, representing an image, could have a URL like:
http://files.parsetfss.com/<some garbled class UUID>/<some garbled image uuid>-name.jpg
where <some garbled class UUID> appears to be the same for every name.jpg stored on a class,
and <some garbled image uuid>-name.jpg appears to be a unique UUID appended with the actual object name, which is 'name.jpg'.
Using the above URL, anyone/any client can download the object.
So I have some questions regarding this:
Is this normal? Is this by design?
Will the URL for an object change, if nothing else changes?
Am I being unwise in using this information to download images directly, thereby saving the cost of one API call to Parse (although I think I'll make one API call anyway to get the URL)?
Will downloading directly from this URL perform better than, or at least as acceptably as, downloading via the Parse.com API?
Yes, it's normal. Protection is via the object; don't give access to anyone who shouldn't have it.
No, the URL shouldn't change, though strictly speaking you should query it from the file object each time you want it, to be sure.
You should care more about network calls than API calls in general. You could use Cloud Code to aggregate responses, or batch requests, but that doesn't reduce API calls.
The download is unchanged as you're always downloading the same file from the same link no matter which API you use to do it.
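To make the query-then-download flow concrete, here is a sketch in Python against the classic Parse REST API; the class name Photo, the column name image, the object id, and the credentials are all placeholders for illustration:

import requests

PARSE_HEADERS = {
    "X-Parse-Application-Id": "<APP_ID>",
    "X-Parse-REST-API-Key": "<REST_API_KEY>",
}

# One API call to fetch the object and read the file's current URL...
obj = requests.get(
    "https://api.parse.com/1/classes/Photo/<objectId>",
    headers=PARSE_HEADERS,
).json()
file_url = obj["image"]["url"]  # PFFile columns serialize with a "url" field

# ...then a plain GET on that URL; the download itself needs no Parse credentials.
with open("name.jpg", "wb") as f:
    f.write(requests.get(file_url).content)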

Extract video ID of a preceding YouTube advertisement

I'm writing a script that does stuff with the YouTube advertisement videos that people have to watch before the actual videos. (These ads at the start are just simple YouTube videos from the brand's channel.)
I've searched the whole source code and scripts, but I can't find the video id of those ads anywhere. It must be somewhere, but it seems to be hidden well.
Anybody got an idea where to look?
I did some research on your case.
The video id of the ad video is definitely not part of the initial source code, as you already figured out. YouTube makes an Ajax request to the http://googleads.g.doubleclick.net/ API to get information about related ad videos.
If you take a look at the source code, you can see a lot of JavaScript related to the Google ads part. By looking into the code, you can find URL routes to the API. See the screenshot; it is just an excerpt:
But unfortunately you cannot simply copy the URL and make a remote call to it. Doing so gets you a 400 Bad Request response.
As I figured out, there are missing params, which are dynamically added by YouTube's JavaScript.
If you compare it with the request that is actually made by YouTube, you can see that more params are sent:
compared to the request that is copied directly from the source code:
The result of the working request looks like this:
I tried several ways to make the invalid API request work, but have not found one. Debugging the JavaScript is not that easy, because it's obfuscated and minified; additionally, the variables are scoped within the function, so you would not have access to them anyway.
If you set a JavaScript breakpoint right before the XHR request, you can see the actual API request:
But it is within the local scope, so you have no access to it.
Later on, there is even a second request to the YouTube Data API to fetch information about the ad video.
In my opinion, there are only two ways to get the video id; both require JavaScript to run.
Look into the source code AFTER the JavaScript has run. Then you can get the id directly from your markup:
Hook into the Ajax requests and grab the data directly from your network traffic.
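For the network-traffic option, one way (my own tool choice, not something from this thread) is to put a logging proxy such as mitmproxy in front of the browser and dump the ad calls as they happen. A rough sketch:

# sniff_ads.py -- run with: mitmproxy -s sniff_ads.py (browser configured to use the proxy)
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # The ad metadata comes back in the responses from the DoubleClick endpoint,
    # so log the URL and the start of the body whenever one goes by.
    if "googleads.g.doubleclick.net" in flow.request.pretty_url:
        print(flow.request.pretty_url)
        print((flow.response.get_text() or "")[:500])

Note that inspecting HTTPS traffic this way requires installing mitmproxy's CA certificate in the browser first.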

How can I use the YouTube SUP API to retrieve recent uploads of some predefined users?

I wish to be able to check for the latest videos (in near realtime or at most a couple of minutes out) for a set of users (up to 200 or so) in a single call to the YouTube API and then store the IDs of uploaded videos in my own database. The only solution I believe there is for this is the YouTube SUP API but I'm not entirely clear on how it works and was wondering if someone could please explain it. I have read the entire API documentation on it but am still not completely clear.
I was assuming that one can call the SUP URL (http://gdata.youtube.com/sup) and check whether the user's hash has had any activity recently, and if so, do something with that. My issue is that I don't understand how you interpret the activity from ["b305e88","afd4"] in the SUP feed, and whether there is any way to specify a subset of users or whether you must search through the entire feed. It seems to take a fair few seconds to load the SUP feed.
On the SUP API page it also states that you can visit a URL such as https://gdata.youtube.com/feeds/api/users/bbc/events?v=2 to obtain the hash key for a user's feed, but as you can see if you try to visit it, the link appears to be broken. How else could I obtain the hash?
I'm currently wanting to do this in a Rails project while using the youtube_it gem but I don't believe this has support for it. Correct me if I'm wrong.
Edit
My mistake. The developer key is required to obtain the events of a user such as https://gdata.youtube.com/feeds/api/users/bbc/events?v=2&key=YOUR_DEVELOPER_KEY
Still no progress with the SUP method although I'm potentially considering using a channel and just automatically subscribing to each user. Every minute I will then poll for the list of new videos by the users.
I'd suggest using PubSubHubbub: http://apiblog.youtube.com/2010/10/pubsubhubbub-for-youtube-activities.html
A handler in your web application will automatically receive a POST whenever one of the feeds you're watching is updated, and the content of the POST will be the updated feed itself, saving you the trouble of having to fetch it.
There isn't much documentation specific to using PuSH and the YouTube API beyond that blog post, but the general PuSH docs all apply: https://pubsubhubbub.appspot.com/
Failing that, SUP should still work, so we could try to debug that further if you'd rather use that.
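As a rough sketch of such a handler (Flask and the endpoint path are my own illustrative choices), the callback has to answer the hub's verification GET by echoing hub.challenge and then accept the POSTed Atom feed:

from flask import Flask, request
import xml.etree.ElementTree as ET

app = Flask(__name__)
ATOM = "{http://www.w3.org/2005/Atom}"

@app.route("/push-callback", methods=["GET", "POST"])
def push_callback():
    if request.method == "GET":
        # Subscription verification: echo the challenge back to the hub.
        return request.args.get("hub.challenge", ""), 200
    # Feed update: the POST body is the updated feed itself, so no extra fetch is needed.
    feed = ET.fromstring(request.data)
    for entry in feed.findall(f"{ATOM}entry"):
        print(entry.findtext(f"{ATOM}id"), entry.findtext(f"{ATOM}title"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)

Each POST contains only the updated feed, so you can parse out the new uploads and write their ids to your own database without any extra polling.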
