Make authenticated requests to YouTube from iOS always with same user - ios

Recently, YouTube decided to make video tags unavailable publicly. So to get the tags for a given video, I need to make an authenticated request to the API as the owner of the video. This is not a problem in my case as I'm fetching my own videos.
However, I'm confused about the authentication flow since YouTube strongly recommends to use OAuth2. Since I'm always going to authenticate as the same user (the owner of the video, aka myself), I definitely don't need to have any browser page for the actual user of the app to do anything. I see how I could have done it using ClientLogin (hardcoding login and password into the app) but I'm not sure how to approach this using OAuth2.
One last detail - that is not necessarily relevant since a high-level answer would be enough - is that I'm developing on iOS. Also I looked at this https://developers.google.com/accounts/docs/OAuth2 and particularly the web server case which seems closest to mine but was not able to get a clear idea from it.
Thanks in advance for your help and don't hesitate if you need me to be more specific.

There is no OAuth flow that supports your use case.
In general, you should not be distributing your YouTube login as part of your application. Even if this were available via ClientLogin, after a certain number of logins, you would likely be presented with a challenge because the authentication servers would detect a strange usage pattern.
OAuth is not for distribution a single user's login to a large N, where N is the number of users of your application. OAuth is meant for your application to act on behalf of an end user, and because tags are no longer exposed to end users through the UI, it does not make sense to expose them to users via the API either. More details can be found here:
http://apiblog.youtube.com/2012/08/video-tags-just-for-uploaders.html
How many videos do you have? What is the purpose for needing the tag metadata? From a pragmatic perspective, here are a few alternative implementations that would be easier and would not require users to log in as you:
Store a single file mapping video IDs to tags on a server somewhere and fetch this periodically. Google App Engine is a good place to do this.
Put the tag data in the description in a predictable format (you host the videos), and generate the metadata from this.

Related

Use CloudKit Web Services' authentication flow in combination with Zapier

I'm developing a Mac app that uses CloudKit as its back-end. Some of my users are requesting the ability to ingest and extract data via an automation/integration service such as Zapier. For this, I need to introduce a web API.
I am planning to use CloudKit Web Services to access the app's data. This data is user-specific and hence, resides in a private database. As a result, CloudKit requires user authentication as described here.
Essentially the user needs to be redirected to an Apple-hosted authentication page. After successful authentication, an authentication token is provided that can be used for data operations. Similar to how OAuth2 works, but different enough to not work with Zapier's (or probably any other similar services) supported authentication schemes.
Who has done something similar? What are my options? I want to keep things as simple as possible and make my web API's implementation as thin as possible.
Thanks.
Niels
This is definitely doable and you are on-track with your thinking. Here's how I envision it working:
You could do all of this with a front-end web app (no server-side app needed). I personally prefer Vue.js but you probably have something in mind already.
Your app will need to authenticate the user to CloudKit using the flow you mentioned. I highly recommend you use the Web Services API and not try to wrestle Apple's neglected CloudKit JS API. For this, you are going to need to generate an API token in the CloudKit Dashboard.
You app would then prompt the user to authenticate to Zapier.
You should now have user credentials for both CloudKit and Zapier in place in the user's browser cache (you can save, for example, the CloudKit token to sessionStorage and likewise with Zapier).
Make API calls to Zapier, pull down the data, and then save it to CloudKit all within your JS app. It's all API transactions at this point. I'm a fan of Axios for making the HTTP requests.
If you are downloading files, transacting huge amounts of data, or doing processor-intensive stuff, you might consider using a server for that work. But if you just need a place to pull and push reasonable chunks of data, I see no reason why you can't do it all in a front-end app.
Alternatively, if you don't want a web app at all, and want to only have the user work in the Mac app, that can be done, too. Just make API calls directly to Zapier from within your Cocoa app. Whether or not this is feasible depends some on how you want it to work.
If you have more specific questions or need help with any of the implementation details, feel free to add a follow-up comment or ask a new question.
Good luck!
I think the other answer is mostly correct. I don't know much about CloudKit, but we can talk through what you'd need for it to work.
Let's say you had a simple iOS app that stored contacts. On the iOS side, Apple presumably abstracts the upload and download operations.
If you wanted to make a web viewer for synced contacts using CloudKit, you'd need an endpoint to fetch all rows belonging to the authenticated user (each of which would have a UUID, name, and a phone number). I believe that's possible with CloudKit code Apple provides (but let me know if I'm off base).
Now, we want to integrate with Zapier. Say, a "New Contact" trigger. You make some sort of authenticated HTTP request from Zapier to Apple on behalf of an authenticated user. It gives back a list of contacts and Zapier can trigger on the ones it hasn't seen before. To do that, Zapier needs some sort of user token.
That's where the little front-end page the other answerer mentioned comes into play. If you've got a web page that can surface a user's token to them, they can paste it into Zapier and all of the above becomes possible. I'm not sure what the lifespan of the token is, but hopefully it can be automatically refreshed as needed (rather than the user needing to take any manual action).
I'm not sure if what I've described is possible, but do let me know if it is. It would be huge if it were possible to integrate Zapier and the iOS ecosystem!
Edit to respond to comments:
Zapier won't be able to interact with CloudKit in a way sufficient for me (some minor business logic is needed)
I'm not sure what that entails for you, but it's common to put logic in the Zapier integration to structure data in a way Zapier expects. There's a full Node.js execution environment, so the sky's the limit here.
I don't think Zapier can authenticate against CloudKit as it uses a non-standard authentication scheme
Once you've got a user's token (described above, which is unusual), you will almost certainly be able to use it in requests to cloudkit. Zapier provides a "custom" authentication scheme which lets you do basically anything you want. So unless Apple uses something that fetch can't handle (unlikely), it should be fine (once you get the token).
I would like to push data directly from my app into Zapier and have it done whatever magic the user has configured
This is also probably possible. Zapier ingests data in two ways:
polling, where Zapier frequently makes a web request, store the IDs of items we've seen before, and act on the new ones. You can read more about that here. Assuming you can work your business logic into the integration, this is doesn't require an external server besides Apple's
webhooks, where Zapier registers a subscription with you and you send new data, on demand, to that address. This would probably require a webserver on your end to handle. It's optional though - you can do polling instead.
Hopefully this cleared it up a bit. Feel free to reach out to partners#zapier.com and reference this question to talk more about it.

Microsoft Graph API: List applications for signed-in user

While the Microsoft Graph API seems to be very complete feature wise, it seems like I am stuck at a fairly easy request. For a small web application I want to list apps that are registered in Azure. What a want to do with them is a little bit out of scope, but in the end I want to show the user some important applications (which we flag in some way - using tags or something like that) that the user has access to.
Now, using the /applications resource in the beta endpoint of the Graph API I can retrieve a list of applications. Now, the application does not need admin consent. When requesting the apps, it retrieves all registered apps, which is a bit odd I think. Why would it return all apps and not just the ones that are assigned to me?
But okay, lets move on. Now I have the list of apps (or the metadata of it). How can I determine if the signed-in user has access to this application (or it doesn't require assignment). Am I missing something or is this nowhere to be found?
You can use query parameters to customize responses. Please check the link https://learn.microsoft.com/en-in/graph/query-parameters
I have used appRoleAssignment with the neccessary parameters to retrieve all apps a user has access to. Turned out to be quite simple. Beta only, but stable.

Using Azure AD to secure a aspnet webapi

I'm writing an application that will be the backend for a react website. The website is to be used by our customers, but we will fully control the permissions of the user. We have decided to use Azure AD to secure requests, but will also be exposing the API for end users to use directly if desired.
My understanding is in Azure AD I will have to create an application that will allow web based implicit authentication (for the react site), as well as a native application that will allow a dameon based application to authenticate to the API.
This I believe means I will have two audience ids in my application.
I'm trying to get claims to include groups, and I can see if I edit the meta data of both applicaitons in azure AD to include "groupMembershipClaims": "SecurityGroup" I can get claims with the group IDs in, but no names.
I think I can also use appRoles to set roles the application uses, but I've yet to get that to come through as claims in the JWT, but I'm assuming it can be done, however I'd need to setup the roles on each applicaiton, then add the user twice which isn't really ideal. I also think that because my app is multi-teanated that external users could use this to set their own permissions, which isn't what I want to do.
Sorry I'm just totally lost and the documentation is beyond confusing given how frequently this appears to change!
TLDR: Do I need two applicaitons configured in azure ad, and if so whats the best way to set permissions (claims). Also is oAuth 2 the right choice here, or should I look at open id?
Right away I gotta fix one misunderstanding.
Daemon apps usually have to be registered as Web/API, i.e. publicClient: false.
That's because a native app can't have client secrets.
Of course the daemon can't run on a user's device then.
Since that's what a native app. An app that runs on a user's device.
This I believe means I will have two audience ids in my application.
You will have two applications, at least. If you want, the back-end and React front can share one app (with implicit flow enabled). And the daemon will need another registration.
I'm trying to get claims to include groups, and I can see if I edit the meta data of both applicaitons in azure AD to include "groupMembershipClaims": "SecurityGroup" I can get claims with the group IDs in, but no names.
Yes, ids are included only. If you need names, you go to Graph API to get them. But why do you need them? For display? Otherwise, you need to be using the ids to setup permissions. Names always change and then your code breaks.
I think I can also use appRoles to set roles the application uses, but I've yet to get that to come through as claims in the JWT, but I'm assuming it can be done, however I'd need to setup the roles on each applicaiton, then add the user twice which isn't really ideal. I also think that because my app is multi-teanated that external users could use this to set their own permissions, which isn't what I want to do.
Your thoughts for multi-tenant scenarios are correct. If you did want to implement these though, I made an article on it: https://joonasw.net/view/defining-permissions-and-roles-in-aad.
Why would you need to setup the roles in multiple apps though? Wouldn't they only apply in the web app?
If the native app is a daemon, there is no user.
Overall, I can see your problem. You have people from other orgs, who want access to your app, but you want to control their access rights.
Honestly, the best way might be to make the app single-tenant in some tenant which you control. Then invite the external users there as guests (there's an API for this). Then you can assign them roles by using groups or appRoles.
If I misunderstood something, drop a comment and I'll fix up my answer.
Azure AD is of course a powerful system, though I also find the OAuth aspects confusing since these aspects are very mixed up:
Standards Based OAuth 2.0 and Open Id Connect
Microsoft Vendor Specific Behaviour
ROLE RELATED ANSWERS
This is not an area I know much about - Juunas seems like a great guy to help you with this.
OAUTH STANDARDS AND AZURE
I struggled through this a while back for a tutorial based OAuth blog I'm writing. Maybe some of the stuff I learned and wrote up is useful to you.
AZURE SPA AND API CODE SAMPLE
My sample shows how to use the Implicit Flow in an SPA to log the user in via Azure AD, then how to validate received tokens in a custom API:
Code Sample
Write Up
Not sure how much of this is relevant to your use case, but I hope it helps a little on the tech side of things...

Twitter account linking with an Alexa skill for API read-only access

I'm working on a fairly basic Alexa skill that, in essence, searches through a specific Twitter feed looking for a hashtag, parses that tweet, and reads it back. The simplest way to do this seems to be using the Twitter API, since scraping appears to be against the TOS.
... crawling the Services is permissible if done in accordance with the provisions of the robots.txt file, however, scraping the Services without the prior consent of Twitter is expressly prohibited.
I've been having some trouble understanding how account linking works, as I've never dealt with OAuth before. I've been trying to follow the one tutorial around, but neither the text or video version were clear me.
Why the need for an external webapp?
...we need an OAuth implementation of our own to make the integration work correctly
What's wrong with the one provided by Twitter? Why can't any issues be fixed within the Lambda method, since the account integration isn't being touched otherwise AFAIK? Isn't having the tokens passed around via the URL a bad idea too? Their example code seems to require that the Consumer Secret be hard coded too.
Enter: “https://alexa-twitter-airport-info.herokuapp.com/oauth/request_token?vendor_id=XXXXXX&consumer_key=YYYYYY&consumer_secret=ZZZZZZ”.
At the very least, their webapp seems to be down for the time being, and it'd be nice to have an option that doesn't require paying money to host another copy.
I've seen this post discussing a Node.js OAuth implementation, but the necessity for such an implementation still escapes me.

Ruby Twitter Applications

I'm using https://github.com/jnunemaker/twitter to tweet to a users twitter when they post on their blog running on ROR. .e.g
Tweet : "I just posted a blog - 'I love ruby on rails' http://link-to-blog.com"
My question is, as I'm making many sites for different people do I have to create a new twitter developer application, with individual consumer keys & secrets, for each blog or is there a way to use the same twitter application?
Thanks,
Alex
You technically can use the same application in a variety of websites. Just use the keys/tokens twitter gives you in all your sites.
Nonetheless, this is a bad practice, since twitter will not be accounting your accesses to the API from the pages that are not the one you specify in the Callback URL. Furthermore, your users will return to that (and only to that) page that you specified in the callback URL, which can be very misleading for those that are in other site.
And finally the most important reasons are the following two:
You'll get to the request limit quicker than if you had several applications
You'll get to the user limit quicker than if you had several applications
The limits that twitter manages are not very big so I can tell you that the twitter functionalities won't work if you get a good peak of visits (happened to me twice). Or they may not work if you're site receives a lot of visits at a certain time. No matter if your cache your API or not, you'll end up filling the limit.
Here is the twitter documentation about this:
Caching. We recommend that you cache API responses in your application or on your site if you expect high-volume usage. For example, don't try to call the Twitter API on every page load of your hugely popular website. Instead, call our API once a minute and save the response to your local server, displaying your cached version on your site. Refer to the Terms of Service for specific information about caching limitations.
Rate limiting by active user. If your site keeps track of many Twitter users (for example, fetching their current status or statistics about their Twitter usage), please consider only requesting data for users who have recently signed in to your site.
Scale your use of the API with the number of users you have. When using OAuth to authenticate requests with the API, the rate limit applied is specific to that user_token. This means, every user who authorizes your application to act on their behalf, has their own bucket of API requests for you to use.
Request only what you need, and only when you need it. For example, polling the REST API looking for new data is inefficient for both your application, and the Twitter API. Instead consider using one of the Streaming APIs as a signal of when to make REST API requests.
If you have any question, don't hesitate to comment below. I had terrible experiences with this when my site got mentioned by a few important twitter accounts

Resources