Some time ago I searched for information about integrating YouTube into an iOS application.
Now I need to do this again, so I started looking for information on Google.
After a short time I am already confused.
Can I use this iOS YouTube sample, or do I have to use the YouTube Data API (v3)? And this?
Short answer:
The API refers to the HTTP interface for consuming Google's functionality.
One can use these APIs by issuing HTTP requests directly, according to the specification of the API, or by using one of the client libraries. The client libraries are a layer on top of HTTP that issue the HTTP requests and parse the responses. They give a simpler interface for invoking the API (e.g. using standard function calls in the given programming language rather than building HTTP requests) and they also simplify a lot of the complex parts such as authentication, refreshing tokens, etc.
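For example, here is a minimal sketch of consuming the YouTube Data API (v3) by issuing the HTTP request yourself. It is written in TypeScript and assumes Node 18+ so that the global fetch is available; YOUR_API_KEY is a placeholder. On iOS you would build the same request with NSURLSession, but the HTTP request itself is identical.

```typescript
// A minimal sketch (not an official sample): calling the YouTube Data API (v3)
// over plain HTTP. Assumes Node 18+ for the global fetch; YOUR_API_KEY is a
// placeholder for a key from the Google Developers Console.
const API_KEY = "YOUR_API_KEY";

async function searchVideos(query: string): Promise<void> {
  // Build the request exactly as the HTTP API specification describes it.
  const url =
    "https://www.googleapis.com/youtube/v3/search" +
    `?part=snippet&q=${encodeURIComponent(query)}&maxResults=5&key=${API_KEY}`;

  const response = await fetch(url); // a plain HTTP GET
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }

  // You parse the JSON response yourself.
  const body = (await response.json()) as {
    items: Array<{ snippet: { title: string } }>;
  };
  for (const item of body.items) {
    console.log(item.snippet.title);
  }
}

searchVideos("WWDC").catch(console.error);
```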
Long answer:
An application programming interface or API is the "contract" between a provider of some functionality and the consumer of some functionality that allows both the provider and consumer of that functionality to interoperate without knowledge of the underlying implementation of the other party. This "contract" includes such things as the number and types of the inputs, the names of the inputs (if they are required to invoke the functionality), any constraints on the inputs, the expected outputs, any constraints on the outputs, failure modes, etc.
Google provides a number of HTTP-based APIs for accessing functionality from its services. Its services implement these APIs, which are consumed by issuing HTTP requests and reading the HTTP responses. HTTP is a convenient protocol to implement, because every device and language can speak HTTP; however, it is not always the most convenient to use as a developer. In many cases, the inputs and outputs you want are objects, not HTTP requests and HTTP responses. And, in many cases, matching function signatures in the language of your choosing and type-checking of inputs is more convenient than memorizing the HTTP request paths or manually serializing/deserializing your objects to HTTP requests or content sent within the request. That is where the client libraries come in. Whereas the HTTP APIs are implemented on Google's servers, the client libraries are libraries that developers include in their application and are distributed to the devices on which those applications run. The client libraries issue the HTTP requests and interpret the responses, and provide a more convenient programming language-specific wrapper, for a variety of different programming languages.
The Data API link that you provided documents the HTTP-based API, whereas the sample application uses the client library (which invokes the HTTP-based API under the hood). The last link you provided, Cloud Endpoints for iOS, is unrelated to what you are trying to do; it documents a mechanism called Cloud Endpoints, a feature of App Engine, that allows developers to create their own HTTP APIs on Google's infrastructure and to auto-generate client libraries that wrap those HTTP APIs (much as Google auto-generates the client libraries for its own HTTP APIs).
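To make the contrast concrete, here is the same search done through a client library rather than raw HTTP. This sketch assumes the googleapis Node package purely for illustration; the iOS sample uses the Objective-C client library, but every language-specific client library follows the same pattern.

```typescript
// The same search through a client library (sketch; assumes the googleapis
// npm package). The library builds the HTTP request, sends it, and parses
// the response for you -- you work with function calls and objects instead.
import { google } from "googleapis";

const youtube = google.youtube({ version: "v3", auth: "YOUR_API_KEY" });

async function searchVideos(query: string): Promise<void> {
  const res = await youtube.search.list({
    part: ["snippet"], // the same parameters, but as a typed function call
    q: query,
    maxResults: 5,
  });
  for (const item of res.data.items ?? []) {
    console.log(item.snippet?.title);
  }
}

searchVideos("WWDC").catch(console.error);
```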
Here's a sample app you can use to get started building against the YouTube APIs on iOS.
There is also a helper library for playing YouTube videos in iOS.
Related
I have implemented a custom email server and web client. The server is just a REST API (similar to Google's Gmail API) that uses a third party (SendGrid) for sending and receiving. The emails are stored in a database. The web client just talks to the REST API for sending and receiving.
The problem with this approach is that it doesn't implement IMAP anywhere, which makes it impossible for standard clients (Outlook, iPhone, etc.) to connect to and use our email API. This limits customers to using only our client for email.
What I need is some sort of IMAP Server "facade" that will manage the connections to clients and make calls to my REST API for actually handling the requests (get email, send email, etc.).
How can an IMAP facade be implemented? Is there maybe a way to take an existing mail server, gut it, and point all its "events" at calls to my API?
tl;dr: write your gateway in Perl; use Net::IMAP::Server; override Net::IMAP::Server::Mailbox; and use one of the many Perl REST clients to talk to your server.
Your best bet for doing this quickly, while maintaining a reasonable amount of code security, is with Perl. You'll need two Perl modules. The first is Net::IMAP::Server, and here is the Github repository for that module. This is a standards-compliant RFC 3501 server that was purposely designed to have a configurable mail store. You will override the default Net::IMAP::Server::Mailbox implementation with your own code that talks to your custom email backend.
For your second module, choose your favorite Perl module(s) for speaking to your REST server. Your choice depends on how much fine-grained control you want over the construction and delivery of the REST messages.
Fortunately, here you have tons of choices. One possibility is Eixo::REST, which has a Github repository here. Eixo::REST seems to deal well with asynchronous vs. synchronous REST API calls, but it doesn't provide a lot of control over X509 key management. Depending on how googley your API is, there's also the REST::Google module. Interestingly, this family also has a REST::Google::Apps::EmailSettings module, specifically for setting Gmail-specific funkiness like labels and languages. Lastly, the REST::Consumer module seems to encapsulate a lot of https-specific things like timeout and authentication as parameters to Perl object instantiation.
If you use these existing frameworks, then about 90% of the necessary code should already be done for you.
Don't do this by hacking Dovecot or any other mail server written in C or C++. If you hack together a mail server quickly using a compiled language, your server will sooner or later experience all the joy of buffer overflows and stack smashing and everything else that the Internet does to fuck over mail servers. Get it working safely first, then optimize later.
(This is basically my comment again, but elaborated quite a bit more.)
Some IMAP servers, most notably Dovecot, are structured such that the file access is in a separate module with a defined interface. Dovecot isn't the only one, but it's by far the most popular and its backend interface is known to be appropriate, so I'd take that absent specific concerns.
There already exist non-file modules such as imapc, which proves that it can be done. When a client opens a mailbox backed by imapc, Dovecot parses the IMAP commands and calls message access functions in imapc; imapc issues new IMAP commands, parses the server responses, and returns C structs to Dovecot; Dovecot then fashions new IMAP responses and returns them to the client.
I suggest that you take the Dovecot source, look at src/lib-storage/index/imapc and the other backends in that directory, and implement one that speaks to your REST API as a client.
Since you're familiar with .NET, I would suggest hacking either of the following implementations of IMAPv4 servers to your liking:
Lumisoft Mail Server - a very old project indeed (let's call it "mature", huh?). Don't be too turned off by the decade-old website and the lack of a github link - the source is provided under "other downloads".
McNNTP - also an older project and with a major focus on NNTP (as the name says) but very close to what you're trying to achieve in terms of the IMAP component. Take a look, you'll probably find this a good starting point.
We are building our applications on a microservices-based architecture. As is typical with microservices, we now see a lot of cross-service interactions happening between services.
In order to safeguard the endpoints, we plan to implement JWT-based authentication for these secure exchanges.
There are two approaches we see to achieve this:
Embed a JWT engine in each application to generate the token (consumer side) and evaluate it (provider side). With an initial exchange of keys, token exchange should work smoothly for any future communication.
Have a JWT engine external to the applications that sits between all microservice communications for the distributed application and takes care of the whole token life cycle, including encryption/decryption and validation.
There are a lot of options for approach #1, as listed on https://jwt.io, but considering the overhead that token generation and management adds to a microservice, we prefer to go with the second option and use an external gateway.
After quite some research and looking at various API gateways, we have not yet come across a lightweight solution/tool that serves our need and gives us a centralised engine for an application comprised of many microservices.
Does anyone know of such a tool/solution?
If you have any other inputs on this approach, please let me know.
I also prefer option 2, but why are you looking for a framework?
The central application should only be responsible for managing the private key and issuing the tokens. Including a framework to solve this one concern could be excessive.
You could also think about implementing a validation service, but since the applications are yours, I suggest using an asymmetric key and verifying the token locally instead of issuing remote validation requests to the central application. You can provide a simple library to your microservices that downloads the public key and performs the validation. Embed any of the libraries from jwt.io, or build it from scratch; validating a JWT is really simple.
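As a rough sketch of that split, using the jsonwebtoken npm package (the key file names, claims, and issuer below are invented for the example):

```typescript
// Sketch of option 2 with local validation: the central auth service signs
// tokens with its private key; each microservice verifies them locally with
// the public key, so no remote validation call is needed. Key file names,
// claims and issuer are invented for the example.
import * as fs from "fs";
import * as jwt from "jsonwebtoken";

// --- central auth service: issue a token (RS256, asymmetric) ---
const privateKey = fs.readFileSync("auth-private.pem");
const token = jwt.sign(
  { sub: "order-service", scope: ["billing:read"] },
  privateKey,
  { algorithm: "RS256", expiresIn: "15m", issuer: "auth-service" }
);

// --- any microservice: validate the token locally ---
const publicKey = fs.readFileSync("auth-public.pem"); // downloaded/distributed once
try {
  const claims = jwt.verify(token, publicKey, {
    algorithms: ["RS256"],
    issuer: "auth-service",
  });
  console.log("verified:", claims);
} catch (err) {
  console.error("rejected:", err);
}
```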
If you need to reject a token before its expiration time, for example using a blacklist, then a central service would be needed. But I do not recommend this scheme because it breaks JWT statelessness.
Both scenarios could be implemented in Spring Cloud Zuul.
For more info:
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_router_and_filter_zuul
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_configuring_authentication_downstream_of_a_zuul_proxy
I'm responsible for a project that is producing the server backend for an iOS application.
I would like to formally define the service interface for the clients to call so the iOS, Android, and server teams can practice contract-first development.
In the dark past we would have used WSDL and generated RPC-style client and server interop boilerplate from that. However, this isn't the norm for iOS projects. We've also looked at Apache Thrift, but there is no code generator for Swift, and the Objective-C generator seems to produce code that relies on deprecated iOS APIs.
Which brings us to REST, which works well as a way to move object state around. It seems less good for the kind of conversation that says "Hey server, do X with these parameters and return me a result." We just end up creating server-side controllers for particular actions, and those "define" the service's calling convention by being sticklers for getting the right parameters. Contract-last.
Is there a standard way to do contract-first web service development for iOS clients, or am I just going to have to treat documentation as the spec?
tl;dr: No.
I'm not aware of a 'standard' way of doing things, but many client/server apps today do use some incarnation of a RESTful interface. JSON is the usual format.
There are some well-documented third-party utilities that can handle this for you client-side (like RESTKit, https://github.com/RestKit/RestKit, in the case of REST), or you can roll your own implementation based on Apple's provided NSURLSession or a networking library like AFNetworking (http://nshipster.com/afnetworking-2/).
If needed, iOS can also handle socket-based communication (third-party libs exist for this as well).
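Just to make "RESTful interface with JSON" concrete, here is what one call in such an informal contract might look like. The endpoint and the request/response shapes are invented; on iOS you would issue the same request with NSURLSession or RESTKit, and the agreed-upon JSON shapes effectively become the contract.

```typescript
// Sketch only: one call of a JSON-over-REST "contract". The endpoint and the
// request/response shapes are invented; the agreed-upon types are the de facto
// contract between the iOS/Android clients and the server.
interface CreateOrderRequest {
  productId: string;
  quantity: number;
}

interface CreateOrderResponse {
  orderId: string;
  status: "accepted" | "rejected";
}

async function createOrder(req: CreateOrderRequest): Promise<CreateOrderResponse> {
  const res = await fetch("https://api.example.com/v1/orders", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}`);
  }
  return (await res.json()) as CreateOrderResponse;
}
```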
It's now trivial to create a web app that sits atop Parse.com. Now that I have this web app, I want to expose parts of it to other developers via an OAuth-accessible API, so they can develop an app that lets my site's users "give them permission" via OAuth, after which they can access the API.
How would I start going about doing this?
Update: After @Mubix's response, I felt the following clarification would help.
Currently I am accessing Parse from the server via the REST API, to get around any JavaScript security issues regarding API keys etc. So the API would be served from a server other than Parse. Also, the server code is in JavaScript/Node.js. I came across https://github.com/jaredhanson/oauth2orize, which seems a likely candidate; I was wondering how others are doing it and whether anyone has actually gone a step further and integrated Parse access.
Hmmm... Interesting question!
Legal:
First of all, their ToS doesn't seem to prohibit what you are trying to do, but you should read it carefully before you start.
Implementation:
While Parse doesn't provide a feature for building your own APIs, you could implement something yourself. You could treat the third-party developers as users of your app, and you can use ACLs to control access.
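For instance, here is a rough sketch with the Parse JavaScript SDK; the class name, field, and user are just examples:

```typescript
// Sketch: treat a third-party developer as a Parse user and restrict what
// they can read with an ACL. Uses the Parse JavaScript SDK; the class name,
// field and user are just examples.
import Parse from "parse/node";

Parse.initialize("YOUR_APP_ID", "YOUR_JS_KEY");

async function shareWithDeveloper(developer: Parse.User): Promise<void> {
  const Document = Parse.Object.extend("Document");
  const doc = new Document();
  doc.set("title", "Shared report");

  const acl = new Parse.ACL();
  acl.setReadAccess(developer, true);   // this developer may read...
  acl.setWriteAccess(developer, false); // ...but not modify
  doc.setACL(acl);

  await doc.save();
}
```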
Problems:
I don't see any way to implement OAuth entirely within Parse.
How will third-party apps access your API? Ideally you would like them to use a REST interface, but with the Parse.com REST API you won't be able to manage access to different parts of your data.
Conclusion:
It seems like too much trouble to implement the API entirely within Parse. I would suggest that you write a thin API layer that takes care of auth and uses Parse as the backend. You can use one of the server-side libraries available for Parse, e.g. the PHP library or Node Parse.
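A very rough sketch of such a thin layer with Express is below: the OAuth token check is stubbed out (that part would come from whatever OAuth implementation you pick, e.g. oauth2orize), and the handler simply forwards the data access to the Parse REST API on the server side. The route, class name and keys are placeholders.

```typescript
// Sketch of a thin API layer in front of Parse (Express): your OAuth check
// guards the route, and the handler forwards data access to the Parse REST
// API on the server side, so the Parse keys never reach third-party apps.
// The route, class name and keys are placeholders; api.parse.com was the
// hosted Parse.com endpoint at the time of writing.
import express from "express";

const app = express();

const PARSE_APP_ID = "YOUR_APP_ID";
const PARSE_REST_KEY = "YOUR_REST_API_KEY";

// Stub: replace with a real check of the OAuth access token you issued
// (e.g. via oauth2orize).
function tokenIsValid(token: string | undefined): boolean {
  return Boolean(token);
}

app.get("/api/notes", async (req, res) => {
  const token = req.headers.authorization?.replace("Bearer ", "");
  if (!tokenIsValid(token)) {
    return res.status(401).json({ error: "invalid_token" });
  }

  // Server-side call to the Parse REST API ("Note" is an example class).
  const parseRes = await fetch("https://api.parse.com/1/classes/Note", {
    headers: {
      "X-Parse-Application-Id": PARSE_APP_ID,
      "X-Parse-REST-API-Key": PARSE_REST_KEY,
    },
  });
  res.status(parseRes.status).json(await parseRes.json());
});

app.listen(3000);
```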
I am currently creating a system consisting of three main parts. There is one authorization server and one resource server. In addition, I have a pub/sub API based on Node.js (JavaScript) alongside them. The authorization server and resource server are built using the DotNetOpenAuth libraries. The resources can be accessed by means of the token received from the authorization server.
Now, what would be the preferred way of working when I would also like the pub/sub API to be authorized by means of the same token? In the DotNetOpenAuth library I have the VerifyAccess method available, which does this for me, but I don't have this in my JavaScript. Would it be proper to have a separate web service do the verification, which I then call from my JavaScript?
Thank you in advance...
Having your Node.js code make a web request to .NET to call VerifyAccess would certainly be the simplest. Alternatively, if Node.js has the ability to perform asymmetric signature verification, and both asymmetric and symmetric decryption, then theoretically Node.js could validate the token directly. But that would be left as an exercise for the reader. :)
If you do accomplish it, please publish your result for others though.
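In the meantime, here is a minimal sketch of the simpler web-request route, assuming you expose a small wrapper endpoint around VerifyAccess on the .NET side; the /verify-access path and the response shape are invented and would be defined by you.

```typescript
// Sketch: the Node.js pubsub API delegates token checking to a small endpoint
// on the .NET side that wraps VerifyAccess. The "/verify-access" path and the
// JSON response shape are assumptions -- you would define them yourself.
async function isTokenValid(accessToken: string): Promise<boolean> {
  const res = await fetch("https://auth.example.com/verify-access", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ token: accessToken }),
  });
  if (!res.ok) {
    return false;
  }
  const body = (await res.json()) as { valid: boolean };
  return body.valid;
}

// Example use in the pubsub API before accepting a subscription:
// if (!(await isTokenValid(token))) { /* reject the connection */ }
```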