I am working on an iOS app that uploads images and videos and saves them per user. I was able to integrate Amazon S3 and perform the upload from the iOS app, and I already have a Node.js backend where I persist metadata about each file saved in S3, along with the S3 ID I get back on the iOS side.
My question is: is this a good architecture, or should I move the S3 upload to the backend? How do other apps (like Instagram or Vine) do it? Should the mobile device handle the upload, or should the backend?
Thanks
What you are doing is considered best practice: let the mobile devices upload directly and securely to S3.
Documentation:
http://docs.aws.amazon.com/mobile/sdkforios/developerguide/s3transfermanager.html
https://aws.amazon.com/articles/3002109349624271 (a bit outdated)
You must ensure that only your users can upload objects to S3 by crafting a correct IAM policy (see the sketch after the link below). Depending on how you authenticate your users, Cognito Identity can help broker identity tokens received from third-party providers (like Google, Facebook, or Amazon) or from your own provider (an OpenID Connect token) with AWS STS, so each device receives a temporary Access Key and Secret Key.
Documentation:
http://docs.aws.amazon.com/mobile/sdkforios/developerguide/cognito-auth.html
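For illustration only, here is a minimal sketch of such a policy, attached to the Cognito authenticated role, that scopes every user to their own prefix in a bucket. The bucket name my-app-media is a placeholder; the ${cognito-identity.amazonaws.com:sub} policy variable resolves to the caller's Cognito identity ID:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": [
            "arn:aws:s3:::my-app-media/${cognito-identity.amazonaws.com:sub}/*"
          ]
        }
      ]
    }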
Direct upload allows your application and your user base to scale without requiring additional compute power on the backend. S3 is a massively parallel object store; it will handle the traffic from your mobile fleet and offload low-level tasks such as monitoring, scaling, and patching from your backend.
Now that Lambda is available (in Preview), you can also consider capturing metadata about each S3 object in a Lambda function triggered by the upload and writing that metadata to your backend store (DynamoDB or a relational database) directly from Lambda; see the sketch after the documentation links below. Considering the generous free tier of Lambda, this solution would be much more cost-effective than running your own backend.
You are already familiar with Node.js, the runtime Lambda uses, so there will be almost no learning curve for you.
Documentation:
http://docs.aws.amazon.com/lambda/latest/dg/welcome.html
http://aws.amazon.com/lambda/pricing/
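As a rough sketch of that idea (the table name MediaMetadata and the attribute layout are assumptions for illustration, not part of your setup), a Node.js Lambda function subscribed to the bucket's object-created events could look like this:

    // Triggered by an S3 "object created" event; records object metadata in DynamoDB.
    var AWS = require('aws-sdk');
    var dynamodb = new AWS.DynamoDB.DocumentClient();

    exports.handler = function (event, context, callback) {
        var record = event.Records[0].s3; // S3 event notification payload
        var item = {
            // S3 keys arrive URL-encoded, with '+' for spaces
            objectKey: decodeURIComponent(record.object.key.replace(/\+/g, ' ')),
            bucket: record.bucket.name,
            size: record.object.size,
            uploadedAt: new Date().toISOString()
        };

        // 'MediaMetadata' is a hypothetical table name.
        dynamodb.put({ TableName: 'MediaMetadata', Item: item }, callback);
    };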
I am trying to code an iOS application and already have an EC2 server designated for the app. I want to know how the app could send image data to the server. The EC2 server would continuously receive incoming image data from all the users of the app, and would then process that data. It would be similar to what applications such as Instagram do, but of course not at such a large scale.
I am a beginner at client-server communication and want to know how to implement this in my app. I also do not use Stack Overflow very frequently, so please tell me if I am doing something wrong or if you need more information.
To be more specific: a user would post an image in the app. I have already set up an EC2 server that could receive that image. I want all of the images that users post to be delivered, processed, and then stored on the EC2 server. Is there some way to handle the actual delivery of the data? The question is a little broad because I want to know where to look. Would I have to write a script that constantly runs in the background and receives data on some port? Is there another service I could use that handles this?
Briefly, you'll have issues running on a single EC2 instance if many users send images at the same time.
Look into setting up API Gateway <-> Lambda <-> DynamoDB or S3 on AWS. Then your client can POST images/data to your gateway with an HTTP request; a rough sketch of the Lambda side follows.
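As a minimal sketch (the bucket name and the request fields are made up for illustration, and an API Gateway proxy integration is assumed), the Lambda behind the gateway could store a posted image like this:

    // Hypothetical Lambda behind API Gateway: accepts a base64-encoded image
    // in the request body and stores it in S3.
    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();

    exports.handler = function (event, context, callback) {
        var body = JSON.parse(event.body); // proxy integration passes the raw body
        s3.putObject({
            Bucket: 'my-app-uploads',               // assumed bucket name
            Key: body.userId + '/' + body.fileName, // assumed request fields
            Body: Buffer.from(body.imageBase64, 'base64'),
            ContentType: 'image/jpeg'
        }, function (err) {
            if (err) return callback(err);
            callback(null, { statusCode: 200, body: JSON.stringify({ ok: true }) });
        });
    };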
First you must decide whether your data is streaming (continuously pushed from the server) or stored (pulled from the server as needed). The Instagram example you provided suggests that you have no need for real-time streaming data.
A streaming solution is more complicated and typically requires a technology like WebSockets (or AWS IoT). A storage solution will be much simpler.
For storage you have the choice between creating and managing servers on a platform like EC2 (you'll need more than one server to scale to many users), or using a managed 'serverless' technology like Lambda, where you only need to provide the code. The tradeoff for this convenience is usually price.
For image storage, a typical pattern is creating database records that contain an S3 URL for the underlying image (as well as any metadata). You can create this database record and upload your file using whatever server technology you choose. Lambda is usually fronted by API Gateway, but remember that the AWS SDK can also invoke Lambda functions directly, as the sketch below shows.
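To illustrate that last point, here is a hedged sketch of a client invoking a Lambda function directly with the AWS SDK for JavaScript, skipping API Gateway entirely. The function name, region, and payload fields are all hypothetical, and credentials are assumed to come from Cognito or a local profile:

    // Invoke a Lambda function directly, without API Gateway.
    var AWS = require('aws-sdk');
    var lambda = new AWS.Lambda({ region: 'us-east-1' });

    lambda.invoke({
        FunctionName: 'StoreImageRecord', // hypothetical function name
        Payload: JSON.stringify({
            s3Url: 'https://my-app-uploads.s3.amazonaws.com/user-1/photo.jpg',
            caption: 'example metadata'
        })
    }, function (err, data) {
        if (err) console.error(err);
        else console.log('stored:', data.Payload);
    });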
I am developing a Node.js application and using AAD (Azure Active Directory) to secure an Azure Function.
There will be multiple Node.js clients, but I don't need a separate user for each of them (all the instances should be treated as the same client).
How should I go about implementing this, and are there any security concerns?
Edit
The protected resource is an Azure Function with an HTTP trigger.
I just want to limit access to people who have the Node.js client installed. I don't want the user to enter their credentials. My question is: which flow should I use, and how should I go about it?
If you do not want to use the user's credentials, then evaluate Azure Active Directory v2.0 and the OAuth 2.0 client credentials flow.
This type of grant is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user.
While this solution may not look like an exact fit, you can use admin consent to make it work for you. There are additional considerations, such as securing the credentials on each machine, that you also have to look at.
In the client credentials flow, permissions are granted directly to the application itself. When the app presents a token to a resource, the resource checks that the app itself is authorized to perform the action, not that a user is.
If this looks promising, then also look at the azure-activedirectory-library-for-nodejs to get you going; a minimal sketch follows.
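As a minimal sketch using that library (adal-node), with the tenant ID, client ID, client secret, and resource URI all placeholders for your own AAD app registration, acquiring a token with client credentials looks like this:

    var AuthenticationContext = require('adal-node').AuthenticationContext;

    // All values below are placeholders for your own app registration.
    var authority = 'https://login.microsoftonline.com/<tenant-id>';
    var resource = '<app-id-uri-of-the-protected-function>';
    var clientId = '<client-id>';
    var clientSecret = '<client-secret>';

    var context = new AuthenticationContext(authority);
    context.acquireTokenWithClientCredentials(resource, clientId, clientSecret,
        function (err, tokenResponse) {
            if (err) return console.error(err);
            // Send this as the Authorization header when calling the Function:
            console.log('Bearer ' + tokenResponse.accessToken);
        });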
I plan to build a back end based on microservices. The following picture presents my current idea:
Two important features are:
uploading large text and/or video files
stream video - display it to users in the web app and native mobile applications
The tech stack is not finally set, but initially I am thinking about:
web app - ReactJs / Angular
backend apps - Ruby on Rails
I have the following doubts about my current concept:
Should the API Gateway work as a router that redirects requests from users to specific microservices? Or should it be a dedicated app (e.g., a Ruby on Rails app) with its own API?
How should I handle authorization? Should I use a separate microservice for this? Let's say a user uploads a file and his request should go to the third microservice, "Big Data Upload". Where and when should I authorize his access? In that microservice, or earlier in the API Gateway? Or should authentication also be done in the "Authentication" microservice?
Uploading large files - let's say a user wants to transfer a large file (a video, or a compressed text file with raw data) from the mobile app to the backend via HTTP. His request goes to the API Gateway and is then redirected to the "Big Data Upload" microservice, which saves the file to the object storage. Is this the right path for uploading files? Or could I take some shortcut to make the file's route shorter?
Video streaming - when a user uploads a video file, I would like to place it into Assets (object storage, e.g., Amazon S3). Is that enough to present the video to users in the web app and mobile apps (besides the transcoder service and CDN)?
Load balancing - is it reasonable to use load balancing to control the flow to microservice instances (in the picture, between the API Gateway and the green microservices)? Or is it not a good approach, because we could lose some information about the request/recipient/user, or the API Gateway would become an even more significant bottleneck?
In your opinion, does this architecture concept have good potential for easy scalability? Omitting hardware and software configuration, of course.
So I'm creating an app that really only communicates with one other Rails application, besides some remote touch screens. The app is only available to individuals who own one of these touch screens, and to an admin. Therefore, I really don't see the point in being able to sign in with Twitter, Facebook, etc. However, I need SOME sort of HTTP authentication using request/access tokens in order to 1. authenticate a user and 2. be able to tell which user is communicating with the server (and when). I've spent about a week (I'm a Rails newb) researching OAuth, OmniAuth, etc., and I'm asking two things:
Because I'm authenticating between my own two apps, which gem would be best for my situation?
Where would I write the logic for request/access tokens?
I really can't find any good tutorials for this.
If you don't need any kind of integration with existing identity providers, then Devise is all you need. It provides a simple way for you to manage user accounts, and users will log in using their email addresses and passwords.
It gets trickier to authenticate against another app.
Method 1
If you don't need much communication between the two apps, you can have the user log in to the main app, then generate a temporary token that the user can use in the secondary app. Finally, have the secondary app include this token in all communications with the main app. Real-world examples include Pivotal Tracker, which gives users an API key that they can use in webhooks on GitHub.
Trivial Example
User goes to Main.com and logs in using email and password.
Main.com generates a temporary token for the user.
User gives the token to Sub.com.
Sub.com contacts Main.com using <user>:<token>@main.com/some/path?some=query
There are many security issues with this, but it's good enough for non-critical use cases. You might want to use SSL to protect the tokens.
Method 2
However, Method 1 is not very secure. A more robust and secure solution is to make the main app an OAuth provider, and then have the secondary app authenticate against the main app using OAuth. Here is a Railscast that explains how to do that with DoorKeeper. You can use OmniAuth in the secondary app.
I am designing a system that will run online under Microsoft Windows Azure. One component is a REST-based web service which will really be a wrapper (using the proxy pattern) around the REST web services of a business partner, which deal with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response, and then passing it back to the client.
There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Windows and Mac), mobile apps (iOS), and a web front end. Having a single API of our own, which we then map to our partner's, protects us if that partner ever changes.
I want our service to support both JSON and XML as data transfer formats: JSON for the web, and probably XML for desktop and mobile (we already have an XML parser in those products). Our partner also supports both of these formats.
I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably code defensively for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then turn around and call our partner's API. There probably is not much choice about that, though. But in the back of my mind I wonder if a more dynamic language would be a better choice.
I want to reach out and see if anybody has had to do this before, what technology solutions they used (I am not attached to this one; these days Azure can host other technologies), and whether anybody who has done something like this can point out any issues that came up. Thanks!
Researching the issue only seems to turn up solutions that focus on connecting to a SOAP web service through a proxy server, not what I am referring to here.