I have content in Amazon CloudFront. The content is private, but users are given access to it for a limited period of time when they make a purchase.
I want the URLs given to users to be generated for that specific user's IP address, or perhaps for their current session key. I am aware of the option of generating an authenticated URL to a private object that expires after a certain time period. The problem with that is that, as long as the URL has not expired, users can access the content from different machines, which is not something we want users to be able to do.
The expiring URLs you mentioned are made using CloudFront's signed URLs with a "canned" policy (see Creating Signed URLs for Amazon CloudFront for examples).
If you also want to limit access based on the requester's IP address, you need to use a custom policy, which allows you to specify a date range and an IP address (or range).
See http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/index.html?RestrictingAccessPrivateContent.html#CustomPolicy for specifics.
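For illustration, here is a minimal sketch using boto in Python, assuming you already have a CloudFront key pair; the domain, file name and IP address are placeholders:

import time

from boto.cloudfront.distribution import Distribution

# hypothetical distribution object for your CloudFront domain
distribution = Distribution(domain_name='d123example.cloudfront.net')

# custom policy: valid for one hour and only from the purchaser's address
signed_url = distribution.create_signed_url(
    url='http://d123example.cloudfront.net/private-video.mp4',
    keypair_id='<your_keypair_id>',
    expire_time=int(time.time()) + 3600,
    ip_address='203.0.113.56/32',  # assumption: you captured this IP at purchase time
    private_key_file='/path/to/private-key.pem')

Passing ip_address (or valid_after_time) makes boto emit a custom policy instead of the canned one, so the policy document is appended to the query string alongside the signature.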
Enjoy!
I'm using Matomo to track users' page visits. All I'm collecting is the userId and the pages each user visited, nothing more. Do I still need to request consent? I'm not sharing user data with any other apps.
You are close. Two caveats:
Matomo will store the full URL of each visit. If your site passes data in GET request parameters, there can be names and personal information in the stored URLs.
I am assuming you are anonymizing IP addresses? Matomo allows partial or full anonymization, which removes geo data.
Check those two things and you are probably good.
Depending on whether you are staying compliant with the EU generally or with a specific EU country, the interpretation of GDPR and PII (not identical concepts) will vary a little. Matomo has guides for checking the relevant configuration settings for each of these:
CNIL https://matomo.org/blog/2021/10/matomo-exempt-from-tracking-consent-in-france/
GDPR https://matomo.org/gdpr-analytics/
PII https://matomo.org/blog/2018/04/how-to-not-process-any-personal-data-with-matomo-and-what-it-means-for-you/
These are helpful for confirming which settings to apply and whether to modify the default settings.
I am trying to serve video files to my app's users via Amazon CloudFront using signed URLs. I created the signed URLs following the documentation and it works perfectly well. The generated URL carries the Signature, Expires and Key-Pair-Id parameters.
Issues
What I am trying to achieve is to serve the video files only when the request comes from my particular mobile application. I am looking for a way to authorize the request (on a signed URL) on the CloudFront side.
So if a user accesses the signed URL from our mobile app, we want to serve the content, but if the URL is accessed from the web or any other mobile client, we would like to raise an authorization error or a 404.
I have gone through the documentation and a couple of blogs trying to achieve the above, and everyone has pointed me towards signed URLs, which I am already using. But the URLs are still accessible directly via the browser.
Also, I would like to know why a signed URL has the signature as a GET parameter, since if the signature is removed, the content is still accessible via the URL without the GET query parameters.
Signed Url: http://d2z7g8y6l5f1j0.cloudfront.net/test_upload.mp4?Expires=1456828601&Signature=R3tljkRxGM9se2S4IJT908sT2BBGNJkpWE9IE-v1GAt-QY0WcaEVEY-OYvSSlhFK1ueNcWhgAscJQ7J~qUKZUt3XS5raKU3kj9STKYYzCemRRm1j5DE8XfhjRKRggSSw138F0lr~tDt~TLoJ7Pj9NNvoGl42jNNLaET7~d9pkAGAh-sNpoS1gz~d0CZTo41ZTFMIzshgZNxrWpCOR0PrLHfRALy2H9-Z9w4XfU4v66WEseVQ3FWyeXFyV0UO2S-KIXbe1ODiHFC6Ae6AJlWzoFfIGAxiLymmtUMJgeQHnu80u97ysMbbNYvek-S0tQBkkID3zC~tDQH~EjXPYcNUbA__&Key-Pair-Id=APKAINPV56WSGDECRTPQ
^^^ Serves the content
Original Url: http://d2z7g8y6l5f1j0.cloudfront.net/test_upload.mp4
^^^ Still serves the content
What's the difference between the above URLs?
Further Issue
The signed URL that I generated still serves the content without the query parameters, so what is the point of the Expires GET query parameter? Or is the issue that I have not generated the URL correctly?
I used the following method to generate my signed URL:
import time

from boto.cloudfront import CloudFrontConnection
from boto.cloudfront.distribution import Distribution

# establish the CloudFront connection
cloudfront_connection = CloudFrontConnection('AWS_KEY', 'AWS_SECRET')

# the URL expires 3000 seconds from now
expiry_time = int(time.time() + 3000)

# get the distribution
distribution = Distribution(connection=cloudfront_connection,
                            domain_name='<specified_domain_name>',
                            id='<specified_distribution_id>')

# create the signed URL
signed_url = distribution.create_signed_url(url='<cloudfront_url>',
                                            keypair_id='<cloudfront_keypair_id>',
                                            expire_time=expiry_time,
                                            private_key_file='<private_key_location>')
I have gone through the documentation and a couple of blogs trying to achieve the above, and everyone has pointed me towards signed URLs, which I am already using. But the URLs are still accessible directly via the browser.
Perhaps you have a misunderstanding of the signed URL feature. Any client that has the URL can access the content; there is nothing limiting it to a specific mobile app, desktop browser or anything else. As long as the URL is valid (i.e. it is within the validity period and has not expired, the request comes from any IP range you specified, and so on), any client will be allowed access.
Your application should generate the signed URL in real time when the user requests it, and it should expire within a time frame that is acceptable to you. This is explained in the docs under How Signed URLs work.
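For illustration, a sketch of that pattern with boto; the helper name and the 60-second window are assumptions, not part of the documentation:

import time

def signed_url_for(distribution, path, keypair_id, private_key_path):
    # hypothetical helper: sign at request time, valid only briefly
    expires = int(time.time()) + 60
    return distribution.create_signed_url(
        url='http://%s/%s' % (distribution.domain_name, path),
        keypair_id=keypair_id,
        expire_time=expires,
        private_key_file=private_key_path)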
Also, I would like to know why a signed URL has the signature as a GET parameter, since if the signature is removed, the content is still accessible via the URL without the GET query parameters.
You can set up a cache behavior that restricts access to requests carrying valid signed URLs; the fact that your unsigned URL still serves the content means the cache behavior serving it does not restrict viewer access. To summarize, when you set up the distribution, you can configure various cache behaviors based on the path that the user is requesting.
This topic is a bit buried in the documentation. See the docs on Cache Behavior Settings, and in particular the Path Pattern and Restrict Viewer Access subsections.
I am working on an iOS app which allows downloading and HTTP Live Streaming of private videos. The videos are stored in an Amazon S3 bucket (as mp4 files, segmented into m3u8/ts files), and CloudFront is enabled and connected to the bucket.
Since the content is private, I need to sign the URLs when connecting via CloudFront. Signing requires the private key, so it's not possible to generate signed URLs in the iOS app without shipping the private key in the bundle. And that would be a bad idea!
So I decided to write a simple Ruby server, which performs the URL signing and redirects to the generated signed CloudFront URL as follows:
http://signing.server.local/videos/1.mp4 → https://acbdefg123456.cloudfront.net/videos/1.mp4?Expires=XXX&Signature=XXX&Key-Pair-Id=XXX
http://signing.server.local/videos/1.m3u8 → https://acbdefg123456.cloudfront.net/videos/1.m3u8?Expires=XXX&Signature=XXX&Key-Pair-Id=XXX
For video downloads this works well, since there is only one request. But when I want the content streamed and give MPMoviePlayerController the URL of the signing server, only the first request is signed by the server and redirected to CloudFront. For subsequent requests, MPMoviePlayerController takes the first signed CloudFront URL as the base and tries to connect directly, without going through the signing server.
The paths in the m3u8 files are relative.
Any suggestions on how to implement this without sending all the content through the signing server?
The correct way to do private HLS with S3/CloudFront, or any other storage/CDN, is to use HLS encryption. See the Apple documentation on this topic.
In addition to the storage where your playlists and segmented video files are kept, you have to run a secure HTTPS server that stores the top-level playlists and the keys. The keys are generated while segmenting with the Apple HLS tools.
Here is how it works:
The MPMoviePlayerController gets a URL pointing to the top-level playlist (.m3u8) on the secure HTTPS server.
This file contains links to the variant playlists (prog_index.m3u8), which are stored in S3/CloudFront and which point to the video files (.ts).
Additionally, the variant playlists contain a link to the keys needed to decrypt the video files. These keys are stored on the secure HTTPS server as well.
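For illustration, a variant playlist might look roughly like this (the URLs are hypothetical; note the key URI points at the secure HTTPS server while the segments stay on CloudFront):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-KEY:METHOD=AES-128,URI="https://signing.server.local/keys/video1.key"
#EXTINF:10,
https://acbdefg123456.cloudfront.net/videos/1/fileSequence0.ts
#EXTINF:10,
https://acbdefg123456.cloudfront.net/videos/1/fileSequence1.ts
#EXT-X-ENDLIST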
(Diagram not reproduced here; it is taken from the presentation Mobile Movies with HTTP Live Streaming, CocoaConf DC, Jun '12.)
Of course there are possibilities to make the infrastructure more secure, see the linked Apple documentation.
I also created a Ruby script for segmenting that produces the output with given base URLs, which makes things a lot simpler.
Lukas Kubanek has the right answer. However, you can get the effect of signed URLs by putting the top-level playlists in a "private" bucket and putting all the other playlists and .ts files in a public bucket. This is pretty much as secure as using signed URLs for everything: anyone who wants to can still download and save the content, but they can't merely share the URL they were given. They can, of course, open the top-level playlist and then share a single stream of their choice, or host the top-level playlist themselves, but it's at least a small level of security-by-obscurity that may be enough for your content. Also, if you sign every single segment, you run into a problem with content that is longer than your time limit, or with the user simply pausing the video until the segment links expire.
I think you need some way to avoid doing two requests to different servers for each chunk of video.
Possible solution: could you change the CloudFront private key every few minutes? If so, just authenticate however you want (a bidirectional handshake) and send the app the current private key. If it expires, or if there are errors because it expired earlier than expected, just re-authenticate and get a new private key.
Possible solution: talk to the authentication server when you want to play video X, and get signed URLs for every part of that video, or even better, an m3u8 file containing signed URLs, and then play those directly (see the sketch after this list).
Possible solution: run everything through a local proxy (on the loopback interface of the iOS device), then modify request URLs as needed, or turn them into redirects.
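As a sketch of the second idea, assuming the boto setup shown earlier in this thread (the helper name and base URL are hypothetical):

import time

def sign_playlist(m3u8_text, distribution, keypair_id, private_key_path,
                  base_url='https://acbdefg123456.cloudfront.net/videos/'):
    # rewrite each segment line of a variant playlist into a signed CloudFront URL
    expires = int(time.time()) + 3600  # assumption: one hour covers playback
    lines = []
    for line in m3u8_text.splitlines():
        if line.endswith('.ts'):
            line = distribution.create_signed_url(
                url=base_url + line,
                keypair_id=keypair_id,
                expire_time=expires,
                private_key_file=private_key_path)
        lines.append(line)
    return '\n'.join(lines)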
Is it possible to authorize users at a machine level? For example, only authorized computers (my personal laptop or other managers' PCs) should get access to the admin page; any other computer should get an access-denied message or something else. Authorized computers might still have to provide an admin username and password, in case someone could fake a machine's identity. I'm not a security expert though.
Correct me if I misunderstand, but you are asking to only allow visitors on specific machines to access your website?
Jumping right into a solution here. The first question is: how do you know which machines are the managers' machines? Do you have a list of their IP addresses? Do you have some other ID on them?
If you have their IP addresses, then whitelist those addresses and block all others.
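A minimal sketch of that idea as WSGI middleware in Python (the addresses are hypothetical, and if you sit behind a proxy or load balancer you would need to check the forwarded address instead):

ALLOWED_IPS = {'203.0.113.10', '203.0.113.11'}  # hypothetical manager addresses

class IPWhitelistMiddleware(object):
    # reject any request whose source address is not on the whitelist
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if environ.get('REMOTE_ADDR') not in ALLOWED_IPS:
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return [b'Access denied']
        return self.app(environ, start_response)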
If you do not have their IP addresses, then you are limited. There is no machine ID that can be accessed through a web browser, so you'll need to create your own ID by setting a long-lived cookie and building a registration process.
Since you already have a login process, this next part is fairly easy. You've used this solution before: when you sign in to Google Mail and tick "remember me" and don't need to sign in again after your computer restarts, Google has basically marked your machine as yours (by setting a cookie).
Now, if you want to get super fancy, enterprises have NAC set up. Every system is identified before being allowed to connect to the network, and certain systems are given more access than others. For example, at a software development company, engineers may be given access to a production network while sales staff are not. When they connect, sales staff are moved to a restricted VLAN after identifying who they are and who the machine belongs to. If that were the case for your company, then you would whitelist an entire subnet block.
Last point: Chase Bank uses the machine-cookie concept like so. The first time you log in, they ask for your username and password, then they send a code to your phone or some other third-party channel. After you enter the code, they set a machine cookie (the same old cookie). The next time you log in, they ask for username and password and then look for the machine cookie. If the machine cookie is there, they don't make you enter the code again.
You could make that your registration process, except you provide the manager with a code they can enter. I don't think you want to get much more complex than a static password to register the machine, but if you did, you could generate one-time tokens following the spec in RFC 4226, as sketched below.
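For completeness, a minimal Python sketch of RFC 4226 (HOTP); how you store the shared secret and counter is up to your registration process:

import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    # HMAC-SHA-1 over the big-endian counter, then dynamic truncation (RFC 4226)
    digest = hmac.new(secret, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# sanity check against the RFC 4226 test vectors:
# hotp(b'12345678901234567890', 0) == '755224'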
You can't restrict access to a specific computing device (there are many types of devices in use and there is no universal identifier to bind to), but depending on your application design you can still solve your problem. You need to bind not to the computer, but to some other hardware device that is not possible to duplicate.
One such device is a hardware crypto token or crypto card holding a certificate and a private key. The user plugs the device into USB or a card reader respectively, then authenticates on the server using the certificate and the private key stored on the device. Client-side authentication using certificates is a large but well-known topic, so I won't discuss it here.
While it's possible to move the cryptographic device to another computer system, it's not possible to duplicate it or extract the private key from it. So you can assume, with a reasonably high level of reliability, that there exists only one copy of the private key and that it's stored on one particular device.
Of course, you would need to create a separate certificate for each device, but this is not a problem: the only purpose of these certificates is to be accepted by the server, so the server can issue new certificates when needed.
I'm trying to write a web application that would use Twitter via OAuth.
I run my local server as 'localhost', so I need the callback URL to be something like http://localhost/something/twitter.do but Twitter doesn't like that: Not a valid URL format
I'm probably going to do a lot of tests, but once I've approved my app with my username, I can't test again, can I? Am I supposed to create multiple Twitter accounts? Or can you remove an app and do it again?
You can use 127.0.0.1 instead of localhost.
You can authorize your app as many times as you like from the same twitter account without the necessity to revoke it. However, the authenticate action will only prompt for Allow/Deny once and all subsequent authenticate requests will just pass through until you revoke the privilege.
Twitter's "rate limiting" for API GET calls is based on the IP address of the caller. So you can test your app from your server, using the same IP address, and (once approved) get 15,000 API calls per hour. That means you can pound on your app with many different usernames, as long as your approved IP address remains the same.
When you send the e-mail to Twitter to ask for an increase to your rate limit, you can also ask for the increase to apply to your Twitter username too.
I believe Twitter requires you to send in another request for the rate-limit increase if you need to change your IP address or the username that is using the app. But in my experience, Twitter has been pretty quick at turning these requests around (maybe less than 48 hours?).
Use something like this:
Website: http://127.0.0.1
Callback URL: http://127.0.0.1/home
or any other page address, such as http://127.0.0.1/index.
Have you tried creating your own caching mechanism? You can take the result of an initial query, cache it in thread-local storage and, given an expiration time, refresh it from Twitter. This would allow you to test your app against Twitter data without incurring call penalties.
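A rough sketch of such a cache in Python (the names are made up; fetch stands for whatever function actually calls Twitter):

import threading
import time

_local = threading.local()

def cached(fetch, key, ttl=300):
    # return the thread-local cached value for key, refreshing it
    # from Twitter (via fetch) once it is older than ttl seconds
    store = getattr(_local, 'store', None)
    if store is None:
        store = _local.store = {}
    stamped = store.get(key)
    if stamped is None or time.time() - stamped[0] > ttl:
        stamped = store[key] = (time.time(), fetch())
    return stamped[1]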
There is also another solution (a workaround, rather) which requires you to edit your hosts file.
Here is how you do it on a Linux box:
Open your /etc/hosts file as root. To do this, you can open a terminal and type something like sudo vi /etc/hosts.
Pick a non-existent domain to use as your local address and add it to your hosts file. For example, you will need to add something similar to the following at the end:
127.0.0.1 localhost.cep # this domain name was accepted.
So, that's pretty much it. Pointing your browser to localhost.cep will now take you to your local server. Hope that helped :)
In answer to (1), see this thread, in particular episod's replies: https://dev.twitter.com/discussions/5749
It doesn't matter what callback URL you put in your app's management page on dev.twitter.com (as long as you don't use localhost). You provide the 'real' callback URL as part of your request for an OAuth token.
1.) Don't use localhost; that's not helpful. Why not stand up another server instance or get a testing VM slice from Slicehost?
2.) You probably want a bunch of different user accounts and a couple of different OAuth key/secret credentials for testing.
You were on the right track though: DO test revoking the app's credentials via your Twitter account's connection settings. That should happen gracefully. You might want to store a status value alongside the access token information so you can mark tokens as revoked.