Using S3 in a healthcare application, private links - ruby-on-rails

We develop a Rails-based healthcare application. What is the best way to configure our S3 implementation so that only the authenticated user has access to the image?

From the documentation, you should use one of Amazon's "canned" ACLs.
Amazon accepts the following canned ACLs:
:private
:public_read
:public_read_write
:authenticated_read
:bucket_owner_read
:bucket_owner_full_control
You can specify the ACL at bucket creation or update it later on an existing bucket.
# at create time, defaults to :private when not specified
bucket = s3.buckets.create('name', :acl => :public_read)
# replacing an existing bucket ACL
bucket.acl = :private

Wanted to post an updated answer to this question, as the S3 API has changed (slightly) since 2015; see the updated ACL section of the S3 docs. Further, the above answer uses the Ruby SDK, which not everyone uses.
Canned ACLs are predefined grants supported by S3 that have specific grantees and permissions in place. Canned ACLs can be sent via the SDK, as demonstrated in the above answer, or in an HTTP request by using the x-amz-acl request header for new resources, or with the request header or body for existing resources.
The canned ACLs are as follows. Unless otherwise specified, the bucket owner has FULL_CONTROL in addition to the other permissions listed:
private: No other user is granted access (default)
public-read: AllUsers group gets READ access
public-read-write: AllUsers group gets READ and WRITE access (not recommended)
aws-exec-read: Amazon EC2 gets READ access to GET an Amazon Machine Image (bundle)
authenticated-read: AuthenticatedUsers group gets READ access
bucket-owner-read: Bucket owner gets READ access. Ignored when creating a bucket
bucket-owner-full-control: Both object and bucket owner get FULL_CONTROL over object. Ignored when creating a bucket
log-delivery-write: LogDelivery group gets WRITE and READ_ACP permissions
Also noted in the docs: you can specify only one canned ACL per request.
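For example, with the newer aws-sdk-s3 gem a canned ACL can be applied per object at upload time. A minimal sketch, with a hypothetical bucket and key:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')

# :private is the default, but it can be stated explicitly so that only
# the bucket owner (your application) can read the object.
s3.put_object(
  bucket: 'my-app-uploads',            # hypothetical bucket name
  key: 'patient-images/scan-001.png',  # hypothetical object key
  body: File.open('scan-001.png', 'rb'),
  acl: 'private'
)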

Related

Does an S3 bucket need to be Public to serve user-viewable images to an app?

At the moment my Rails 6 React app has user-uploaded images (avatars, profile wallpapers, etc.) stored in S3, inside a public bucket for local development (not handled by Active Storage, because it was not playing nice with vips for image processing). The bucket is set to public for ease of setup; now that all of the functionality is complete, I would like to add sensible bucket policies for staging (and soon production). I don't currently have CloudFront set up, but I do intend to add it in the near term; for right now I'm using the bucket asset URL to serve assets. I have created two separate buckets: one for images that will be displayed in the app, and one for content that is never to be publicly displayed, which will be used for internal purposes.
The question I have is: for the content in the bucket reserved for viewable content, do I have to make it public (disable the AWS console setting that blocks public access), then create a policy that allows GET requests from anywhere (*), and then restrict POST, PUT, and DELETE requests to the ARN of the EC2 instance that hosts the Rails application? The AWS documentation has confused me; it gives me the impression that you never want to enable public access to a bucket, and that policies alone are how you surface bucket content. Yet when I take that approach, I keep getting Access Denied in the UI.
EDIT:
I'm aware that signed URLs can be used, but my current understanding is that there is a nontrivial speed hit to the UX of the application if you have to generate a signed URL for every image (this app is image-heavy). There are also SEO concerns, given that all the image URLs would effectively be temporary.
Objects in Amazon S3 are private by default. You can grant access to an object in several ways:
A Bucket Policy that can grant access to everyone ('Public'), or to specific IP addresses or users
An IAM Policy on an IAM User or IAM Group that grants access to that user or group -- however, they would need to access via an AWS SDK so that they can authenticate the call (eg when an application makes a request to S3, it would make an authenticated API call)
An Access Control List (ACL) on the object, which can make the object public without requiring the bucket to be public
By using an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object
Given your use-case, an S3 pre-signed URL would be the best choice since the content is kept private but the application can generate a link that provides temporary access to the object. This can also be done with CloudFront.
Generating the pre-signed URL only takes a few lines of code and does not involve an API call to AWS. It is simply creating a hash of the request using your Secret Key, and then appending that hash as a 'signature'. Therefore, there is effectively no speed impact of generating pre-signed URLs for all of your images.
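For instance, with the aws-sdk-s3 gem the signing happens locally. A minimal sketch, assuming hypothetical bucket and key names:

require 'aws-sdk-s3'

# The presigner signs the URL locally with your credentials; no network call.
presigner = Aws::S3::Presigner.new(client: Aws::S3::Client.new(region: 'us-east-1'))

url = presigner.presigned_url(
  :get_object,
  bucket: 'my-app-images',     # hypothetical bucket name
  key: 'avatars/user-42.png',  # hypothetical object key
  expires_in: 900              # URL stays valid for 15 minutes
)

puts url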
I don't see how SEO would be impacted by using pre-signed URLs. Only actual web pages (HTML) are tracked in SEO -- images are not relevant. Also, the URLs point to the normal image, but have some parameters at the end of the URL so they could be tracked the same as a non-signed URL.
No, it does not have to be public. If you don't want to use CloudFront, the other option is to use S3 presigned URLs.

Granting read access to the Lambda service principal doesn't work as expected

This may be a general AWS question, but still, the way CDK allows you to use this has an unexpected result in my opinion, and I'm not sure why.
In the code below I'm supposedly giving read permissions to the Lambda service principal.
This doesn't work, though, and Lambdas are unable to read from this bucket.
The only way I could get them to work was to update each Lambda's policy, allowing access to the bucket by ARN.
Why doesn't this work? And since it's allowed (the policy is set fine on the bucket), what does this mean?
Thanks!
bucket = S3.Bucket(
    scope=self,
    id='MyBucket',
    versioned=True,
    block_public_access=S3.BlockPublicAccess.BLOCK_ALL,
    encryption=BucketEncryption.KMS_MANAGED,
    removal_policy=core.RemovalPolicy.DESTROY,
    lifecycle_rules=[bucket_lifecycle_rules],
)
lambda_service_principal = iam.ServicePrincipal('lambda.amazonaws.com')
bucket.grant_read(lambda_service_principal)
It is not the Lambda "service" that accesses the bucket. It is the IAM Role used by the Lambda function that accesses the bucket. That's why you need to either use a Bucket Policy that grants access to the IAM Role, or (better) add permissions to the IAM Role to access the bucket. In CDK that means granting to the function itself, e.g. bucket.grant_read(my_function) for a function construct my_function, which adds the needed permissions to the function's execution role.
When the Lambda container is created, Lambda will 'assume' the IAM Role that is associated with the function. The resulting temporary credentials are then made available to the function's runtime environment, in much the same way that roles are assigned to an Amazon EC2 instance.
The Lambda function will use these credentials to access services, so the requests will come "from the IAM Role" rather than from the Lambda service.
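To illustrate the "add permissions to the IAM Role" option outside of CDK, here is a minimal Ruby SDK sketch that attaches an inline read policy to the function's execution role; the role name, policy name, and bucket ARN are hypothetical:

require 'aws-sdk-iam'
require 'json'

iam = Aws::IAM::Client.new(region: 'us-east-1')

# Allow the role to list the bucket and read its objects. Because the
# bucket above uses KMS encryption, the role would also need kms:Decrypt
# on the bucket's key (omitted here for brevity).
policy = {
  'Version' => '2012-10-17',
  'Statement' => [{
    'Effect'   => 'Allow',
    'Action'   => ['s3:GetObject', 's3:ListBucket'],
    'Resource' => [
      'arn:aws:s3:::my-bucket',   # hypothetical bucket ARN
      'arn:aws:s3:::my-bucket/*'
    ]
  }]
}

iam.put_role_policy(
  role_name: 'my-lambda-execution-role',  # hypothetical role name
  policy_name: 'AllowBucketRead',
  policy_document: JSON.generate(policy)
)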

How to get S3 objects created by current user in AWS SDK (Ruby on Rails)?

Using the AWS SDK, is it programmatically possible to get the list of all bucket objects created by a particular user (the current user)?
This will help you:
require 'aws-sdk-s3' # v2: require 'aws-sdk'

region = 'us-west-2'
s3 = Aws::S3::Resource.new(region: region)

s3.buckets.limit(50).each do |b|
  puts b.name
end
Also, this will list the objects of a bucket:
s3_bucket.objects.with_prefix('folder_name').collect(&:key)
With version 2 it is:
s3_bucket.objects(prefix: 'folder_name').collect(&:key)
A combination of both will help you achieve your goal.
It is not possible to identify the "user" who created an object.
When an API call is made to Amazon S3 to upload/create an object, the credentials are checked to confirm that they are permitted to perform the operation. Once this has been confirmed, the object that is created is "owned" by the AWS Account, not a specific user within the account.
You could Enable Object-Level Logging for an S3 Bucket, but this only outputs log information. It does not associate an object with a user.

Is storing a security key in a Constant file secure, or is there an alternate solution?

I am using AWS services to post my images, and the SNS service for push notifications.
To post images to the AWS server I have the secret key & access key; currently I read those keys from the Constant file, which is a very simple and easy way to access any defined key.
#define AWS_AccessKey #"###############"
#define AWS_SecretKey #"####################"
But my questions are:
Is this key secure from others?
Can anyone easily get it from the Constant file? If yes, how?
Also, I have one more key, for my SQLCipher-encrypted database, and that key is also stored in my Constant file.
#define DB_KEY #"####################"
What is the best way to store our important keys, and where?
Thanks in advance.
Since the app runs on EC2, a more secure way would be to use an IAM Role attached to the instance. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
That way you won't have to store the AWS keys anywhere. For your SQLCipher key, you could use the user data script to pass the key to your EC2 instance at first boot and store it there, so you won't have to keep that in the code at least.
Generally such config is best kept in the environment.
As suggested, the best way to use AWS services from EC2 without needing long-term credentials is to assign an IAM Role to the instance (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
If security is critical for you, I suggest storing the SQLCipher key in AWS KMS. KMS can then be accessed by your EC2 instance using its IAM Role.
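As a sketch of that approach with the Ruby SDK, assuming the SQLCipher key was encrypted ahead of time with a KMS key and the ciphertext shipped with the app (the file path and region are hypothetical):

require 'aws-sdk-kms'
require 'base64'

kms = Aws::KMS::Client.new(region: 'us-east-1')

# Ciphertext produced earlier with kms.encrypt and stored with the app.
ciphertext = Base64.decode64(File.read('config/db_key.enc'))  # hypothetical path

resp = kms.decrypt(ciphertext_blob: ciphertext)
db_key = resp.plaintext  # pass this to SQLCipher; avoid writing it to disk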
Having investigated the secret-key storage concern above, I found the solution described below.
Reference question to AWS for security keys: https://forums.aws.amazon.com/thread.jspa?threadID=63089
As per the above link, AWS suggests implementing TVM (Token Vending Machine) based service calls to give users specific rights to upload data to S3, or to a particular bucket.
A TVM is a kind of token service that gives the user a token, valid for 12 hours up to 36 hours (max), to communicate with the server.
If the token expires, the app calls the service again and gets a new token from AWS.
The temporary credentials provided by AWS Security Token Service consist of four components:
Access key ID
Secret access key
Session token
Expiration time
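For illustration, a minimal Ruby sketch with the SDK's STS client shows those four components (the region and duration are arbitrary):

require 'aws-sdk-core'

sts = Aws::STS::Client.new(region: 'us-east-1')

# Request temporary credentials, valid here for one hour.
resp = sts.get_session_token(duration_seconds: 3600)

creds = resp.credentials
puts creds.access_key_id      # temporary access key ID
puts creds.secret_access_key  # temporary secret access key
puts creds.session_token      # session token
puts creds.expiration         # expiration time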
The source for the TVMs is available on GitHub for both the Anonymous TVM and the Identity TVM. These examples show how to set up a TVM and how to use it to communicate with the server.
Anonymous TVM: https://github.com/amazonwebservices/aws-tvm-anonymous
Identity TVM: https://github.com/amazonwebservices/aws-tvm-identity
Hope this will help others who are struggling with the same security issue of storing secret keys.
Full link for AWS secret key storage: https://aws.amazon.com/articles/Mobile/4611615499399490

paperclip overwrites / resets S3 permissions for non-bucket-owners

I have opened this as an issue on GitHub (http://github.com/thoughtbot/paperclip/issues/issue/225), but on the chance that I'm just doing this wrong, I thought I'd also ask about it here. If someone can tell me where I'm going wrong, I can close the issue and save the Paperclip guys some trouble.
Issue:
When using S3 for storage, and you wish your bucket to allow access to other users to whom you have granted access, Paperclip appears to overwrite the permissions on the bucket, removing access for those users.
Process for duplication:
Create a bucket in S3 and set up a Rails app with Paperclip to use this bucket for storage
Add a user (for example, aws#zencoder.com, the user for the video encoding service Zencoder) to the bucket, and grant this user List and Read/Write permissions.
Upload a file.
Refresh the permissions. The user you added will be gone, and a user "Everyone" with read permissions will have been added.
The end result is that, so far as I can tell, you cannot retain the desired permissions on your bucket when using Paperclip and S3.
Can anyone help?
Try explicitly setting :s3_permissions => :public_read.
It seems to work for me.
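In context, that option goes on the attachment definition. A minimal sketch, assuming a model with an image attachment and hypothetical bucket and credentials settings:

class Asset < ActiveRecord::Base
  has_attached_file :image,
    :storage => :s3,
    :s3_credentials => "#{Rails.root}/config/s3.yml",  # hypothetical credentials file
    :bucket => 'my-bucket',                            # hypothetical bucket name
    :s3_permissions => :public_read
end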
