We have an SNS topic with multiple subscribers (two SQS queues, one email, etc.). However, only the last subscriber receives the notification.
That is, when each subscriber is registered alone, it does receive the notification, so as far as I understand each of them is configured correctly on its own.
I didn't find any relevant AWS documentation about this (a limit on subscribers, policies, etc.).
I'd love to hear any hints. Thanks in advance.
Solution: the issue was the order in which the SQS permission and subscription were created. The correct order is detailed here; if you do step 3 before step 2, it won't work. Here is the queue policy that ended up working:
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:<account-number>:<sqs-name>/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "<sid>",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:<account-number>:<sqs-name>",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:sns:us-west-2:<account-number>:*"
        }
      }
    }
  ]
}
The * at the end of aws:SourceArn is the key: it allows any SNS topic in that account to write to the same queue, instead of restricting the permission to a single topic.
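For reference, here's a minimal sketch of that ordering with the aws-sdk Ruby gem (queue/topic names, region, and account number are placeholders, not from the original post): attach the SendMessage policy to the queue first, and only then create the subscription.

require 'aws-sdk-sns'
require 'aws-sdk-sqs'
require 'json'

sqs = Aws::SQS::Client.new(region: 'us-east-1')
sns = Aws::SNS::Client.new(region: 'us-east-1')

queue_url = sqs.get_queue_url(queue_name: 'my-queue').queue_url
queue_arn = sqs.get_queue_attributes(
  queue_url: queue_url, attribute_names: ['QueueArn']
).attributes['QueueArn']

# Step 2: grant SNS permission to send to the queue FIRST...
policy = {
  'Version' => '2012-10-17',
  'Statement' => [{
    'Effect'    => 'Allow',
    'Principal' => '*',
    'Action'    => 'SQS:SendMessage',
    'Resource'  => queue_arn,
    'Condition' => {
      'ArnEquals' => { 'aws:SourceArn' => 'arn:aws:sns:us-east-1:123456789012:*' }
    }
  }]
}
sqs.set_queue_attributes(queue_url: queue_url, attributes: { 'Policy' => policy.to_json })

# Step 3: ...and only then subscribe the queue to the topic.
sns.subscribe(
  topic_arn: 'arn:aws:sns:us-east-1:123456789012:my-topic',
  protocol: 'sqs',
  endpoint: queue_arn
)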
I'm trying to create a notifier via MS Teams. I want to send a direct message to a named user. Here's what I've done so far:
Created a bot at https://dev.botframework.com in my Azure account
Tied the bot to an app registration in AzureAD
Retrieved a token
I'm trying to create a new conversation by posting:
{
  "bot": {
    "name": "OpenUnison Notifications Bot",
    "id": "openunison"
  },
  "members": [
    {
      "name": "Matt Mosley",
      "id": "mmosley@marcboorshteintremolosecuri.onmicrosoft.com"
    }
  ],
  "topicName": "OpenUnison Notifications",
  "isGroup": false
}
to https://smba.trafficmanager.net/apis/v3/conversations, the response I get is
{"error":{"code":"BadSyntax","message":"Bad format of conversation ID"}}
When I look in the activity log I don't see anything for the Teams channel, but for the web channel I see "Activity dropped because the bot's endpoint is missing." I think I'm missing something. I don't want to handle responses; this is a no-reply, notifications-only bot. How can I avoid requiring a bot endpoint? And am I even taking the right approach for my goal?
Notification-only bots use proactive messaging to communicate with the user.
A proactive message is a message that is sent by a bot to start a conversation.
When using proactive messaging to send notifications, make sure your users have a clear path to take common actions based on the notification, and a clear understanding of why it occurred.
POST {Service URL of your bot}/v3/conversations
{
  "bot": {
    "id": "c38eda0f-e780-49ae-86f0-afb644203cf8",
    "name": "The Bot"
  },
  "members": [
    {
      "id": "29:012d20j1cjo20211"
    }
  ],
  "channelData": {
    "tenant": {
      "id": "197231joe-1209j01821-012kdjoj"
    }
  }
}
Sample link: https://github.com/OfficeDev/microsoft-teams-sample-complete-csharp/blob/32c39268d60078ef54f21fb3c6f42d122b97da22/template-bot-master-csharp/src/dialogs/examples/teams/ProactiveMsgTo1to1Dialog.cs
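To make the proactive flow concrete, here is a minimal Ruby sketch of the two REST calls involved (the service URL path, ids, and token handling are placeholders; in practice they come from your bot registration and the Teams roster):

require 'net/http'
require 'json'
require 'uri'

SERVICE_URL = 'https://smba.trafficmanager.net/apis' # placeholder
TOKEN = ENV['BOT_TOKEN'] # Bot Framework bearer token fetched beforehand

def post_json(path, body)
  uri = URI("#{SERVICE_URL}#{path}")
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  req = Net::HTTP::Post.new(uri.path,
    'Content-Type'  => 'application/json',
    'Authorization' => "Bearer #{TOKEN}")
  req.body = body.to_json
  JSON.parse(http.request(req).body)
end

# 1. Create (or look up) the 1:1 conversation. Note the Teams user id
#    ("29:..."), not an email address, plus the tenant in channelData.
conv = post_json('/v3/conversations', {
  bot:         { id: '<bot-app-id>', name: 'The Bot' },
  members:     [{ id: '29:<teams-user-id>' }],
  channelData: { tenant: { id: '<tenant-id>' } }
})

# 2. Send the notification as an activity into that conversation.
post_json("/v3/conversations/#{conv['id']}/activities", {
  type: 'message',
  from: { id: '<bot-app-id>' },
  text: 'Hello from a notification-only bot'
})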
I'm getting acquainted with using S3 with Ruby to upload files to Amazon Web Services. I recently ran into the following error: AWS::S3::Errors::AccessDenied Access Denied. Poking around on Google, I found this post on the error. It claims that bucket policies aren't sufficient to allow access via the web app, and that the user must be given "Administrator Access" as well.
I've given this a try and it works fine, but I feel like this is an indication that I'm not doing it right, given that administrator access isn't mentioned in any other documentation I've read. I'm using the aws-sdk gem. Could anyone weigh in on whether admin access is necessary? Many thanks!
None of the existing answers actually state which permissions you need to grant, so here they are: s3:PutObject, s3:DeleteObject, and s3:PutObjectAcl.
Here's the complete S3 bucket policy I'm using to allow Paperclip to put objects with the :public_read permission:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::IAM_USER_ID:user/IAM_USER_NAME"
      },
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    }
  ]
}
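For context, here's a sketch of a matching Paperclip model configuration (the attachment name and env var names are illustrative, not from the original post); the :public_read permission is what makes s3:PutObjectAcl necessary:

# app/models/user.rb -- hypothetical model using Paperclip's S3 storage
class User < ActiveRecord::Base
  has_attached_file :avatar,
    storage: :s3,
    s3_permissions: :public_read, # the ACL write that needs s3:PutObjectAcl
    bucket: ENV['S3_BUCKET_NAME'],
    s3_credentials: {
      access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    }
  validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\z/
end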
As explained in the accepted answer, you should not need "Admin Access". However, the typical policy for granting access to a bucket, as shown in some of Amazon's documented examples, may not be enough for Paperclip.
The following policy worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you/*"
      ]
    }
  ]
}
You should not really need Admin Access to achieve this.
Make sure you have your AWS access_key_id and secret_access_key set up in your Heroku config. You also need to make sure your user account has an access policy set in the AWS IAM console.
See this post for some more info.
The default permission for Paperclip is :public_read unless you specify the bucket to be private.
See this for information about Module: Paperclip::Storage::S3
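If you want uploads kept private instead of the :public_read default, the permission can presumably be switched in the attachment options; a sketch, with a hypothetical attachment name:

# Inside the model: store uploads privately instead of :public_read
has_attached_file :report,
  storage: :s3,
  s3_permissions: :private,
  bucket: ENV['S3_BUCKET'],
  s3_credentials: {
    access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  }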
UPDATE: I eventually answered my own question. See the tutorial below, which solves this problem.
The question:
What exactly is the policy needed for an external source to access an AWS S3 bucket through the API?
Details:
I'm following the Rails Tutorial by Michael Hartl, and I reached the end of lesson 11, where we use CarrierWave to store image files in an AWS S3 bucket. I was able to get it to work (I had to add a region ENV variable), but only with a user who has full admin privileges. Obviously that's not ideal. I created a user account specifically for this purpose, but all the walkthroughs only seem concerned with web browser access. In fact, I was able to create policies that allowed the user to read, write, and delete only in the specific bucket, but that only worked through a web browser and not through the API. API access only worked when I attached the AdministratorAccess policy.
Here's what I have so far:
Policy: AllowRootLevelListingOfMyBucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "AllowRootLevelListingOfMyBucket",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket"
      ],
      "Condition": {
        "StringEquals": {
          "s3:prefix": [
            ""
          ],
          "s3:delimiter": [
            "/"
          ]
        }
      }
    }
  ]
}
Policy: AllowUserToReadWriteObjectDataInMyBucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}
As I said, this allows web browser access, but API access attempts return an "AccessDenied" error: Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden)).
What do I need to add for API access?
Update: I have narrowed the problem down a bit. There is some "Action" I need to grant permission for, but I haven't been able to identify exactly which one. Using a wildcard works, though, and I've been able to lock the user account down to accessing only one bucket. Here's the change I made:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}
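One way to identify the missing action would be to exercise the calls one at a time with the aws-sdk gem and see which raises AccessDenied (a sketch, not from the original post; bucket and key are placeholders):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: ENV['S3_REGION'])

# Try each S3 call the app makes and report which one is denied.
checks = {
  'put_object'     => -> { s3.put_object(bucket: 'MyBucket', key: 'probe.txt', body: 'test') },
  'put_object_acl' => -> { s3.put_object_acl(bucket: 'MyBucket', key: 'probe.txt', acl: 'public-read') },
  'get_object'     => -> { s3.get_object(bucket: 'MyBucket', key: 'probe.txt') },
  'delete_object'  => -> { s3.delete_object(bucket: 'MyBucket', key: 'probe.txt') }
}

checks.each do |name, call|
  begin
    call.call
    puts "#{name}: allowed"
  rescue Aws::S3::Errors::AccessDenied
    puts "#{name}: DENIED"
  end
end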
I eventually answered my own question, and created a tutorial that others might want to follow:
The first thing you need to do is go back over the code that Hartl provided. Make sure you typed it (or copy/pasted it) exactly as shown. Out of all the code in this section, there is only one small addition you might need to make: the "region" environment variable. This is needed if you create a bucket that is not in the default US area; more on this later. Here is the code for /config/initializers/carrier_wave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => ENV['S3_REGION']
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
That line :region => ENV['S3_REGION'] is a problem for a lot of people. As you continue this tutorial you will learn what it's for.
You should be using that block of code exactly as shown. Do NOT put your actual keys in there. We'll send them to Heroku separately.
Now let's move on to your AWS account and security.
First of all, create your AWS account. For the most part, it is like signing up for any web site. Make a nice long password and store it someplace secure, like an encrypted password manager. When you make your account, you will be given your first set of AWS keys. You will not be using those in this tutorial, but you might need them at some point in the future so save those somewhere safe as well.
Go to the S3 section and make a bucket. It has to have a unique name, so I usually just put the date on the end. For example, you might name it "my-sample-app-bucket-20160126". Once you have created your bucket, click on its name, then click on Properties. It's important to know what "Region" your bucket is in. Find it and make a note of it; you'll use it later.
Your main account probably has full permissions to everything, so let's not use it for transmitting random data between two web services; that could cost you a lot of money if the credentials got out. We'll make a limited user instead. Make a new user in the IAM section. I named mine "fog", because that's the cloud-service library that handles the sending and receiving. When you create it, you will have the option of displaying and/or downloading the keys associated with the new user. It's important that you keep these in a safe and secure place. They do NOT go into your code, because that will probably end up in a repository where other people can see it. Also, don't give this new user a password, since it will not be logging into the AWS dashboard.
Make a new group. I called mine "s3railsbucket". This is where the permissions will be assigned. Add "fog" to this group.
Go to the Policies section. Click "Create Policy", then select "Create Your Own Policy". Give it a name that starts with "Allow" so it will show up near the top of the (huge) list of policies. Here's what I did:
Policy Name: AllowFullAccessToMySampleAppBucket20160126
Description: Allows remote write/delete access to the S3 bucket named my-sample-app-bucket-20160126.
Policy Document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-sample-app-bucket-20160126",
        "arn:aws:s3:::my-sample-app-bucket-20160126/*"
      ]
    }
  ]
}
Go back to the Groups section, select the group you made, then add your new policy to the group.
That's it for the AWS configuration. I didn't need to make a policy to allow "fog" to list the contents of the bucket, even though most tutorials I tried said that was necessary. I think that's only needed when you want a user that can log in through the dashboard.
Now for the Heroku configuration. This gets entered at your command prompt, just like 'heroku run rake db:migrate' and such. This is where you enter the actual access key and secret key you got from the "fog" user you created earlier:
$ heroku config:set S3_ACCESS_KEY=THERANDOMKEYYOUGOT
$ heroku config:set S3_SECRET_KEY=an0tHeRstRing0frAnDomjUnK
$ heroku config:set S3_REGION=us-west-2
$ heroku config:set S3_BUCKET=my-sample-app-bucket-20160126
Look again at that last one. Remember when you looked at the Properties of your S3 bucket? This is where you enter the code associated with your region. If your bucket is not in Oregon, you will have to change us-west-2 to your actual region code. This link worked when this tutorial was written:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
If that doesn't work, Google "AWS S3 region codes".
After doing all this and double-checking for mistakes in the code, I got Heroku to work with AWS for storage of pictures!
UPDATE: It works when I remove the explicit deny block from the bucket policy, but I need that in there to prevent people outside the site (including bots) from accessing the content.
--
I'm trying to figure out a way to set access control on my S3 content such that:
Users with HTTP referrer mydomain.com can view the files
Paperclip gem can upload files
Google bots can't crawl and index the files
Viewing existing files from the site works fine, but uploading a file gives this error in the console:
[AWS S3 403 0.094338 0 retries] put_object(:acl=>:public_read,
:bucket_name=>"mybucket",:content_length=>879394,
:content_type=>"image/jpeg",:data=>Paperclip::FileAdapter: Chrysanthemum.jpg,
:key=>"ckeditor_assets/pictures/6/original_chrysanthemum.jpg",
:referer=>"mydomain.com/")
AWS::S3::Errors::AccessDenied Access Denied
Here's the bucket policy I have:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    }
  ]
}
The error message is odd, because I explicitly set the referrer to mydomain.com in Paperclip settings:
production.rb:
:s3_headers => {
  'Referer' => 'mydomain.com'
}
And Paperclip does indeed use it, as shown on the second to last line of the error message.
So why does it still give Access Denied?
After fiddling with it for hours, I revised my approach in light of the original three requirements I listed. I'm now explicitly denying only GetObject (as opposed to everything via "*"), and I also placed a robots.txt file at the root of my bucket and made it public (a sketch of it follows below). The result:
Users can access bucket content only when my site is the referer (maybe this header can be spoofed, but I'm not too worried at this point). I tested this by copying a resource's link, emailing it to myself, and opening it from within the email. I got Access Denied, which confirmed that it cannot be hotlinked from other sites.
Paperclip can upload and delete files.
Google can't index the bucket contents (hopefully robots.txt will be sufficient).
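For completeness, a minimal robots.txt that asks all well-behaved crawlers to stay out of the bucket (the original post doesn't show its exact contents, so this is an assumption):

User-agent: *
Disallow: /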
My final bucket policy for those who arrive at this via Google in the future:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    }
  ]
}
I do have a theory on why it didn't work. I was reading the S3 documentation on Specifying Conditions in a Policy, and I noticed this warning:
Important:
Not all conditions make sense for all actions. For example, it makes sense to include an s3:LocationConstraint condition on a policy that grants the s3:PutBucket Amazon S3 permission, but not for the s3:GetObject permission. S3 can test for semantic errors of this type that involve Amazon S3–specific conditions. However, if you are creating a policy for an IAM user and you include a semantically invalid S3 condition, no error is reported, because IAM cannot validate S3 conditions.
So maybe the Referer condition did not make sense for the PutObject action. I figured I'd include this in case someone decides to pick this issue up from here and pursue it to the end. :)
I have some images in a bucket on S3. My app uses these images. What I want is the following:
Only allow the image to be accessed if:
The referer is my site - This I can already do with a bucket policy
The user was redirected from my site
The problem is the redirect here, because, when redirected, no referer is sent to Amazon S3.
Is there a way to limit access to my S3 files in the way I described above?
My current bucket policy looks like this:
{
  "Version": "2008-10-17",
  "Id": "e9c9be4d-cdfc-470c-8582-1d5a9e4d04be",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://myapp.com/*"
        }
      }
    }
  ]
}
Have your files be private.
Use signed URLs in the links/redirects to your images.
The signed URLs include an expiration; Amazon will not show your image past the expiration.
The signed URLs cannot be forged; Amazon will not show your image if the signature is missing or invalid.
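Here's a minimal sketch of that approach with the aws-sdk gem (region, bucket, key, and expiry are placeholders): the app links or redirects to a short-lived signed URL instead of the raw S3 link.

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-east-1')
object = s3.bucket('mybucket').object('images/photo.jpg')

# A GET URL that expires in 5 minutes; after that, S3 returns Access
# Denied, and the signature cannot be forged or altered.
url = object.presigned_url(:get, expires_in: 300)

# e.g. in a Rails controller action:
#   redirect_to url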
This guy appears to have solved the problem:
http://www.powercram.com/2010/07/s3-bucket-policy-to-restrict-access-by.html