Email actions without registering with Google? - google-schemas

I am trying to send internal emails with a go-to action (as in: we're using Google Apps for Business, sending emails from ad-hoc services to internal / same-domain users). The restrictions for registering with Google are pretty strict.
Is there any way to have the actions show up properly when the emails are sent from the same domain to the same domain? One example is some internal defect tracking system, and I'd like to have a ViewAction:
...
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "EmailMessage",
"action": {
"@type": "ViewAction",
"url": "http://bugzilla.mydomain.com/show_bug.cgi?id=4318"
},
"description": "See this bug directly on Bugzilla."
}
</script>
...
more details about the bug.
...
This is one of many examples where we have some tools internally and we'd like to use actions but we don't want to register them all; in particular when emails come from users rather than particular machines.
Thank you.

Currently, there's no way to enable internal emails like yours without whitelisting the sender for the general public. However, your use case is a legitimate one and I'll bring it up with the team.
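For reference, building such a message is straightforward even though Gmail won't render the action for unregistered senders. A minimal sketch in Python (the Bugzilla URL is the hypothetical one from the question; send the result through any SMTP relay):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def bug_email(bug_id: int, subject: str) -> MIMEMultipart:
    """Build an HTML email carrying the ViewAction markup from the question."""
    markup = f"""
    <script type="application/ld+json">
    {{
      "@context": "http://schema.org",
      "@type": "EmailMessage",
      "action": {{
        "@type": "ViewAction",
        "url": "http://bugzilla.mydomain.com/show_bug.cgi?id={bug_id}"
      }},
      "description": "See this bug directly on Bugzilla."
    }}
    </script>
    """
    # multipart/alternative: plain-text fallback plus the HTML part with markup
    msg = MIMEMultipart("alternative")
    msg["Subject"] = subject
    msg.attach(MIMEText("See the bug in Bugzilla.", "plain"))
    msg.attach(MIMEText(f"<html><body>{markup}<p>Bug #{bug_id}</p></body></html>", "html"))
    return msg
```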


Monitor TFS Service Hooks/Web hooks

I have some Web Hooks setup to push data to my custom endpoint during certain events. If my service is unavailable for some reason the web hook will go offline and I need to manually go to the web page inside of TFS and re-enable it. I wasn't able to find a way to queue items or auto-retry later.
Assuming it doesn't have an option for queuing and retrying, is there a way I can automate:
1. Checking each of the web hooks to determine if they are enabled
2. Re-enabling them after a period of time
I'm using on-prem TFS v15 (from Help > About).
There is no configuration to prevent service hooks from being auto-disabled when they encounter errors. You will have to re-enable the disabled web hooks in this case.
You could try to re-enable web hooks through this REST API: Update a subscription.
PUT https://[account].visualstudio.com/_apis/hooks/subscriptions/[subscription id]?api-version=1.0
Body (Content-Type: application/json)
{
"publisherId": "tfs",
"eventType": "build.complete",
"resourceVersion": "1.0-preview.1",
"consumerId": "webHooks",
"consumerActionId": "httpRequest",
"scope":1,
"status":0,
"publisherInputs": {
"buildStatus": "",
"definitionName":"ClassTestVNext",
"projectId": "578ca584-4268-4ba2-b579-7aaee499c306"
},
"consumerInputs":{"url":"http://XXXX/"}
}
You could also press F12 in Chrome and use the Network tab to capture the request info when you perform the re-enable action.
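The check-and-re-enable loop can be sketched with pure helpers around that REST API; the status values below are assumptions about what the subscriptions endpoint returns for a disabled hook:

```python
import json

# Assumed status strings; "disabledBySystem" is what TFS sets when a
# hook fails repeatedly, "disabledByUser" when it was turned off manually.
DISABLED_STATUSES = {"disabledBySystem", "disabledByUser"}

def hooks_to_reenable(subscriptions, include_user_disabled=False):
    """Given the parsed JSON list from GET _apis/hooks/subscriptions,
    return the subscriptions that should be re-enabled."""
    wanted = {"disabledBySystem"}
    if include_user_disabled:
        wanted = DISABLED_STATUSES
    return [s for s in subscriptions if s.get("status") in wanted]

def reenable_body(subscription):
    """Build the PUT body: the same subscription with status flipped
    back to enabled, ready to send to the Update endpoint."""
    body = dict(subscription)
    body["status"] = "enabled"
    return json.dumps(body)
```

A scheduled task could call these helpers, then PUT each body back to the subscription URL shown above.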

Is it possible to show only certain messages to users in MQTT topics?

My goal is to extend an HTTPS REST API platform with an MQTT bus. I am trying to figure out what the best way is to go about this.
Topic Example:
I have an HTTPS REST API that contains the following end-points.
1) /files/{fileId}
2) /files
If I want to restrict users based on fileIds, it is easy for the first topic. If somebody is allowed to see this file, they can subscribe; otherwise they cannot.
Now my question is about the second topic. Would it be possible, to publish to /files but only show subscribers the data they are allowed to see?
Message Example:
I publish these messages to /files
{
"fileName": "Test.txt",
"fileId": 123456,
"Author": "Bert",
"Content": "Hello World"
}
------
{
"fileName": "Test2.txt",
"fileId": 654321,
"Author": "Hank",
"Content": "Foo Bar"
}
Bert and Hank are both subscribed to /files but they are only allowed to see their own files (Bert = 123456, Hank = 654321).
UPDATE:
In this article the topic starts with myhome, etc. This might be the same example as above. If I publish to myhome, how can I know it is only this user when I have multiple users?
The ACL schemes for MQTT tend to be based purely on a username and its access to a topic (or wildcard topic).
Messages are published to a topic, there is no way to specify anything more (e.g. username or client id).
Having to do message payload inspection to determine if a subscriber is able to see a specific message would have a huge impact on performance. Also, as there is no prescribed message payload format (you can send any byte array as the payload), coming up with a way to specify which parts of the message to filter on would be difficult.
You may be able to implement something like this by modifying an open source broker, but I doubt it would be easy.
Your example looks very convoluted. Maybe we have an XY-problem case here. You seem to want to [ab]use a publish-subscribe mechanism as a point-to-point mechanism over a non-secure (broadcast) network, like Wi-Fi without encryption or Ethernet with hubs instead of switches.
In that case you may want to use endpoints as topics (networknode{1, 2, ...}), to address individual computers, and if the answers are not to be seen by everyone, you will have to encrypt them. And you then need a mechanism for distributing the keys.
Or maybe you just want to use plain HTTPS. MQTT has been a huge step forward for the IoT world, but it is not the panacea.
On the other hand, why is the file ID not part of the topic?
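Following that last suggestion (put the ownership into the topic), a minimal sketch of a per-user topic layout and the matching ACL check; the user names and topic scheme are hypothetical:

```python
# Publish each file to a per-user topic instead of a shared /files topic,
# e.g. files/bert/123456, so a broker ACL can restrict by topic alone.

def topic_for(owner: str, file_id: int) -> str:
    """Topic a file is published to: files/<owner>/<fileId>."""
    return f"files/{owner}/{file_id}"

def can_subscribe(username: str, topic_filter: str) -> bool:
    """ACL check: a user may only subscribe under files/<their name>/...
    (a broker plugin would call something like this per SUBSCRIBE)."""
    parts = topic_filter.split("/")
    return len(parts) >= 2 and parts[0] == "files" and parts[1] == username
```

With this layout Bert subscribes to files/bert/# and only ever receives his own files, with no payload inspection needed.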

How to get JIRA issue for branch/pull request?

I'm using both the JIRA and Stash REST APIs for reading data from our systems, but I wasn't successful in finding out how to get the JIRA issue(s) associated with a given pull request. I can easily get the pull request's source ("from") branch, so getting the issue(s) associated with a given branch would work, too.
I know how to get pull request(s) associated with a given issue, that is, the other way around, but obviously I cannot scan all the issues. Also, the web interface shows the info, but I'd like to avoid using that programmatically. Surely there is a way using the API?
There is this REST resource to retrieve the JIRA issues for a pull request:
/rest/jira/1.0/projects/{projectKey}/repos/{repositorySlug}/pull-requests/{pullRequestId}/issues
Its response looks like:
[
{
"key": "JRA-11",
"url": "https://jira.atlassian.com/browse/JRA-11"
},
{
"key": "JRA-9",
"url": "https://jira.atlassian.com/browse/JRA-9"
}
]
It is available for both Stash and Bitbucket Server.
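A minimal sketch of calling that resource from Python; the base URL is a placeholder, and the parsing matches the sample response above:

```python
import json

def issues_url(base: str, project_key: str, repo_slug: str, pr_id: int) -> str:
    """Build the Stash/Bitbucket Server URL for the JIRA issues of a PR."""
    return (f"{base}/rest/jira/1.0/projects/{project_key}"
            f"/repos/{repo_slug}/pull-requests/{pr_id}/issues")

def issue_keys(response_body: str):
    """Extract the JIRA issue keys from the endpoint's JSON response,
    which is a list of {"key": ..., "url": ...} objects."""
    return [item["key"] for item in json.loads(response_body)]
```

Fetch the URL with your usual HTTP client (authenticated as a Stash user) and feed the body to issue_keys.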

AWS architecture [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
We have around 10 different AWS accounts, and I'm working on building an internal application for viewing the activity on these accounts. The activity consists of the different nodes launched and terminated, including usage and costs related to each account. I'm planning to show this on an internal portal.
The challenge here is that I need to store and show whatever data I get from the AWS SDKs.
I need some suggestions on architecting this kind of platform; I'd want the data to be as real-time as possible.
Any suggestions would be great.
I think your best choice for the architecture is: none. And the reason is: you don't need it. Use CloudTrail for reporting the activities, and Billing for the costs. AWS now supports federated accounts, and with IAM you can easily configure a role that can just see that data for all the accounts.
Have a look at Traildash. This is an all-in-one dashboard for CloudTrail. The other comments are 100% correct: AWS provides most of this already; Traildash simply wraps a nice GUI around it.
Not sure if this question has been addressed, but FWIW here is my take.
If I got it right, this question is about how to monitor AWS resources (EC2, S3, etc.) across multiple accounts regularly and automatically.
Programmatically, I'd suggest a star topology: have a service account and access all the other accounts through it by assuming roles. This way we do not need to keep AWS accessKey/secretKey pairs for those accounts locally, which gives better security.
The implementation has to do with a new role/policy created in the service account which specifies a list of accountId/role that it can assume, and a new role/policy created in each of other accounts which allows read-access to its own resource.
For your reference, here is an example of a policy to access EC2 instances:
service account policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::ACCOUNT-ID-A:role/ROLE-A",
"arn:aws:iam::ACCOUNT-ID-B:role/ROLE-B",
"arn:aws:iam::ACCOUNT-ID-C:role/ROLE-C",
"arn:aws:iam::ACCOUNT-ID-D:role/ROLE-D",
"arn:aws:iam::ACCOUNT-ID-E:role/ROLE-E"
]
},
"Action": "sts:AssumeRole"
}
]
}
normal account policy:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "ec2:DescribeInstances",
"Resource": "arn:aws:ec2:Account-ID-A:instance/*"
}
}
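The assume-role flow itself can be sketched like this; the account IDs and role names are placeholders, and session_for requires boto3 plus real AWS credentials:

```python
def role_arn(account_id: str, role_name: str) -> str:
    """Build the ARN the service account must assume in a target account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def session_for(account_id: str, role_name: str):
    """Assume the target account's role via STS and return a boto3 session
    backed by the temporary credentials (requires boto3 and AWS access)."""
    import boto3  # imported here so the pure helper above works without it
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn(account_id, role_name),
        RoleSessionName="activity-monitor",
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

Loop session_for over all ten accounts and call e.g. session.client("ec2").describe_instances() with each to collect the activity data.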
AWS CloudTrail is the best option for your requirement. Store all CloudTrail logs on S3 and fetch them to wherever you want to show them.
You can use log aggregation tools like Graylog to get a great user experience and various dashboards based on your requirements.
Hope this helps.

How do I get revenue reports from a YouTube CMS account using an API (for an MCN)?

I have access to a YouTube CMS account (for an MCN). On YouTube I can do lots and lots of things with it and this also includes downloading CSV reports which contain detailed information about earnings.
However, I want to do some automatic processing of that data and thus access the data using an API instead of a manual CSV download. It looks like the YouTube Analytics Content Owner Reports should contain this data as well, so I tried to get some data from this API (for now only using the API Explorer), but the only thing I was able to get was a "Forbidden" response.
The API Explorer tells me that for a CMS account I need to specify contentOwner==OWNER_NAME, but there is no explanation anywhere of what that OWNER_NAME would be. I tried to just insert the displayed name of my CMS account, replacing spaces with underscores, but no success. How do I find out what my owner name is?
Additionally, when I authenticate using OAuth I receive as usual the list of accounts where I can choose which one to use (e.g. all the YouTube channels I am a manager of), but the CMS account is not listed. However if I go to YouTube I can click on the top right corner and then switch to the CMS. No idea if that is important...
Then again, maybe I am totally on the wrong track, because I want to get the reports for all channels connected to my MCN but that does not mean that I own the content. So maybe I am no content owner? In this case: Which is the correct way to request the reports from the API?
First of all, the CMS account is not a separate account you can log in to via OAuth. It is more like a privilege connected to one of your Google/YouTube accounts. This is in contrast to YouTube's regular channel management, where each channel has its own login credentials.
I attached a screenshot of my YouTube account-selector view, where the CMS belongs to the account name@email.com, which is also the account you have to use for OAuth authorization to access your CMS reports.
Furthermore you can see the name of the CMS, in this case "CMSName". So, generally this is the name you would use for contentOwner==CMSName. However, your CMS name seems to include whitespace. Unfortunately, I cannot reconstruct this case because of missing admin rights, but I would suggest the _ for whitespace too, because " " and "%20" do not match the regular expression for valid params.
But you said that you had no success trying it. There are two error scenarios:
403 Forbidden: The name of the CMS could be wrong, or the selected OAuth account does not have the required privileges. Do you have all required scopes, and did you select the correct account?
400 Bad Request: This happens when the request is invalid per se. If you choose contentOwner==CMSName as the ids param, a filter parameter is always required, e.g. channel==[ChannelIdForWhichIHaveCMSRights]. So an API request that should generally work would look like this: https://www.googleapis.com/youtube/analytics/v1/reports?ids=contentOwner%3D%3D[CONTENTOWNER_ID]&start-date=2015-01-01&end-date=2015-01-15&metrics=views&filters=channel%3D%3D[CHANNEL_ID_WITH_CMS_RIGHTS]&access_token=[OATH_TOKEN_FOR_RIGHT_ACCOUNT]
If both cases won't work for you and you're still getting 403 errors, let's do some debugging and try to fetch the content owner ID. I will now introduce the YouTube Content ID API: https://developers.google.com/youtube/partner/.
A few words in advance: You have to activate the API in your developer console, like any other API you want to use for your app. BUT:
Note: The YouTube Content ID API is intended for use by YouTube content partners and is not accessible to all developers or to all YouTube users. If you do not see the YouTube Content ID API as one of the services listed in the Google Developers Console, see www.youtube.com/partner to learn more about the YouTube Partner Program.
You don't see it in the list of available APIs unless your account is connected to a CMS and some time has passed... It takes 7-14 days until the Content ID API is available for your account. This is information I got from support, but they told me that it is an automated step.
So, now lets assume, that you already have access to the Content ID API.
You can fetch a list of content ownerships that belong to an account. You can use the API Explorer (https://developers.google.com/youtube/partner/docs/v1/contentOwners/list#try-it): just use the param fetchMine=true and authorize with the https://www.googleapis.com/auth/youtubepartner-content-owner-readonly scope. The response looks like this:
{
"kind": "youtubePartner#contentOwnerList",
"items": [
{
"kind": "youtubePartner#contentOwner",
"id": "[CMS_ID]",
"displayName": "[DisplayName]",
"primaryNotificationEmails": [
"mail@random.xx"
],
"conflictNotificationEmail": "mail@random.xx",
"disputeNotificationEmails": [
"mail@random.xx"
],
"fingerprintReportNotificationEmails": [
"mail@random.xx"
]
}
]
}
This is where you get your CMS_ID from; you can also use it for any API request as onBehalfOfContentOwner.
To get a list of all channels that belong to the ownership, simply make this request
"https://www.googleapis.com/youtube/v3/channels?part=contentDetails&managedByMe=true&onBehalfOfContentOwner=[CONTENTOWNER]&access_token=[ACCESS_TOKEN]"
But this request requires the https://www.googleapis.com/auth/youtubepartner scope to be granted.
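Stringing the two calls together amounts to building these request URLs; a sketch in Python (the token and content owner values are placeholders, and only the URL construction is shown, not the HTTP call):

```python
from urllib.parse import urlencode

def content_owners_url(access_token: str) -> str:
    """List the content ownerships connected to the authorized account
    (Content ID API contentOwners.list with fetchMine=true)."""
    params = {"fetchMine": "true", "access_token": access_token}
    return ("https://www.googleapis.com/youtube/partner/v1/contentOwners?"
            + urlencode(params))

def managed_channels_url(content_owner: str, access_token: str) -> str:
    """List all channels managed on behalf of the content owner."""
    params = {
        "part": "contentDetails",
        "managedByMe": "true",
        "onBehalfOfContentOwner": content_owner,
        "access_token": access_token,
    }
    return "https://www.googleapis.com/youtube/v3/channels?" + urlencode(params)
```

Fetch the first URL, read items[0].id from the response as your CMS_ID, and feed it into the second URL to enumerate the managed channels.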
Hope this helps; feel free to ask further questions.
