I'm trying to find a way to restrict access to each of the buckets in my application. The goal is to prevent users from accessing objects in buckets other than the one that is "assigned" to them.
In short, the app assigns a bucket to every user to store objects in, and I want to prevent users from accessing buckets that are not meant for them.
I guess a request could look like this:
curl -v 'https://developer.api.autodesk.com/authentication/v1/authenticate'
-X 'POST'
-H 'Content-Type: application/x-www-form-urlencoded'
-d '
client_id=obQDn8P0GanGFQha4ngKKVWcxwyvFAGE&
client_secret=eUruM8HRyc7BAQ1e&
grant_type=client_credentials&
scope=data:read&
# I'm thinking of some parameter like this
bucket=CLIENT_SPECIFIC_BUCKET_ID
'
You should implement your own management layer in your app to manage user permissions for different buckets - as a best practice, users should not be exposed to the app-level access tokens that grant access to the buckets themselves.
Forge cloud buckets belong to the Forge app and not to end users, since Forge is a development platform and operates at the developer/application level rather than at the level of end users.
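For illustration, a rough sketch of what such a layer might look like (Python; the user-to-bucket mapping and all names here are hypothetical, not part of the Forge API):
# Hypothetical app-side permission check: the backend knows which bucket is
# assigned to each user and refuses requests for any other bucket.
USER_BUCKETS = {
    "alice": "bucket-for-alice",
    "bob": "bucket-for-bob",
}

def authorize_bucket_access(user_id, requested_bucket):
    assigned = USER_BUCKETS.get(user_id)
    if assigned is None or assigned != requested_bucket:
        raise PermissionError(f"{user_id} may not access {requested_bucket}")
    # Only after this check does the backend call Forge with its own
    # app-level token; the token itself is never handed to the user.
    return assigned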
EDIT:
For the Viewer you can go with an AOP approach and set up a proxy in your backend, delegating authentication to the proxy - redirect the Viewer to send its requests to your own endpoints, and have your backend authenticate against Forge and retrieve the resources in turn, so that you won't have to expose your access token to the users. Try:
Autodesk.Viewing.endpoint.setEndpointAndApi('https://yourhostname/your/proxy/service/path')
And you can add custom headers to Viewer’s requests to authenticate against your own app:
Autodesk.Viewing.endpoint.HTTP_REQUEST_HEADERS = {}
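A minimal sketch of such a proxy, assuming a Python/Flask backend (the path, token helper, and user check below are placeholders, not Forge specifics):
# Hypothetical proxy: the Viewer calls this endpoint, and the backend forwards
# the request to Forge with the app's access token attached server-side.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
FORGE_BASE = "https://developer.api.autodesk.com"

def get_app_token():
    # Obtain (and cache) a 2-legged token here; placeholder value for the sketch.
    return "YOUR_APP_ACCESS_TOKEN"

@app.route("/your/proxy/service/path/<path:resource>")
def proxy(resource):
    # Authenticate the end user against your own app here (session, header, ...)
    upstream = requests.get(
        f"{FORGE_BASE}/{resource}",
        params=request.args,
        headers={"Authorization": f"Bearer {get_app_token()}"},
    )
    return Response(upstream.content,
                    status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))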
Alternatively you can download the derivatives to your own storage and load them from there - see here for details.
We have two Az servers (AS) in our environment for different user bases. We are looking to onboard a new API app/chatbot, which expects AS1 to just act as a reverse proxy/RP: get the access token from AS2 and present it to the app.
How does one go about configuring AS1 as a hub that merely acts as a pass-through?
I would like to programmatically retrieve and process all logs available from the Office 365 Unified Audit Logs for the purpose of forensic investigation. From the front end, these logs are available through the Office 365 Compliance Admin Center.
I have tried the following options to access these logs from a script, with no success:
Microsoft 365 Management API - This contains the correct data, but is of limited usefulness for forensic investigations due to the short 7 day retention period.
Microsoft Graph - This does not contain all the relevant data - you cannot access the Unified Audit Logs directly through Graph, and the usage reports do not cover all items contained in the Audit Logs (e.g. Exchange actions).
Search-UnifiedAuditLog in Exchange Online PowerShell - Microsoft themselves recommend against using this programmatically, and I've experienced extremely unreliable results and unmanageable rate-limiting when trying to do so.
So is there something I'm missing here, or is there no way to programmatically retrieve all items from the Unified Audit Logs for the entire retention period (generally 90 days)?
As far as I know, the only way to do this is to use the Management API on a regular basis and output the results to some solution for long-term storage (Azure Log Analytics Workspace comes to mind, or a SIEM like Splunk / Graylog). I.e. write a script that retrieves the logs for the last week, and run it at least weekly.
I'll explain how to retrieve logs manually and also show a tool which already exists for this at the bottom of the post.
Manually:
1: Enable Audit logging on the tenant if not already enabled
2: Create an App Registration in Azure AD; for single-tenant audit logs, choose "Accounts in this organizational directory only (xyz only - Single tenant)"
3: Create a 'secret key' from within the newly created App Registration. Store it somewhere safe as it's only shown once. From the overview page of the App Registration also store the "Tenant ID" and "Application (Client) ID". You will need all three.
4: From within the new App Registration go to "API permissions" and add 'Application type' permissions for: 'ActivityFeed.Read' and 'ActivityFeed.ReadDlp'.
5: For the following steps you will need to start calling the Office APIs, for which you need a bearer token in the header. To obtain this, send the following POST request:
URL: https://login.microsoftonline.com/***tenant_ID***/oauth2/token
Headers: "{'Content-Type': 'application/x-www-form-urlencoded'}"
Data: "grant_type=client_credentials&client_id=Application_ID&client_secret=Secret_Key&resource=https://manage.office.com"
You will receive a JSON response which contains 'access_token'. For all the upcoming API calls, use the following header:
"{'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'bearer access_token'}"
6: Subscribe to the audit log feeds you would like to retrieve. The following exist: 'Audit.General', 'Audit.AzureActiveDirectory', 'Audit.Exchange', 'Audit.SharePoint', 'DLP.All'. The POST for Exchange for example would look like: "https://manage.office.com/api/v1.0/tenant_ID/activity/feed/subscriptions/start?contentType=Audit.Exchange"
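Continuing the sketch above (reusing TENANT_ID and headers from step 5), starting a subscription for each feed might look like:
for content_type in ["Audit.General", "Audit.AzureActiveDirectory",
                     "Audit.Exchange", "Audit.SharePoint", "DLP.All"]:
    requests.post(
        f"https://manage.office.com/api/v1.0/{TENANT_ID}"
        f"/activity/feed/subscriptions/start?contentType={content_type}",
        headers=headers,
    )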
7: You are now ready to start retrieving actual logs. Individual logs live inside content blobs, which live inside pages, which live inside feeds (e.g. the Audit.Exchange feed). Therefore, for each feed you would like to retrieve logs from, you must collect all the content blobs (iterating through the pages of them) and then retrieve the actual content from that blob.
To retrieve a page of content blobs use the following URL (adjust the tenant ID, content type, and time range to your situation): "https://manage.office.com/api/v1.0/tenant_ID/activity/feed/subscriptions/content?contentType=Audit.Exchange&startTime=2022-04-13T09:42:52&endTime=2022-04-14T08:42:52"
This will give you a JSON response with content blobs inside. In the response header check "NextPageUri"; if it contains a URL, call that URL for the next page of content.
Now that you have content blobs, use them to retrieve the actual logs. Each content blob is a JSON dict, which contains a "contentUri" field. Call that URL to retrieve a JSON response with the actual logs inside.
You can do this in most programming/scripting languages, but for larger amounts of logs you will want to retrieve logs in parallel, or it will take a long time.
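A rough sequential sketch of this step in Python (again reusing TENANT_ID and headers from the token sketch; parallel retrieval is left out for brevity):
import requests

def fetch_logs(content_type, start_time, end_time):
    logs = []
    url = (f"https://manage.office.com/api/v1.0/{TENANT_ID}"
           f"/activity/feed/subscriptions/content"
           f"?contentType={content_type}&startTime={start_time}&endTime={end_time}")
    while url:
        resp = requests.get(url, headers=headers)
        for blob in resp.json():          # each item is a content blob
            # The blob's contentUri returns the actual log entries
            logs.extend(requests.get(blob["contentUri"], headers=headers).json())
        url = resp.headers.get("NextPageUri")  # next page of blobs, if any
    return logs

exchange_logs = fetch_logs("Audit.Exchange",
                           "2022-04-13T09:42:52", "2022-04-14T08:42:52")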
With a tool
In case you want to use an existing tool, this one is free, works on Linux and Windows, and supports multiple outputs.
I bought a Nest thermostat as I thought it would be able to give me detailed data showing the target temperature and the actual temperature, as well as the time, etc. I needed this for various reasons.
However, it seems the official "Works with Nest" API was closed by Google. I've been able to get Postman to ping the same location that the Google Nest web app hits and get back the data I need. I want to create a simple web app to keep polling and save the data locally. However, I'm unable to find the OAuth client secret that the Nest web app uses to get the authorization code. I had to log in via the web app to capture the code from one of the requests and then test it out using Postman.
Is there any other API that will allow me to poll this data for my Nest more easily?
If there isn't another API, is there a way to get the client ID and client secret from the Nest web app so I can drop them into mine? (I know it's hacky, but I think I'm out of options.)
I have an app that needs to share data among users, but not all of them. The idea is that users can belong to different groups - for example, users of two different companies who are using my app. I'm evaluating Simperium, but before embedding its library in my iOS app I would like to understand if there is a way to isolate users so they cannot read data belonging to other groups. I don't know if that is possible using different buckets and, in that case, how do I create separate buckets?
The iOS SDK doesn't provide a sharing mechanism. Nevertheless, you could still use the REST API to do so.
curl -H 'X-Simperium-Token: { access_token }' \
https://api.simperium.com/1/{ app_id }/{ bucket_name }/i/{ object id }/share/{ target username } -d '{"write_access": true}'
Documentation can be found here.
Other than that, it would be up to the host app to implement any required user group management (perhaps a simple REST endpoint of your own that returns the collection of user IDs for the current user would do the trick).
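For instance, a rough sketch that combines the two ideas - look up the group members in your own backend, then call the share endpoint above for each of them (the app ID, token, bucket/object names, and the group lookup are placeholders):
import requests

APP_ID = "your_app_id"
ACCESS_TOKEN = "your_access_token"

def share_object_with_group(bucket_name, object_id, group_members):
    # group_members would come from your own user-group endpoint
    for username in group_members:
        requests.post(
            f"https://api.simperium.com/1/{APP_ID}/{bucket_name}"
            f"/i/{object_id}/share/{username}",
            headers={"X-Simperium-Token": ACCESS_TOKEN},
            json={"write_access": True},
        )

share_object_with_group("notes", "object-123",
                        ["user1@company-a.com", "user2@company-a.com"])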
Hope that helps!
Can the app proxy be used to pass various IDs through to a separate backend Rails app?
I have a case where we've implemented a subscription system in a separate Rails app, but we want to show the user their subscriptions from Shopify. To do this I would like to add an app proxy on Shopify, such as:
Proxy Url: subscriptions.com/api/customers/subscriptions
Proxy Path: /a/customers
But I'd like to be able to proxy /a/customers/:customer_id/subscriptions, and maybe even /a/customers/:customer_id/subscriptions/:id (for a show-subscription Liquid response), so concatenating the IDs into the URL is my main goal.
On the Rails side I can easily extract the path prefix from the params; it's a matter of how Shopify matches the proxy paths, I guess.
Is this at all possible? Or is there another way around this problem?
The extra path components get appended to the Proxy URL. The Shopify Application Proxy documentation even provides an example showing this in the Proxy Request section.
So for your example, where the proxy URL is http://subscriptions.com/api/customers/subscriptions and the proxy path is /a/customers, a request to /a/customers/:customer_id/subscriptions will be proxied to http://subscriptions.com/api/customers/subscriptions/:customer_id/subscriptions
So it sounds like the proxy request is already exactly what you want.
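For illustration only, here is a tiny backend handler that receives the proxied path (sketched in Python/Flask for brevity; a nested Rails route would be the equivalent, and the route shape simply follows the mapping above):
from flask import Flask, jsonify

app = Flask(__name__)

# Shopify appends everything after /a/customers to the proxy URL, so these
# routes receive /api/customers/subscriptions/:customer_id/subscriptions[/:id]
@app.route("/api/customers/subscriptions/<customer_id>/subscriptions")
@app.route("/api/customers/subscriptions/<customer_id>/subscriptions/<subscription_id>")
def subscriptions(customer_id, subscription_id=None):
    return jsonify({"customer_id": customer_id,
                    "subscription_id": subscription_id})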