I am convinced that I want to use Glimpse for my project, but I would like to learn a bit more about the security model.
From what I can tell, when you turn Glimpse on, it simply writes a set of cookies to the client. When Glimpse receives these cookies, Glimpse begins to record information for the request and then sends it to the client.
Seems like I could just set the cookies for a site I know uses Glimpse and I would then be able to see their information.
I highly doubt this is how it works, so I would like to know what features are in place to prevent exposing server information.
Glimpse uses a collection of configurable Runtime Policies (http://getglimpse.com/Help/Custom-Runtime-Policy) that dictate how Glimpse responds to any given HTTP request.
Glimpse already ships with some Runtime Policies out of the box that filter requests based on content type, HTTP status code, remote or local access, URIs, and so on.
You can also build your own by implementing the IRuntimePolicy interface and checking, for instance, whether the user is authenticated and a member of a specific group, and then allowing or preventing Glimpse from gathering and returning data based on that. Such an example can be found at the link above.
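For illustration, a minimal policy along those lines might look roughly like this (a sketch assuming ASP.NET and a role named "Administrator"; adapt the check to your own membership setup):

using Glimpse.AspNet.Extensions;
using Glimpse.Core.Extensibility;

public class GlimpseSecurityPolicy : IRuntimePolicy
{
    // Evaluate the policy at the end of each request, like the built-in policies do.
    public RuntimeEvent ExecuteOn
    {
        get { return RuntimeEvent.EndRequest; }
    }

    public RuntimePolicy Execute(IRuntimePolicyContext policyContext)
    {
        // Only authenticated administrators get to see Glimpse data;
        // everyone else is treated as if Glimpse were switched off.
        var httpContext = policyContext.GetHttpContext();
        if (httpContext.User == null || !httpContext.User.IsInRole("Administrator"))
        {
            return RuntimePolicy.Off;
        }

        return RuntimePolicy.On;
    }
}

Glimpse picks up IRuntimePolicy implementations it finds in the web application, so dropping a class like this into the project is enough to address the cookie trick you describe: setting the cookies by hand gets an attacker nothing unless they can also authenticate into the privileged role.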
I'm looking for some advice about authorization for a request I'm making in Power Automate Desktop using the action 'Invoke Web Service'. I'm using this request to get information from Dataverse.
I've currently set up this request using OAuth 2.0 with the Grant Type set to Implicit, and I've hardcoded a token value into the header. I'm pretty green when it comes to authorization, so I'm just wondering whether that's the best way to use OAuth 2.0 to get info from Dataverse into PAD. I'm also concerned that this token will expire and unsure how to handle that. If I should set this up differently, please let me know, and if you know how I can refresh the token automatically somehow, advice would be great.
I'm going to make the assumption that you have an Azure instance within your org.
You should be able to execute the entire OAuth flow through PAD given you can do it through Postman ...
https://learn.microsoft.com/en-us/powerapps/developer/data-platform/webapi/use-postman-web-api
... having said that, if you want an easier way, my suggestion would be to use LogicApps, as it does all of the hard work for you. It will also protect keys, etc. that run the risk of being exposed if contained within a PAD flow, and that's even if you store that sort of information in a KeyVault or something. At some point, it needs to be exposed to PAD.
You can create a LogicApp that's triggered by an incoming HTTP request ...
... and have your DataVerse connector pull the relevant data ...
... to then return back to the calling PAD flow with a response action.
This is an example flow ...
I haven't gone into detail given your question lacks specifics around filtering, etc., but you can always make your LogicApp more comprehensive by adding functionality in the payload to order, filter, expand, etc. on the OData call to Dataverse so you get exactly what you want from a data perspective.
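If you do want to stay entirely within PAD instead, the request to replicate in the Invoke Web Service action is the token call against Azure AD, which also takes care of the expiry concern. A rough sketch of the shape of that call, assuming an app registration with a client secret and the client-credentials grant (the tenant id, client id, secret and org URL are placeholders):

POST https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id={client-id}
&client_secret={client-secret}
&scope=https://yourorg.crm.dynamics.com/.default
&grant_type=client_credentials

The JSON response contains an access_token plus an expires_in value (typically around an hour), so rather than hardcoding a token you request a fresh one at the start of each run and put it in the Authorization header of the Dataverse call.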
I just got to know about the same-origin policy in Web API. Enabling CORS makes it possible to call a web service that lives on a different domain.
My understanding is that NOT enabling CORS only ensures that the web service cannot be called from a browser. But even if I cannot call it from a browser, I can still call it in other ways, e.g. with Fiddler.
So I was wondering what the use of this functionality is. Can you please throw some light on it? Apologies if it's a trivial or a stupid question.
Thanks and Regards,
Abhijit
It's not at all a stupid question; it's a very important aspect when you're dealing with web services on different origins.
To get an idea of what CORS (Cross-Origin Resource Sharing) is, we have to start with the so-called Same-Origin Policy, which is a security concept for the web. It sounds sophisticated, but it simply means that a web browser only permits scripts contained in one web page to access data on another web page if both pages have the same origin. In other words, requests for data must come from the same scheme, hostname, and port. If http://player.example tries to request data from http://content.example, the request will usually fail.
After taking a second look it becomes clear that this prevents the unauthorized leakage of data to a third-party server. Without this policy, a script could read, use and forward data hosted on any web page. Such cross-domain activity might be used to exploit cookies and authentication data. Therefore, this security mechanism is definitely needed.
If you want to store content on a different origin than the one the player requests, there is a solution – CORS. In the context of XMLHttpRequests, it defines a set of headers that allow the browser and server to communicate which requests are permitted/prohibited. It is a recommended standard of the W3C. In practice, for a CORS request, the server only needs to add the following header to its response:
Access-Control-Allow-Origin: *
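For example, a simple cross-origin GET then looks roughly like this on the wire (using the host names from above):

GET /data.json HTTP/1.1
Host: content.example
Origin: http://player.example

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: application/json

The browser compares the Origin it sent with the Access-Control-Allow-Origin value in the response and only hands the data to the script if that origin is allowed (or the server answers with *).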
For more information on settings (e.g. GET/POST, custom headers, authentication, etc.) and examples, refer to http://enable-cors.org.
For a more detailed read, see https://developer.mozilla.org/en/docs/Web/HTTP/Access_control_CORS
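Since the question mentions Web API specifically: in ASP.NET Web API you would typically install the Microsoft.AspNet.WebApi.Cors package and opt in globally or per controller. A minimal sketch (the origin and controller name are illustrative):

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Enable attribute-based CORS support for the whole API.
        config.EnableCors();
        config.MapHttpAttributeRoutes();
    }
}

// Only pages served from http://player.example may call this controller
// from a browser; any request header and any HTTP method is allowed.
[EnableCors(origins: "http://player.example", headers: "*", methods: "*")]
public class ContentController : ApiController
{
    public IHttpActionResult Get()
    {
        return Ok(new[] { "item1", "item2" });
    }
}

And to the original point: CORS only governs what browsers will do on behalf of a page from another origin. Tools like Fiddler or curl are not bound by it, so it is a browser safety mechanism, not a substitute for authentication on the service itself.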
I'm developing a Google Sheets add-on. The add-on calls an API. In the API configuration, a url like https://longString-script.googleusercontent.com had to be added to the list of urls allowed to make requests from another domain.
Today, I noticed that this url changed to https://sameLongString-0lu-script.googleusercontent.com.
The url changed about 3 months after development start.
I'm wondering what makes the URL change, because it also means a change in our back-end configuration every time.
EDIT: Thanks for both your responses so far. Helped me understand better how this works but I still don't know if/when/how/why the url is going to change.
Quick update: the changing part of the URL was "-1lu" for another user today (but not for me when I was testing). It's quite annoying since we can't use wildcards in the Google dev console redirect URI field. Am I supposed to paste a lot of "-xlu" URIs with x from 1 to like 10 so I don't have to touch this for a while?
For people coming across this now, we've also just encountered this issue while developing a Google Add-on. We've needed to add multiple origin URLs to our OAuth client for sign-in, following the longString-#lu-script.googleusercontent.com pattern mentioned by OP.
This is annoying, as each URL has to be entered separately in the authorized URLs field (subdomain or wildcard matching isn't allowed). It's also pretty fragile, since it breaks if Google changes the URLs they're hosting our add-on from. Furthermore, I wasn't able to find any documentation from Google confirming that these are the script origins.
URLs are managed by the host in various ways. At the most basic level, when you build a web server you decide what to call it and what to call any pages on it. Google and other large content providers with farms of servers and redundant data centers and everything are going to manage it a bit differently, but for your purposes, it will be effectively the same in that ... you need to ask them since they are the hosting provider of your cloud content.
Something that MIGHT be related is that Google recently rolled out some changes dealing with the googleusercontent.com domain and Picasa images (or at least was scheduled to do so). So the Google support forums will be the way to go with this question for the freshest answers, since the cause of a URL change is usually going to be specific to that moment in time and not something that you necessarily need to worry about changing repeatedly. But again, they are going to need to confirm whether it was related to the recent planned changes... or not. :-)
When you find something out you can update this question in case it is of use to others, especially if they tell you that it wasn't a one-time thing dealing with a change on their end.
This is more likely related to Changing origin in Same-origin Policy. As discussed:
A page may change its own origin with some limitations. A script can set the value of document.domain to its current domain or a superdomain of its current domain. If it sets it to a superdomain of its current domain, the shorter domain is used for subsequent origin checks.
For example, assume a script in the document at http://store.company.com/dir/other.html executes the following statement:
document.domain = "company.com";
After that statement executes, the page can pass the origin check with http://company.com/dir/page.html
So, as noted:
When using document.domain to allow a subdomain to access its parent securely, you need to set document.domain to the same value in both the parent domain and the subdomain. This is necessary even if doing so is simply setting the parent domain back to its original value. Failure to do this may result in permission errors.
An architectural question.
My site needs to allow the user to record video and upload it to the "site". I've been poking around a fair bit and it seems I have to use some kind of media server to achieve this aim. As I'm introducing this secondary server into the system (I seek to embed the flash app residing on this server into the HTML delivered by the site) it occurs to me that this broadens the scope of security a lot. What scares me is attackers trying to embed the flash app themselves or attempting to impersonate clients (or anything else I haven't thought of yet!).
I was therefore wondering how people secure their applications with such an architecture. Sure, I can do what is suggested here, which is a decent band-aid for now, but AFAIK the domain information can technically be falsified by the client.
I could separate out the site's auth, giving me a WebServer, an AuthServer and a MediaServer, and let the MediaServer authenticate separately. Getting the user to log into both sites is obviously onerous, and passing around the user's login credentials and securing all those connections sounds ugly and contrary to best practice.
As far as I can see, my best bet is some kind of temporary token that the auth server creates. After login, the website asks the auth server to generate the token, which the site can then pass to the media server (as part of the flash vars) and which the MediaServer itself can use to double-check against the auth server.
I'm relatively new to Red5, Flash and web security, so I was wondering whether this sounds sane, secure and/or necessary. Also, if anyone knows of decent tools to use for such an auth system, or whether there is something already kicking about in ASP.NET auth for such a purpose, I'd like to hear about it.
Regarding the solution provided in your link ... you should read my second comment.
The first one, about virtual hosts, is wrong! My comment does actually give you (at least one) solution to secure your app.
You could, for example, pass a SESSION_ID in the connect method to Red5. The user would get the SESSION_ID from another web service call before invoking the record or playback method.
The SESSION_ID might even be some kind of temporary token that is only valid for 15 minutes and only usable a single time for exactly that video. How far you take that is a matter of how secure your mechanism needs to be.
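On the ASP.NET side, the token part of that could be as simple as the sketch below (the names and the in-memory store are assumptions for illustration; a real setup would persist tokens somewhere both servers can reach and use HTTPS throughout):

using System;
using System.Collections.Concurrent;

// Issues short-lived, single-use tokens. The web site calls Issue() after the
// user logs in and passes the token to the Flash client; the media server calls
// back with ValidateAndConsume() from its connect handler before allowing
// record or playback.
public static class MediaTokenService
{
    private static readonly ConcurrentDictionary<string, DateTime> Tokens =
        new ConcurrentDictionary<string, DateTime>();

    public static string Issue()
    {
        var token = Guid.NewGuid().ToString("N");
        Tokens[token] = DateTime.UtcNow.AddMinutes(15);
        return token;
    }

    public static bool ValidateAndConsume(string token)
    {
        DateTime expires;
        // TryRemove makes the token single-use; the expiry check enforces the 15-minute window.
        return Tokens.TryRemove(token, out expires) && expires > DateTime.UtcNow;
    }
}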
Sebastian
I am trying to determine how to retrieve session information using a Delphi REST DataSnap server.
I know that, when on the same client page, you have access to the current thread session using the TDSSession method GetThreadSession.
What I want to do, however, is store data in the session (putData) and still be able to retrieve it when the user moves from page1 to page2. At present, if the user moves to a different page, the session is lost and a new one is created, thus losing the data in the session that I had previously set.
I have tried playing with TDSSessionManager.SetThreadSession(sessionid), but I can't seem to get it working.
I've reviewed the much-acclaimed Marco Cantu white paper; however, it doesn't provide a solution to this issue.
Any help I can get on this would be great - even if it's just 'hey, this topic is covered in book X'.
Thanks!
The TDSSessionManager.SetThreadSession(sessionid) works with Session.sessionname.
Plus make sure your Lifecycle is set to Session (as stated by tondrej).
If you reconnect your client, a new session is started, so you want to keep your DataSnap connection open.
Or you can set the lifecycle to Server and manage the client sessions yourself.
Edit: REST servers are stateless, so you need to store the page you are on on the client and query the needed page from the server.
You have to tweak the client-side JavaScript to use a cookie to store session info.
See the last part of JavaScript Client Sessions.
If you want to keep server-side objects active for the session, use the Session life cycle.
I believe what you need to do is set LifeCycle property of your TDSServerClass instance to Session (stateful). From your question it seems you are currently using Invocation (stateless).
Well, in DataSnap REST (GET, POST, DELETE, PUT), even if you set your TDSServerClass lifecycle to Session, REST clients effectively get Invocation behaviour, i.e. stateless (http://docwiki.embarcadero.com/RADStudio/Tokyo/en/Server_Class_LifeCycle#REST_Clients). That is by design: it gives every kind of client the opportunity to use your DataSnap server with JSON, for example.
You need to create your own model for session control in your REST server, or look for a framework that does this. In my case I use custom objects with the Server lifecycle (in some cases backed by a database as well), and by using tokens in the request headers and other information I can tell whether it is the same client, control when the token expires and a new login is needed, and add a lot more security as well, for example allowing PUT requests only on records previously handed to that client (that is only one case, but there are many others...). You need to solve it another way, not with the classic approach using TDSSession.
You need to create your owner model to session control to your REST server, or look for some framework to do this. In my case I use custom objects on lifecicle server (some cases with database too), and using tokens on request headers and other informations, I know if is the same client and I control too when the token expires and need to do new login, for exemple and I can give much more security too as on PUT resquests, only on records gave to client (it is only one case, but there are much others...). You need to resolve other way, not with classic way using TDSSession.