I've created a test LTI Tool Provider and tested it successfully against the IMSGlobal test consumer at http://www.imsglobal.org/developers/LTI/test/v1p1/lms.php.
However, when I use my endpoint in our client's D2L test instance, the OAuth signatures don't match. I suspect the signature base string generated by D2L is somehow different than mine. Is there a way to obtain the D2L tool consumer's base string for comparison?
The D2L Tool Consumer implementation has also been tested successfully against the IMS reference implementations. However, registering and configuring an external learning tool in the Learning Environment is a bit of a tricky process.
In the External Learning Tools management tool you can manage the list of links (which, when embedded, create an LTI launch point in the LMS) as well as the list of tool provider configurations (useful if, for example, you have one tool provider but want to embed a number of LTI launch links for it). The exact UX for reaching these two lists depends on the version of your Learning Environment: in LEs with early LTI support, the tool provider list is hidden behind a settings gear on the External Learning Tools management page, I believe; in later LEs, the list of links and the list of tool providers are more equally visible on the management page.
The Tool Provider list allows you to provide a key and secret for the tool provider, and to use that to sign LTI launches rather than the default key/secret configured for the Tool Consumer itself ("use custom tool consumer information instead of default").
The Link list allows you to (a) choose whether to sign LTI launches from a link, and (b) sign the launch with either the tool consumer key/secret or one specific to the LTI link itself. Note that if an External Learning Tool link has a matching Tool Provider entry, and that Tool Provider entry has a key/secret set to override the default tool consumer information, then it is that override key/secret that actually signs the launch when, in (b), you choose to sign with the tool consumer key/secret.
Yes, that's confusing.
So -- the launch is either signed or not signed, depending on the setting on the 'Edit Link' page for the link. If the launch is signed, then it can be signed with the 'Link key/secret' provided on the 'Edit Link' page, or with the 'Tool Consumer key/secret'. In the latter case, D2L first checks for a matching tool provider entry override to supply a key/secret and, failing that, uses the key/secret set for the entire LE.
Once you have all that set up, from inside the 'Edit Link' page for a link, you can "preview request" to do a test launch. You can also "preview request details", which takes you to a page showing what the LTI POST body form will look like. From there you can verify whether the OAuth properties show up in the launch form: if they're not in the preview form, your launches aren't getting signed; if they are, you can see exactly what will be sent and do your debugging and testing with those values.
Thanks in part to Viktor's suggestion to preview the request, I was able to debug this.
In my Tool Provider, I check for certain non-required LTI parameters. When such a parameter did not exist in the consumer's request, I was setting it to an empty string rather than throwing an exception.
Using the IMS test consumer, I discovered that when I set a request parameter to an empty string while the Tool Consumer omits the field entirely, our signatures differ. The base strings looked identical side by side, but they cannot actually have been: an empty parameter still gets percent-encoded into the signature base string, and HMAC-SHA1 over truly identical strings with the same key and secret always produces the same signature. In any case, better validation of the request parameters now ensures that our strings, and therefore our signatures, match.
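For anyone hitting the same mismatch, the effect is easy to reproduce. A minimal sketch (the base strings below are abbreviated, made-up examples rather than real launch data): adding one empty custom parameter changes the base string and therefore the HMAC-SHA1 output.

using System;
using System.Security.Cryptography;
using System.Text;

class BaseStringDemo
{
    // HMAC-SHA1 as OAuth 1.0 uses it: the key is "consumerSecret&tokenSecret"
    // (the token secret is empty for an LTI launch).
    static string Sign(string baseString, string consumerSecret)
    {
        var key = Encoding.ASCII.GetBytes(Uri.EscapeDataString(consumerSecret) + "&");
        using (var hmac = new HMACSHA1(key))
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));
    }

    static void Main()
    {
        // Abbreviated, made-up base strings: identical except that the second
        // includes an empty custom_field parameter.
        var omitted = "POST&https%3A%2F%2Ftool.example%2Flaunch&lti_version%3DLTI-1p0";
        var empty   = "POST&https%3A%2F%2Ftool.example%2Flaunch&custom_field%3D%26lti_version%3DLTI-1p0";
        Console.WriteLine(Sign(omitted, "secret"));
        Console.WriteLine(Sign(empty, "secret"));   // prints a different signature
    }
}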
Related
I have a CLI tool that requires search access, on behalf of the user. I've set up an application on our team's workspace with all of the right scopes and configuration.
However, I am dismayed at how oppressive the OAuth access token process is for CLI tools. Step 1 in their process is to provide a link to a custom web site with an "Add to Slack" button. This already extends a simple CLI tool into requiring an entire web site, even though the button is merely a static URL with parameters describing the app and scopes. Up to this point, a static page on Confluence, GitHub, or some other wiki-based project space would suffice.
Step 2 is where the user grants access to the application through their browser. Step 3 is where it breaks down: the grant redirects to a special, dynamic page that must receive a temporary code and translate it into the actual usable token the user plugs into the CLI configuration. This extra step requires a web page that Slack does not provide to do the translation, instead of Slack just handing a token directly to the user.
Even worse, the API call requires the client_id and client_secret parameters, secrets that would be plain as day in a shared or open-source CLI tool. (Despite the API documentation saying otherwise, both of these parameters are actually required.)
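For concreteness, the Step 3 "translation" amounts to a single HTTPS call. A minimal sketch against Slack's documented oauth.access endpoint (all values are placeholders, and shipping client_secret inside the tool is exactly the exposure problem described above):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class SlackTokenExchange
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // The redirect handler receives ?code=... and must exchange it:
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "client_id", "YOUR_CLIENT_ID" },
                { "client_secret", "YOUR_CLIENT_SECRET" }, // the secret that can't ship in the CLI
                { "code", "CODE_FROM_REDIRECT" },
            });
            var response = await http.PostAsync("https://slack.com/api/oauth.access", form);
            // On success the JSON body contains the user's access_token.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}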
For example, Slackcat requires a one-page web site expressly for the purpose of fielding OAuth requests. That page cannot be open-sourced because it would reveal the secret parameters. Unfortunately, Slackcat does not have the scopes I need, so I can't just borrow its web site for token generation.
Is there a better way to sidestep this process and allow a simple CLI tool to just get the right user access token it needs?
OAuth2 is fundamentally built around web browsers. The entire point is to allow the user's web browser to be redirected to the OAuth2 provider's website for password entry, so that the user's credentials are never visible to you or pass through any infrastructure you control.
This necessarily involves spawning a browser and sending it to a site you control, which begins the OAuth2 flow by setting up state and redirecting the user to the OAuth2 provider, then completes the flow by handling the redirect back from the provider and performing the code/token exchange. You can't do any of this with a static site; you need a web service.
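To make "you need a web service" concrete, here is roughly the minimum such a service does, sketched as a loopback HTTP listener (the port, path, and parameter handling are illustrative assumptions, not a hardened implementation):

using System;
using System.Net;

class RedirectCatcher
{
    static void Main()
    {
        // A throwaway web service: listen for the provider's redirect,
        // pull out the authorization code, then do the code/token exchange.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8376/callback/");
        listener.Start();

        var context = listener.GetContext();             // blocks until the redirect arrives
        var code = context.Request.QueryString["code"];  // state validation omitted for brevity

        // ...exchange 'code' for a token with the provider here...

        context.Response.Close();                        // tell the browser we're done
        listener.Stop();
        Console.WriteLine("Received authorization code: " + code);
    }
}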
Is there a better way to sidestep this process and allow a simple CLI tool to just get the right user access token it needs?
No. If there were, it would be a vulnerability in OAuth2 that should be fixed, not exploited to bypass the entire point of OAuth2.
Our users authenticate to Acumatica using OAuth2-based SSO with Google as the identity provider. They do not have passwords to access the system (we generate very long, strong passwords which are discarded as soon as SSO is set up).
I don't want to go into all the reasons why SSO is important to us. It's critical, and I'll leave it at that. SSO was a policy and business requirement for us to select Acumatica.
The Report Designer doesn't seem to support OAuth2.
Is there a way we can give users the ability to download the RPX files via the browser, edit them outside the browser, and then upload the changes via the browser, using only SSO credentials?
Also, where can I get the source code to Report Designer? I'd love to see if I can add OAuth2 support myself.
There are two types of reports in Acumatica:
Standard Reports
Analytical Reports
The links above describe how you can customize these reports using Acumatica Cloud ERP.
You will notice that standard reports have only the following limited customization options in Cloud ERP:
You can adjust the report settings to meet your specific informational needs. For example, you can specify sorting and filtering options and select the data by using report-specific settings—such as financial period, ledger, and account. You can configure additional processing settings for each report.
Analytical reports have more customization possibilities through Cloud ERP:
The Analytical Report Manager is a web-based tool for creating and modifying analytical reports. Users can design and run custom analytical reports using advanced data selection criteria, data calculation rules, and customizable report layout design features. By using the Analytical Report Manager, you can:
Create the layout and structure of reports based on your business requirements.
Define data selection criteria for the report with a high level of granularity. For example, data sources can include a range of accounts, subaccounts, and financial periods.
Use advanced formulas to calculate values based on the information extracted from the data source.
Create consolidated reports based on the data from multiple data sources or other analytical reports.
Localize data used by a report if multilingual support of user input is enabled.
Acumatica marketing material shouldn't refer to the standard report editor (a Windows Forms desktop application) as part of the Cloud ERP product. The reason is quite simple: a Windows desktop application is not a cloud product; it can't be accessed with a browser and is not supported on Unix or Apple operating systems.
The analytical report designer supports SSO with OAuth, since that designer runs inside the Cloud ERP product itself.
The standard report designer doesn't support OAuth. You could file a feature request for it, though. Our marketing material should not refer to standard report customization as part of the Cloud ERP product, because the designer's requirements are different.
We strive to make every feature available on the Cloud ERP platform. At the moment, these are the features not available on Cloud ERP, to my knowledge:
Standard Report Editor
DeviceHub, a Windows program that acts as a device spooler so you can access Windows desktop hardware, such as printers and scales, from the Cloud ERP.
Login page customization, which can't be achieved through the Customization Project Editor; you have to change files manually on the server.
From now on, I'll focus on the Standard Report Designer, the Windows Desktop Application.
The designer uses the Web Service API to communicate with the Acumatica database.
Besides loading and saving RPX files, the report designer also uses the Web Service API for features in the 'File->Build Schema' dialog, like 'Load Schema'. If your users are not using the 'Build Schema' features, then having only the RPX file locally should be enough to let them modify the report with the designer.
Code for loading and saving RPX files lives in the PX.Reports.Design.ReportUtils class, part of the PX.Reports.Design.dll assembly. It uses the SOAP API, which to my knowledge is not the preferred API for OAuth; the REST Contract API is the recommended one for OAuth-enabled web services. Refactoring the PX.Reports.Design.dll assembly to use the REST Contract API instead of the SOAP API isn't trivial.
That said, I believe it's possible to load and save RPX files using the REST API, and you could write a wrapper for the report designer to handle that task as long as you forgo the 'Build Schema' feature. I'll touch on that at the end of my answer.
You can find the documentation for using OAuth with the REST API here.
There are two ways you can use the Acumatica web service: attended or unattended.
Attended use requires a user to enter his credentials in some form of UI before using the web service. The report designer uses this form of authentication.
The alternative, unattended use, means configuring and saving the authentication credentials before using the web service. That way an automated program can use the web service API without requiring a user sitting in front of the computer.
Unattended use typically doesn't involve OAuth because you can't have a user dedicated to the task of authorizing. For example, if you integrate Acumatica with an ecommerce provider, requiring that provider to go through OAuth authorization just to push orders to Acumatica is problematic, because that process usually doesn't involve a UI where a user sitting at the computer provides authorization.
You could technically create an OAuth access token that never (or rarely) expires for this task, but that circumvents the purpose of OAuth, which is to use short-lived access tokens to mitigate man-in-the-middle attacks. Certifications that mandate the use of OAuth typically forbid tokens that never expire. That's why requesting OAuth for an automated process can raise eyebrows and lead programmers to question your security policies.
Now let's get to possible solutions for your problem. You'll have to assess whether your security policies allow use of the unattended web service without OAuth authorization. If so, your job will be much easier. If not, you'll likely run into similar problems later on whenever third-party web service integration with Acumatica is required.
In order to write a wrapper over the report designer, you'll have to write a Windows desktop application and have the '.RPS' file type associated with your wrapper instead of with the Acumatica report designer.
When a user clicks the EDIT REPORT button on the website, a '.RPS' text file is generated in memory on the server and the user's browser is redirected to that file in order to download it locally. When the user opens the RPS file, Windows launches the associated program (the Acumatica report designer) and passes the RPS file path as a command line parameter. The report designer then presents the authorization dialog where the user can enter his credentials, and the report's RPX file is downloaded through the Web Service API.
Example of the content of an RPS file:
ServiceUrl|http://localhost/AcumaticaInstance/
ReportName|gl633000.rpx
User|admin
The idea is to have your wrapper parse this RPS file, download the associated RPX file using whichever Web Service API complies with your security policy, and then launch the Acumatica report designer on that RPX file via a command line parameter:
"c:\...\ReportDesigner.exe" "c:\...\gl633000.rpx"
When you launch the report designer, you'll want to halt your main thread until the user closes the designer; framework methods that do this are typically named along the lines of 'WaitForExit'. Before closing the designer, the user saves the RPX file. After the designer closes, your main thread resumes and you can re-upload the RPX file to the Acumatica database using the Web Service API. A sketch of the whole flow follows.
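Here is a rough outline of that wrapper, assuming the pipe-delimited RPS format shown above; the download/upload calls are placeholders for whichever Web Service API your security policy allows, and the designer path is elided just as in the example above:

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class ReportDesignerWrapper
{
    static void Main(string[] args)
    {
        // Windows passes the downloaded .RPS path as the first argument.
        var rps = File.ReadAllLines(args[0])
                      .Select(line => line.Split(new[] { '|' }, 2))
                      .ToDictionary(parts => parts[0], parts => parts[1]);
        var serviceUrl = rps["ServiceUrl"];
        var reportName = rps["ReportName"];

        // Fetch the report definition with whichever API your policy allows.
        var localRpx = Path.Combine(Path.GetTempPath(), reportName);
        DownloadRpx(serviceUrl, reportName, localRpx);

        // Launch the stock designer and halt until the user closes it.
        var designerExe = @"c:\...\ReportDesigner.exe"; // path elided, as in the example above
        using (var designer = Process.Start(designerExe, "\"" + localRpx + "\""))
        {
            designer.WaitForExit();
        }

        // The user saved their changes locally; push them back up.
        UploadRpx(serviceUrl, reportName, localRpx);
    }

    // Placeholders for Web Service API calls (SOAP or REST Contract API).
    static void DownloadRpx(string serviceUrl, string reportName, string localPath) { /* ... */ }
    static void UploadRpx(string serviceUrl, string reportName, string localPath) { /* ... */ }
}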
The easy way would be to create a dedicated report designer user for unattended Web Service API use. You could store that user's credentials locally (in encrypted form) wherever you see fit and never expose them in any UI. When making a web service call, you decrypt the credentials on the fly; in that scenario, the asset to protect is the decryption key.
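If you go that route on Windows, one option is to let DPAPI handle the key management, which sidesteps the question of where to hide the decryption key. A minimal sketch (not a complete credential store):

using System;
using System.Security.Cryptography;
using System.Text;

static class CredentialStore
{
    // Encrypt before writing to disk; only the same Windows user account
    // can decrypt, so there is no decryption key of your own to hide.
    public static byte[] Protect(string secret)
    {
        return ProtectedData.Protect(
            Encoding.UTF8.GetBytes(secret),
            null, // optional extra entropy omitted in this sketch
            DataProtectionScope.CurrentUser);
    }

    public static string Unprotect(byte[] blob)
    {
        return Encoding.UTF8.GetString(
            ProtectedData.Unprotect(blob, null, DataProtectionScope.CurrentUser));
    }
}

With DataProtectionScope.CurrentUser, the Windows account running the wrapper effectively becomes the asset to protect, instead of a key you manage yourself.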
If attended web service use is required for OAuth support, you will need to implement a UI to collect those credentials. If you have to use the Google login page for entering them, you would have to embed a browser in your wrapper for that purpose.
As a reminder, note that this solution will let you modify the report definition in the RPX file, but it will not let you use report designer features that require the Web Service API, like 'Build Schema'.
When using OAuth (2) I need a redirection endpoint in my application that the OAuth-offering service can redirect to, once I have been authenticated.
How do I handle this in a single page application? Of course, a redirect to the OAuth-offering service is not nice here, and it may not even be possible to redirect back.
I know that OAuth also supports username/password-based token generation. This works perfectly with an AJAX call, but requires my single-page application to ask for a username and password.
How do you usually handle this?
Most of the time a redirect is okay, even for an SPA, because users don't like to put their credentials for service X into any website other than X. An alternative is to use a small popup window; you can check what Discourse does. IMHO a redirect is better than a popup.
Some providers, such as Google, support the resource owner flow, which is what you described as sending username and password, but this is not nice. These are the problems I see:
Asking users for their Google credentials on your site will be a no-go for some of them.
The resource owner flow needs the client_secret too, and that is something you must NOT put in your client-side JavaScript. If you initiate the resource owner flow from your server-side application and your application is not in the same geographic region as the user, the user will get a warning along the lines of "hey, someone is trying to access with your credentials from India".
OAuth2 describes a client-side flow called the implicit flow. With this flow you don't need any server-side interaction and you don't need the client_secret: the OAuth provider redirects back to your application with "#access_token=xx" in the URL fragment. It is called implicit because you don't exchange an authorization code for an access token; you get an access_token directly.
Google implements the implicit flow; see Using OAuth2 for Client-Side apps.
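The moving parts are small: the authorization request sets response_type=token, and the token comes back in the URL fragment, which only the browser sees. A sketch of both parts (shown in C# only to keep this page's examples in one language; in a SPA you would read window.location.hash the same way, and the client id, URIs, and token value below are placeholders):

using System;

class ImplicitFlowSketch
{
    static void Main()
    {
        // 1) Send the user to the provider with response_type=token.
        var authorizeUrl =
            "https://accounts.google.com/o/oauth2/auth" +
            "?response_type=token" +
            "&client_id=YOUR_CLIENT_ID" +
            "&redirect_uri=" + Uri.EscapeDataString("https://app.example/callback") +
            "&scope=" + Uri.EscapeDataString("profile");
        Console.WriteLine(authorizeUrl);

        // 2) After consent, the provider redirects back with the token in the
        //    fragment, which never leaves the browser. Parse it out:
        var redirected = new Uri("https://app.example/callback#access_token=EXAMPLE_TOKEN&token_type=Bearer&expires_in=3600");
        foreach (var pair in redirected.Fragment.TrimStart('#').Split('&'))
        {
            var kv = pair.Split(new[] { '=' }, 2);
            if (kv[0] == "access_token")
                Console.WriteLine("token: " + kv[1]);
        }
    }
}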
If you want to use the implicit flow with some provider that doesn't support it like Github, you can use an authentication broker like Auth0.
disclaimer: I work for Auth0.
What José F. Romaniello said is correct. However, your question is broad and thus I feel any offered conclusions are just generalities at this point.
Application state
For example, without knowing how complex your application state is at the moment you want your users to log in, nobody can know for sure whether a redirection is even practical at all. Consider that you might want to let the user log in very late in the workflow, at a point where your application holds state that you really don't want to serialize and save for no good reason, let alone write code to rebuild.
Note: You will see plenty of advice to simply ignore this on the web. This is because many people store most of the state of their application in server-side session storage and very little on their (thin) client. Sometimes by mistake, sometimes it really makes sense -- be sure it does for you if you choose to ignore it. If you're developing a thick client, it usually doesn't.
Popup dialogs
I realize that popups have a bad reputation on the web because of all their misuses, but one has to consider the good uses. In this case, they serve exactly the same purpose as trusted dialogs in other types of systems (think Windows UAC, fd.o polkit, etc). Those interfaces all make themselves recognizable and use their underlying platform's features to ensure that they can't be spoofed and that neither input nor display can be intercepted by the unprivileged application. The exact parallel is that the browser chrome, and particularly the certificate padlock, can't be spoofed, and that the same-origin policy prevents the application from accessing the popup's DOM. Interaction between the dialog (popup) and the application can happen using cross-document messaging or other techniques.
This is probably the optimal way, at least until the browsers somehow standardize privilege authorization, if they ever do. Even then, authorization processes for certain resource providers may not fit standardized practices, so flexible custom dialogs as we see today may just be necessary.
Same-window transitions
With this in mind, it's true that the aesthetics of a popup are subjective. In the future, browsers might provide APIs that allow a document to be loaded in an existing window without unloading the current document, then allow the new document to unload and restore the previous one. Whether the "hidden" application keeps running or is frozen (akin to how virtualization technologies can freeze processes) is another debate. This would allow the same procedure as popups provide. There is no proposal to do this that I know of.
Note: You can simulate this by somehow making all your application state easily serializable, and having a procedure that stores and restores it in/from local storage (or a remote server). You can then use old-school redirections. As implied in the beginning though, this is potentially very intrusive to the application code.
Tabs
Yet another alternative of course is to open a new tab instead, communicate with it exactly like you would a popup, then close it the same way.
On taking user credentials from the unprivileged application
Of course it can only work if your users trust you enough not to send the credentials to your server (or anywhere they don't want them to end up). If you open-source your code and do deterministic builds/minimization, it's theoretically possible for users to audit or have someone audit the code, then automatically verify that you didn't tamper with the runtime version -- thus gaining their trust. Tooling to do this on the web is nonexistent AFAIK.
That being said, sometimes you want to use OAuth with an identity provider under your own control/authority/brand. In that case, this whole discussion is moot; the user trusts you already.
Conclusion
In the end, it comes down to (1) how thick your client is, and (2) what you want the UX to be like.
OAuth2 has four flows, a.k.a. grant types, each serving a specific purpose:
Authorization Code (the one you alluded to, which requires redirection)
Implicit
Client Credential
Resource Owner Password Credential
The short answer is: use Implicit flow.
Why? Choosing a flow or grant type depends on whether any part of your code can remain private and is thus capable of storing a secret key. If so, you can choose the most secure OAuth2 flow, Authorization Code; otherwise you will need to compromise on a less secure flow. For a single-page application (SPA), that means the Implicit flow.
The Client Credential flow only works if the web service and the user are the same entity, i.e., the web service serves only that specific user, while the Resource Owner Password Credential flow is the least secure and is used as a last resort, since the user is required to hand her social login credentials to the service.
To fully understand the difference between the recommended Implicit flow and the Authorization Code flow (the one you alluded to, which requires redirection), take a look at the side-by-side flow diagrams at https://blog.oauth.io/introduction-oauth2-flow-diagrams/, which is where the diagram originally embedded here was taken from.
I'm currently reading some introductory material about OAuth, with the idea of using it in a piece of free software.
And I read this:
The consumer secret must never be revealed to anyone. DO NOT include it in any requests, show it in any code samples (including open source) or in any way reveal it.
If I am writing a free client for a specific website that uses OAuth, then I have to include the consumer secret in the source code; otherwise, building from source would leave the software unusable. However, as quoted above, the secret must not be released along with the source.
I completely understand the security considerations, but how can I resolve this dilemma and use OAuth in free software?
I thought of using an external website as a proxy for authentication, but that is quite complicated. Do you have better ideas?
Edit:
Some clients, like Gwibber, also use OAuth, but I haven't checked their code.
I'm not sure I get the problem: can't you develop the code as open source and retrieve the consumer secret from a configuration file, or maybe keep it in a special table in the database? That way the code will not contain the consumer secret (and as such will be "shareable" as open source), but the secret will still be accessible to the application.
Maybe having some more details on the intended platform would help, since on some (I'm thinking of Tomcat right now) parameters such as this one can be included in the server configuration files.
If it's PHP, I know of an open source project (Moodle) that keeps a PHP file (config.php) containing definitions of all important configuration values, and references this file from all pages that need them. It is the administrator's responsibility to fill in this file with the values particular to that installation. In fact, if the application sees that the file is missing (usually on first access to the site), it redirects to a wizard where the administrator can fill in the contents in a more user-friendly way. In this case the consumer secret would be one of those configuration values, and as such would be present in the "production" code but not in the distributable form of the code.
I personally like the idea of storing that value in the database, in a table designed for it and possibly other parameters, as the code need not change. An installation wizard can be presented here as well when the values do not exist. Either way the pattern is the same, as sketched below.
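For illustration, a minimal sketch of the config-file variant (the file name, key name, and error messages are arbitrary choices, not from any particular project):

using System;
using System.IO;

class SecretConfig
{
    // Read the consumer secret from a deployment-specific file that is
    // never committed to the open source repository.
    public static string LoadConsumerSecret()
    {
        var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "oauth.config");
        if (!File.Exists(path))
            throw new InvalidOperationException(
                "oauth.config not found; create it with a line: consumer_secret=<value>");

        foreach (var line in File.ReadAllLines(path))
        {
            var kv = line.Split(new[] { '=' }, 2);
            if (kv.Length == 2 && kv[0].Trim() == "consumer_secret")
                return kv[1].Trim();
        }
        throw new InvalidOperationException("consumer_secret missing from oauth.config");
    }
}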
Does this solve your problem?
If your service provider is a webapp, your server needs consumer signup pages that provide the consumer secret as the user signs up their consumer. This is the same process Twitter applications go through. Try signing up there and look at their workflow; you'll see all the steps.
If your software is peer-to-peer, each application needs to be both a service provider and a consumer. The Jira and Confluence use cases below outline that instance.
In one of my comments, I mention https://twitter.com/apps/new as the place where Twitter app developers generate a consumer secret. How you would make such a page depends on the system architecture. If all the consumers will be talking to one server, that server will have to have a page like https://twitter.com/apps/new. If there are multiple servers (i.e., federations of clients), each federation will need one server with such a page.
Another example to consider is how Atlassian apps use OAuth. They are peer-to-peer. Setting up Jira and Confluence to talk to one another still involves a setup page in each app, but it is nowhere near as complex as https://twitter.com/apps/new. Both apps are consumers and service providers at the same time. Visiting the setup page in each app allows that app to be set up as a service provider with a one-way trust on the other app as consumer. To make a two-way trust, the user must visit both apps' service provider setup pages and tell each one the URL of the other app.
Are there any decent examples of the following available:
Looking through the WIF SDK, there are examples of using WIF in conjunction with ASP.NET, using the WSFederationAuthenticationModule (FAM) to redirect to an ASP.NET site that is a thin skin on top of a Security Token Service (STS), where the user authenticates (by supplying a username and password).
If I understand WIF and claims-based access correctly, I would like my application to provide its own login screen where users enter their username and password, and have it delegate to an STS for authentication, sending the login details to an endpoint via a security standard (WS-*) and expecting a SAML token in return. Ideally, the SessionAuthenticationModule would work as in the examples that use FAM together with SessionAuthenticationModule, i.e., it would be responsible for reconstructing the IClaimsPrincipal from the session security chunked cookie and for redirecting to my application's login page when the security session expires.
Is what I describe possible using FAM and SessionAuthenticationModule with appropriate web.config settings, or do I need to write an HttpModule myself to handle this? Alternatively, is redirecting to a thin STS web site where users log in the de facto approach in a passive requestor scenario?
An example of WIF + MVC is available in this chapter of the "Claims Identity Guide":
http://msdn.microsoft.com/en-us/library/ff359105.aspx
I do suggest reading the first couple of chapters to understand all the underlying principles. This blog post covers the specifics of MVC + WIF:
Link
Controlling the login experience is perfectly fine. You should just deploy your own STS (in your domain, with your look & feel, etc.). Your apps would simply rely on it for authentication (that's why an app is usually called a "relying party").
The advantage of this architecture is that authentication is delegated to one component (the STS) rather than spread across many apps. The other (huge) advantage is that you can enable more sophisticated scenarios very easily. For example, you can now federate with other organizations' identity providers.
Hope it helps
Eugenio
#RisingStar:
The token (containing the claims) can optionally be encrypted (otherwise the claims will be in clear text). That's why SSL is always recommended for interactions between the browser and the STS.
Notice that even though the claims may be in clear text, tampering is not possible because the token is digitally signed.
That's an interesting question you've asked. I know that, for whatever reason, Microsoft put out this "Windows Identity Foundation" framework without much documentation. I know this because I've been tasked with figuring out how to use it in a new project and integrate it with existing infrastructure. I've been searching the web for months looking for good information.
I've taken a somewhat different angle to solving the problem you describe.
I took an existing log-on application and integrated Microsoft's WIF plumbing into it. By that I mean I have an application where a user logs in: the log-on application submits the credentials supplied by the user to another server, which returns the user's identity (or indicates log-on failure).
Looking at some of Microsoft's examples, I see that they do the following:
Construct a SignInRequestMessage from a query string (generated by a relying-party application), construct a security token service from a custom class, and finally call FederatedSecurityTokenServiceOperations.ProcessSignInResponse with the current HttpContext.Response. Unfortunately, I can't really explain it well here; you really need to look at the code samples.
Some of my code is very similar to the code samples. The place where you'll be implementing a lot of your own logic is GetOutputClaimsIdentity. This is the function that constructs the claims identity describing the logged-in user.
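To give a feel for where that logic lives, here is a trimmed-down sketch of such an override (using the WIF 3.5/4.0 Microsoft.IdentityModel types; the claims added are placeholders for whatever your user store actually provides):

// Namespaces assumed: Microsoft.IdentityModel.Claims and
// Microsoft.IdentityModel.Protocols.WSTrust (WIF 3.5/4.0).
protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var identity = new ClaimsIdentity();

    // Carry the authenticated name through to the relying party.
    identity.Claims.Add(new Claim(ClaimTypes.Name, principal.Identity.Name));

    // Placeholder: add whatever else your user store provides (roles, email, ...).
    identity.Claims.Add(new Claim(ClaimTypes.Role, "User"));

    return identity;
}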
Now, here's what I think you're really interested in knowing. This is what Microsoft doesn't tell you in their documentation, AFAIK.
Once the user logs in, they are redirected back to the relying-party application. Regardless of how the log-on application works, the WIF classes will send a response to the user's browser that contains a "hidden" HTML input holding the token-signing certificate and the user's claims. (The claims will be in clear text.) At the end of this response is a redirect to your relying-party website. I only know about this step because I captured it with Fiddler.
Once back at the relying-party web site, the WIF classes handle the response (before any of your code runs). The certificate is validated. By default, if you've set up your relying-party web site with FedUtil.exe (by clicking "Add STS Reference" in your relying-party application in Visual Studio), Microsoft's class verifies the certificate thumbprint.
Finally, the WIF framework sets cookies in the user's browser (in my experience, the cookie names start with "FedAuth") that contain the user's claims. The cookies are not human-readable.
Once that happens, you may optionally perform operations on the user's claims within the relying-party website using the ClaimsAuthenticationManager class. This is where your code runs again.
I know this is different from what you describe, but I have this setup working. I hope this helps!
ps. Please check out the other questions I've asked about Windows Identity Foundation.
UPDATE: To answer question in comment below:
One thing I left out is that redirection to the STS log-on application happens by way of a redirect with a query string containing the URL of the application the user is logging in to. This redirect happens automatically the first time a user tries to access a page that requires authentication. Alternatively, I believe you could do the redirect manually with the WSFederationAuthenticationModule.
I've never tried this, but if you want to use a log-on page within the application itself, I believe the framework should allow you to do the following:
1) Encapsulate your STS code within a library.
2) Reference the library from your application.
3) Create a log-on page within your application. Make sure that this page does not require authentication.
4) Set the issuer property of the wsFederation element within the microsoft.identityModel section of your web.config to the URL of the login page, roughly as sketched below.
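For reference, the relevant web.config fragment might look roughly like this (a sketch: the URLs are placeholders, and a FedUtil-generated section will contain more attributes and elements than shown):

<microsoft.identityModel>
  <service>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://yourapp.example/Login.aspx"
                    realm="https://yourapp.example/" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>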
What you want to do is an active sign-in. WIF includes WSTrustChannel(Factory), which allows you to communicate directly with the STS and obtain a security token. If you want your login form to work this way, you can follow the "WSTrustChannel" sample from the WIF 4.0 SDK. Once you have obtained a token, the following code will take that token, run it through the WIF handlers to create a session token, and set the appropriate cookie:
public void EstablishAuthSession(GenericXmlSecurityToken genericToken)
{
    // Parse and validate the raw token XML using the handlers
    // configured for this application in web.config.
    var handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
    var token = handlers.ReadToken(new XmlTextReader(
        new StringReader(genericToken.TokenXml.OuterXml)));
    var identity = handlers.ValidateToken(token).First();

    // Wrap the validated identity in a session token and write the
    // session cookie, just as passive WS-Federation sign-in would.
    var sessionToken = new SessionSecurityToken(
        ClaimsPrincipal.CreateFromIdentity(identity));
    FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(sessionToken);
}
Once you have done this, your site ought to behave the same as if passive sign-in had occurred.
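For completeness, obtaining genericToken actively might look roughly like the following, based on the WSTrustChannelFactory pattern from the WIF 4.0 SDK sample mentioned above (the binding choice, STS address, and applies-to realm are assumptions to adapt):

// Namespaces assumed: Microsoft.IdentityModel.Protocols.WSTrust,
// Microsoft.IdentityModel.Protocols.WSTrust.Bindings, System.ServiceModel.
var factory = new WSTrustChannelFactory(
    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
    new EndpointAddress("https://sts.example/issue"));
factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "user";     // collected by your own login form
factory.Credentials.UserName.Password = "password";

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    AppliesTo = new EndpointAddress("https://yourapp.example/"), // the relying party realm
    KeyType = KeyTypes.Bearer
};

var channel = factory.CreateChannel();
var genericToken = channel.Issue(rst) as GenericXmlSecurityToken;
EstablishAuthSession(genericToken);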
You could use the FederatedPassiveSignIn Control.
Setting your cookie like this:
FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(sessionToken);
doesn't work for SSO to other domains.
The cookie should be set by the STS, not at the RP.