I have an ASP.NET MVC application. When I start the application I get the following errors:
SignalR: Connection must be started before data can be sent. Call .start() before .send()
No transport could be initialized successfully. Try specifying a different transport or none at all for auto initialization.
I am not using SignalR at all, yet I get this SignalR-related error and I am not sure why.
When I start my application, it loads a Browser Link script (please see below) which has references to SignalR. Is this causing the problem? Please help.
typeof JSON!="undefined"&&(window._vwdJSON=JSON),typeof define!="undefined"&&(window._vwdDefine=define,define=null),typeof window.onbeforeunload!="undefined"&&(window._vwdonbeforeunload=window.onbeforeunload)
;
/* NUGET: BEGIN LICENSE TEXT
*
* Microsoft grants you the right to use these script files for the sole
* purpose of either: (i) interacting through your browser with the Microsoft
* website or online service, subject to the applicable licensing or use
* terms; or (ii) using the files as included with a Microsoft product subject
* to that product's license terms. Microsoft reserves all other rights to the
* files not expressly granted by Microsoft, whether by implication, estoppel
* or otherwise. Insofar as a script file is dual licensed under GPL,
* Microsoft neither took the code under GPL nor distributes it thereunder but
* under the terms set out in this paragraph. All notices and licenses
* below are for informational purposes only.
*
* NUGET: END LICENSE TEXT */
var JSON;JSON||(JSON={}),(function(){"use strict";function i(n){return n<10?"0"+n:n}function f(n){return o.lastIndex=0,o.test(n)?'"'+n.replace(o,function(n){var t=s[n];return typeof t=="string"?t:"\\u"+("0000"+n.charCodeAt(0).toString(16)).slice(-4)})+'"':'"'+n+'"'}function r(i,e){var h,l,c,a,v=n,s,o=e[i];o&&typeof o=="object"&&typeof o.toJSON=="function"&&(o=o.toJSON(i)),typeof t=="function"&&(o=t.call(e,i,o));switch(typeof o){case"string":return f(o);case"number":return isFinite(o)?String(o):"null";case"boolean":case"null":return String(o);case"object":if(!o)return"null";n+=u,s=[];if(Object.prototype.toString.apply(o)==="[object Array]"){for(a=o.length,h=0;h<a;h+=1)s[h]=r(h,o)||"null";return c=s.length===0?"[]":n?"[\n"+n+s.join(",\n"+n)+"\n"+v+"]":"["+s.join(",")+"]",n=v,c}if(t&&typeof t=="object")for(a=t.length,h=0;h<a;h+=1)typeof t[h]=="string"&&(l=t[h],c=r(l,o),c&&s.push(f(l)+(n?": ":":")+c));else for(l in o)Object.prototype.hasOwnProperty.call(o,l)&&(c=r(l,o),c&&s.push(f(l)+(n?": ":":")+c));return c=s.length===0?"{}":n?"{\n"+n+s.join(",\n"+n)+"\n"+v+"}":"{"+s.join(",")+"}",n=v,c}}typeof Date.prototype.toJSON!="function"&&(Date.prototype.toJSON=function(){return isFinite(this.valueOf())?this.getUTCFullYear()+"-"+i(this.getUTCMonth()+1)+"-"+i(this.getUTCDate())+"T"+i(this.getUTCHours())+":"+i(this.getUTCMinutes())+":"+i(this.getUTCSeconds())+"Z":null},String.prototype.toJSON=Number.prototype.toJSON=Boolean.prototype.toJSON=function(){return this.valueOf()});var e=/[\u0000\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g,o=/[\\\"\x00-\x1f\x7f-\x9f\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g,n,u,s={"\b":"\\b","\t":"\\t","\n":"\\n","\f":"\\f","\r":"\\r",'"':'\\"',"\\":"\\\\"},t;typeof JSON.stringify!="function"&&(JSON.stringify=function(i,f,e){var o;n="",u="";if(typeof e=="number")for(o=0;o<e;o+=1)u+=" ";else typeof e=="string"&&(u=e);t=f;if(f&&typeof f!="function"&&(typeof f!="object"||typeof f.length!="number"))throw new Error("JSON.stringify");return r("",{"":i})}),typeof JSON.parse!="function"&&(JSON.parse=function(n,t){function r(n,i){var f,e,u=n[i];if(u&&typeof u=="object")for(f in u)Object.prototype.hasOwnProperty.call(u,f)&&(e=r(u,f),e!==undefined?u[f]=e:delete u[f]);return t.call(n,i,u)}var i;n=String(n),e.lastIndex=0,e.test(n)&&(n=n.replace(e,function(n){return"\\u"+("0000"+n.charCodeAt(0).toString(16)).slice(-4)}));if(/^[\],:{}\s]*$/.test(n.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g,"#").replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g,"]").replace(/(?:^|:|,)(?:\s*\[)+/g,"")))return i=eval("("+n+")"),typeof t=="function"?r({"":i},""):i;throw new SyntaxError("JSON.parse");})})()
;
* ASP.NET SignalR JavaScript Library v2.0.2; Copyright (C) Microsoft Corporation; https://github.com/SignalR/SignalR/blob/master/LICENSE.md
*
* NUGET: END LICENSE TEXT */
/*!
* ASP.NET SignalR JavaScript Library v2.0.2
* http://signalr.net/
*
* Copyright (C) Microsoft Corporation. All rights reserved.
*
*/
You may not be using SignalR yourself, but yes, Browser Link uses SignalR.
According to the documentation:
Browser Link is enabled by default. There are several ways to disable
it: In the Browser Link dropdown menu, uncheck Enable Browser Link.
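If you'd rather switch it off per application instead of in the IDE, the Browser Link documentation also mentions an app setting you can add to web.config; a minimal sketch:
<!-- Disables Browser Link for this application -->
<appSettings>
  <add key="vs:EnableBrowserLink" value="false" />
</appSettings>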
I am trying to integrate the eBay SDK developed by David Sadler, which is on GitHub, but I got stuck at the connection step. I can get an app token with http://localhost/ebay-sdk-examples/oauth-tokens/01-get-app-token.php using my production credentials.
But when I hit http://localhost/ebay-sdk-examples/oauth-tokens/02-get-user-token.php, it gives me this error:
[error] => invalid_grant
[error_description] => the provided authorization grant code is invalid or was issued to another client
All the code is available in the SDK repository. In case you need it, here is the code snippet:
<?php
/**
* Copyright 2017 David T. Sadler
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Include the SDK by using the autoloader from Composer.
*/
require __DIR__ . '/../vendor/autoload.php';
/**
* Include the configuration values.
*
* Ensure that you have edited the configuration.php file
* to include your application keys.
*/
$config = require __DIR__ . '/../configuration.php';
/**
* The namespaces provided by the SDK.
*/
use \DTS\eBaySDK\OAuth\Services;
use \DTS\eBaySDK\OAuth\Types;
/**
* Create the service object.
*/
$service = new Services\OAuthService([
'credentials' => $config['production']['credentials'],
'ruName' => $config['production']['ruName'],
'sandbox' => false,
]);
$token = $config['production']['testToken']; // This is the app token I get with http://localhost/ebay-sdk-examples/oauth-tokens/01-get-app-token.php
/**
* Create the request object.
*/
$request = new Types\GetUserTokenRestRequest();
$request->code = $token;
// $request->code = 'v^1.1#i^1#I^3#r^1#p^3#f^0#t^Ul41XzA6MkIzRjJFRjA1MENDMzZCQjlGMjVERkYyMkMxMTRBM0VfMV8xI0VeMjYw';
/**
* Send the request.
*/
$response = $service->getUserToken($request);
echo '<pre>';
print_r($response);
echo '</pre>';
exit;
/**
* Output the result of calling the service operation.
*/
printf("\nStatus Code: %s\n\n", $response->getStatusCode());
if ($response->getStatusCode() !== 200) {
printf(
"%s: %s\n\n",
$response->error,
$response->error_description
);
} else {
printf(
"%s\n%s\n%s\n%s\n\n",
$response->access_token,
$response->token_type,
$response->expires_in,
$response->refresh_token
);
}
If there is a simple way to connect to eBay with OAuth without using any SDK, that is what I am looking for.
I am also currently working with this SDK.
My understanding is that you should set the URL
http://localhost/ebay-sdk-examples/oauth-tokens/02-get-user-token.php
as "Your auth accepted URL" on developer.ebay.com, under Get a Token from eBay via Your Application.
Where you are setting $token, it should instead be set to $_GET['code']: when you test the sign-in and accept, you are redirected back to the above URL with a ?code=xxxx parameter passed back to it, as shown in the sketch below.
The value you are currently assigning to $token should instead be set as the authToken value within your credentials.
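For illustration, a minimal sketch of that change in 02-get-user-token.php (assuming eBay has redirected the browser back to this page with a ?code=... parameter after the user accepted):
// Use the authorization code eBay passes back on the redirect,
// not the app token from 01-get-app-token.php.
$request = new Types\GetUserTokenRestRequest();
$request->code = $_GET['code'];

$response = $service->getUserToken($request);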
Hope that helps.
eBay token generation can be achieved using our custom PHP/.NET scripts; the eBay API documentation has many references.
https://viewdotnet.wordpress.com/2011/12/20/ebay-token-generation/
The above link includes all the details required to generate an eBay auth token.
I want to build a desktop application and be able to publish product keys or serial numbers. Before the user can use the application, they will be asked to enter the product key/serial number.
This is similar to Microsoft Office, where they provide keys like XXXX-XXXX-XXXX-XXXX.
The idea is to sell the app based on licenses, and providing a product key for every device seems more professional than accounts (usernames and passwords).
So my questions are:
1) Is it possible to accomplish this with Electron?
2) Can you advise whether I should go for serial numbers (if it is doable) or accounts? Or are there better options?
3) If you answered the second question, please state why.
Edit for 2021: I'd like to revise this answer, as it has generated a lot of inquiries on the comparison I made between license keys and user accounts. I previously would almost always recommend utilizing user accounts for licensing Electron apps, but I've since changed my position to be a little more nuanced. For most Electron apps, license keys will do just fine.
Adding license key (synonymous with product key) validation to an Electron app can be pretty straightforward. First, you would want to somehow generate a license key for each user. This can be done using cryptography, or it can be done by generating a 'random' license key string, storing it in a database, and then building a CRUD licensing server that can verify that a given license key is "valid."
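As a quick illustration of the 'random string plus database' approach (a sketch only; the key format mirrors the XXXX-XXXX style from the question and isn't tied to any particular service):
const crypto = require('crypto')

// Generate a random key like 3F9A-C04D-77E1-52BB. You would store this in
// your database so a small licensing server can later confirm that a
// submitted key exists and hasn't been revoked.
function generateLicenseKey () {
  return crypto
    .randomBytes(8)            // 8 random bytes -> 16 hex characters
    .toString('hex')
    .toUpperCase()
    .match(/.{4}/g)            // split into groups of four
    .join('-')
}

console.log(generateLicenseKey())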
For cryptographic license keys, you can take some information from the customer, e.g. their order number or an email address, and create a 'signature' of it using RSA cryptography. Using Node, that would look something like this:
const crypto = require('crypto')
// Generate a new keypair
const { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
// Using a larger key size, such as 2048, would be more secure
// but will result in longer signatures.
modulusLength: 512,
privateKeyEncoding: { type: 'pkcs1', format: 'pem' },
publicKeyEncoding: { type: 'pkcs1', format: 'pem' },
})
// Some data we're going to use to generate a license key from
const data = 'user@store.example'
// Create a RSA signer
const signer = crypto.createSign('rsa-sha256')
signer.update(data)
// Encode the original data
const encoded = Buffer.from(data).toString('base64')
// Generate a signature for the data
const signature = signer.sign(privateKey, 'hex')
// Combine the encoded data and signature to create a license key
const licenseKey = `${encoded}.${signature}`
console.log({ privateKey, publicKey, licenseKey })
Then, to validate the license key within your Electron app, you would want to cryptographically 'verify' the key's authenticity by embedding the public (not the private!) key generated above into your application code base:
// Split the license key's data and the signature
const [encoded, signature] = licenseKey.split('.')
const data = Buffer.from(encoded, 'base64').toString()
// Create an RSA verifier
const verifier = crypto.createVerify('rsa-sha256')
verifier.update(data)
// Verify the signature for the data using the public key
const valid = verifier.verify(publicKey, signature, 'hex')
console.log({ valid, data })
Generating and verifying the authenticity of cryptographically signed license keys like this will work great for a lot of simple licensing needs. They're relatively simple, and they work great offline, but sometimes verifying that a license key is 'valid' isn't enough. Sometimes requirements dictate that license keys are not perpetual (i.e. 'valid' forever), or they call for more complicated licensing systems, such as one where only a limited number of devices (or seats) can use the app at one time. Or perhaps the license key needs a renewable expiration date. That's where a license server can come in.
A license server can help manage a license's activation, expirations, among other things, such as user accounts used to associate multiple licenses or feature-licenses with a single user or team. I don't recommend user accounts unless you have a specific need for them, e.g. you need additional user profile information, or you need to associate multiple licenses with a single user.
But in case you aren't particularly keen on writing and maintaining your own in-house licensing system, or you just don't want to deal with writing your own license key generator like the one above, I’m the founder of a software licensing API called Keygen which can help you get up and running quickly without having to write and host your own license server. :)
Keygen is a typical HTTP JSON API service (i.e. there’s no software that you need to package with your app). It can be used in any programming language and with frameworks like Electron.
In its simplest form, validating a license key with Keygen is as easy as hitting a single JSON API endpoint (feel free to run this in a terminal):
curl -X POST https://api.keygen.sh/v1/accounts/demo/licenses/actions/validate-key \
-d '{
"meta": {
"key": "C1B6DE-39A6E3-DE1529-8559A0-4AF593-V3"
}
}'
I recently put together an example of adding license key validation, as well as device activation and management, to an Electron app. You can check out that repo on GitHub: https://github.com/keygen-sh/example-electron-license-activation.
I hope that answers your question and gives you a few insights. Happy to answer any other questions you have, as I've implemented licensing a few times now for Electron apps. :)
YES, but concerning the software registration mechanism, IT IS HARD and it needs a lot of testing too.
If 90% of your users have internet access, you should definitely go with user accounts or some OAuth 2.0 plug-and-play thing (login with Facebook/Gmail/whatever).
I built a software licensing architecture from scratch using crypto and the fs module, and it was quite a journey (a year)!
Making a good registration mechanism for your software from scratch is not recommended, and Electron makes it harder because the source code is relatively exposed.
That being said, if you really want to go that way, bcrypt is good at this (hashes). You need a unique user identifier to hash, you also need some kind of persistence (preferably a file) where you can store the user license, and you need to hide the salt you are using for hashing, either by hashing the hash... or by storing small bits of it in separate files.
This will make a good starting point for licensing, but it's far from being fully secure.
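As a rough illustration of that idea (a sketch only, assuming the bcryptjs package and a user/device identifier you collect yourself; this is not a hardened scheme):
const bcrypt = require('bcryptjs')
const fs = require('fs')

// Placeholder: a unique identifier for this user or device,
// e.g. an order number or a machine id you derive yourself.
const userId = 'ORDER-12345'

// On activation: hash the identifier and persist the hash as the "license".
fs.writeFileSync('license.dat', bcrypt.hashSync(userId, 10))

// On startup: re-read the file and check it still matches this user/device.
const stored = fs.readFileSync('license.dat', 'utf8')
console.log('license valid:', bcrypt.compareSync(userId, stored))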
Hope it helps !
There are many services out there which help you add license-key-based licensing to your app. And to ensure your customers don't reuse the key, you would need a strong device fingerprinting algorithm.
You can try out Cryptlex. It offers a very robust licensing solution with an advanced device fingerprinting algorithm. You can check out the Node.js example on GitHub to add licensing to your Electron app.
Yes, it is possible.
I myself desired this feature, and I found related solutions such as paid video tutorials, online solutions [with Keygen], and other random hacks, but I wanted something that is offline and free, so I created my own repository for myself/others to use. Here's how it works.
Overview
Install secure-electron-license-keys-cli. (ie. npm i -g secure-electron-license-keys-cli).
Create a license key by running secure-electron-license-keys-cli. This generates a public.key, private.key and license.data.
Keep private.key safe, but stick public.key and license.data in the root of your Electron app.
Install secure-electron-license-keys. (ie. npm i secure-electron-license-keys).
In your main.js file, review this sample code and add the necessary binding.
const {
app,
BrowserWindow,
ipcMain,
} = require("electron");
const SecureElectronLicenseKeys = require("secure-electron-license-keys");
const path = require("path");
const fs = require("fs");
const crypto = require("crypto");
// Keep a global reference of the window object, if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected.
let win;
async function createWindow() {
// Create the browser window.
win = new BrowserWindow({
width: 800,
height: 600,
title: "App title",
webPreferences: {
preload: path.join(
__dirname,
"preload.js"
)
},
});
// Setup bindings for offline license verification
SecureElectronLicenseKeys.mainBindings(ipcMain, win, fs, crypto, {
root: process.cwd(),
version: app.getVersion(),
});
// Load app
win.loadURL("index.html");
// Emitted when the window is closed.
win.on("closed", () => {
// Dereference the window object, usually you would store windows
// in an array if your app supports multi windows, this is the time
// when you should delete the corresponding element.
win = null;
});
}
// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
// Some APIs can only be used after this event occurs.
app.on("ready", createWindow);
// Quit when all windows are closed.
app.on("window-all-closed", () => {
// On macOS it is common for applications and their menu bar
// to stay active until the user quits explicitly with Cmd + Q
if (process.platform !== "darwin") {
app.quit();
} else {
SecureElectronLicenseKeys.clearMainBindings(ipcMain);
}
});
In your preload.js file, review the sample code and add the supporting code.
const {
contextBridge,
ipcRenderer
} = require("electron");
const SecureElectronLicenseKeys = require("secure-electron-license-keys");
// Expose protected methods that allow the renderer process to use
// the ipcRenderer without exposing the entire object
contextBridge.exposeInMainWorld("api", {
licenseKeys: SecureElectronLicenseKeys.preloadBindings(ipcRenderer)
});
Review the sample React component below to see how you can verify the validity of your license and act accordingly within your app.
import React from "react";
import {
validateLicenseRequest,
validateLicenseResponse,
} from "secure-electron-license-keys";
class Component extends React.Component {
constructor(props) {
super(props);
this.checkLicense = this.checkLicense.bind(this);
}
componentWillUnmount() {
window.api.licenseKeys.clearRendererBindings();
}
componentDidMount() {
// Set up binding to listen when the license key is
// validated by the main process
const _ = this;
window.api.licenseKeys.onReceive(validateLicenseResponse, function (data) {
console.log("License response:");
console.log(data);
});
}
// Fire event to check the validity of our license
checkLicense(event) {
window.api.licenseKeys.send(validateLicenseRequest);
}
render() {
return (
<div>
<button onClick={this.checkLicense}>Check license</button>
</div>
);
}
}
export default Component;
You are done!
Further detail
To explain further, the license is validated by a request from the client (ie. front-end) page. The client sends an IPC request to the main (ie. backend) process via this call (window.api.licenseKeys.send(validateLicenseRequest)).
Once this call is received by the backend process (which was hooked up because we set it up with this call (SecureElectronLicenseKeys.mainBindings)), the library code tries to decrypt license.data with public.key. Regardless if this succeeds or not, the success status is sent back to the client page (via IPC).
How to limit license keys by version
What I've explained is quite limited because it doesn't limit the versions of an app you might give to a particular user. secure-electron-license-keys-cli includes flags you may pass when generating the license key to set particular major/minor/patch/expire values for a license.
If you wanted to allow major versions up to 7, you could run the command to generate a license file like so:
secure-electron-license-keys-cli --major "7"
If you wanted to allow major versions up to 7 and expire on 2022-12-31, you could run the command to generate a license file like so:
secure-electron-license-keys-cli --major "7" --expire "2022-12-31"
If you do run these commands, you will need to update your client page in order to compare against them, ie:
window.api.licenseKeys.onReceive(validateLicenseResponse, function (data) {
// If the license key/data is valid
if (data.success) {
if (data.appVersion.major <= data.major &&
new Date() <= Date.parse(data.expire)) {
// User is able to use app
} else {
// License has expired
}
} else {
// License isn't valid
}
});
The repository page has more details of options but this should give you the gist of what you'll have to do.
Limitations
This isn't perfect, but will likely handle 90% of your users. This doesn't protect against:
Someone decompiling your app and making their own license to use/removing license code entirely
Someone copying a license and giving it to another person
There's also the question of how to use this library if you are packaging multiple or automated .exes, since these license files need to be included in the source. I'll leave that up to your creativity to figure out.
Extra resources / disclaimers
I built all of the secure-electron-* repositories mentioned in this question, and I also maintain secure-electron-template which has the setup for license keys already pre-baked into the solution if you need something turn-key.
I'm currently developing an iOS app along with a WordPress site using the bbPress plugin.
I would like to allow any user to easily post links with custom schemes in the forum, like:
myappname://badebidobudy/fdjlkqsfj
I saw that in bbPress an admin can indeed post a link like this:
Da link
and bbPress tells me why:
Your account has the ability to post unrestricted HTML content.
But when an anonymous user posts the same thing, the custom scheme is removed and the resulting HTML is:
Da link
So my question is: how can I configure (or tweak) WordPress to at least accept my URL scheme, or even recognize a raw link with a custom scheme?
After reading the comments on https://developer.wordpress.org/reference/functions/esc_url/
I ended up implementing a small plugin. Here is its PHP code (the protocol I add is "newzik"):
<?php
/**
* Plugin Name: NZK links support
* Plugin URI: http://newzik.com/
* Description: Adds support to newzik:// links
* Version: 1.0
* Author: Pierre Mardon
* Author URI: http://newzik.com/
* License: None
*/
/**
* Extend list of allowed protocols.
*
* @param array $protocols List of default protocols allowed by WordPress.
*
* @return array $protocols Updated list including new protocols.
*/
function wporg_extend_allowed_protocols( $protocols ){
$protocols[] = 'newzik';
return $protocols;
}
add_filter( 'kses_allowed_protocols' , 'wporg_extend_allowed_protocols' );
?>
I'm wading through the creation of a claims-based MVC site in Visual Studio 2013.
Some things I learned so far:
System.Identity is in, Microsoft.Identity is out
Many of the tutorials, including Microsoft's guides for 4.5, are outdated. For example, I don't believe any changes to the project-template-generated .config file are necessary for adding modules/handlers or anything.
There is no Microsoft built-in/add-in STS in Visual Studio 2013 as there was for 2012
Thinktecture's EmbeddedSTS addin is oft-recommended and sounds cool, but *://EmbeddedSTS/ doesn't resolve(?? I don't get it). Also, binary links to their IdentityServer v2- are currently broken(?)
ADFS feature requires Windows Server 2012, a Domain, and self-signed certs - not too hard if you've done it before, but steep learning curve if you haven't.
ADFS requires SSL - Visual Studio 2013/IIS Express 8 easily supports SSL sites, just make sure the port number is in the range :44300-44398
ADFS manager Relying Party interface suggests examples referring to "sts" and "adfs/ls" and stuff which is, I think, misleading. Really they should just point back to your app (https://localhost:44300 for example). Although mine's not working right yet, so that could be related to my mistake.
Once you create a new Visual Studio Web Application project, there is no tooling to change the authentication mechanisms. Just start over with a new project and change the authentication to Organizational Accounts (for on-premises, as in my case). Your STS, such as your ADFS installation, has to be installed and reachable in order to complete this wizard.
Use the hosts file to override DNS for the VM's IP to the expected domain name if you're hacking together a test ADFS DC in a VM because you don't have rights to join a machine to the domain.
"Users are required to provide credentials each time the sign in" is helpful when working through sign-in sign-out problems at first.
I don't think any claims, even identity, are passed if you don't have any Claim Rules.
1) What is wrong such that my app still thinks the user is not authenticated?
I'm to the point where my https://localhost:44300/Default/Index/ action is supposed to display details of User.Identity (I also tried Thread.CurrentPrincipal.Identity) if the user is authenticated. I have a login Action link, generated with:
var signIn = new SignInRequestMessage(new Uri("https://dc.ad.dev.local/adfs/ls/"), "https://localhost:44300");
return new RedirectResult(signIn.WriteQueryString());
Clicking this link indeed takes me to the ADFS login page. Logging in brings me back to my application. Watching the preserved Network activity in the Chrome debugger shows that a RequestSecurityTokenResponse message is posted back to the app, but the app's User.Identity is still not authenticated.
I have one Claim Rule configured: A "Transform an Incoming Claim" from "Windows Account Name" to "Name ID" as a "Transient Identifier". I see the <saml:NameIdentifier Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">DevAD\jdoe</saml:NameIdentifier> represented in the sniffed POST. I've tried a bunch of other Claim Rules and still don't get authenticated.
I don't have any custom code for absorbing the claims. I expect a POST of the token to any app URL to be intercepted and converted into User.Identity auto-magically by the framework, perhaps initiated by this wizard-generated code in Startup.Auth.cs:
app.UseActiveDirectoryFederationServicesBearerAuthentication(
new ActiveDirectoryFederationServicesBearerAuthenticationOptions
{
Audience = ConfigurationManager.AppSettings["ida:Audience"],
MetadataEndpoint = ConfigurationManager.AppSettings["ida:AdfsMetadataEndpoint"]
});
But part of me doubts this expectation. Is it correct? Is there a special known route that MVC WIF creates for accepting such login posts that I should be using besides my default route url?
2) How can I log out successfully?
I also have a logout action:
WSFederationAuthenticationModule.FederatedSignOut(new Uri("https://dc.ad.dev.local/adfs/ls/"), new Uri(Url.Action("Index", null, null, Request.Url.Scheme)));
But on this https://dc.ad.dev.local/adfs/ls?wa=wsignout1.0&wreply=https%3a%2f%2flocalhost%3a44300%2f page, "An error occurred". Event Viewer shows #364: "Encountered error during federation passive request."
Protocol Name:
wsfed
Relying Party:
Exception details:
System.ArgumentException: An item with the same key has already been added.
at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
at Microsoft.IdentityServer.Web.Protocols.WSFederation.WSFederationProtocolHandler.AddSignoutSessionInformation(WSFederationSignOutContextBase context)
at Microsoft.IdentityServer.Web.Protocols.WSFederation.WSFederationProtocolHandler.ProcessSignOut(WSFederationSignOutContext context)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.ProcessProtocolSignoutRequest(ProtocolContext protocolContext, PassiveProtocolHandler protocolHandler)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.ProcessProtocolRequest(ProtocolContext protocolContext, PassiveProtocolHandler protocolHandler)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.OnGetContext(WrappedHttpListenerContext context)
My ADFS Service > Certificates are all set to the same cert and I think are correct.
================
And by the way, the following is what is supposed to be passively posted to the app, right? And, again, it is absorbed automatically?
<t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
<t:Lifetime>
<wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2014-07-28T14:29:47.167Z</wsu:Created>
<wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2014-07-28T15:29:47.167Z</wsu:Expires>
</t:Lifetime>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://localhost:44300/</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<t:RequestedSecurityToken>
<saml:Assertion MajorVersion="1" MinorVersion="1" AssertionID="_e2399a27-acac-4390-aa8a-556f41fec2f2" Issuer="http://dc.ad.dev.local/adfs/services/trust" IssueInstant="2014-07-28T14:29:47.167Z" xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion">
<saml:Conditions NotBefore="2014-07-28T14:29:47.167Z" NotOnOrAfter="2014-07-28T15:29:47.167Z">
<saml:AudienceRestrictionCondition>
<saml:Audience>https://localhost:44300/</saml:Audience>
</saml:AudienceRestrictionCondition>
</saml:Conditions>
<saml:AttributeStatement>
<saml:Subject>
<saml:NameIdentifier Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">DevAD\jdoe</saml:NameIdentifier>
<saml:SubjectConfirmation>
<saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
</saml:SubjectConfirmation>
</saml:Subject>
<saml:Attribute AttributeName="name" AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
<saml:AttributeValue>jdoe</saml:AttributeValue>
</saml:Attribute>
<saml:Attribute AttributeName="givenname" AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
<saml:AttributeValue>John Doe</saml:AttributeValue>
</saml:Attribute>
<saml:Attribute AttributeName="upn" AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
<saml:AttributeValue>jdoe@ad.dev.local</saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>
<saml:AuthenticationStatement AuthenticationMethod="urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport" AuthenticationInstant="2014-07-28T14:29:47.152Z">
<saml:Subject>
<saml:NameIdentifier Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">DevAD\jdoe</saml:NameIdentifier>
<saml:SubjectConfirmation>
<saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
</saml:SubjectConfirmation>
</saml:Subject>
</saml:AuthenticationStatement>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" />
<ds:Reference URI="#_e2399a27-acac-4390-aa8a-556f41fec2f2">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256" />
<ds:DigestValue>+ZDduF0CKxXq7P+diyAXN51mo549pvwo3BNCekWSEpk=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>VMjCbSZXw3YROHYQ1eCYH5D9UQl1tzqZ9Nw99FUK78A8TSLs1ns3G8PE1d1Z1db2KKpbnzExXSXG2elP3Z69OejSWjsywIFTPeGcbGk4BvrV4ZcHGCbYKN0Wg5pySMEqm4LV1E5k+32kuALveLi5fkQROyXudquvVRgYrgu7XBsfr96Uvqo1yWmAzhhpEorfe4Z0p4RurKRpS7IsrI9SkssGOdQV/89NQelIZSZzOEMfay/AxewBbQ8C46g/4NgygaaPsG8X52EFVftzFY0BM8k+aMMUiKrJ0Xo7tJCMxJLcQ3aJdLBRNybHaklFgtln0ZCSlYylglUjUZ5d66jGcg==</ds:SignatureValue>
<KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
<X509Data>
<X509Certificate>MIIC7jCCAdagAwIBAgIQLB+dBr0GI75OvLElC1HZHTANBgkqhkiG9w0BAQsFADAzMTEwLwYDVQQDEyhBREZTIFNpZ25pbmcgLSBkYy5hZC5lbnRlcnByaXNlZGV2LmxvY2FsMB4XDTE0MDcyNDIxMTMxM1oXDTE1MDcyNDIxMTMxM1owMzExMC8GA1UEAxMoQURGUyBTaWduaW5nIC0gZGMuYWQuZW50ZXJwcmlzZWRldi5sb2NhbDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALvkkbfrr5YZWNkfv7LFQlVj3qTcfduRltKfAGiK/NOXNp498uMY+zhUBtiPU1woJhsoqfahgQpy3YJhIPsxbVGSXjAGcFVcUq03V2xVJB6+tW1Ny+/lqiXrdvYAHcZvqpeG/tnh5/hGi/mapd2oMxWIWkyRdztJrN+BCzUi4mm80bYrYX8liuDAcJEk5EYh73gaOwzIyUAZvOXwX1UWh9FA8j2mIMyv3b4SgjFQMPT+Fyw4L1cX+3u/PhGlVRSUEAu+igfMxM1JTco+3HMfQuBZLGd8YXhig+6WbIGlcGUhHEdNSr9ymljJBaps1JfGZk7Rj/7hYeHCXbl3mKK7yFUCAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAU5gYs7BZZBrrm+eDZM5pTaQfnwyeHYWHe9D2UDweWTNjj9KVC2sucUI2K9MXzA3kZOP7UIvnLmHvxG7cnPen095NBIyYYDjzvlImGwq297m6cz0m2ZxkBGMKq9bVSPoVOgDrX0c+e2wFjRgVojd1bWm9fbMvIUWn8KyMQMquXmyJxX5sPxpMfm95yheyh6q67VzgWF9TcUp6jNdmMcRowHWnQ9UVYf1UEMcPUlaljARVQYNQjyHjrcFaRkxF57EkjO3e263KIe0knVNoz8W57prXJLOHOqSe2f4kSKUdU+Xt8XQbJ77xHPdSOoW8NwKZVL7/9TrfVJ6pi1Ob/+LrAA==</X509Certificate>
</X509Data>
</KeyInfo>
</ds:Signature>
</saml:Assertion>
</t:RequestedSecurityToken>
<t:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</t:TokenType>
<t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
<t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
</t:RequestSecurityTokenResponse>
===============
Below are the Claims defined. As suggested by @nzpcmad, the second one now uses "Send LDAP Attributes as Claims" from "Active Directory" to send "SAM-Account-Name" as "Name", "Display Name" as "Given Name", and "User-Principal-Name" as "UPN". And though the application receives the claims in the passive post, User.Identity.IsAuthenticated is still false and the other User.Identity data are blank too.
Pretty much right with your observations.
Just to note:
ADFS runs on Server 2008 R2, 2012 and 2012 R2.
Have a look at Use the On-Premises Organizational Authentication Option (ADFS) With ASP.NET in Visual Studio 2013.
It describes exactly what you are trying to do.
In particular, have a look at the claims.
You'll see it uses "Send LDAP Attribute" rather than the Transform you use.
I came across the same sign-out issue and it seems to occur if you do not have the issuing certificate in the trusted people certificate store.
My JMS client connects to WMQ through JNDI. The initial context factory used is com.ibm.mq.jms.context.WMQInitialContextFactory.
Currently, on the WMQ side, there's a queue manager called TestMgr. Under this queue manager I created two channels. One is PLAIN.CHL, which does not specify an SSL CipherSpec; the other is SSL.CHL, which is configured with the SSL CipherSpec RC4_MD5_US and SSL Authentication set to Optional.
I have created a key store for the queue manager using the IBM Key Management tool. The path of the key database is [wmq_home]\qmgrs\TestMgr\ssl\key.
For channel PLAIN.CHL, I defined a queue connection factory like:
DEF QCF(PlainQCF) QMANAGER(TestMgr) CHANNEL(PLAIN.CHL) HOST(192.168.66.23) PORT(1414) TRANSPORT(client)
And under the SSL channel SSL.CHL, I defined a queue connection factory like:
DEF QCF(SSLQCF) QMANAGER(TestMgr) CHANNEL(SSL.CHL) HOST(192.168.66.23) PORT(1414) TRANSPORT(client) SSLCIPHERSUITE(SSL_RSA_WITH_RC4_128_MD5)
Now I can only create a connection using PlainQCF; the lookup of the SSL queue connection factory fails. My code looks like:
Hashtable environment = new Hashtable();
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.mq.jms.context.WMQInitialContextFactory");
environment.put(Context.PROVIDER_URL, "192.168.66.23:1414/SSL.CHL");
Context ctx = new InitialContext( environment );
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("SSLQCF");
qcf.createConnection();
....
Am I missing some context properties when looking up the SSL connection factory? I also found that the code hangs on the line new InitialContext( environment ) for a long time, almost 5 minutes, and then I get a CC=2;RC=2009;AMQ9208... error.
Any suggestion would be appreciated. Is it true that an SSL channel can't be used for the JNDI connection?
@T.Rob, thanks very much for your reply. But we still want to use WMQInitialContextFactory, so I'm afraid I still need to find a solution for this.
I defined the connection factory just once. The displayed info for the SSL queue connection factory looks like:
InitCtx> DISPLAY QCF(SSLQCF)
ASYNCEXCEPTION(ALL)
CCSID(819)
CHANNEL(SSL.CHL)
CLIENTRECONNECTOPTIONS(ASDEF)
CLIENTRECONNECTTIMEOUT(1800)
COMPHDR(NONE )
COMPMSG(NONE )
CONNECTIONNAMELIST(192.168.66.23(1414))
CONNOPT(STANDARD)
FAILIFQUIESCE(YES)
HOSTNAME(192.168.66.23)
LOCALADDRESS()
MAPNAMESTYLE(STANDARD)
MSGBATCHSZ(10)
MSGRETENTION(YES)
POLLINGINT(5000)
PORT(1414)
PROVIDERVERSION(UNSPECIFIED)
QMANAGER(TestMgr)
RESCANINT(5000)
SENDCHECKCOUNT(0)
SHARECONVALLOWED(YES)
SSLCIPHERSUITE(SSL_RSA_WITH_RC4_128_MD5)
SSLFIPSREQUIRED(NO)
SSLRESETCOUNT(0)
SYNCPOINTALLGETS(NO)
TARGCLIENTMATCHING(YES)
TEMPMODEL(SYSTEM.DEFAULT.MODEL.QUEUE)
TEMPQPREFIX()
TRANSPORT(CLIENT)
USECONNPOOLING(YES)
VERSION(7)
WILDCARDFORMAT(TOPIC_ONLY)
The JNDI provider should be fine because I can look up the plain connection factory successfully. Also, for my client app, I extracted the cert from the key store I created for the MQ server and imported it into the trust store (cacerts) of my JRE with the alias name ibmwebspheremqtestmgr.
You are correct; with the 2009 error there are some log entries:
=================================================================
4/20/2012 20:24:27 - Process(13768.3) User(MUSR_MQADMIN) Program(amqzmur0.exe)
Host(xxxx_host of my MQ) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ6287: WebSphere MQ V7.1.0.0 (p000-L111019).
EXPLANATION:
WebSphere MQ system information:
Host Info :- Windows Server 2003, Build 3790: SP2 (MQ Windows 32-bit)
Installation :- C:\IBM\WebSphereMQ (mqenv)
Version :- 7.1.0.0 (p000-L111019)
ACTION:
None.
-------------------------------------------------------------------------------
4/20/2012 20:24:27 - Process(7348.116) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(xxxx_host of my MQ) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ9639: Remote channel 'SSL.CHL' did not specify a CipherSpec.
EXPLANATION:
Remote channel 'SSL.CHL' did not specify a CipherSpec when the local channel
expected one to be specified.
The remote host is 'xxx_host of my app (192.168.66.25)'.
The channel did not start.
ACTION:
Change the remote channel 'SSL.CHL' on host 'xxx_host of my app (192.168.66.25)' to
specify a CipherSpec so that both ends of the channel have matching
CipherSpecs.
----- amqcccxa.c : 3817 -------------------------------------------------------
4/20/2012 20:24:27 - Process(7348.116) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(my app host) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ9999: Channel 'SSL.CHL' to host 'xxx_host of my app (192.168.66.25)' ended
abnormally.
====================================================================
I am also confused by the error log. My app is staged on a machine different from my MQ host, but the log says to change the remote channel 'SSL.CHL' on host 'xxx_host of my app (192.168.66.25)' to specify a CipherSpec so that both ends of the channel have matching CipherSpecs. How can I change the channel CipherSpec on my app host?
Updates on MQEnvironment (replying to the comments):
The value of MQEnvironment.sslCipherSuite is null, so a NullPointerException is thrown when I put it into the environment hashtable. I instead tried environment.put(MQC.SSL_CIPHER_SUITE_PROPERTY, "SSL_RSA_WITH_RC4_128_MD5"), and it still failed with the 2009 error.
For the JMSAdmin tool, I changed the config to use WMQInitialContextFactory. The configuration (JMSAdmin.config) looks like:
INITIAL_CONTEXT_FACTORY=com.ibm.mq.jms.context.WMQInitialContextFactory
PROVIDER_URL=192.168.66.23:1414/SYSTEM.DEF.SVRCONN
The rest of the configuration is left at its defaults.
Kindly note that here I use the default channel SYSTEM.DEF.SVRCONN so that I can log on to the admin console. If I change the channel to the SSL one, SSL.CHL, I can no longer log on to the admin console; the error is just like the one in my client app.
Another clarification: in my client, the following code connects to the queue manager (TestMgr) successfully through channel SSL.CHL.
MQConnectionFactory factory = new MQConnectionFactory();
factory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
factory.setQueueManager("TestMgr");
factory.setSSLCipherSuite("SSL_RSA_WITH_RC4_128_MD5");
factory.setPort(1414);
factory.setHostName("192.168.66.23");
factory.setChannel("SSL.CHL");
MQConnection connection = (MQConnection) factory.createConnection();
And now the problem is just as you said: the initial context fails to connect to the queue manager through the SSL channel. The option you provided (plain channel for the initial context and SSL channel for the connection factory) works too, but I still want to know how to get the initial context working with the SSL channel. Thanks very much for your patience; your updates will be appreciated.
I never really liked com.ibm.mq.jms.context.WMQInitialContextFactory very much. It stores the managed objects on a queue. So in order to lookup the connectionFactory, which tells JMS how to connect to the QMgr, it is first necessary to connect to the QMgr to make the JNDI call. Therefore, before you can debug the SSL connection, you need to know whether the underlying JNDI provider is working.
If you want to skip the MQ-based JNDI provider and just use the filesystem, see the updated version of Bobby Woolf's article here. If you want to continue with com.ibm.mq.jms.context.WMQInitialContextFactory, read on but be prepared to provide more configuration info.
When you run the JMSAdmin tool, do you display the objects after creating them? For example, here is one of my JMSAdmin.bat scripts:
# Connection Factory for Client mode
# Delete the Connection Factory if it exists
DELETE CF(JMSDEMOCF)
# Define the Connection Factory
DEFINE CF(JMSDEMOCF) +
SYNCPOINTALLGETS(YES) +
SSLCIPHERSUITE(NULL_SHA) +
TRAN(client) +
HOST(127.0.0.1) CHAN(SSL.SVRCONN) PORT(1414) +
QMGR( )
# Display the resulting definition
DISPLAY CF(JMSDEMOCF)
This deletes the object (because JMSAdmin doesn't have a define with replace option) then defines the object, then displays it. Do you in fact see both objects defined? Can you connect and interactively display them both? Can you update your question with the contents displayed?
If so, then what does the JNDI provider configuration look like with each sample program? The 2009 indicates that there is at least a connection to the QMgr being made, so it is important to determine whether the thing suffering the broken connection is your app or the JNDI provider. Diagnosing that requires the config info you are using for the JNDI provider and whether it is the same in the working and failing cases. If not, how do they differ?
Once you know whether it's the app or the JNDI provider that is causing the problem (or switch to another JNDI provider that doesn't require an MQ connection such as the filesystem initial context) then it will be possible to determine the next steps.
The article linked above has samples of code and managed object scripts that use a filesystem JNDI provider. You may notice my scripts pasted in above use the same QMgr name. That's because I wrote that part of the article. When I want to switch to SSL using those same samples, I just update the connectionFactory to point to the SSL channel and it works.
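For reference, if you go the filesystem route, the JMSAdmin.config points at the file-based context factory instead of a queue manager; a minimal sketch (the directory path is a placeholder you would create yourself):
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL=file:/C:/JNDI-Directory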
Here are the other bits from the sample that I've modified:
java -Djavax.net.debug=ssl ^
-Djavax.net.ssl.trustStore=key2.jks ^
-Djavax.net.ssl.keyStore=key2.jks ^
-Djavax.net.ssl.keyStorePassword=???????? ^
-Djavax.net.ssl.trustStorePassword=???????? ^
-cp "%CLASSPATH%" ^
com.ibm.examples.JMSDemo -pub -topic JMSDEMOPubTopic %*
Note: The ^ is Windows version of line continuation.
Then if there are problems, I follow the debugging scenario I described in this SO answer. Note that the app will require a truststore, even if you have SSLCAUTH(OPTIONAL) on your channel. This is because the app must always validate the QMgr's certificate, even if the app does not present its own certificate. In my case I was using SSLCAUTH(REQUIRED) so my app needed both a keystore and a truststore. Your question mentions that the QMgr has a keystore but does not say what you did for the application.
Finally, a 2009 will usually generate an entry in the QMgr error logs. If you continue to get the problem, please update your question with those log entries.
UPDATE:
Responding to the comments: the JMSAdmin tool is part of the WMQ package. However, WMQ comes with jars for the filesystem context and the LDAP context; the WMQInitialContextFactory is optional and is delivered as SupportPac ME01. When using WMQInitialContextFactory with the JMSAdmin tool (or the JMSAdmin GUI, or WMQ Explorer) it is necessary to configure the PROVIDER_URL with the host, port and channel. For example:
PROVIDER_URL: <Hostname>:<port>/<SVRCONN Channel Name>
192.168.66.23:1414/SSL.SVRCONN
So after reviewing your post again, I realized that you did provide the config info for WMQInitialContextFactory. I was looking for a JMSAdmin.config file, but you have it in the environment hash table. And that is where the problem is. You are attempting to use the SSL channel for both the WMQInitialContextFactory and the connection factory, and this is what is causing the lookup to fail. The WMQInitialContextFactory first makes a Java connection to the QMgr in order to look in the queue that holds the administered objects, such as the QCF. In order to do that, it needs to know the ciphersuite that the channel is set up for in order to negotiate the handshake. Right now, the *only* place that ciphersuite is recorded is in the QCF definition.
Try adding the following line:
environment.put(MQEnvironment.sslCipherSuite, "SSL_RSA_WITH_RC4_128_MD5");
As per this Infocenter page, that should tell the context factory classes which ciphersuite to use. Of course, they also need to know where the trust store is (and possibly the keystore, if the channel has SSLCAUTH(REQUIRED) set), so you still need to get those values into the environment. You can use the command-line variables or try loading them into the environment using code; see the sketch below. You'll need both -Djavax.net.ssl.trustStore=key2.jks and -Djavax.net.ssl.trustStorePassword=????????.
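For example, a rough sketch of setting those values in code before building the initial context (the store file name and password here are placeholders for whatever store holds the queue manager's certificate):
// Sketch: set the JSSE trust/key store properties in code rather than via
// -Djavax.net.ssl.* flags on the java command line.
System.setProperty("javax.net.ssl.trustStore", "key2.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
// Only needed if the channel has SSLCAUTH(REQUIRED):
// System.setProperty("javax.net.ssl.keyStore", "key2.jks");
// System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
// ...then build the InitialContext exactly as in the question, with the
// ciphersuite entry added to the environment hashtable as suggested above.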
The other option is to continue to use the plaintext channel for the WMQInitialContextFactory and the SSL channel for the application. If the plaintext channel has an MCAUSER for a non-privileged user ID, it can be restricted to only connect to the QMgr and access the queue that contains the administered objects. With those restrictions, anyone will be able to read the administered objects using that channel but not the application queues or administrative queues.