I want to implement an integration test for my Spring Security Kerberos authentication.
There is KerberosRestTemplate (reference) for this purpose. KerberosRestTemplate has a default constructor whose description says "Leave keyTabLocation and userPrincipal empty if you want to use cached ticket".
For research, I wrote a trivial class:
public static void main(String[] args) {
    KerberosRestTemplate krt = new KerberosRestTemplate();
    String result = krt.getForObject("http://testserver.testad.local:8080/", String.class);
    System.out.println(result);
}
When I run it, the following exception is thrown:
Exception in thread "main" org.springframework.web.client.RestClientException: Error running rest call; nested exception is java.lang.IllegalArgumentException: Null name not allowed
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecute(KerberosRestTemplate.java:196)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:530)
at org.springframework.web.client.RestTemplate.getForObject(RestTemplate.java:237)
at edu.mezlogo.Application.main(Application.java:9)
Caused by: java.lang.IllegalArgumentException: Null name not allowed
at sun.security.krb5.PrincipalName.<init>(Unknown Source)
at sun.security.krb5.PrincipalName.<init>(Unknown Source)
at javax.security.auth.kerberos.KerberosPrincipal.<init>(Unknown Source)
at javax.security.auth.kerberos.KerberosPrincipal.<init>(Unknown Source)
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecute(KerberosRestTemplate.java:182)
... 3 more
My klist output contains the correct cached ticket for my service:
#2> Client: deniz @ TESTAD.LOCAL
Server: HTTP/testserver.testad.local @ TESTAD.LOCAL
KerbTicket Encryption Type: RSADSI RC4-HMAC(NT)
Ticket Flags 0x40a10000 -> forwardable renewable pre_authent name_canonicalize
Start Time: 2/5/2016 6:17:39 (local)
End Time: 2/5/2016 16:16:32 (local)
Renew Time: 2/12/2016 6:16:32 (local)
Session Key Type: RSADSI RC4-HMAC(NT)
And my browser (Firefox) authenticates successfully with Kerberos SSO.
I use Windows Server 2012 as the server and Windows 7 as the client.
How do I use the cached ticket? (And can ktpass generate a client keytab?)
P.S. Sorry for my English.
You are checking the Windows credentials cache, while Java maintains its own separate cache. To view Java's credentials cache, run the klist command from your JRE/bin folder.
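If Java's cache turns out to be empty, an alternative for the test (a minimal sketch, assuming you generate a client keytab for your user principal, e.g. with ktpass; the keytab path below is hypothetical) is to pass the keytab and principal to KerberosRestTemplate explicitly instead of relying on the cached ticket:
import org.springframework.security.kerberos.client.KerberosRestTemplate;

public class Application {
    public static void main(String[] args) {
        // Hypothetical keytab path; generate the keytab for your user principal first.
        KerberosRestTemplate krt = new KerberosRestTemplate(
                "C:/keytabs/deniz.keytab", "deniz@TESTAD.LOCAL");
        String result = krt.getForObject("http://testserver.testad.local:8080/", String.class);
        System.out.println(result);
    }
}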
I have implemented code as specified here to add the multi-tenancy by issuer feature to my Spring Security configuration. However, when my Spring Boot application starts, I encounter the following error:
2021-10-26 | 10:31:37.762 | main | WARN | ConfigServletWebServerApplicationContext | Trace: | Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'springSecurityFilterChain' defined in class path resource [org/springframework/security/config/annotation/web/configuration/WebSecurityConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [javax.servlet.Filter]: Factory method 'springSecurityFilterChain' threw exception; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.springframework.security.oauth2.jwt.JwtDecoder' available
2021-10-26 | 10:31:39.361 | main | ERROR | o.s.b.d.LoggingFailureAnalysisReporter | Trace: |
***************************
APPLICATION FAILED TO START
***************************
Description:
Method springSecurityFilterChain in org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration required a bean of type 'org.springframework.security.oauth2.jwt.JwtDecoder' that could not be found.
Action:
Consider defining a bean of type 'org.springframework.security.oauth2.jwt.JwtDecoder' in your configuration.
The documentation states:
This is nice because the issuer endpoints are loaded lazily. In fact, the corresponding JwtAuthenticationProvider is instantiated only when the first request with the corresponding issuer is sent.
Based on this documentation, I wouldn't expect a JwtDecoder to already be instantiated at application startup. What am I missing in my configuration?
Update
After Steve Riesenberg's help, I now have the following code compiling. In the snippet you can see that what I used to have working (i.e., before we had the multi-tenant requirement) is now commented out:
//.jwt().jwtAuthenticationConverter(jwtAccessTokenConverter);
String[] issuers = new String[] {"https://www.example.com/auth/realms/example"};
JwtIssuerAuthenticationManagerResolver jwtIssuerAuthenticationManagerResolver =
        new JwtIssuerAuthenticationManagerResolver(issuers);
...
    .anyRequest()
    .authenticated()
    .and()
    .oauth2ResourceServer(
        oauth2ResourceServerConfigurer ->
            oauth2ResourceServerConfigurer
                .authenticationManagerResolver(jwtIssuerAuthenticationManagerResolver)
                .authenticationEntryPoint(authenticationExceptionHandler));
// .jwt().jwtAuthenticationConverter(jwtAccessTokenConverter);
However, since removing .jwt() means I can no longer supply my own token converter, I'm still unclear on what the default converter provides me.
Also, I'm not clear on why I need to use the third constructor of JwtIssuerAuthenticationManagerResolver and provide my own AuthenticationManagerResolver<String>. If my code above compiles, why do I need to do this?
The JwtDecoder is required if you've configured the resource server with a JwtAuthenticationProvider (because it requires a specific JwtDecoder). This would happen if, for example, you do:
http
    ...
    .oauth2ResourceServer(oauth2 -> oauth2
        .authenticationManagerResolver(authenticationManagerResolver)
        .jwt(Customizer.withDefaults())
    )
Since the authenticationManagerResolver is an alternative that branches at the AuthenticationManager level, you don't want to configure a JwtAuthenticationProvider yourself; one will be used internally by the JwtIssuerAuthenticationManagerResolver.
Remove .jwt() in that case to prevent the configurer from wiring one up.
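For example, the configuration from the question could look like this with .jwt() removed (a minimal sketch reusing the resolver and entry point variables defined above):
http
    // ...
    .oauth2ResourceServer(oauth2 -> oauth2
        .authenticationManagerResolver(jwtIssuerAuthenticationManagerResolver)
        .authenticationEntryPoint(authenticationExceptionHandler));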
Update
The section in the docs on Dynamic Tenants gives some more info on various customization options.
In your case, without the use of .jwt() you cannot as easily wire in a JwtAuthenticationConverter that can customize the returned granted authorities.
The JwtIssuerAuthenticationManagerResolver is internally using a TrustedIssuerJwtAuthenticationManagerResolver. This is what performs the multi-tenancy capability, by extracting an issuer claim from the JWT, and creating a JwtDecoder + new JwtAuthenticationProvider(jwtDecoder) based on the matched issuer.
In order to customize the JwtAuthenticationProvider, you will have to re-implement this class so you can inject your JwtAuthenticationConverter into each created instance. You will implement AuthenticationManagerResolver<String> to do this. Call it CustomTrustedIssuerJwtAuthenticationManagerResolver (see this line).
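A rough sketch of what such a class could look like (a sketch under these assumptions: the converter is injected through a setter, and one decoder/provider pair is created lazily and cached per trusted issuer; this is an illustration, not Spring Security's internal implementation):
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.core.convert.converter.Converter;
import org.springframework.security.authentication.AbstractAuthenticationToken;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.authentication.AuthenticationManagerResolver;
import org.springframework.security.oauth2.jwt.Jwt;
import org.springframework.security.oauth2.jwt.JwtDecoders;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationProvider;

// Hypothetical re-implementation; everything except the Spring Security types is illustrative.
public class CustomTrustedIssuerJwtAuthenticationManagerResolver
        implements AuthenticationManagerResolver<String> {

    private final Set<String> trustedIssuers;
    private final Map<String, AuthenticationManager> authenticationManagers = new ConcurrentHashMap<>();
    private Converter<Jwt, ? extends AbstractAuthenticationToken> jwtAuthenticationConverter;

    public CustomTrustedIssuerJwtAuthenticationManagerResolver(String... trustedIssuers) {
        this.trustedIssuers = new HashSet<>(Arrays.asList(trustedIssuers));
    }

    public void setJwtAuthenticationConverter(
            Converter<Jwt, ? extends AbstractAuthenticationToken> jwtAuthenticationConverter) {
        this.jwtAuthenticationConverter = jwtAuthenticationConverter;
    }

    @Override
    public AuthenticationManager resolve(String issuer) {
        if (!this.trustedIssuers.contains(issuer)) {
            return null; // untrusted issuer: the outer resolver will reject the token
        }
        // Lazily create one JwtDecoder + JwtAuthenticationProvider per issuer and cache it,
        // injecting the custom converter into each created provider.
        return this.authenticationManagers.computeIfAbsent(issuer, iss -> {
            JwtAuthenticationProvider provider =
                    new JwtAuthenticationProvider(JwtDecoders.fromIssuerLocation(iss));
            if (this.jwtAuthenticationConverter != null) {
                provider.setJwtAuthenticationConverter(this.jwtAuthenticationConverter);
            }
            return provider::authenticate;
        });
    }
}
The setter keeps the constructor signature identical to the usage shown next; injecting the converter through the constructor would work just as well.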
You just need to provide that to the JwtIssuerAuthenticationManagerResolver, like this:
String[] issuers = new String[] {"https://www.example.com/auth/realms/example"};
AuthenticationManagerResolver<String> authenticationManagerResolver =
        new CustomTrustedIssuerJwtAuthenticationManagerResolver(issuers);
JwtIssuerAuthenticationManagerResolver jwtIssuerAuthenticationManagerResolver =
        new JwtIssuerAuthenticationManagerResolver(authenticationManagerResolver);
...
Intro: I'm trying to get Azure Pod Identity to work in our cluster to read secrets from a KeyVault, and am mostly succeeding (so far so good). For the time being, we have two keyvaults, two AzureIdentity resources, two AzureIdentityBinding resources and two Pods, each using its own keyvault.
While testing, both pods are identical; the only differences are their aadpodidbinding and an environment variable indicating which keyvault to use. At startup, the pod connects to the KeyVault, reads two values and prints them with Console.WriteLine. If the connection fails, the pod crashes and k8s restarts it.
The problem: One pod might start up able to read from the keyvault immediately, while the other will crash and restart (rather consistently, it seems) five times before being able to get an access token.
When it fails, the following Exception is thrown:
Unhandled Exception: Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProviderException: Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/******************. Exception Message: Tried the following 3 methods to get an access token, but none of them worked.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/******************. Exception Message: Tried to get token using Managed Service Identity. Access token could not be acquired. MSI ResponseCode: Forbidden, Response: no AzureAssignedIdentity found for pod:default/kv-test-be
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/******************. Exception Message: Tried to get token using Visual Studio. Access token could not be acquired. Environment variable LOCALAPPDATA not set.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/******************. Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. No such file or directory
at Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProvider.GetAuthResultAsyncImpl(String authority, String resource, String scope)
at Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProvider.<get_KeyVaultTokenCallback>b__8_0(String authority, String resource, String scope)
at Microsoft.Azure.KeyVault.KeyVaultCredential.PostAuthenticate(HttpResponseMessage response)
at Microsoft.Azure.KeyVault.KeyVaultCredential.ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at Microsoft.Azure.KeyVault.KeyVaultClient.GetSecretsWithHttpMessagesAsync(String vaultBaseUrl, Nullable`1 maxresults, Dictionary`2 customHeaders, CancellationToken cancellationToken)
at Microsoft.Azure.KeyVault.KeyVaultClientExtensions.GetSecretsAsync(IKeyVaultClient operations, String vaultBaseUrl, Nullable`1 maxresults, CancellationToken cancellationToken)
at Microsoft.Extensions.Configuration.AzureKeyVault.AzureKeyVaultConfigurationProvider.LoadAsync()
at Microsoft.Extensions.Configuration.AzureKeyVault.AzureKeyVaultConfigurationProvider.Load()
at Microsoft.Extensions.Configuration.ConfigurationRoot..ctor(IList`1 providers)
at Microsoft.Extensions.Configuration.ConfigurationBuilder.Build()
at KeyvaultTest.Program.Main(String[] args) in /app/src/Program.cs:line 16
The behaviour is similar when using FlexVolume (which one group of our pods will eventually use in production), but I find it easier to reason about the error with two identical pods.
While waiting for the pod to succeed, I'm seeing both "binding removed" and "binding applied" messages in mic's log.
My questions:
Is this behaviour "as intended" and perhaps documented somewhere?
Is there a setting I can apply to make the "remove - apply" cycle faster?
Is there anything else that can be done to improve the time between pod creation and the identity binding being applied? Is this issue perhaps related to https://github.com/Azure/aad-pod-identity/issues/145
Sourcecode:
Program.cs
using System;
using System.IO;
using System.Threading;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
namespace KeyvaultTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Starting Keyvault read");
            var configuration = new ConfigurationBuilder()
                .AddAzureKeyVault()
                .Build();
            var test1 = configuration.GetValue<string>("jtest");
            Console.WriteLine(test1);
            var test2 = configuration.GetValue<string>("jtest:jtest");
            Console.WriteLine(test2);
            Console.WriteLine("Finished Keyvault read");
        }
    }
}
KeyVaultConfiguration.cs
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using System.Threading;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;
namespace KeyvaultTest
{
    public static class KeyVaultConfiguration
    {
        public static IConfigurationBuilder AddAzureKeyVault(this IConfigurationBuilder builder)
        {
            var builtConfig = builder.Build();
            var keyVaultName = Environment.GetEnvironmentVariable("KV_NAME");
            if (string.IsNullOrWhiteSpace(keyVaultName))
            {
                throw new Exception("KV_NAME is not defined");
            }
            Console.WriteLine($"Using KV_NAME = {keyVaultName}");
            var azureServiceTokenProvider = new AzureServiceTokenProvider();
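            // With no connection string, AzureServiceTokenProvider tries MSI, then Visual Studio,
            // then the Azure CLI (the three methods listed in the exception above); in the pod,
            // only the MSI path via aad-pod-identity can succeed.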
            var keyVaultClient = new KeyVaultClient(
                new KeyVaultClient.AuthenticationCallback(
                    azureServiceTokenProvider.KeyVaultTokenCallback));
            builder.AddAzureKeyVault(
                $"https://{keyVaultName}.vault.azure.net/",
                keyVaultClient,
                new DefaultKeyVaultSecretManager());
            return builder;
        }
    }
}
Any help, hints or ideas are much appreciated.
Note: I've posted this same question to the issue board on the project's GitHub page: https://github.com/Azure/aad-pod-identity/issues/181
We were facing the same issue and overcame it by upgrading AAD Pod Identity: we were on version 1.5, and upgrading to 1.7 resolved it.
Before that, we had also upgraded the packages our applications were using (Microsoft.Azure.Services.AppAuthentication & Azure.Security.KeyVault.Secrets) to the latest versions, but that wasn't enough.
We have an environment as follows:
CPE: 2 Servers
ICN: 2 servers
Application Server: WAS 8.5.5 Base
Both Content Engine and Navigator are configured for high availability using a load balancer. However, if ICN 1 is connected to CPE1 and CPE1 goes down, Navigator is unable to connect to CPE2 even though the CPE load balancer is pointing to CPE2.
The logs are as follows:
javax.naming.NamingException: NMSV0610I: A NamingException is being thrown from a javax.naming.Context implementation. Details follow:
Context implementation: com.ibm.ws.naming.jndicos.CNContextImpl
Context method: lookupExt
Context name: HDOSYS0202Node01Cell/nodes/HDOSYS0202Node01/servers/server1
Target name: FileNet/Engine,10.39.128.66:2809/FileNet/Engine
Other data:
Exception stack trace: javax.naming.NamingException: Error during resolve [Root exception is org.omg.CORBA.TRANSIENT: initial and forwarded IOR inaccessible vmcid: IBM minor code: E07 completed: No]
at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1867)
at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1776)
at com.ibm.ws.naming.jndicos.CNContextImpl.lookupExt(CNContextImpl.java:1433)
at com.ibm.ws.naming.jndicos.CNContextImpl.lookup(CNContextImpl.java:615)
at com.ibm.ws.naming.util.WsnInitCtx.lookup(WsnInitCtx.java:165)
at com.ibm.ws.naming.util.WsnInitCtx.lookup(WsnInitCtx.java:179)
at org.apache.aries.jndi.DelegateContext.lookup(DelegateContext.java:161)
at javax.naming.InitialContext.lookup(InitialContext.java:436)
com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1818)
at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:175)
at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775)
at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1864)
Caused by: org.omg.CORBA.TRANSIENT: initial and forwarded IOR inaccessible vmcid: IBM minor code: E07 completed: No
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:412)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:271)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:258)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:376)
at java.net.Socket.connect(Socket.java:546)
at com.ibm.ws.orbimpl.transport.WSTCPTransportConnection.createSocket(WSTCPTransportConnection.java:370)
at com.ibm.CORBA.transport.TransportConnectionBase.connect(TransportConnectionBase.java:366)
at com.ibm.ws.orbimpl.transport.WSTransport.getConnection(WSTransport.java:437)
at com.ibm.CORBA.transport.TransportBase.getConnection(TransportBase.java:188)
at com.ibm.rmi.iiop.TransportManager.get(TransportManager.java:100)
at com.ibm.rmi.iiop.GIOPImpl.getConnection(GIOPImpl.java:134)
at com.ibm.rmi.iiop.GIOPImpl.createRequest(GIOPImpl.java:178)
at com.ibm.rmi.corba.ClientDelegate._createRequest(ClientDelegate.java:2010)
at com.ibm.rmi.corba.ClientDelegate.createRequest(ClientDelegate.java:1186)
at com.ibm.rmi.corba.ClientDelegate.createRequest(ClientDelegate.java:1272)
Content Platform Engine does not support session replication, which would be required to fail over. Once the connection is established, the client binds to the specific endpoint, so neither corbaloc nor a load-balancing alias will help. If the nodes are not in a cluster, the peers will not be in the JNDI tree and so do not know about each other. What you have is called a "stovepipe" configuration: you can load balance the front end, but each front end will talk to a specific backend, so it is not highly available. You could put the CEs behind a hardware load balancer (SNAT), but it would still lack failover. CPE will run on JBoss but ICN does not, so to be highly available you'll need to deploy to WebSphere ND or WebLogic.
Could you share the URI used to establish the CPE connection?
When Content Platform Engine is made highly available through an application server cluster configuration the Content Platform Engine URI should have the following form (with no carriage returns):
corbaloc::node1_hostname:BOOTSTRAP_ADDRESS,:node2_hostname:BOOTSTRAP_ADDRESS/cell/clusters/your_websphere_cluster_name/FileNet/Engine
Example:
corbaloc::testnode1:9810,:testnode2:9810/cell/clusters/testwascluster/FileNet/Engine
This configuration requires the WebSphere cluster name in addition to the node names as part of the URI. The bootstrap port for a cluster configuration (by default, port 9810) is usually different from the bootstrap port on a non-cluster (standalone) configuration (by default, port 2809).
Only one URI is used regardless of SSL use. WebSphere EJB over SSL is automatically established if EJB security is enabled.
I found a link containing code that solves the issue in my case. The only problem is how to implement this code for Content Navigator.
"This may help. I have recently written an EJB print app which is used by other apps at my company to generate printable documents. I am also using an access bean on the client to remotely call my EJB. The client is a 4 server cluster, and my EJB is a 2 server cluster. I have also experienced problems with the "connection refused" exception if I stop the application server(s) running my EJB when calling without restarting the client. Here is what I've done so far to resolve the issue.
Looking at the access bean, after you create an instance, when you call your remote method (whatever that may be and in my case is renderDocuments() which i will use in my example below) the access bean does the following:"
public DocumentRenderOutputContext renderDocuments(
        DocumentRequestList documentRequestList)
{
    try
    {
        instantiateEJB();
        return ejbRef().renderDocuments(documentRequestList);
    }
    catch (NamingException ne)
    {
        throw new DocumentRenderException(ne);
    }
    catch (CreateException ce)
    {
        throw new DocumentRenderException(ce);
    }
    catch (RemoteException re)
    {
        // THE EXCEPTION THROWN WHEN THE APP SERVER IS
        // BROUGHT DOWN WITHOUT RESTARTING THE CLIENT
        // WILL BE CAUGHT HERE
    }
}
If you bring down your EJB app server(s) without restarting the client, the remote exception above will catch the "connection refused" exception.
So what I do inside the remote exception catch is the following:
try
{
    // see below for methods
    reset();
    return retryRenderDocuments(documentRequestList);
}
catch (NamingException ne)
{
    throw new DocumentRenderException(ne);
}
catch (CreateException ce)
{
    throw new DocumentRenderException(ce);
}
catch (RemoteException remote)
{
    throw new DocumentRenderException(remote);
}

private void reset() throws NamingException
{
    resetHomeCache();
    resetEJBRef();
}

private DocumentRenderOutputContext retryRenderDocuments(
        DocumentRequestList documentRequestList)
    throws RemoteException, NamingException, CreateException, DocumentRenderException
{
    DocumentRenderOutputContext outputContext = null;
    Properties properties = new Properties();
    properties.put(
        javax.naming.Context.PROVIDER_URL,
        getInit_NameServiceURLName()); // I'm assuming you've
    properties.put(
        PROPS.JNDI_CACHE_OBJECT,
        PROPS.JNDI_CACHE_OBJECT_CLEARED);
    InitialContext initialContext = new InitialContext(properties);
    Object object = initialContext.lookup(getInit_JNDIName());
    ECommercePrintHome homeRef = (ECommercePrintHome) object;
    ECommercePrint printEngine = homeRef.create();
    outputContext = printEngine.renderDocuments(documentRequestList);
    return outputContext;
}
Ref: http://www.theserverside.com/discussions/thread.tss?thread_id=31495
My application is in MVC 4 with SQL Anywhere 16 ODBC using Entity Framework, built with Visual Studio 2010. The requirement is multi-tenant, so I create the connection string dynamically in my Global.asax, and once the main database has been connected I create the connection string for the user-specific database in my Account controller.
The application runs well when I run it from Visual Studio, but when I publish it on IIS 8.5 and load the application in the browser, it shows the error below.
<ErrorType>System.Data.EntityException: The underlying provider failed on Open. ---> iAnywhere.Data.SQLAnywhere.SAException: DSN 'MainDB' does not exist
at iAnywhere.Data.SQLAnywhere.SAConnection.Open()
at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
--- End of inner exception stack trace ---
at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
at System.Data.EntityClient.EntityConnection.Open()
at PDMSReporter.Controllers.AccountController.Login(LoginModel Login) in E:\Projects\Triforce_PDM Reporter\Latest_PDMSReporter\PDMSReporter\PDMSReporter\Controllers\AccountController.cs:line 56</ErrorType>
<ErrorDesc>The underlying provider failed on Open.</ErrorDesc>
I tried a lot to fix this issue but didn't find any proper solution.
Please help me fix this issue or suggest a post where I can find a solution.
The error message tells you: "DSN 'MainDB' does not exist". Your connection string uses a DSN that the client cannot find. This could be because you created a user DSN rather than a system DSN; if your client is running as a service (i.e., in IIS), it can't read user DSNs.
If you're creating the DSN using the dbdsn utility, make sure you use the -ws switch instead of -w.
I am currently using TFS 2013 (local installation) to try to build from an internal GitHub Enterprise installation using LDAP authentication.
The problem I am getting is that it cannot access the source code. How can I configure TFS Build to use specific authentication?
From the TFS Build Log
Exception Message: An error was raised by libgit2. Category = Net (Error).
VS30063: You are not authorized to access https://user:password@githubrepository.corp.company.net. (type LibGit2SharpException)
Exception Data Dictionary:
libgit2.code = -1
libgit2.category = 11
Exception Stack Trace:
Server stack trace:
at LibGit2Sharp.Core.Ensure.HandleError(Int32 result)
at LibGit2Sharp.Core.Proxy.git_clone(String url, String workdir, GitCloneOptions opts)
at LibGit2Sharp.Repository.Clone(String sourceUrl, String workdirPath, Boolean bare, Boolean checkout, TransferProgressHandler onTransferProgress, CheckoutProgressHandler onCheckoutProgress, Credentials credentials)
at Microsoft.TeamFoundation.Build.Activities.Git.GitPull.GitClone.GetRepository(String repositoryUrl, String workingFolder)
at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase)
at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData)
at System.Func`3.EndInvoke(IAsyncResult result)
at Microsoft.TeamFoundation.Build.Activities.Git.GitPull.GitRepositoryBase.EndExecute(AsyncCodeActivityContext context, IAsyncResult result)
at System.Activities.AsyncCodeActivity`1.System.Activities.IAsyncCodeActivity.FinishExecution(AsyncCodeActivityContext context, IAsyncResult result)
at System.Activities.AsyncCodeActivity.CompleteAsyncCodeActivityData.CompleteAsyncCodeActivityWorkItem.Execute(ActivityExecutor executor, BookmarkManager bookmarkManager)
Follow up
I have tried the URL params for authentication (example):
https://username:password@domain.com/user/project.git
More Follow up
I completely uninstalled and updated to the 2013 RC; the error message has been updated as well, as it is now different.
I have also tried setting up the build controller to run as an authenticated LDAP user in the GitHub Enterprise installation.
Libgit2 does support URL credentials; however, the TFS build activity for GitPull overrides the default behavior with a Microsoft.TeamFoundation.Build.Activities.Git.TfsSmartSubtransport class for the http and https protocols.
This class unfortunately ignores credentials in the URL and instead tries to retrieve credentials from the registry.
I was able to successfully get a TFS build server to pull source code from a GitLab server using TFS build with the default GitTemplate.12.xaml workflow.
Set up the TFS build's repository URL without any credentials in the URL.
Encrypt your credential's password with the following bit of code. This needs to be run on the build server, as the encryption process is specific to the local machine it's executed on.
using System;
using System.Security.Cryptography;
using System.Text;
// DPAPI-encrypt the password with machine scope (run this on the build server itself)
var password = "your_password";
var bytes = Encoding.Unicode.GetBytes(password);
var bytes2 = ProtectedData.Protect(bytes, null, DataProtectionScope.LocalMachine);
var base64 = Convert.ToBase64String(bytes2);
Add the following registry settings to your build server.
NOTE: The URL in the registry must exactly match the absolute URL of your repository or TFS won't find the credentials.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\TeamFoundationServer\12.0\HostedServiceAccounts\Build\http://githubrepository.corp.company.net]
"Microsoft_TFS_UserName"="<username goes here>"
"Microsoft_TFS_Password"="<bas64 encrypted password goes here>"
"Microsoft_TFS_CredentialsType"="Windows"
The only other alternative to this approach that I could think of is to modify the default workflow and replace the GitPull activity with something else.
I'm not suggesting that this is the best method, but it worked for me.
That's odd. It looks like the HTTP transport should honor url-encoded credentials.
In any case, it might be better and safer to set up the remote to get the credentials from elsewhere. The clone code is a good example of how to do this: here's how to set up the callback, and here's an example of how to generate the credential object.