Jenkins API Requests To crumbIssuer With HTTP Header Authentication Fail - jenkins

We have a Jenkins instance set up behind an Nginx HTTPS reverse proxy. User authentication is done via PKI user client certificates.
I want to make RESTful API requests against Jenkins to do things like kicking off builds. I have got this working with application tokens and can kick off builds, with and without parameters, using curl and specifying my client certificate and key. All good so far.
I was interested in using crumbs instead of application tokens, as nothing needs to be set up beforehand (i.e. no application token to create). However, all the examples I could find use basic-auth-style credentials in the URI, and trying to use PKI authentication as above results in "Error 403 No valid crumb was included in the request". Also bear in mind that, being PKI based, users don't have passwords on the system; they are authenticated by their certificates.
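For reference, here is roughly what I'm attempting, translated from my curl commands into a Python sketch (the Jenkins host, certificate paths and job name are placeholders):

```
import requests

JENKINS = "https://jenkins.example.com"                  # placeholder host
CERT = ("/path/to/client.crt", "/path/to/client.key")    # PKI client certificate and key

session = requests.Session()
session.cert = CERT  # the only credential is the certificate presented in the TLS handshake

# Ask the crumbIssuer for a crumb.
resp = session.get(f"{JENKINS}/crumbIssuer/api/json")
resp.raise_for_status()
crumb = resp.json()

# Send the crumb back as a header on the state-changing request.
headers = {crumb["crumbRequestField"]: crumb["crumb"]}
build = session.post(f"{JENKINS}/job/my-job/build", headers=headers)
print(build.status_code)
```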
Would I be right in assuming that the crumb approach can't be used with PKI authentication?

Related

Do OpenID Connect id_token and access_token need to be validated if they are obtained by web server?

I am wondering whether, in the OpenID Connect Auth Code Flow, there is still a need to validate the access_token and id_token, given that they are obtained by my web server rather than by the browser (i.e. over the back channel rather than the front channel)?
By "auth code flow" I am referring to the flow where the browser only receives an "authorization code" from the authorization server (i.e. no access_token, no id_token) and sends the auth code to my web server. My web server can therefore talk directly to the authorization server, present the auth code, and exchange it for the access_token and id_token. It looks like I can then simply decode the access_token and id_token to get the information I want (mainly just the user id etc.).
My understanding of the need to validate the access_token is that, because the access_token is not encrypted, there is a chance my web server could receive a forged token if it were transmitted over an insecure channel. Validating the token essentially verifies that it has not been modified.
But what if the access_token is never transmitted over an insecure channel? In the auth code flow, the web server retrieves the access_token directly from the auth server, and the token is never sent to a browser. Do I still need to validate it? What are the potential risks if I skip the validation in such a flow?
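For concreteness, here is roughly what my server does today (a Python sketch; the token endpoint, client credentials and redirect URI are placeholders, and note that I decode the id_token without any verification, which is exactly the step I'm asking about):

```
import requests
import jwt  # PyJWT

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"        # placeholder

def handle_callback(auth_code: str) -> dict:
    # Back-channel exchange: the browser only ever saw the authorization code.
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": "https://myapp.example.com/callback",  # placeholder
        "client_id": "my-client-id",                           # placeholder
        "client_secret": "my-client-secret",                   # placeholder
    })
    tokens = resp.json()

    # The questionable part: decode only, no signature or claims validation.
    claims = jwt.decode(tokens["id_token"], options={"verify_signature": False})
    return {"user_id": claims.get("sub")}
```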
You should always validate tokens and apply the well-known validation patterns; otherwise you open up your architecture to various vulnerabilities. For example, there is the man-in-the-middle issue if an attacker intercepts your "private" communication with the token service.
Also, most libraries will do the validation automatically for you, so it is not much extra work.
When you develop identity systems you should follow the best current practices, because that is what the users of the system expect from you.
As a client, you use HTTPS to get the public key from the IdentityServer, so you know you got it from the right server. To add additional security layers, you could also use client-side HTTPS certificates, so that the IdentityServer only issues tokens to clients that authenticate with a certificate.
In this way, a man-in-the-middle attack is pretty much impossible. However, in backend data centers HTTPS is sometimes not used everywhere internally; often TLS is terminated at a proxy. You can read more about that here.
When you receive an ID token, you typically create the user session from it. And yes, since the token is sent over the more secure back channel, you could feel fairly safe with that. But many attacks today occur on the inside, and simply to do a good job according to best practices, you should validate tokens anyway: both the signature and the claims inside the token (expiry, issuer, audience, ...).
For the access token, it is important that the API receiving it validates it according to best practices, because anyone can send requests with tokens to it.
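To illustrate what that validation typically involves, here is a rough sketch using Python and the PyJWT library (the issuer, audience and JWKS URL are placeholders; your identity provider's discovery metadata gives you the real values):

```
import jwt
from jwt import PyJWKClient

ISSUER = "https://idp.example.com"                   # placeholder issuer
AUDIENCE = "my-api"                                  # placeholder audience
JWKS_URL = f"{ISSUER}/.well-known/jwks.json"         # placeholder JWKS endpoint

def validate_token(token: str) -> dict:
    # Fetch the signing key that matches the token's 'kid' header over HTTPS.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)

    # decode() verifies the signature plus the standard claims
    # (exp/expiry, iss/issuer, aud/audience) in one call.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=ISSUER,
        audience=AUDIENCE,
    )
```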
Also, it's fun to learn :-)
PS: this video is a good starting point.

How to use curl/postman to access web page behind Azure AD Application Proxy

How can I use cURL or Postman to read a web page from behind Azure's AD Application Proxy?
I am trying to better understand how OAuth works in order to create some automation scripts that will need to access a server that we have behind an Application Proxy. Currently I am using a web browser and must sign in to my Microsoft account in order to view a web page hosted by the server. This works fine. Seeing as I am able to accomplish this without difficulty using a web browser, it seems like I should also be able to accomplish the same using cURL or Postman.
The app that we have registered is registered as a confidential client (Microsoft's "Web App"). The public client option is disabled. It uses the Implicit Grant type to return an ID Token (The Access Token checkbox is not checked, only the ID Token checkbox). I don't have the ability to create a new client secret nor the ability to enable a public client type.
I have tried several of the different OAuth flows, but they all seem to require a client secret (because we are using the confidential client type), which I do not have access to. How am I able to read the web page through the browser despite not knowing any client secrets? How can I do the same via cURL or Postman?
After being redirected to Microsoft's login page and logging in, Azure saves an access cookie in the browser. You can copy this access cookie and include it as part of a request in Postman. This will allow the request from Postman (or curl or whatever) to get to the service behind the Azure AD Application Proxy.
This is enough for my particular use case, but it does come with the problem of having to teach users how to find and copy the access cookie. It would be nice to find a more user-friendly way of getting the access cookie (or something similar).
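For the scripted side, the idea is simply to replay the cookie the browser received after the interactive sign-in. A Python sketch (the application URL is a placeholder, and the exact cookie name and value must be copied from your own browser session, e.g. via the dev tools):

```
import requests

APP_URL = "https://myapp.example.com/"   # page behind the Application Proxy (placeholder)

# Copied from the browser after signing in interactively. The Application Proxy
# access cookie is typically named along the lines of AzureAppProxyAccessCookie_<id>;
# use whatever name and value your own session shows.
cookies = {"AzureAppProxyAccessCookie_<id>": "<value copied from the browser>"}

resp = requests.get(APP_URL, cookies=cookies)
print(resp.status_code)
print(resp.text[:200])
```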

how to validate WSO2 oauth2 access token on Resource Server

I am looking for the fitting ends to our SSO puzzle.
Currently we have OpenLDAP behind the WSO2 Identity Provider. A client (Service Provider) redirects authorization to the IdP (OAuth2) and receives an access_token.
All fine.
The next step is to validate this token on another Service Provider, in this case a reverse proxy (Apache or Nginx) residing on another EC2 instance, which protects a number of unprotected 3rd-party applications (3rd party in the sense that we can't touch the source code, but do the hosting ourselves). Which tools serve this need?
I am aware that the OAuth2 spec leaves a hiatus here and that there is a draft which adds a /introspect call to validate this token. I also know that pingidentity implements this draft as part of their Apache module (https://github.com/pingidentity/mod_auth_openidc).
I am just wondering how to implement this on the WSO2-IS side, as I can't find any documentation.
*Bonus: we also hit several errors while deploying WSO2 (SQL errors) and using it (https://wso2.org/jira/browse/IDENTITY-3009), which made us a bit distrustful of the product.
OAuth2 token validation may be performed with a SOAP call to
{WSO2_IS}/services/OAuth2TokenValidationService.OAuth2TokenValidationServiceHttpsSoap11Endpoint/
The response will include details regarding token validity and JWT claims.
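A rough sketch of such a call from Python is below. Note that this admin service normally requires basic auth with an admin account, and the envelope's element names and namespaces are only a best guess; confirm them against the service's WSDL (append ?wsdl to the endpoint) for your WSO2 IS version.

```
import requests

WSO2_IS = "https://wso2-is.example.com:9443"   # placeholder host
ENDPOINT = (WSO2_IS + "/services/OAuth2TokenValidationService"
            ".OAuth2TokenValidationServiceHttpsSoap11Endpoint/")
AUTH = ("admin", "admin")                      # placeholder admin credentials

# Approximate SOAP envelope -- verify element names/namespaces against the WSDL.
envelope = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://org.apache.axis2/xsd"
                  xmlns:dto="http://dto.oauth2.identity.carbon.wso2.org/xsd">
  <soapenv:Body>
    <ser:validate>
      <ser:validationReqDTO>
        <dto:accessToken>
          <dto:identifier>THE_ACCESS_TOKEN</dto:identifier>
          <dto:tokenType>bearer</dto:tokenType>
        </dto:accessToken>
      </ser:validationReqDTO>
    </ser:validate>
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(
    ENDPOINT,
    data=envelope,
    auth=AUTH,
    headers={"Content-Type": "text/xml", "SOAPAction": "urn:validate"},
    verify=False,  # WSO2 IS ships with a self-signed certificate by default
)
print(resp.status_code)
print(resp.text)
```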

OAuth - How is it secure?

I am writing some code to fetch Twitter and Instagram feeds. Before I write any code, I want to get a good understanding of OAuth, because I have a nagging feeling that it is not all that secure and that in many cases, for instance when accessing public tweets, it is an unnecessary hassle. I started reading the OAuth 2 specification to get a better understanding, and I am still in the middle of it, but I already have a host of questions.
Let's use Twitter as an example.
A user accesses your site. You redirect them to Twitter for authentication and to obtain the authorization_grant code.
I understand this part is secure because the user authentication and the redirect to your website happen over SSL. Is it enough for Twitter to support SSL, or does your site also have to support SSL for the redirect to be secure? You wouldn't want the authorization code to be transferred insecurely, right?
Now that you have your authorization_grant code, your site sends a request to Twitter to obtain an access token. When making this request, your site sends the authorization_grant code, your client id and your client secret. Again, I guess the communication is secure because this happens over SSL. But what if the site has included the client id and secret somewhere in its HTML or JavaScript, especially if it is a static site with no server-side code?
Should the redirect url always be handled by server side code and the server side code should make the request for access token without ever going through HTML or Javascript?
Once you have the access token, you will include it in your request to obtain the user's tweets, to post tweets on their behalf etc. Again if the site in question were to include the access token inside its HTML or JavaScript along with the client id and secret, that would be pretty insecure, right?
It seems all the security of OAuth stems from SSL and the client's ability to keep their client secret secret. Am I right in this conclusion?
Another thing - in the first step of the process, when the client redirects the user to Twitter to authenticate and obtain the authorization_grant code, they could send in their client id and secret and get the access token directly instead of making a second request for it. I think this is what they mean by the implicit method in the specification.
So, why was this extra step of sending a second request to obtain an access token added in the specification? Does it increase security?
I am not sure about the Twitter API; I am answering with respect to the Stack Exchange API:
https://api.stackexchange.com/docs/authentication
Again I guess the communication is secure because this happens over SSL. But what if the site has included the client id and secret somewhere in its HTML or JavaScript, especially if it is a static site with no server-side code?
The client_secret is sent only in the explicit flow. The explicit flow should be used by server-side applications, and care should be taken to keep the client_secret safe.
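To make the server-side (explicit) flow concrete, here is a rough Python sketch; the endpoints, credentials and redirect URI are placeholders rather than the real Twitter/Stack Exchange values:

```
import requests

TOKEN_URL = "https://provider.example.com/oauth/access_token"    # placeholder
API_URL = "https://provider.example.com/feed"                    # placeholder

def exchange_code(auth_code: str) -> str:
    # Runs only on your server -- the client_secret never appears in HTML or JavaScript.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": "my-client-id",                              # placeholder
        "client_secret": "my-client-secret",                      # placeholder, keep server-side
        "redirect_uri": "https://mysite.example.com/callback",    # placeholder
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_feed(access_token: str) -> dict:
    # The access token also stays on the server and is sent as a bearer credential.
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {access_token}"})
    return resp.json()
```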
So, why was this extra step of sending a second request to obtain an access token added in the specification?
Well, the implicit flow is less secure than the explicit flow, since the access token is sent to the user agent. But in the implicit flow the token has an expiry and will expire unless you have specified the scope no_expiry. Also, the server-side (explicit) flow can be used only by apps that are registered.
It seems all the security of OAuth stems from SSL and the client's ability to keep their client secret secret. Am I right in this conclusion?
Again, the client_secret is only involved in the server-side flow. But yes, the client should take care that the access_token is not given out.
Check out this link. It gives an example of a possible vulnerability in OAuth.

Client authentication on Public Client

Studying OAuth2.0 I finally found these 2 refs:
RFC6749 section 2.3,
RFC6749 section 10.1
Correct me if I'm wrong:
It's possible to use unregistered clients, but you have to manage them yourself with security risks.
How should I manage them?
Some more specific questions:
A Native Application (indeed a Public Client) is not able, by definition, to safely store its credentials (client_id + secret). Is it an unregistered client? If I can't verify/authenticate it using a secret, what else should I do?
client registration ≠ endpoint registration: the first is about registering Client Credentials (client_id + secret); the second about registering Client Redirection Endpoints. Is the Redirection Endpoint registration sufficient to grant the authenticity of the Client?
Does the Client Credentials Grant use the same credentials (client_id + secret) issued at client registration?
I think you could answer me by explaining what does this paragraph (RFC6749 section 10.1) mean.
Please give me some references and practical examples on how to implement the interaction between the authorization server and the resource server.
Thanks
tl;dr:
Native clients cannot be authenticated with client_id and client_secret. If you need to authenticate the client, you'll have to implement an authentication scheme that doesn't entrust the shared secret to the client (or involve the end-user in the client authentication discussion). Depending on your application's security model, you might not need to authenticate the client.
The redirection endpoint is not generally sufficient to authenticate the client (though exceptions exist).
The "client credential" grant type may use any client authentication mechanism supported by the authorization server, including the credentials given out at client registration.
The gist, as I read it, is that you can trust a confidential client's client_id (read: "username") and client_secret (read: "password") to authenticate them with your service. There is no[1] chance that a third-party application will ever represent itself with that client's credentials, because they are reasonably assumed to be stored safely away from prying eyes.
A public client, however, can make no such guarantee – whether a browser-based application or a native desktop application, the client's id and secret are distributed to the world at large. It's quite reasonable to assume that such applications will end up in the hands of skilled developers and hackers who can dig into the client and extract the id and secret. For this reason, Section 10.1 explicitly states that:
The authorization server MUST NOT issue client passwords or other client credentials to native application or user-agent-based application clients for the purpose of client authentication.
Okay. So public clients cannot be authenticated by password. However…
The authorization server MAY issue a client password or other credentials for a specific installation of a native application client on a specific device.
This exception works because it ties the authentication of the client to a specific device, meaning that even if someone walked away with the client's secret, they couldn't reuse it. Implicit in this exception, however, is that the "specific installation … on a specific device" must be uniquely identifiable, difficult to spoof, and integral to the authentication process for that client.
Not every native application can meet those criteria, and a browser-based application certainly cannot, since there's nothing uniquely identifiable or difficult to spoof in the environment in which it runs. This leads to a couple of options – you can treat the client as unauthenticated, or you can come up with a more appropriate authentication mechanism.
The key to the authentication dance is a shared secret – something that's known only to the authorization server and the authenticating client. For public clients, nothing in the client itself is secret. Thankfully, there are options, and I'm not just talking about RFID key fobs and biometrics (though those would be completely acceptable).
As a thought experiment, let's consider a browser-based client. We can reasonably assume a few things about it: it's running in a browser, it's served from a particular domain, and that domain is controlled by the client's authors. The authentication server should already have a Client Redirection URI, so we've got something there, but as the spec calls out:
A valid redirection URI is not sufficient to verify the client's identity when asking for resource owner authorization but can be used to prevent delivering credentials to a counterfeit client after obtaining resource owner authorization.
So the redirection URI is something we should check, but isn't something we can trust, in large part because the domain could be spoofed. But the server itself can't be, so we could try to ask the domain something that only the client's domain's server would know. The simplest way to do this would be for the authentication server to require a second ("private") URI, on the same domain as the client, where the client's secret will be hosted. When the client application makes an authorization request, the server then "checks in" against that second URI relative to the client's reported hostname, and looks for the shared secret (which should only ever be disclosed to the authorization server's IP address) to authenticate the client.
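A minimal sketch of that check-in, seen from the authorization server's side, might look like the following (Python; the hostname, path and secret handling are entirely illustrative and would be pinned down at client registration time):

```
import secrets
import requests

def authenticate_public_client(registered_host: str, expected_secret: str) -> bool:
    # The "private" URI on the client's own domain, agreed at registration time.
    private_uri = f"https://{registered_host}/.oauth-client-secret"   # illustrative path

    # The client's server should only serve this to the authorization server's IP.
    resp = requests.get(private_uri, timeout=5)
    if resp.status_code != 200:
        return False

    # Constant-time comparison so the check itself doesn't leak the secret.
    return secrets.compare_digest(resp.text.strip(), expected_secret)
```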
Of course, this is not a perfect solution. It doesn't work for every application, it's easy to get wrong, and it's potentially a lot of work to implement. Many potential authentication mechanisms (both highly specific and highly general) exist, and any one which does not entrust the client application with private data would be suitable for this problem space.
The other option we have is to implement no further authentication, and treat the client as unauthenticated. This is notably not the same thing as an unregistered client, but the difference is subtle. An unregistered client is a client whose identity is unknown. An unauthenticated client is a client whose identity is known, but untrusted. The security implication for both types of clients is the same: neither should be entrusted with private data. Whether the authorization server chooses to treat these two cases the same, however, seems to be left up to the implementer. It may make sense, for example, for an API to refuse all connections from an unregistered client, and to serve public read-only content to any registered client (even without verifying the client's identity).
Pragmatism, however, may yet win out – an unauthenticated client is fundamentally no different than the SSL "errors" you'll occasionally see when your browser cannot verify the authenticity of the site's SSL certificate. Browsers will immediately refuse to proceed any further and report exactly why, but users are allowed to accept the risk themselves by vouching for the identity of the server. A similar workflow may make sense for many OAuth2 applications.
Why is it important to verify the client's identity, anyway? Without doing so, the chain of trust is broken. Your application's users trust your application. The authorization workflow establishes that your users also trust the client, so your application should trust the client. Without validating client identity, another client can come along and assume the role of the trusted client, with all of the security rights thereof. Everything about client authentication serves to prevent that breach of trust.
Hope this helped!
[1]: Server compromises, where the source code of your application falls into malicious hands, are an exception to this, and there are other safeguards built-in for that case. Having said that, the spec also specifically calls out that a simple username/password combination isn't the safest option:
The authorization server is encouraged to consider stronger authentication means than a client password.
