I'm trying to set up TLS mutual authentication between a client and an IBM MQ queue manager (using the ibmcom/mq Docker image). The certificates are self-signed and were created according to this article. As stated in the docs, it should be possible to bake the server's private key and both certificates into the image. My Dockerfile looks like this:
FROM ibmcom/mq
USER mqm
COPY --chown=mqm:mqm 20-config.mqsc /etc/mqm/ # creation of additional queues, no problems here
COPY --chown=mqm:mqm keys_mq1/key.key /etc/mqm/pki/keys/mykey/
COPY --chown=mqm:mqm keys_mq1/key.crt /etc/mqm/pki/keys/mykey/
COPY --chown=mqm:mqm keys_client/client.crt /etc/mqm/pki/trust/0/
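(For reference, roughly how such self-signed keys and certificates can be generated with OpenSSL — a sketch, not the article's exact commands; the subject names are placeholders:)
# server keypair (key.key / key.crt in the Dockerfile above)
openssl req -newkey rsa:2048 -nodes -x509 -days 365 -subj "/CN=mq1" -keyout key.key -out key.crt
# client keypair (client.crt goes into the trust directory)
openssl req -newkey rsa:2048 -nodes -x509 -days 365 -subj "/CN=app" -keyout client.key -out client.crt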
The files can be found in the running container:
/etc/mqm/pki/keys/mykey
drwxr-xr-x 1 mqm mqm 4096 Feb 16 11:18 .
drwxr-xr-x 1 mqm mqm 4096 Feb 16 11:18 ..
-rwxr-xr-x 1 mqm mqm 1253 Feb 16 10:54 key.crt
-rwxr-xr-x 1 mqm mqm 1704 Feb 16 10:53 key.key
/etc/mqm/pki/trust/0
drwxr-xr-x 2 mqm mqm 4096 Feb 16 13:34 .
drwxr-xr-x 3 mqm mqm 4096 Feb 16 13:34 ..
-rwxr-xr-x 1 mqm mqm 1054 Feb 16 13:29 client.crt
One thing to notice: according to the docs, the channel details should now show the entry CERTLABL(mykey). In my case, it's just CERTLABL( ). However, I'm not sure whether that's the problem here; server authentication without client authentication seems to be working (see below).
DISPLAY CHANNEL(DEV.APP.SVRCONN)
1 : DISPLAY CHANNEL(DEV.APP.SVRCONN)
AMQ8414I: Display Channel details.
CHANNEL(DEV.APP.SVRCONN) CHLTYPE(SVRCONN)
ALTDATE(2020-02-16) ALTTIME(13.34.47)
CERTLABL( ) COMPHDR(NONE)
COMPMSG(NONE) DESCR( )
DISCINT(0) HBINT(300)
KAINT(AUTO) MAXINST(999999999)
MAXINSTC(999999999) MAXMSGL(4194304)
MCAUSER(app) MONCHL(QMGR)
RCVDATA( ) RCVEXIT( )
SCYDATA( ) SCYEXIT( )
SENDDATA( ) SENDEXIT( )
SHARECNV(10) SSLCAUTH(OPTIONAL)
SSLCIPH(ANY_TLS12) SSLPEER( )
TRPTYPE(TCP)
On the client side, I created two Java keystores (JKS), one with the server's certificate (truststore) and one with the client's keypair.
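(Roughly how such keystores can be built with the JDK keytool and OpenSSL — a sketch; aliases and passwords are placeholders:)
# truststore: import the server's certificate
keytool -importcert -alias mq1 -file key.crt -keystore truststore.jks -storepass changeit -noprompt
# client keystore: wrap the client keypair into PKCS#12, then convert to JKS
openssl pkcs12 -export -in client.crt -inkey client.key -name client -out client.p12 -passout pass:changeit
keytool -importkeystore -srckeystore client.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore keystore.jks -deststorepass changeit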
My connection attempts were as follows:
Connecting to the default queue manager QM1 using the provided app user (no password) and the DEV.APP.SVRCONN channel. The client application is an existing tool that works perfectly with the existing MQ infrastructure; I just exchanged the keystores and connection details.
Client exception: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
MQ log:
AMQ5534E: User ID 'app' authentication failed
AMQ5542I: The failed authentication check was caused by the queue manager CONNAUTH CHCKCLNT(REQDADM) configuration.
Connecting using the provided admin user and DEV.ADMIN.SVRCONN channel via IBM MQ Explorer (in this scenario, I switched to admin because app has insufficient rights to be used with MQ Explorer, regardless of the authentication method). I checked the "no password" option, since I want to authenticate with the client's certificate.
MQ Explorer error message:
Access not permitted. You are not authorized to perform this operation. (AMQ4036)
Explanation: The queue manager security mechanism has indicated that the userid associated with this request is not authorized to access the object.
MQ log:
AMQ5540E: Application 'MQ Explorer 8.0.0' did not supply a user ID and password
AMQ5541I: The failed authentication check was caused by the queue manager CONNAUTH CHCKCLNT(REQDADM) configuration.
AMQ9557E: Queue Manager User ID initialization failed for 'admin'.
Same as 2., but omitting the client's keystore and providing the password instead. This works. The idea here was to verify that at least the server's certificate is configured correctly (on the other hand, I'm not sure whether MQ Explorer enforces the check of the server's certificate against the truststore in the first place).
What am I missing?
Edit: my actual goal is to use mutual authentication for the app user and the DEV.APP.SVRCONN channel.
CHANNEL attribute CERTLABL
This attribute does not need to be set unless you need to present a different certificate over this SVRCONN channel than over all the other channels on the queue manager. If you do not have this requirement, leave the CHANNEL attribute CERTLABL blank and just use the queue-manager-wide certificate. This either follows the default pattern of a certificate named ibmWebSphereMQ<qm-name> or uses the certificate label that you set using the following MQSC command:
ALTER QMGR CERTLABL(my-certificate-label)
Connection Authentication (MQ built-in Password checking)
A brand new queue manager created at V8 or above will have the Connection Authentication feature enabled, which means the queue manager will check any passwords you provide and, more importantly in your scenario, will demand that any privileged user ID supplies one. The error message you report in connection attempt 1:
AMQ5542I: The failed authentication check was caused by the queue manager CONNAUTH CHCKCLNT(REQDADM) configuration.
and connection attempt 2/3:
AMQ5540E: Application 'MQ Explorer 8.0.0' did not supply a user ID and password
AMQ5541I: The failed authentication check was caused by the queue manager CONNAUTH CHCKCLNT(REQDADM) configuration.
... are telling you that connection authentication requires a password for your user ID, which it considers privileged (i.e. a member of the mqm group or similar), and that none was supplied.
If you do not require password checking for any remotely connecting privileged user ID, then you can turn this off with the following commands on the queue manager:
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) CHCKCLNT(OPTIONAL)
REFRESH SECURITY TYPE(CONNAUTH)
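You can check the current settings with standard MQSC display commands:
DISPLAY QMGR CONNAUTH
DISPLAY AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) CHCKCLNT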
Mutually Authenticated SSL/TLS
To ensure mutually authenticated SSL/TLS, you will eventually need the CHANNEL attribute SSLCAUTH set to REQUIRED. The easiest way to get there is to start with it set to OPTIONAL, get to the point where the client is authenticating the queue manager's certificate, then get it sending its own, and finally set SSLCAUTH(REQUIRED) to ensure the connection only works if the client continues to do so.
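Once the client reliably sends its certificate, that last step is just standard MQSC (run inside runmqsc on the queue manager):
ALTER CHANNEL(DEV.APP.SVRCONN) CHLTYPE(SVRCONN) SSLCAUTH(REQUIRED)
REFRESH SECURITY TYPE(SSL)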
You will need to ensure that you have set SSLCIPH on both ends of the channel. You don't mention that in your question, but the instructions you reference use SSLCIPH(ANY_TLS12) so I assume you have done the same.
If you successfully make a connection and are not sure whether the client has sent a certificate to the queue manager, use the following MQSC command:
DISPLAY CHSTATUS(DEV.ADMIN.SVRCONN) SSLPEER SSLCERTI
to see the subject's DN and issuer's DN of the certificate sent by the client. If these are blank, the client did not send a certificate.
Related
I was using the Coral board with my login credentials before, but SSH didn't seem to work, so I removed the keys from the Coral in order to generate new ones, and now it's not letting me into the board. I'm a noob at this; if you answer this, please be specific. It's for my college project. How do I change the access keys in the directory?
Waiting for a device...
Connecting to green-horse at 192.168.101.2
Key not present on green-horse -- pushing
Couldn't connect to keymaster on green-horse: [Errno 111] Connection refused.
Did you previously connect from a different machine? If so,
mdt-keymaster will not be running as it only accepts a single key.
You will need to either:
1) Remove the key from /home/mendel/.ssh/authorized_keys on the
device via the serial console
- or -
2) Copy the mdt private key from your home directory on this host
in ~/.config/mdt/keys/mdt.key to the first machine and use
'mdt pushkey mdt.key' to add that key to the device's
authorized_keys file.
Failed to push via keymaster -- will attempt password login as a fallback.
Can't login using default credentials: Bad authentication type; allowed types: ['publickey']
ssh should also work (it is what I use), but you'll need to generate a key on your host machine and then put it in ~/.ssh/authorized_keys on the board. There can be multiple keys in that file, and the mdt key needs to be one of them.
To recover mdt access, you can check here: https://coral.ai/docs/dev-board/mdt/#recover-mdt-access
To ssh into the board, generate your own ssh key:
ssh-keygen
and your new public key will be in ~/.ssh/id_rsa.pub; you can put that key on the board in order to ssh in.
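A minimal sketch of the whole recovery, assuming you can reach the board over the serial console (mendel is the default user):
# on your host: generate a keypair and print the public half
ssh-keygen
cat ~/.ssh/id_rsa.pub
# on the board, via the serial console (password SSH is disabled):
mkdir -p ~/.ssh
echo "ssh-rsa AAAA... you@yourhost" >> ~/.ssh/authorized_keys   # paste your actual id_rsa.pub line here
chmod 600 ~/.ssh/authorized_keys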
I want to force our office users to enter their LDAP credentials when connecting to the WiFi in our office. So I installed FreeRadius as instructed at:
Using FreeIPA and FreeRadius.
Using radtest, I can successfully authenticate against our FreeIPA server using PAP. Moving on, I configured a WiFi connection on my Windows 10 laptop to use EAP-TTLS as the authentication method, along with PAP as the non-EAP method. Again, I can successfully authenticate against our FreeIPA server when connecting to the WiFi AP. But I realize that is not safe, since passwords are sent in clear text.
So next I configured a WiFi connection on my Windows 10 laptop to use PEAP as the authentication method with EAP method of EAP-MSCHAP v2. But now authentication fails. An excerpt from the FreeRadius debug log shows:
(8) mschap: WARNING: No Cleartext-Password configured. Cannot create NT-Password
(8) mschap: WARNING: No Cleartext-Password configured. Cannot create LM-Password
(8) mschap: Creating challenge hash with username: test55
(8) mschap: Client is using MS-CHAPv2
(8) mschap: ERROR: FAILED: No NT/LM-Password. Cannot perform authentication
(8) mschap: ERROR: MS-CHAP2-Response is incorrect
I’m struggling to figure out a solution. I have found various configurations of eap, mschap & ldap files online but so far I have not solved my issue.
I’m not sure if I’m asking the right question but is the password hash sent by the Windows client incompatible with the password hash used by FreeIPA?
It turns out MS-CHAPv2 is a challenge-response protocol, and that does not work with an LDAP bind in the basic configuration of FreeRadius.
However, I did find a solution where FreeRadius looks up a user by their LDAP DN and then reads (rather than binds with) the user's NT hash. From there, FreeRADIUS is able to process the challenge response.
First, permissions have to be given to the service accounts:
https://fy.blackhats.net.au/blog/html/2015/07/06/FreeIPA:_Giving_permissions_to_service_accounts..html
After performing these steps users will need to change their password in order to generate an ipaNTHash.
Then configure FreeRadius to use mschapv2 with FreeIPA:
https://fy.blackhats.net.au/blog/html/2016/01/13/FreeRADIUS:_Using_mschapv2_with_freeipa.html
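The core of the change from those posts sits in the FreeRadius ldap module (a sketch, assuming FreeRADIUS 3.x and mods-available/ldap; server details omitted):
ldap {
    # server, identity (the service account), password, base_dn, etc. go here
    update {
        # read the user's ipaNTHash attribute into NT-Password so the
        # mschap module can verify the MS-CHAPv2 challenge response
        control:NT-Password := 'ipaNTHash'
    }
}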
After completing all the steps described in both links, the following radtest CLI command should return an Access-Accept response:
radtest -t mschap <ldap-user-uid> <ldap-user-password> 127.0.0.1:1812 0 <FreeRadius-secret>
I've suddenly got this message after a month of docker trust working fine for me via GitLab CI.
I have a GitLab Runner that mounts ~/.docker/trust (so it's persisted) and pushes the image to our QA registry.
tag_image_test:
stage: tag_image
script:
- docker login -u "gitlab-ci-token" -p "$CI_BUILD_TOKEN" $CI_REGISTRY
- docker pull "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"
- export DOCKER_CONTENT_TRUST=1
- export DOCKER_CONTENT_TRUST_SERVER=$QA_REGISTRY_SIGNER
- export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE=$QA_REGISTRY_SIGNER_ROOT_PASSPHRASE
- export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=$QA_REGISTRY_SIGNER_REPO_PASSPHRASE
- docker login -u "$QA_REGISTRY_USERNAME" -p "$QA_REGISTRY_PASSWORD" $QA_REGISTRY_URL
- export PROJ_PATH=$(echo -en $CI_PROJECT_PATH | tr '[:upper:]' '[:lower:]')
- docker tag "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}" "${QA_REGISTRY_IMAGE}/${PROJ_PATH}:${CI_COMMIT_REF_SLUG}"
- docker push "${QA_REGISTRY_IMAGE}/${PROJ_PATH}:${CI_COMMIT_REF_SLUG}"
However, the push command ends with:
time="2019-03-18T11:51:14Z" level=debug msg="failed to verify TUF data for: qa.registry.local/mygroup/myimage, valid signatures did not meet threshold for "
time="2019-03-18T11:51:14Z" level=debug msg="downloaded 1.root is invalid: could not rotate trust to a new trusted root: failed to validate data with current trusted certificates"
time="2019-03-18T11:51:14Z" level=debug msg="Client Update (Root): could not rotate trust to a new trusted root: failed to validate data with current trusted certificates"
could not rotate trust to a new trusted root: failed to validate data with current trusted certificates
When I look at the root.json file, the expiry is still a long way off:
"expires":"2029-02-08T15:07:05.172338131Z"
Same for targets.json:
"expires":"2022-02-10T15:07:05.173954376Z"
So I'm at a loss for what is going on and probably don't understand what it is trying to do. Does anyone have any insight?
I'm still learning Docker, but are you sure it is root.json that it is looking in, and not roots.json?
Based on the configuration here, it should be looking in roots.json for the trusted certs.
Maybe you are pushing to the wrong file to identify your roots, or you could just have a typo in your post.
In any case, this is helpful:
https://github.com/cirocosta/docker-cli/blob/master/vendor/github.com/theupdateframework/notary/trustpinning/certs.go
How those errors are generated can be seen there, with comments explaining why they occur.
For example, regarding your key rotation error:
// ErrRootRotationFail is returned when we fail to do a full root key rotation
// by either failing to add the new root certificate, or delete the old ones
It's only a locally corrupted state, right? You should be able to fix it with a notary remove server.example.com/test1.
The fix I want to get in for this is lazy initialization where one no longer has to explicitly call notary init. As part of lazy initialization, we would always query the server for existing data before assuming it needs to be created locally.
A shorter-term fix may be to check the server or, if network connectivity isn't available, the local cache for existing data. At the moment, I believe init assumes the repo doesn't exist and overwrites any existing cache.
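If it really is just corrupted local state, a sketch of the reset (the repository name follows the log output above; the trust directory may differ on your runner):
# wipe the locally cached TUF metadata for this repository; the next
# signed push/pull fetches fresh trust data from the notary server
rm -rf ~/.docker/trust/tuf/qa.registry.local/mygroup/myimage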
Also, please make sure DNS is configured and a host entry has been made in the hosts file.
For the purposes of UCP Signing Policy, configured via the “Content Trust” section of the Admin Settings, it’s necessary that we can identify the image was signed by a member of the UCP organization. We do that by making use of client bundles that you can download for your user account from UCP. Client Bundles contain a “cert.pem” file which is an x509 certificate signed by the UCP Certificate Authority, and a “key.pem” file which is the private key matched with the certificate.
You need to create the “targets/releases” delegation and one other delegation, e.g. “targets/my_user” and add the “cert.pem” as the public signing key to both. When another service then inspects the trust data, they can determine that the certificate belongs to a member of the UCP organization and their signatures should be trusted. You then need to import the key.pem so it is available for signing when you push.
The documentation provides more information and specific commands to run, specifically the “Initialize a Repo” section.
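A sketch of the commands involved, assuming a repository name of registry.example.com/org/repo and the cert.pem/key.pem from your client bundle:
# add the certificate from the client bundle to both delegation roles and publish
notary delegation add registry.example.com/org/repo targets/releases cert.pem --all-paths
notary delegation add registry.example.com/org/repo targets/my_user cert.pem --all-paths
notary publish registry.example.com/org/repo
# import the matching private key so it is available for signing on docker push
docker trust key load key.pem --name my_user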
I have the following scenario:
I want to run three services (intranet only) in Windows Docker containers on a Windows host:
an IdentityServer4
an API (which uses the IdSvr for authorization)
a web client (which uses the API as its data layer and the IdSvr for authorization)
All three services are running on ASP.NET Core 2.1 (with microsoft/dotnet:2.1-aspnetcore-runtime as the base image) and using certificates signed by a local CA.
The problem I'm facing now is that I cannot get the API or the web client to trust these certificates.
E.g. if I call the API, the authentication middleware tries to call the IdSvr but gets an error on GET '~/.well-known/openid-configuration' because of an untrusted SSL certificate.
Is there any way to get the services to trust every certificate issued by the local CA? I've already tried this way, but either I'm doing it wrong or it just doesn't work out.
IMHO a Docker container must have its own certificate store; otherwise no trusted HTTPS connection would be possible. So my idea is to get the root certificate from the Docker host's certificate store (which trusts the CA) into the container, but I don't know how to achieve this.
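One way that should work for Windows containers is to copy the CA's root certificate into the image and import it into the container's own store at build time (a sketch; it assumes the base image has PowerShell with the PKI module, which Nano Server-based images may lack, and myrootca.crt is a placeholder file name):
FROM microsoft/dotnet:2.1-aspnetcore-runtime
# the local CA's root certificate, placed next to the Dockerfile
COPY myrootca.crt C:/myrootca.crt
# import it into the container's LocalMachine\Root store so the CA is trusted
RUN powershell -Command "Import-Certificate -FilePath C:\myrootca.crt -CertStoreLocation Cert:\LocalMachine\Root"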
I am trying to renew the OAuth certificate on one of the Front-End Servers and I am facing some issues with it.
When using the Lync Server 2013 Deployment Wizard to request the OAuth certificate from the internal CA, the process goes well, but at the end the current certificate is not updated.
I can see the certificate is replicated to the other FEs (which is the default behavior), but it fails to apply on them as well. I can see the following event logs on every FE:
The replication of certificates from the central management store to the local machine failed due to a problem with certificate processing or installation on the local machine Microsoft Lync Server 2013, Replica Replicator Agent will continuously attempt
to retry the replication. While this condition persists, the certificates on the local machine will not be updated.
Exception: System.Security.Cryptography.CryptographicException: Access is denied.
at System.Security.Cryptography.X509Certificates.X509Store.RemoveCertificateFromStore(SafeCertStoreHandle safeCertStoreHandle, SafeCertContextHandle safeCertContext)
at Microsoft.Rtc.Management.Common.Certificates.CertUtils.AddCertificateToStore(X509Certificate2 cert, StoreName storeName, IManagementReporter reporter)
at Microsoft.Rtc.Management.Deployment.Core.Certificate.ImportFromPinnedArray(PinnedByteArray pfx, Boolean allowSelfSigned)
at Microsoft.Rtc.Management.Deployment.Core.Certificate.ReplicateCMSCertificates(IScopeAnchor scope)
at Microsoft.Rtc.Internal.Tools.Bootstrapper.Bootstrapper.ReplicateCMSCertificates().
Cause: The certificate provisioned in the central management store is invalid or cannot be handled on the local machine.
Resolution:
Ensure that certificates provisioned in the central management store are valid, have all needed issuer certificates included or installed on the local machine, and can be used with cryptographic providers available on the local machine.
I have checked the replication status and Replication is true.
Has anyone come across a similar situation?
I have read in another thread that this is due to the Root CA certificate having a private key. I have checked the server and I can see the Root CA with a private key. How can I remove the private key from the Root CA certificate only on the Lync servers?
https://social.technet.microsoft.com/Forums/ie/en-US/47014b21-33d4-4a59-ba52-5cf537d14104/event-id-3039-lync-2013-internal-oauth-certificate?forum=lyncdeploy
Any help will be greatly appreciated.
I had a similar issue. It turned out the CA certificate in multiple Front-End servers' certificate stores had a private key! Wrong on so many levels. I deleted all copies of the CA cert with a private key, copied it again without, and then it all worked.
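A sketch of that cleanup with certutil (run elevated on each affected Front-End server; "My Root CA" and the file path are placeholders):
rem list the Root store and locate the CA certificate
certutil -store Root
rem delete the copy that carries the private key
certutil -delstore Root "My Root CA"
rem re-import the public certificate only
certutil -addstore Root C:\certs\myrootca.cer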