Key Vault virtual machine extension for Linux - how to delete previous PEM file - azure-keyvault

I have successfully installed the Key Vault virtual machine extension for Linux on Ubuntu 18.04 (Azure VM).
The certificate from Key Vault is imported into the default store /var/lib/waagent/Microsoft.Azure.KeyVault in PEM format.
How do I ensure that after importing a new version of the certificate, only the current one remains in the store and the old (invalid) one is deleted?
This is the current state:
adminmox2@VM2:/var/lib/waagent/Microsoft.Azure.KeyVault$ ls
michalcpqtestwekv1.TestAcme
michalcpqtestwekv1.TestAcme.9c312a9e003b4df8a3a7881b5b149a6c.1651038865.1658814864.PEM
michalcpqtestwekv1.TestAcme.e1d6acf454d6474dab68dfb455e1b048.1650965285.1658741284.PEM
Thank you

If the VM has certificates downloaded by a previous version of the extension (i.e. v1), deleting that v1 extension will NOT delete the downloaded certificates. After installing v2.0, you may need to delete the certificate files or roll over the certificate to get a PEM file with the full chain on the VM.
According to a GitHub issue on certificates, Azure Key Vault will not replace the old certificate as of now. So as a workaround, you can use a custom script to periodically delete the old certs.
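A minimal sketch of such a cleanup script, assuming the filename layout seen in the listing above (`<vault>.<certName>.<thumbprint>.<notBefore-epoch>.<notAfter-epoch>.PEM`) — it keeps only the newest PEM per certificate name and deletes the rest:

```shell
#!/bin/sh
# prune_keyvault_pems <store-dir>
# Assumed filename layout (from the listing above):
#   <vault>.<certName>.<thumbprint>.<notBefore-epoch>.<notAfter-epoch>.PEM
# For each <vault>.<certName> pair, keep the PEM with the newest
# notBefore timestamp and delete the older versions.
prune_keyvault_pems() {
    ( cd "$1" || exit 1
      for prefix in $(ls *.PEM 2>/dev/null | awk -F. '{print $1"."$2}' | sort -u); do
          # Sort this certificate's files by the notBefore epoch (4th
          # dot-separated field), newest first, and remove all but the first.
          ls "$prefix".*.PEM | sort -t. -k4,4nr | tail -n +2 | xargs -r rm -f
      done )
}

# Example: prune_keyvault_pems /var/lib/waagent/Microsoft.Azure.KeyVault
```

You could run this from the Custom Script Extension or a cron job; it deliberately ignores the non-PEM symlink (e.g. michalcpqtestwekv1.TestAcme) that points at the current version.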
References:
Run Custom Script Extension on Linux VMs in Azure - Azure Virtual Machines | Microsoft Docs
Azure Key Vault VM Extension for Linux - Azure Virtual Machines | Microsoft Docs

Related

Add Letsencrypt Certificate to Keycloak Trusted Certificates

We have the following setup:
A Keycloak Server on a VM installed as a docker container.
Server certificate via Lets Encrypt.
Two realms a and b.
Realm b is integrated into Realm a as an identity provider.
To make this work, we had to import the certificate of the Keycloak server into the Java truststore. Now the login works and we can choose in realm a whether we want to log in with realm b. Unfortunately, importing the certificate involves a lot of manual effort (copying the certificate into the container, splitting the chain into several files with only one certificate each, calling a function), and the certificates are only valid for 90 days. Of course we can automate this, but the question is: is there an "official way" of doing this? Like mounting the Let's Encrypt certificate folder into the container and "done"? We are using the official jboss/keycloak container image.
The docker container should support this by setting the X509_CA_BUNDLE variable accordingly. See the docs here.
This creates the truststore for you and configures it in Wildfly. Details can be found in this and that script.
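A minimal sketch of that approach — the domain, container name, and mount paths below are illustrative, not taken from the question:

```shell
# Mount the Let's Encrypt chain read-only into the container and point
# X509_CA_BUNDLE at it; the image's entrypoint builds the truststore
# from this bundle on startup.
docker run -d --name keycloak \
  -v /etc/letsencrypt/live/example.com/fullchain.pem:/etc/x509/ca/fullchain.pem:ro \
  -e X509_CA_BUNDLE=/etc/x509/ca/fullchain.pem \
  jboss/keycloak
```

Since the file is mounted rather than copied, a certbot renewal only requires restarting the container, not re-running the manual import steps.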

neo4j docker image (vps managed with plesk), cannot assign certificates for secure bolt connection with Let's encrypt certificate

I'm trying to run Neo4j Community on a VPS via a Docker image managed with Plesk.
However, I'm having issues configuring the SSL certificate so I can connect to it securely from Node.js.
Currently, the error I'm getting is quite straightforward in node:
Neo4jError: Failed to connect to server.
Please ensure that your database is listening on the correct host and port and that you have
compatible encryption settings both on Neo4j server and driver. Note that the default encryption
setting has changed in Neo4j 4.0. Caused by: Server certificate is not trusted. If you trust the
database you are connecting to, use TRUST_CUSTOM_CA_SIGNED_CERTIFICATES and add the signing
certificate, or the server certificate, to the list of certificates trusted by this driver using
`neo4j.driver(.., { trustedCertificates:['path/to/certificate.crt']}). This is a security measure
to protect against man-in-the-middle attacks. If you are just trying Neo4j out and are not
concerned about encryption, simply disable it using `encrypted="ENCRYPTION_OFF"` in the driver
options. Socket responded with: DEPTH_ZERO_SELF_SIGNED_CERT
I've mapped the volumes as follows:
/certificates to the letsencrypt live folder for the domain db.example.com
Then I'm trying to connect to it via: bolt://db.example.com:32771
When I check via the browser, the certificate being served is self-signed. I have tried adding this certificate to the trusted certificates in Windows, but it didn't do anything at all.
Also added the path to the trusted certificates when instantiating the driver:
this._driver = neo4j.driver(process.env.Neo4jUri, token, {
  encrypted: true,
  trustedCertificates: ['ssl/neo4j.crt'],
});
I've also tried to copy the files within that certificate folder so that the appropriate files are named as mentioned in this article.
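For reference, that renaming usually amounts to copying the Let's Encrypt files into the names the bolt SSL policy points at — the paths below are assumptions for a Neo4j 4.x setup with the certificates volume mapped as described:

```shell
# Copy the Let's Encrypt files into the layout the bolt SSL policy expects
# (the dbms.ssl.policy.bolt.* settings reference these filenames), then make
# them readable by the neo4j user inside the container.
cp /etc/letsencrypt/live/db.example.com/privkey.pem   /certificates/bolt/private.key
cp /etc/letsencrypt/live/db.example.com/fullchain.pem /certificates/bolt/public.crt
chown neo4j:neo4j /certificates/bolt/private.key /certificates/bolt/public.crt
```

With the full chain served, the driver should no longer report DEPTH_ZERO_SELF_SIGNED_CERT for a publicly trusted Let's Encrypt certificate.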

Lync 2013 Server OAuth Certificate Renewal

I am trying to renew the OAuth certificate from one of the front-end servers and I am facing some issues with it.
When using the Lync Server 2013 deployment wizard to request the OAuth certificate from the internal CA, the process goes well, but at the end the current certificate is not updated.
The same certificate should be replicated to the other FEs (which is the default behavior), but it fails to apply there as well. I can see the following event log entries on every FE:
The replication of certificates from the central management store to the local machine failed due to a problem with certificate processing or installation on the local machine Microsoft Lync Server 2013, Replica Replicator Agent will continuously attempt
to retry the replication. While this condition persists, the certificates on the local machine will not be updated.
Exception: System.Security.Cryptography.CryptographicException: Access is denied.
at System.Security.Cryptography.X509Certificates.X509Store.RemoveCertificateFromStore(SafeCertStoreHandle safeCertStoreHandle, SafeCertContextHandle safeCertContext)
at Microsoft.Rtc.Management.Common.Certificates.CertUtils.AddCertificateToStore(X509Certificate2 cert, StoreName storeName, IManagementReporter reporter)
at Microsoft.Rtc.Management.Deployment.Core.Certificate.ImportFromPinnedArray(PinnedByteArray pfx, Boolean allowSelfSigned)
at Microsoft.Rtc.Management.Deployment.Core.Certificate.ReplicateCMSCertificates(IScopeAnchor scope)
at Microsoft.Rtc.Internal.Tools.Bootstrapper.Bootstrapper.ReplicateCMSCertificates().
Cause: The certificate provisioned in the central management store is invalid or cannot be handled on the local machine.
Resolution:
Ensure that certificates provisioned in the central management store are valid, have all needed issuer certificates included or installed on the local machine, and can be used with cryptographic providers available on the local machine.
I have checked the replication status and Replication is true.
Has anyone come across a similar situation?
I have read in another thread that this is caused by the Root CA certificate being present with a private key. I have checked the server and I can see the Root CA with a private key. How can I remove the private key from the Root CA certificate only on the Lync servers?
https://social.technet.microsoft.com/Forums/ie/en-US/47014b21-33d4-4a59-ba52-5cf537d14104/event-id-3039-lync-2013-internal-oauth-certificate?forum=lyncdeploy
Any help will be greatly appreciated.
I had a similar issue. It turned out the CA certificate in the certificate stores of multiple front-end servers had a private key! Wrong on so many levels. I deleted all copies of the CA cert that had a private key, copied them in again without it, and then it all worked.
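A sketch of that fix from an elevated command prompt on each affected FE — the thumbprint is a placeholder you would read off your own store first:

```shell
:: Windows cmd, run elevated. <thumbprint> is a placeholder for the Root CA cert.
certutil -store Root                        :: find the CA cert that shows a private key
certutil -store Root <thumbprint> ca.cer    :: export the certificate only (no key)
certutil -delstore Root <thumbprint>        :: delete the keyed copy
certutil -addstore Root ca.cer              :: re-import the cert without a private key
```

After that, restarting the replica replicator agent (or waiting for the next retry) should let the CMS certificate replication succeed.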

Publisher Public key for docker notary

I am using Docker Notary to establish trust in the images I download from my private Docker registry. I am able to get everything (push, pull) working quite well while running on one single host. However, in a multi-node (server/client) situation, I am wondering how to get the publisher's public key, which would then be used alongside docker engine pulls from a client host. Here the server host runs the registry as well as the Notary server/signer.
Regards
Ashish
Docker Content Trust (powered by Notary) by default performs TOFUS when downloading trust data for an image; the "S" indicates that the trust-on-first-use happens over HTTPS.
If you're using standalone Notary, you can provide a trust-pinning configuration to pin a publisher's TUF root key to a specific public key or CA (though importing certs into Notary repos is WIP and scheduled for the next point release).
I encourage you to check out the relevant Notary client config information and this PR for more information about how to set this up in Notary -- Docker Content Trust integration is WIP.
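As an illustrative sketch — the GUN (repository name) and cert ID below are placeholders — the trust-pinning section of the Notary client config file looks roughly like:

```json
{
  "trust_pinning": {
    "certs": {
      "registry.example.com/admin/demo": ["<cert-id>"]
    },
    "disable_tofu": false
  }
}
```

With a pin in place, the client rejects a root key that does not match, instead of silently trusting whatever it fetches on first use.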
I am also new to Notary and coming up to speed. My understanding of Notary (which is built on TUF) is that it uses TOFU (trust on first use). So what you need is to be able to connect over SSL to the Notary server, which will then download the publisher certs automatically. You trust what you get the first time (hence, TOFU), and after that the publisher certs are used to validate all future verifications, key updates, etc.

How to detect Remote Desktop Session Host using registry?

I need to detect whether Remote Desktop Session Host is installed on Windows Server 2008 using registry data, as part of the prerequisite checker for our product. Earlier it was known as Terminal Services, which could be detected using the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Terminal Server\TSEnabled key, but this "TSEnabled" value is no longer part of the registry.
It seems the TSEnabled key is no longer present in Windows Server 2008. There's not much information about it, except for these:
http://forums.techarena.in/small-business-server/1015603.htm
http://msmvps.com/blogs/bradley/archive/2008/05/20/attaching-a-windows-2008-terminal-server-box-to-a-sbs-2003-server.aspx
Did you try using the fDenyTSConnections value? Does it serve your purpose?
http://technet.microsoft.com/en-us/library/cc722151.aspx
http://technet.microsoft.com/en-us/library/dd184089.aspx
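A sketch of reading that value from a command prompt (1 means Remote Desktop connections are denied, 0 means allowed — note it reflects whether RDP is enabled, not whether the Session Host role is installed):

```shell
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections
```

Your prerequisite checker could parse the REG_DWORD from this output the same way it previously parsed TSEnabled.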
