Error: This is a standby Vault node but can't communicate with the active node via request forwarding. Sign in at the active node to use the Vault UI.

I got this error when trying to log in to Vault.
Does anyone know the reason and how to solve it?
The Vault version is 1.6.2.
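For context, in a Vault HA cluster standby nodes forward client requests to the active node, and this banner appears when that forwarding fails; common culprits are a missing or wrong api_addr/cluster_addr in each node's configuration, or the cluster port (8201 by default) being blocked between nodes. A small diagnostic sketch using the hvac Python client (the address is a placeholder; sys/leader is an unauthenticated endpoint):

    # pip install hvac
    import hvac

    # Point the client at the node showing the banner.
    # The URL is a placeholder for your own node address.
    client = hvac.Client(url="http://127.0.0.1:8200")

    # /sys/leader shows whether this node is a standby and which
    # address it believes the active node is reachable at.
    leader = client.sys.read_leader_status()
    print("ha_enabled:", leader["ha_enabled"])
    print("is_self (active):", leader["is_self"])
    print("leader_address:", leader["leader_address"])

    # If leader_address is empty or unreachable from this node,
    # request forwarding cannot work: check api_addr/cluster_addr in
    # the Vault config and firewall rules for the cluster port.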

Related

Access Azure Key Vault from an Azure Web App where the IP changes often because of CI/CD

I have a Docker container that accesses Azure Key Vault. This works when I run it locally.
I set up an Azure Web App to host my container, and it cannot access the Key Vault:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 51.142.174.224 Caller:
I followed the suggestion from https://www.youtube.com/watch?v=QIXbyInGXd8:
I went to the web app in the portal and set the status to on,
created an access policy,
and then received the same error with a different IP:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 4.234.201.129 Caller:
My web app's IP address changes every time an update is made, so are there any suggestions for how to overcome this?
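Before changing firewall settings, it can help to surface exactly what Key Vault rejected. A minimal sketch using the azure-identity and azure-keyvault-secrets packages (the vault URL and secret name are placeholders); on a firewall block, the 403 message includes the outbound client address that was refused:

    # pip install azure-identity azure-keyvault-secrets
    from azure.core.exceptions import HttpResponseError
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # Vault URL and secret name are placeholders for your own values.
    client = SecretClient(
        vault_url="https://my-vault.vault.azure.net",
        credential=DefaultAzureCredential(),
    )

    try:
        secret = client.get_secret("my-secret")
        print("got secret:", secret.name)
    except HttpResponseError as err:
        # A 403 with "Client address is not authorized" means the request
        # authenticated fine but was blocked by the Key Vault firewall;
        # the message shows the outbound IP that Key Vault saw.
        print(err.status_code, err.message)

App Service outbound traffic comes from a shared pool of addresses that can change over time, which is consistent with the different IPs seen above and is why allow-listing a single IP keeps failing.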
It might depend on your exact use case and what you want to achieve with your tests, but you could consider using a test double instead of the real Azure Key Vault while running your app locally or on CI.
If you are interested, feel free to check out Lowkey Vault.
I found a solution by setting up a virtual network
and then whitelisting it in the Key Vault's network access rules.
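For reference, the same virtual-network rule can also be applied from code. A rough sketch, assuming the azure-mgmt-keyvault management package; every name and resource ID below is a placeholder, and the subnet must have the Microsoft.KeyVault service endpoint enabled:

    # pip install azure-identity azure-mgmt-keyvault
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.keyvault import KeyVaultManagementClient
    from azure.mgmt.keyvault.models import (
        NetworkRuleSet, VaultPatchParameters, VaultPatchProperties,
        VirtualNetworkRule,
    )

    # All names and IDs below are placeholders for your own resources.
    SUBSCRIPTION_ID = "<subscription-id>"
    SUBNET_ID = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
                 "/providers/Microsoft.Network/virtualNetworks/<vnet>"
                 "/subnets/<subnet>")

    client = KeyVaultManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Deny by default, allow the web app's integrated subnet.
    client.vaults.update(
        "<resource-group>",
        "<vault-name>",
        VaultPatchParameters(
            properties=VaultPatchProperties(
                network_acls=NetworkRuleSet(
                    default_action="Deny",
                    bypass="AzureServices",
                    virtual_network_rules=[VirtualNetworkRule(id=SUBNET_ID)],
                )
            )
        ),
    )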

AKS pod fails to access Azure Key Vault

I have a Dockerfile that is used to build a Node project and run the "az login --service-principal" command. This Node project retrieves a secret value from Azure Key Vault.
When I run this Docker image locally, it successfully returns the secret I set in Azure Key Vault. However, after I deploy the same Docker image to AKS, it returns a 403 Forbidden error. Why does this happen?
I understand that this may not be the right method to authenticate to Azure Key Vault, but why does it fail?
A 403 Forbidden error means that the request was authenticated (the service knows the requesting identity) but the identity does not have permission to access the requested resource. There are two common causes:
There is no access policy for the identity.
The IP address of the requesting resource is not approved in the key vault's firewall settings.
Since you are able to access the key vault from your local machine, the error is likely caused by the key vault's firewall settings.
Check your Azure Key Vault networking settings. If you allowed access from selected networks only, make sure to add the AKS VMSS (virtual machine scale set) virtual network to the selected networks.
You should then be able to access Key Vault secrets from your AKS pod.
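To tell the two causes apart from inside the pod, the 403 message itself is usually enough: a firewall block mentions "Client address is not authorized", while a missing access policy mentions the missing permission. A quick sketch that prints the outbound IP the pod presents (api.ipify.org is just one public echo service; any equivalent works):

    # pip install requests
    import requests

    # The address Key Vault sees when the pod egresses to the internet.
    # If the vault's firewall only allows selected networks, this IP or
    # the AKS cluster's virtual network must be on the allowed list.
    print("outbound IP:", requests.get("https://api.ipify.org", timeout=5).text)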

Composer instance freeze, metadata.google.internal authentication error

Our Composer instance dropped all its active workers in the middle of the day. Node memory and CPU utilization metrics disappeared for 2 out of 3 nodes.
First errors were:
_mysql_exceptions.OperationalError: (2006, "Can't connect to MySQL server on 'airflow-sqlproxy-service.default.svc.cluster.local' (110))"
Restarting the Composer instance (with a dummy environment variable) does not help; it gives the error below.
Killing the GKE workers that are in an error state does not help either. Stackdriver has this:
ERROR: (gcloud.container.clusters.describe) You do not currently have an active account selected.)
And another error seems to point to a problem with Google's internal authentication service:
ERROR: (gcloud.container.clusters.get-credentials) There was a problem refreshing your current auth tokens: Unable to find the server at metadata.google.internal)
The Composer storage bucket seems to have 'Storage Legacy Bucket ...' permissions for some service accounts. Are there some changes going on in the authentication backend, or what else could be the underlying cause of this sudden and strange freeze?
Versions are composer-1.8.2 and airflow-1.10.3.
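Both gcloud errors above reduce to token refresh failing because the GCE metadata server is unreachable: it is the metadata server that hands out access tokens for the node's service account, so when it cannot be resolved, every gcloud and Google API call fails this way. A minimal probe you could run from an affected node or worker pod (this is the standard GCE/GKE metadata endpoint, no placeholders):

    # pip install requests
    import requests

    # The GCE/GKE metadata server; the Metadata-Flavor header is required.
    url = ("http://metadata.google.internal/computeMetadata/v1/"
           "instance/service-accounts/default/token")
    try:
        resp = requests.get(url, headers={"Metadata-Flavor": "Google"}, timeout=5)
        print(resp.status_code)
        # A 200 response body contains the service account's access token.
    except requests.exceptions.RequestException as err:
        # "Unable to find the server at metadata.google.internal" maps to
        # a DNS/connection failure here, i.e. kube-dns or node networking.
        print("metadata server unreachable:", err)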

Connect Grafana to Kapua's Elasticsearch

I have Kapua (as a Docker container on my PC) and Kura on a Raspberry Pi.
I managed to connect them, run the example publisher, and correctly receive the data in Kapua.
Now I would like to view the data via Grafana (a Docker container) by linking it to Kapua's Elasticsearch (also a Docker container).
I tried to link them by pointing Grafana at the Elasticsearch address localhost:9200 and entering the Kapua credentials, but it keeps giving a 502 Bad Gateway error.
Could anyone help me?
Thanks in advance.
By default, Elasticsearch in Kapua has no credentials.
The ability to configure them has not been released yet; it was introduced with https://github.com/eclipse/kapua/pull/2685 and will be released in Kapua 1.1.0.
Have you tried without credentials?
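One more thing worth ruling out: if Grafana runs in its own container, localhost:9200 refers to the Grafana container itself, not to Kapua's Elasticsearch container, and that alone can surface as a 502 Bad Gateway. A quick reachability check without credentials ("elasticsearch" is a placeholder for the Elasticsearch container's name on a shared Docker network, or the host's address as seen from Grafana):

    # pip install requests
    import requests

    # "elasticsearch" is a placeholder hostname: the container name on a
    # shared Docker network, or the Docker host's IP from Grafana's point
    # of view. No credentials, matching Kapua's default setup.
    resp = requests.get("http://elasticsearch:9200/_cluster/health", timeout=5)
    print(resp.status_code, resp.json())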

Jenkins JClouds and DigitalOcean provisioning

I'm trying to provision DigitalOcean droplets through the Jenkins JClouds plugin but am having a hard time knowing what to put.
First of all, is this the right endpoint URL for the v2 API?
https://api.digitalocean.com/v2
In DigitalOcean I created an app and was given the identity and secret key, which I provided to Jenkins.
But when connecting I get this error:
Cannot connect to specified cloud, please check the identity and
credentials: status cannot be null connecting to GET
https://api.digitalocean.com/v2/droplets HTTP/1.1
What am I doing wrong here?
You do not need to add an endpoint.
Just add one credential, with the token as the password. That is it.
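Independently of Jenkins, you can check that the token itself is accepted by API v2; DigitalOcean's v2 API authenticates with a bearer token, which matches the plugin's password field. A small sketch (the token value is a placeholder):

    # pip install requests
    import requests

    TOKEN = "your-digitalocean-api-token"  # placeholder

    # The same endpoint the plugin calls; v2 expects a bearer token.
    resp = requests.get(
        "https://api.digitalocean.com/v2/droplets",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    print(resp.status_code)  # 200 means the token is valid
    print(resp.json().get("droplets", []))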
