Access Azure Key Vault from an Azure Web App whose IP changes often because of CI/CD - azure-keyvault

I have a Docker container that accesses Azure Key Vault. This works when I run it locally.
I set up an Azure Web App to host my container, but it cannot access the Key Vault:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 51.142.174.224 Caller:
I followed the suggestion from https://www.youtube.com/watch?v=QIXbyInGXd8 and
went to the Key Vault in the portal and set the firewall status to On,
created an access policy,
and then received the same error with a different IP:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 4.234.201.129 Caller:
My web app's IP address changes every time an update is deployed, so are there any suggestions for how to overcome this?

It might depend on your exact use case and what you want to achieve with your tests, but you could consider using a test double instead of the real Azure Key Vault while running your app locally or on CI.
If you are interested, feel free to check out Lowkey Vault.

I found a solution by setting up a virtual network, integrating the web app with it,
and then whitelisting that network in the Key Vault's firewall rules.
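A minimal sketch of that setup with the Azure CLI, assuming placeholder names (myResourceGroup, myWebApp, myVnet, mySubnet, myKeyVault) that you would replace with your own:

```shell
# Integrate the web app with a subnet so its outbound traffic has a stable network identity
az webapp vnet-integration add --resource-group myResourceGroup --name myWebApp \
  --vnet myVnet --subnet mySubnet

# Enable the Key Vault service endpoint on that subnet
az network vnet subnet update --resource-group myResourceGroup \
  --vnet-name myVnet --name mySubnet --service-endpoints Microsoft.KeyVault

# Allow the subnet through the Key Vault firewall
az keyvault network-rule add --name myKeyVault --resource-group myResourceGroup \
  --vnet-name myVnet --subnet mySubnet
```

With this in place, the Key Vault firewall rule matches the subnet rather than a changing public IP, so redeployments no longer break access.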

Related

AKS Pod fail to access Azure Key Vault

I have a Dockerfile that builds a Node project and runs the "az login --service-principal" command. The Node project retrieves a secret value from Azure Key Vault.
When I run this Docker image locally, it successfully returns the secret I set in Azure Key Vault. However, after I deploy the same image to AKS, it returns a 403 Forbidden error. Why does this happen?
I understand that this may not be the right way to authenticate to Azure Key Vault, but why does it fail?
A 403 Forbidden error means that the request was authenticated (the requesting identity is known) but the identity does not have permission to access the requested resource. There are two common causes:
There is no access policy for the identity.
The IP address of the requesting resource is not allowed in the key vault's firewall settings.
Since you are able to access the key vault from your local machine, the error is caused by the key vault's firewall settings.
Check your Azure Key Vault networking settings. If you allowed access from selected networks only, make sure to add the virtual network of the AKS VM scale set to the selected networks.
You should then be able to access Key Vault secrets from your AKS pod.
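The steps above can be sketched with the Azure CLI; the resource names (myResourceGroup, myAKSCluster, myKeyVault) are placeholders:

```shell
# Find the node resource group that holds the AKS VM scale set
NODE_RG=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query nodeResourceGroup -o tsv)

# Get the subnet ID used by the first VMSS node pool
SUBNET_ID=$(az vmss list --resource-group "$NODE_RG" \
  --query "[0].virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].subnet.id" \
  -o tsv)

# Enable the Key Vault service endpoint on that subnet, then allow it in the vault firewall
az network vnet subnet update --ids "$SUBNET_ID" --service-endpoints Microsoft.KeyVault
az keyvault network-rule add --name myKeyVault --subnet "$SUBNET_ID"
```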

I cannot create a webhook in GitLab to integrate Jenkins

I am preparing the environment in Jenkins to integrate SonarQube and GitLab. With SonarQube I have no problem, but when I try to create a webhook, GitLab does not let me enter a localhost URL.
Can someone help me get access to my URL?
This was reported in gitlab-ce issue 49315, and linked to the documentation "Webhooks and insecure internal web services"
Because Webhook requests are made by the GitLab server itself, these have complete access to everything running on the server (http://localhost:123) or within the server’s local network (http://192.168.1.12:345), even if these services are otherwise protected and inaccessible from the outside world.
If a web service does not require authentication, Webhooks can be used to trigger destructive commands by getting the GitLab server to make POST requests to endpoints like http://localhost:123/some-resource/delete.
To prevent this type of exploitation from happening, starting with GitLab 10.6, all Webhook requests to the current GitLab instance server address and/or in a private network will be forbidden by default.
That means that all requests made to 127.0.0.1, ::1 and 0.0.0.0, as well as IPv4 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 and IPv6 site-local (ffc0::/10) addresses won’t be allowed.
If you really need this:
This behavior can be overridden by enabling the option “Allow requests to the local network from hooks and services” in the “Outbound requests” section inside the Admin area under Settings (/admin/application_settings/network):
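The same setting can also be toggled via the GitLab application settings API; this is a sketch assuming a self-managed instance at gitlab.example.com and an admin personal access token:

```shell
# Requires an administrator token; the setting name matches the
# "Allow requests to the local network from hooks and services" checkbox.
curl --request PUT \
  --header "PRIVATE-TOKEN: <your-admin-token>" \
  "https://gitlab.example.com/api/v4/application_settings?allow_local_requests_from_web_hooks_and_services=true"
```

Note that enabling this reopens the SSRF surface described above, so restrict webhook targets to trusted hosts where possible.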

403 "Error: Forbidden" when opening the URL of my Cloud Run service

I built my container image and then deployed it to Cloud Run using the Cloud Console. However, when I open the endpoint URL of my service, I get a 403 "Error: Forbidden" page.
If you receive a 403 "Error: Forbidden" error message when accessing your Cloud Run service, it means that your client is not authorized to invoke this service. You can address this by taking one of the following actions:
If the service is meant to be invocable by anyone, update its IAM settings to make the service public.
If the service is meant to be invocable only by certain identities, make sure that you invoke it with the proper authorization token.
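For the first case, making the service public can be done with gcloud; the service name and region here are placeholders:

```shell
# Grant unauthenticated access: allUsers may invoke the service.
gcloud run services add-iam-policy-binding my-service \
  --region=us-central1 \
  --member="allUsers" \
  --role="roles/run.invoker"
```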
I used IAP to resolve this issue. If your use case is that only authenticated users must be able to access the application, then use IAP:
Accessing Authenticated Cloud Run applications using IAP

com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException

Hello, I've followed the tutorial https://developers.sap.com/tutorials/s4sdk-odata-service-cloud-foundry.html
step by step, and I'm having issues running the solution on my local machine.
I'm running Windows 10 and, according to the tutorial, I have set an environment variable as follows:
destinations=[{name: "ErpQueryEndpoint", url: "xxxx.s4hana.ondemand.com", username: "INT_USER", password: "xxxxxxxx"}]
When I run the solution on localhost, I get this:
Message Error occured while handling request: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to get destinations of provider service instance: Failed to get access token for destination service. If your application is running on Cloud Foundry, make sure to have a binding to both the destination service and the authorization and trust management (xsuaa) service, AND that you either properly secured your application or have set the "ALLOW_MOCKED_AUTH_HEADER" environment variable to true. Please note that authentication types with user propagation, for example, principal propagation or the OAuth2 SAML Bearer flow, require that you secure your application and will not work when using the "ALLOW_MOCKED_AUTH_HEADER" environment variable. If your application is not running on Cloud Foundry, for example, when deploying to a local container, consider declaring the "destinations" environment variable to configure destinations.
Be sure to set the destinations variable so that it is visible to your application. You can check with System.getenv("destinations"); in your code.
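A POSIX-style sketch of what "visible to your application" means; on Windows 10 you would instead set the variable in the system environment-variables dialog or with `setx`, and restart the terminal or IDE that launches the app. The values are the placeholders from the question:

```shell
# The variable must be exported in the same environment the server is started from,
# otherwise System.getenv("destinations") returns null.
export destinations='[{name: "ErpQueryEndpoint", url: "xxxx.s4hana.ondemand.com", username: "INT_USER", password: "xxxxxxxx"}]'

# Verify it is visible before starting the application:
printenv destinations
```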

Trusting a certificate signed by a local CA from an AspNetCore-Api inside docker

I have the following scenario:
I want to run three services (intranet only) in Windows Docker containers on a Windows host:
an IdentityServer4
an API (which uses the IdSvr for authorization)
a web client (which uses the API as data layer and the IdSvr for authorization)
All three services run on ASP.NET Core 2.1 (with microsoft/dotnet:2.1-aspnetcore-runtime as base) and use certificates signed by a local CA.
The problem I'm facing is that I cannot get the API or the web client to trust these certificates.
E.g., if I call the API, the authentication middleware tries to call the IdSvr but gets an error on GET '~/.well-known/openid-configuration' because of an untrusted SSL certificate.
Is there any way to get the services to trust every certificate issued by the local CA? I've already tried this way, but either I'm doing it wrong or it just doesn't work.
IMHO a Docker container must have its own certificate store, otherwise no trusted HTTPS connection would be possible. So my idea is to get the root certificate from the Docker host's certificate store (which trusts the CA) into the container, but I don't know how to achieve this.
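The idea of baking the root certificate into the image can be sketched as a Dockerfile fragment. This is an assumption-laden sketch: it presumes a Windows variant of the base image with PowerShell available, and a file local-ca.crt exported from the host's trusted store (the filename and path are hypothetical):

```dockerfile
# escape=`
FROM microsoft/dotnet:2.1-aspnetcore-runtime
SHELL ["powershell", "-Command"]

# Copy the local CA's root certificate into the image and import it into the
# machine-wide Root store so outbound HTTPS calls trust certificates it signed.
COPY local-ca.crt C:\certs\local-ca.crt
RUN Import-Certificate -FilePath C:\certs\local-ca.crt -CertStoreLocation Cert:\LocalMachine\Root
```

Exporting the root certificate from the host can be done once (e.g. via certmgr.msc) and the .crt file checked into the build context, so every rebuilt container trusts the local CA without per-container manual steps.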
