Access to Azure Key Vault inside Azure Container Instance - azure-keyvault

I have a machine learning model deployed on an Azure Container Instance and I need to access Key Vault from it. When I use the line below
credential = DefaultAzureCredential()
it can't authenticate, so I cannot reach my secrets.
How can I reach Key Vault from inside an Azure Container Instance?

There are some restrictions on using a managed identity (MSI) in an Azure Container Instance while the feature is in preview:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity
Since DefaultAzureCredential() isn't working, you should first test whether you can get a token from the MSI endpoint using a plain HTTP call. This curl command should give you an idea of how to do that:
token=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true | jq -r '.access_token')
Once you have the token, you can call the Key Vault REST API directly to get the secret you require.
https://learn.microsoft.com/en-us/rest/api/keyvault/
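As a rough sketch of that follow-up call (my-vault and my-secret are placeholder names, and api-version 7.4 is one of the current Key Vault API versions), the secret can be fetched with the bearer token from the command above:

# Read a secret over the Key Vault REST API using the token obtained above;
# my-vault and my-secret are placeholders for your vault and secret names
curl -s -H "Authorization: Bearer $token" \
  "https://my-vault.vault.azure.net/secrets/my-secret?api-version=7.4"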

Related

How to inject docker registry username/password in docker-compose file?

In order to deploy an application using Docker and a remote registry:
With the Docker client, run docker login so that the credentials are stored either directly in $HOME/.docker/config.json or in a credential store that is also specified in $HOME/.docker/config.json. Then use the docker create command to start the application.
In Kubernetes, a secret can be generated from the Docker registry username and password. The secret can then be injected into the Helm chart as an imagePullSecret, and helm install instructs the kubelet to pull the image into the container of the scheduled pod. To switch registries, the image name and pull secret can be updated before re-installation.
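For reference, that flow could look roughly like the following sketch (registry.example.com, my-user, regcred, my-release, and my-chart are placeholder names, and the chart is assumed to expose an imagePullSecrets value):

# One-time interactive step: credentials end up in $HOME/.docker/config.json or the credential store
docker login registry.example.com -u my-user --password-stdin < password.txt

# Turn the same credentials into a Kubernetes pull secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=my-user \
  --docker-password="$(cat password.txt)"

# Install the chart so the pod spec references the pull secret
helm install my-release ./my-chart --set 'imagePullSecrets[0].name=regcred'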
I have three questions:
1. How can I set the username and password or inject these credentials for the services in docker-compose without having to run docker login first on each deployment host? (as in nu
2. Can I populate a credential store specified in $HOME/.docker/config.json using the docker login command on one machine, then specify the same credential store in $HOME/.docker/config.json on another machine, and use the answer to the previous question to inject or pull the credentials?
3. If Docker checks for the credentials in the credential store specified in $HOME/.docker/config.json, then what is the use of the helper program?

Query GitLab Container Registry programmatically

I have access to a GitLab Container Registry and can push images as follows:
docker login --username $my_username -p $my_token $my_server/$my_project
docker tag $my_image:latest $my_server/$my_project/$my_image:latest
docker push $my_server/$my_project/$my_image:latest
I'd now like to use the Container Registry API as well and have tried this for a start (I'd like to list all tags subsequently):
curl -H "PRIVATE-TOKEN: $my_token" https://$my_server/api/v4/projects/$my_project/registry/repositories
However, this results in "404 page not found". What am I missing? Shouldn't the URL be valid according to the documentation?
I have by now concluded that the most likely cause is that my access token $my_token has been granted only limited privileges by the GitLab admin.
This perhaps includes the scopes read_registry and write_registry, but not api. Apparently there is no scope that specifically covers the Container Registry API without the rest of the GitLab API.
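For reference, with a token that does carry the api (or read_api) scope, listing the repositories and then their tags would look roughly like this sketch ($my_project_id is the numeric project ID or the URL-encoded project path, and $repository_id comes from the first response; both are placeholders):

# List the registry repositories of a project
curl -H "PRIVATE-TOKEN: $my_token" \
  "https://$my_server/api/v4/projects/$my_project_id/registry/repositories"

# List the tags of one repository returned by the previous call
curl -H "PRIVATE-TOKEN: $my_token" \
  "https://$my_server/api/v4/projects/$my_project_id/registry/repositories/$repository_id/tags"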

Run Docker Image in Local Machine with fetching .env variables from the Hashicorp-Vault server

We are using HashiCorp Vault for credentials and parameter management of our Node.js and Java applications.
The applications run as Docker images.
The credentials saved in HashiCorp Vault are currently injected into the containers using a sidecar in the Kubernetes pods, and the application works as expected.
The issue comes in when we want to run the application (Docker image) on a local machine: how can we inject the credentials and parameters from the Vault server?
There are several HTTP API endpoints (https://www.vaultproject.io/api-docs/secret/kv/kv-v2) that can be called with curl, but how do we get that data into the Docker image?
Please share a way to inject the credentials and parameters into the Docker image.
Thanks!
Something like this:
curl -H "X-Vault-Token: $VAULT_TOKEN" -X GET "$VAULT_ADDR/v1/$VAULT_DJANGO_DB/$ENV" | jq .data.data > $CONFIG_PATH/django_db.json
Do you need to run the application in a raw docker container on your local machine, or could you instead run it in a simple local k8s cluster (e.g. minikube)?
If you can continue to use k8s, then you can continue to use the sidecar.

Access ECR images from within a Jenkins Docker ECS container

Hello Jenkins / Docker experts -
Stuff that is working:
Using the approach suggested here, I was able to get the Jenkins Docker image running in an AWS ECS cluster. Using -v volume mounts for the Docker socket (/var/run/docker.sock) and the docker binary (/usr/bin/docker), I am able to access the Docker daemon from inside the Jenkins container as well.
Stuff that isn't:
The last problem I am facing is pulling and pushing images to and from the AWS ECR registry. When I execute docker pull / push commands, I end up with: no basic auth credentials.
I stumbled upon this link explaining my problem, but I am unable to use the solutions suggested there, as there is no ~/.docker/config.json on the host machine to share with the Jenkins Docker container.
Any suggestions?
Amazon ECR users require permissions to call ecr:GetAuthorizationToken before they can authenticate to a registry and push or pull any images from any Amazon ECR repository. Amazon ECR provides several managed policies to control user access at varying levels; for more information, see ecr_managed_policies.
AmazonEC2ContainerRegistryPowerUser
This managed policy allows power user access to Amazon ECR, which allows read and write access to repositories, but does not allow users to delete repositories or change the policy documents applied to them.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ecr:GetAuthorizationToken",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:GetRepositoryPolicy",
      "ecr:DescribeRepositories",
      "ecr:ListImages",
      "ecr:DescribeImages",
      "ecr:BatchGetImage",
      "ecr:InitiateLayerUpload",
      "ecr:UploadLayerPart",
      "ecr:CompleteLayerUpload",
      "ecr:PutImage"
    ],
    "Resource": "*"
  }]
}
So, instead of relying on ~/.docker/config.json, assign the above policy to your ECS task role and the Docker container in your service will be able to push and pull images from ECR.
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance's role, you can associate an IAM role with an ECS task definition or RunTask API operation. The applications in the task's containers can then use the AWS SDK or CLI to make API requests to authorized AWS services.
Benefits of Using IAM Roles for Tasks
Credential Isolation: A container can only retrieve credentials for the IAM role that is defined in the task definition to which it belongs; a container never has access to credentials that are intended for another container that belongs to another task.
Authorization: Unauthorized containers cannot access IAM role credentials defined for other tasks.
Auditability: Access and event logging is available through CloudTrail to ensure retrospective auditing. Task credentials have a context of taskArn that is attached to the session, so CloudTrail logs show which task is using which role.
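As a rough illustration of attaching such a role (the role ARN, family name, and container definition file are placeholders), the role goes into the taskRoleArn field when the task definition is registered:

# Register the task definition with a task role so its containers inherit the role's permissions
aws ecs register-task-definition \
  --family jenkins \
  --task-role-arn arn:aws:iam::123456789012:role/ecrPowerUserTaskRole \
  --container-definitions file://jenkins-container.json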
You still have to run this command to obtain an auth token from ECR:
eval $(aws ecr get-login --no-include-email)
You will get a response like:
Login Succeeded
Once you have the auth token from ECR, you can push and pull images:
docker push xxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/nodejs:test
Automate ECR login
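As a side note, the aws ecr get-login command was removed in AWS CLI v2; a rough equivalent there (the region and registry URL are taken from the example above as placeholders) is get-login-password piped into docker login:

# AWS CLI v2 replacement for the deprecated get-login command
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin xxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com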

How to stop gcloud docker -a overwriting long-lived credentials?

We are using the Google Container Registry to store our Docker images.
To authorize our build instances we place long-lived access tokens in .docker/config.json as described in the docs.
This works perfectly fine until someone (i.e. some Makefile) uses gcloud docker -- push ... to push to the registry (instead of e.g. docker push ...). gcloud will replace the existing, long-lived credentials with short-lived ones that expire after some time. Thus subsequent builds may fail, depending on the exact timing.
My Question: How can I prevent gcloud docker ... from messing with my provisioned credentials?
I've tried chattr +i .docker/config.json, but this just makes gcloud complain.
From https://cloud.google.com/sdk/gcloud/reference/docker:
The gcloud docker command group wraps docker commands, so that gcloud can inject the appropriate fresh authentication token into requests that interact with the docker registry.
The only thing that gcloud docker does is change these credentials, then invoke the docker CLI. If you don't want it to change the credentials, there's no reason not to just call docker directly.
One workaround might be to use an alternate configuration file location for your long-lived credentials; per https://docs.docker.com/engine/reference/commandline/cli/:
Options:
--config string Location of client config files (default "/root/.docker")
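A sketch of that workaround (the /etc/docker-longlived directory and the gcr.io/my-project/my-image image are placeholders): keep the long-lived credentials in their own config directory and point docker at it explicitly, so gcloud docker only ever rewrites the default location:

# Use the alternate config directory for this one command
docker --config /etc/docker-longlived push gcr.io/my-project/my-image:latest

# Or set it for the whole build environment via the DOCKER_CONFIG variable
export DOCKER_CONFIG=/etc/docker-longlived
docker push gcr.io/my-project/my-image:latest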
