I have a Visual Studio subscription, which includes access to Azure DevOps as a benefit. I used a Docker container registry for the image build and push, and it was successful. Now I have switched to Azure Container Registry; this time the image build succeeds, but the push fails with an unauthorized-access error. See the error below:
The push refers to repository [(registryname).azurecr.io/(myname)/myfirstproject]. unauthorized: authentication required
I tried selecting the Service Principal Authentication option, but it reports:
**Failed to create an app in Azure Active Directory. Error: Insufficient privileges to complete the operation.**
So I used the Managed Identity Authentication option instead, but the image push still failed.
Do I have to use the Service Principal Authentication option to push the image to ACR, or am I missing something?
I can provide more information if required.
Thanks in advance.
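For reference, a common way to get push access to an Azure Container Registry is a service principal that holds the AcrPush role scoped to the registry, followed by a docker login with its credentials. A minimal CLI sketch, assuming a registry named registryname and a made-up service principal name acr-push-sp (creating it requires the Azure AD privileges that the error above says this account lacks; appId and password are placeholders for the values returned by the create command):

# look up the registry's resource ID
ACR_ID=$(az acr show --name registryname --query id --output tsv)

# create a service principal with push rights scoped to that registry
az ad sp create-for-rbac --name acr-push-sp --scopes "$ACR_ID" --role AcrPush

# log Docker in with the returned appId/password
docker login registryname.azurecr.io --username <appId> --password <password>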
I'd like to push the image image/name to the Docker repository image/name through Jenkins. I am logged in to my Docker account on my local machine, but the push returned this error:
com.github.dockerjava.api.exception.DockerClientException: Could not push image: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
Why?
Maybe pass the credentials through the library you are building with.
From the docker-java getting-started doc, you can pass registry credentials when building the client config, roughly:
DockerClientConfig config = DefaultDockerClientConfig.createDefaultConfigBuilder()
    .withRegistryUsername(registryUser)
    .withRegistryPassword(registryPass)
    .build();
Reference: https://github.com/docker-java/docker-java/blob/master/docs/getting_started.md
I'm assuming this is the library you're using, based on the error message you posted.
Summary:
I have made many attempts to deploy a simple C# Blazor image from a public DockerHub repo to an Azure App Service web site. All attempts using bicep and the Azure portal have failed.
Goal:
Use bicep inside a GitHub Action (CI/CD pipeline) to deploy from a public DockerHub repo to an Azure App Service web site. (I'm also curious how to do it in the portal.)
What Works:
This PowerShell command successfully deploys my DockerHub image to the Azure App Service web site:
az.cmd webapp create --name DockerhubDeployDemo004 --resource-group rg_ --plan Basic-ASP -s siegfried01 -w topsecretet --deployment-container-image-name siegfried01/demovisualstudiocicdforblazorserver
This bicep for creating an Azure Container Instance also works.
Error Messages from Failed Attempts:
From the log files in the Azure portal I get:
2022-05-20T21:50:35.914Z ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"pull access denied for demovisualstudiocicdforblazorserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
2022-05-20T21:50:35.915Z ERROR - Pulling docker image docker.io/demovisualstudiocicdforblazorserver failed:
2022-05-20T21:50:35.916Z WARN - Image pull failed. Defaulting to local copy if present.
2022-05-20T21:50:35.923Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2022-05-20T21:50:35.928Z INFO - Stopping site dockerdeploydemo003 because it failed during startup.
/home/LogFiles/2022_05_20_lw1sdlwk000FX5_docker.log (https://dockerdeploydemo003.scm.azurewebsites.net/api/vfs/LogFiles/2022_05_20_lw1sdlwk000FX5_docker.log)
2022-05-20T21:35:47.559Z WARN - Image pull failed. Defaulting to local copy if present.
2022-05-20T21:35:47.562Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
Failing Bicep Code:
I tried exporting the ARM code from the successful PowerShell deployment and from the failed portal attempts and converting it to bicep. In both cases the code was very similar, and in both cases I had to add/edit the app settings containing the DockerHub URL, account and password. I always received the above error messages. After deploying with the bicep code, I could go back into the portal and view the app settings (DockerHub credentials and URL); they looked correct.
References:
Nice DockerHub example, but no bicep code. It says to use index.docker.io for the server and I tried that (it did not work). I also tried using https://index.docker.io/v1/ for the server URL and that did not work either.
Nice bicep example, but it uses ACR instead of DockerHub.
Another nice bicep example that uses ACR instead of DockerHub.
I was surprised I could not find the documentation on the DockerHub site!
Please help me correct my bicep code. I suspect I'm not specifying the correct URL or server for DockerHub.
Thanks
Siegfried
I could not find the web page on DockerHub with the detailed information I was looking for (such as the registry URL). However, the docker info command, as described here, was very helpful.
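For reference, docker info reports the index server address the local Docker daemon authenticates against, which is a quick way to confirm the https://index.docker.io/v1/ URL used below (a small sketch):

# print just the registry endpoint
docker info --format '{{.IndexServerAddress}}'
# -> https://index.docker.io/v1/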
This bicep code did the trick for me (with some help from the bicep support on github):
var appConfigNew = {
  DOCKER_ENABLE_CI: 'true'
  DOCKER_REGISTRY_SERVER_PASSWORD: dockerhubPassword
  DOCKER_REGISTRY_SERVER_URL: 'https://index.docker.io/v1/'
  DOCKER_REGISTRY_SERVER_USERNAME: dockerUsername
}

resource appSettings 'Microsoft.Web/sites/config@2021-01-15' = {
  name: 'appsettings'
  parent: web
  properties: appConfigNew
}
And lastly, I discovered this by trial and error:
linuxFxVersion: 'DOCKER|${dockerUsername}/demovisualstudiocicdforblazorserver:${tag}'
Wow! I really worked hard for this one!
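For what it's worth, the same container setting can be inspected or set from the Azure CLI, which is a handy way to cross-check the DOCKER|image:tag format. A sketch, reusing the resource group and site name from the working command above and assuming a latest tag:

# show the current linuxFxVersion of the site
az webapp config show --resource-group rg_ --name DockerhubDeployDemo004 --query linuxFxVersion

# or set it directly
az webapp config set --resource-group rg_ --name DockerhubDeployDemo004 --linux-fx-version "DOCKER|siegfried01/demovisualstudiocicdforblazorserver:latest"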
I have a parent project that has an Artifact Registry repository configured for Docker.
A child project has a Cloud Run service that needs to pull its image from the parent.
The child project also has a service account that is authorized to access the repository via the IAM role roles/artifactregistry.writer.
When I try to start my service I get an error message:
Google Cloud Run Service Agent must have permission to read the image,
europe-west1-docker.pkg.dev/test-parent-project/docker-webank-private/node:custom-1.
Ensure that the provided container image URL is correct and that the
above account has permission to access the image. If you just enabled
the Cloud Run API, the permissions might take a few minutes to
propagate. Note that the image is from project [test-parent-project], which
is not the same as this project [test-child-project]. Permission must be
granted to the Google Cloud Run Service Agent from this project.
I have tested connecting manually with docker login using the service account's private key, and the docker pull command works perfectly from my PC:
cat $GOOGLE_APPLICATION_CREDENTIALS | docker login -u _json_key --password-stdin https://europe-west1-docker.pkg.dev
> Login succeeded
docker pull europe-west1-docker.pkg.dev/bfb-cicd-inno0/docker-webank-private/node:custom-1
> OK
The service account is also attached to the Cloud Run service.
There are two types of service accounts used with Cloud Run:
The Google Cloud Run API service account
The runtime service account
In your explanation and your screenshot, you are talking about the runtime service account: the identity the service uses while it runs and calls Google Cloud APIs.
BUT before running, the service must be deployed. That is an internal Google Cloud Run process that pulls the container, creates a revision and does all the required internal work. That job also runs as a service account, named the "service agent".
In the IAM console you can find it; the format is the following:
service-<PROJECT_NUMBER>@serverless-robot-prod.iam.gserviceaccount.com
Don't forget to tick the checkbox in the upper right corner to include Google-managed service accounts.
If you want this deployment service account to be able to pull images from another project, grant the required permission to it, not to the runtime service account.
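A minimal sketch of that grant, assuming the image lives in test-parent-project and <CHILD_PROJECT_NUMBER> is the numeric ID of test-child-project:

# give the child project's Cloud Run service agent read access to the parent project's Artifact Registry
gcloud projects add-iam-policy-binding test-parent-project \
  --member="serviceAccount:service-<CHILD_PROJECT_NUMBER>@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"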
I'm quite new to GitHub Actions and gcloud. I'm having trouble getting my GitHub CI/CD pipeline running because I can't push any Docker image to Google Container Registry due to access restrictions.
What have I done so far:
I have a Quarkus app hosted on GitHub
I used GitHub Actions to build the Maven project and the Docker image
I created a project in Google Cloud and added a service account which I use for the GitHub Action. The login seems to work:
Run google-github-actions/setup-gcloud@master
/usr/bin/tar xz --warning=no-unknown-keyword -C /home/runner/work/_temp/ac85f67a-89fa-4eb4-8d30-3f6379124ec2 -f /home/runner/work/_temp/de491940-a4b1-4a15-bf0a-95d563e68362
/opt/hostedtoolcache/gcloud/342.0.0/x64/bin/gcloud --quiet config set project ***
Updated property [core/project].
Successfully set default project
/opt/hostedtoolcache/gcloud/342.0.0/x64/bin/gcloud --quiet auth activate-service-account github-actions@***.iam.gserviceaccount.com --key-file -
Activated service account credentials for: [github-actions@***.iam.gserviceaccount.com]
If I now try to push the docker image I get the following (expected) error message:
Run docker push "$GCR_HOSTNAME/$PROJECT_ID/$IMAGE:$IMAGE_TAG"
The push refers to repository [eu.gcr.io/***/***]
715ac1ae8693: Preparing
435cfe5f5775: Preparing
313d03d71d4d: Preparing
c5c8d86ccee1: Preparing
1b0f2238925b: Preparing
144a43b910e8: Preparing
4a2bc86056a8: Preparing
144a43b910e8: Waiting
4a2bc86056a8: Waiting
denied: Token exchange failed for project '***'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
Error: Process completed with exit code 1.
Next, I opened the Google Cloud Console and created a custom role (IAM & Admin -> Roles -> Create Role) which has the necessary permissions.
Then, I had trouble assigning my new custom role to the service account (IAM & Admin -> Service Accounts -> Manage Access -> Add member). I used the email address of the service account as "New members", but I could not choose the custom role I had just created. What am I missing here?
I read somewhere that I can also add service accounts as members (IAM & Admin -> IAM -> Add). Again I used the email address of the service account as "New members". This time I could choose my custom role. What's the difference between this and the first approach?
Anyway, if I try to run the GitHub Action again, I now get the following error:
Run docker push "$GCR_HOSTNAME/$PROJECT_ID/$IMAGE:$IMAGE_TAG"
The push refers to repository [eu.gcr.io/***/***]
c4f14c9d3b6e: Preparing
fe78d438e8e2: Preparing
843fcae4a8f4: Preparing
dcf8cc80cedb: Preparing
45e8815b101d: Preparing
144a43b910e8: Preparing
4a2bc86056a8: Preparing
144a43b910e8: Waiting
4a2bc86056a8: Waiting
denied: Access denied.
Error: Process completed with exit code 1.
The error message is different, so I guess the permissions for the service account somehow took effect, but I still can't succeed. Which steps did I miss?
Any help is highly appreciated. Thanks a lot!
One way to debug this is to create a key for the service account on your local host, configure your script (or gcloud) to use the service account as its credentials, and then try the push manually.
One immediate problem may be that you're not authenticating against Google Container Registry (GCR). GCR implements Docker's registry API, and you'll need to use one of the documented mechanisms to authenticate before you can interact with the registry.
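A sketch of that manual check, assuming the github-actions service account from the log above and placeholder values for the project, image and tag:

# create a local key for the CI/CD service account
gcloud iam service-accounts keys create key.json \
  --iam-account=github-actions@PROJECT_ID.iam.gserviceaccount.com

# authenticate gcloud as that account and register it as a Docker credential helper
gcloud auth activate-service-account --key-file=key.json
gcloud auth configure-docker

# retry the push by hand
docker push eu.gcr.io/PROJECT_ID/IMAGE:TAG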
Notes:
I don't think you need to create a custom role. You have two options: either (preferred) create an account specifically for the CI/CD job and grant it the minimum set of roles needed, including storage.buckets.get; I think you can start with roles/storage.admin and perhaps refine it later.
You can grant roles such as roles/storage.admin on a Project, in which case the permission applies to all Cloud Storage resources in it, or on a specific Bucket, in which case it applies only to that bucket and its objects (see the sketch after these notes).
Service accounts have a dual role in GCP: they are an identity and also a resource (one that can be used by other identities). It can be confusing.
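A sketch of both grant scopes, assuming the github-actions service account and the conventionally named Cloud Storage bucket that backs the eu.gcr.io host (eu.artifacts.PROJECT_ID.appspot.com; adjust if yours differs):

# project-level grant: applies to every bucket in the project
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:github-actions@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# bucket-level grant: applies only to the registry's backing bucket
gsutil iam ch \
  serviceAccount:github-actions@PROJECT_ID.iam.gserviceaccount.com:roles/storage.admin \
  gs://eu.artifacts.PROJECT_ID.appspot.com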
I am trying to pull my Container Registry docker image but it fails with:
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I am on a Compute Engine instance, so I believe it is already configured to pull? I also checked the service account and roles.
I even added a storage viewer role to my Compute Engine service account.
What is wrong here?
In addition to the permissions, you need to authenticate your Compute Engine instance to Container Registry. Please see Advanced Authentication for more details.
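A minimal sketch of that authentication on the VM, assuming gcloud is installed and the instance runs with its attached service account (the VM's access scopes must also allow Cloud Storage read):

# register gcloud as a Docker credential helper for gcr.io hosts
gcloud auth configure-docker

# then pull, using placeholders for the project, image and tag
docker pull gcr.io/PROJECT_ID/IMAGE:TAG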