Running gcloud beta run deploy --image gcr.io/mynippets-dev/web:latest when the gcloud project is set to 'mysnippets-dev' returns the following:
ERROR: (gcloud.beta.run.deploy) Google Cloud Run Service Agent must have permission to read the image, gcr.io/mynippets-dev/web:latest. Ensure that the provided container image URL is correct and that the above account has permission to access the image. If you just enabled the Cloud Run API, the permissions might take a few minutes to propagate. Note that [mynippets-dev/web] is not in project [mysnippets-dev]. Permission must be granted to the Google Cloud Run Service Agent from this project
It should be noted that both the GCR image and the Cloud Run account live in project 'mysnippets-dev'. But for some reason, Cloud Run treats this as a cross-project deployment, possibly parsing it as project 'mynippets-dev/web' with the /web (the GCR repository) included.
I can also repro the same issue in Cloud Run UI.
Deployment should succeed.
This looks like it is most likely a typo: mynippets-dev vs mysnippets-dev (missing an 's').
Cloud Run interprets this as a cross-project deployment, which is allowed, but requires sufficient permissions.
If this isn't intended to be a cross-project deployment, it should succeed with this command:
gcloud beta run deploy --image gcr.io/mysnippets-dev/web:latest
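As a quick sanity check before redeploying, you can confirm which project gcloud is pointed at and that the image actually exists under it (a sketch; both commands are standard gcloud, and the image path is the corrected one from above):

```shell
# Show the project gcloud is currently configured to use
gcloud config get-value project

# List recent tags of the image to confirm it exists in that project
gcloud container images list-tags gcr.io/mysnippets-dev/web --limit=3
```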
I would like to run some gcloud commands from a Jupyter notebook (a user-managed notebook created from Vertex AI Workbench) on Google Cloud Platform.
The source code is here https://github.com/GoogleCloudPlatform/nvidia-merlin-on-vertex-ai/blob/main/01-dataset-preprocessing.ipynb
E.g.
!gcloud builds submit --config "/home/cloud_build.yaml"  # This command builds the Docker container image for the NVTabular preprocessing step of the pipeline and pushes it to Google Container Registry.
I got an error:
ERROR: (gcloud.builds.submit) PERMISSION_DENIED: The caller does not have permission.
The GCP support team told me that the "Cloud Run Admin" role of the service account is disabled for security reasons.
How can I work around this without breaking any security rules?
Thanks
I have created a durable function in VSCODE, it works perfectly fine locally, but when I deploy it to azure it is missing some dependencies which cannot be included in the python environment (Playwright). I created a Dockerfile and a docker image on a private docker hub repository on which I want to use to deploy the function app, but I don't know how I can deploy the function app using this image.
I have already tried commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens, and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. Still the deployment gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready and the Dockerfile/image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. Also, it is very different from my usual VS Code deployment, making it very tough to follow and execute.
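One sketch for a private Docker Hub repository is to set the registry settings explicitly as app settings before pointing the app at the image. This assumes Docker Hub's canonical registry URL, https://index.docker.io, is the expected form for DOCKER_REGISTRY_SERVER_URL; the other placeholders are hypothetical:

```shell
# Store the private Docker Hub credentials on the Function App
az functionapp config appsettings set \
  --name <function> --resource-group <rg> \
  --settings \
    DOCKER_REGISTRY_SERVER_URL=https://index.docker.io \
    DOCKER_REGISTRY_SERVER_USERNAME=<docker-id> \
    DOCKER_REGISTRY_SERVER_PASSWORD=<password>

# Then set the image with a fully qualified name
az functionapp config container set \
  --docker-custom-image-name docker.io/<docker-id>/<image>:latest \
  --name <function> --resource-group <rg>
```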
Created the Python 3.9 Azure Durable Functions in VS Code.
Created Container Registry in Azure and Pushed the Function Code to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I pushed the function app image to a Docker custom container made private, and then deployed it to the Azure Function App. It is working as expected.
Refer to this similar issue resolution regarding the error The service is unavailable appearing after deployment of an Azure Functions project, as there are several possible causes that need to be diagnosed step by step.
I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or, at least, I am trying to figure out how to check if there is one, and how to correlate its digest with the image we've deployed, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it both automatically provisioned in my own cloud account and also running the manual steps and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company as it is already live.
I suspect it is quite an old version (unless it auto-updates?), seeing as the GTM server-side Docker repository has had frequent updates.
Being new to the whole container imaging with Docker, I figured I could use Cloud Shell to check it that way, but it seems that when setting up the specific App Engine instance with the shell script provided (located here), it doesn't really "load" a Docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check which Docker images your App Engine Flex instance uses, SSH into the instance. You can do this by going to the instance tab, choosing the correct service and version, and clicking the SSH button, or by using this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSH'd into your instance, run the docker images command to list your Docker images.
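To go a step further and correlate what is running with the published image, you could compare digests (a sketch; the --digests flag and the list-tags command are standard, and the image name is the one from the question):

```shell
# Inside the instance: list local images with their repo digests
docker images --digests

# From any authenticated shell: list digests published for the GTM server-side image
gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image --limit=5
```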
I'm trying to deploy a Node.js application to Google Cloud Run.
I pushed a Docker image to Container Registry; that seems to be successful.
But when I try to deploy it to Google Cloud Run, to make it public and accessible over the internet, it fails for unknown reasons.
While loading:
this step can take 10-15 minutes...
when it fails:
Resource readiness deadline exceeded.
The solution is provided in the GCP documentation:
https://cloud.google.com/run/docs/troubleshooting#service-agent
They suggest that you verify that the service agent has the Cloud Run Service Agent role, and grant it if it is missing.
Additionally, you should check the logs for the Run app; you might find clues about the cause there.
Using Service Account credentials, I am successful at running Cloud Build to spin up gsutil, move files from Cloud Storage into the instance, then copy them back out. All is good.
One of the Cloud Build steps successfully loads a Docker image from an outside source; it loads fine and reports its own help info successfully. But when run, it fails with the error message:
"fail to open file ..intermediary_work_product_file. permission denied"
For the app I'm running in this step, this error is typically produced when the file cannot be written to its default location. I've set dir = "/workspace" to confirm the default.
So how do I grant read/write permissions to the app running inside a Cloud Build step so it can write its own intermediary work product to the local folders? The Cloud Build itself runs fine using Service Account credentials. I have tried adding more permissions, including Storage, Cloud Run, Compute Engine, and App Engine admin roles, but I get the same error.
I assume that the credentials used to create the instance are passed to the runtime. I have dug deep into the GCP Cloud Build documentation and examples, but found no answers.
There must be something fundamental I'm overlooking.
This problem was resolved by changing the Dockerfile USER as suggested by @PRAJINPRAKASH in this helpful answer https://stackoverflow.com/a/62218160/4882696
I tried to solve this by systematically testing GCP services and role permissions. All Service Account credentials tested were able to create container instances and run gcloud or gsutil fine. However, the custom apps created containers but failed when doing local writes, even to the default shared /workspace.
When using GCP Cloud Build, local read/write permissions do not "pass through" from the default service account to the runtime instance. The documentation is not clear on this.
I encountered this problem while building my React app with Cloud Build; I wasn't able to install node-sass globally...
So I tried recursively chowning the /usr directory to nobody:nogroup, and it worked. I have no idea if there is a better solution to this, but the important thing is that it fixed my issue.
I had a similar problem; the snippet I was looking for in my cloudbuild manifest was:
- id: perms
  name: "gcr.io/cloud-builders/git"
  entrypoint: "chmod"
  args: ["-v", "-R", "a+rw", "."]
  dir: "path/to/some/dir"