GoogleCloudPlatform/gcr-cleaner cannot delete the stale images - docker

I am trying to use gcr-cleaner, which Google recommends for cleaning up stale images. However, it does not delete anything, even though it reports that it executed successfully.
I have granted the Browser, Cloud Run Admin, Service Account User, and Storage Admin roles to the service account, and the Docker configuration succeeds as well. I have tried both a GitHub Action and Cloud Run; neither works.
Even if I give it a wrong repo name, it will show
Deleting refs older than 2022-10-25T20:08:16Z on 1 repo(s)...
gcr.io/project-id/my-repo
✗ no refs were deleted
But there are plenty of images older than that timestamp.
Has anyone run into this before? How should I solve it?

I just ran into this issue. My problem was that without the flag -tag-filter-any it does not delete tagged images, and all the images I wanted to delete were tagged. Setting this flag with a regex matching my tags solved it for me:
docker run -v "${HOME}/.config/gcloud:/.config/gcloud" -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli -grace 720h -keep 5 -repo gcr.io/[MY-PROJECT]/[MY-REPO] -tag-filter-any "^(\d+).(\d+).(\d+)$"
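As a sanity check before running the cleaner, you can preview which tags a filter would select. This uses grep -E as a stand-in for gcr-cleaner's own regex matching (an assumption, since the tool uses Go's regexp engine), with [0-9] substituted for \d, which POSIX ERE does not support:

```shell
# Tags matching -tag-filter-any become eligible for deletion.
# Note the unescaped dots in the original pattern match ANY character,
# so a tag like "1x2x3" also matches; escape them ("\.") to be strict.
printf '%s\n' 1.2.3 v1.2.3 latest 1x2x3 \
  | grep -E '^([0-9]+).([0-9]+).([0-9]+)$'
# prints: 1.2.3 and 1x2x3
```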

Related

Unable to delete gcloud composer environment

I'm trying to delete gcloud environments. One did not successfully create (no associated Airflow or Bucket) and one did. When I attempt to delete, I get an error message (after a really long time) of RPC Skipped due to required preoperation not finished yet. The logs don't provide any valuable information, and I wasn't able to find anything wrong in the cluster. The only solution I have found so far is to delete the entire project, but I would prefer not to. Any suggestions would be greatly appreciated!
Follow the steps below to delete the environment's resources manually:
Delete GKE cluster that corresponds to the environment
Delete the Google Storage bucket used by the environment
Delete the related deployment with:
gcloud deployment-manager deployments delete <DEPLOYMENT_NAME> --delete-policy=ABANDON
Then try again to delete the Composer environment with:
gcloud composer environments delete <ENVIRONMENT_NAME> --location <LOCATION>
I would like to share what worked for me in case someone else runs into this problem as I followed all the steps above and still could not delete the composer environment.
My 'gcloud composer environments list' command was returning '0', but I could still see my environment in the console view, and when I tried to delete it I got the same error message as honlicious. Additionally, I ran 'gcloud projects add-iam-policy-binding' to try to give my Compute Engine service account the composer.serviceAgent role, but this still did not resolve the issue. What eventually worked was disabling the Cloud Composer API and then re-enabling it. This removed the old environment I had previously been unable to delete.
I got this issue when I tried to create and delete Cloud Composer with Terraform.
I had created a Service Account separately from the Composer environment, which caused the account to be deleted first during a terraform destroy operation.
So the correct order is:
Delete Composer environment
Delete Composer’s Service Account
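In Terraform, one way to avoid this is to make the environment reference the service account, so the dependency (and therefore the destroy order) is explicit. A minimal sketch with hypothetical resource names; referencing the account's email creates an implicit dependency, so Terraform destroys the environment before the account:

```hcl
resource "google_service_account" "composer" {
  account_id = "composer-env-sa" # hypothetical name
}

resource "google_composer_environment" "env" {
  name   = "example-env" # hypothetical name
  region = "us-central1"

  config {
    node_config {
      # This reference makes Terraform destroy the environment
      # before the service account it depends on.
      service_account = google_service_account.composer.email
    }
  }
}
```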

Docker mkimage_yum.sh for centos 7 fails

A little confused at the moment. I've got Docker on one of my servers, and as it doesn't have internet access, I'm trying to build a base image for CentOS 7.4. The nice Docker site has a mkimage_yum.sh script for this purpose, but it consistently fails when it tries running:
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
with a "No enabled repos" error. The thing is, if I enter "yum repolist" I get back 17 entries, and I have manually tried setting several repos to enabled. Yet this command still fails, and I do not understand what could be missing.
Anybody have some idea of what I can do so this succeeds?
Jay
I figured out why this was failing: the mkimage_yum.sh script does not handle the case where your repos are stored in /etc/yum.repos.d; it assumes that everything is in /etc/yum.conf. That is not correct, and it causes one of the later yum clean operations to fail. I fixed it locally, but I cannot upload the change because the server has no internet access.
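The shape of the fix, as a sketch (this is illustrative, not the actual patch to mkimage_yum.sh): copy the host's /etc/yum.repos.d definitions into the install root so yum can find enabled repos inside the chroot:

```shell
# Create a throwaway install root, as mkimage_yum.sh does with mktemp,
# and seed it with the host's repo files so yum runs with --installroot
# don't hit "No enabled repos" (sketch only; paths are illustrative).
target=$(mktemp -d)
mkdir -p "$target/etc/yum.repos.d"
cp /etc/yum.repos.d/*.repo "$target/etc/yum.repos.d/" 2>/dev/null || true
# The failing step would then become, roughly:
#   yum -c "$target/etc/yum.conf" --installroot="$target" -y clean all
```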

Kubernetes: Unable to create repository

I'm following Kubernete's getting started guide. Everything went smoothly until I ran
$ gcloud docker push gcr.io/<PROJECT ID>/hello-node:v1
(Where <PROJECT ID> is, well, my project ID.) For some reason, Kubernetes is not able to push to the registry. This is what I get:
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
(the two lines above repeat seven times)
The push refers to a repository [gcr.io/kubernetes-poc-1320/hello-node]
18465c0e312f: Preparing
5f70bf18a086: Preparing
9f7afc4ce40e: Preparing
828b3885b7b1: Preparing
5dce5ebb917f: Preparing
8befcf623ce4: Waiting
3d5a262d6929: Waiting
6eb35183d3b8: Waiting
denied: Unable to create the repository, please check that you have access to do so.
Any ideas on what I might be doing wrong? Note that I have run $ gcloud init, so I'm logged in.
Thanks in advance!
This solved it in my case:
Short version:
Press Enable billing on the Container Engine screen at https://console.cloud.google.com.
Long version:
In my case I got the error because of an issue with setting billing in the google cloud platform console.
Although I had entered all my credit card information, and the Container Engine screen in the Google Cloud Platform console said Container Engine is getting ready. This may take a minute or more., it didn't work until I pressed Enable billing on the same screen. Then the gcloud docker push command finally worked.
Oddly enough, after later returning to the Container Engine screen, it shows me Container Engine is getting ready. This may take a minute or more. and the Enable billing button again. Must be a bug in the console.
None of the above solutions worked for me, and I finally found one that did. I'm using Windows 10; I looked at my C:/Users/<username>/.docker/config.json file and it looked like this:
{
  "auths": {
    "https://appengine.gcr.io": {},
    "https://asia.gcr.io": {},
    "https://b.gcr.io": {},
    "https://bucket.gcr.io": {},
    "https://eu.gcr.io": {},
    "https://gcr.io": {},
    "https://gcr.kubernetes.io": {},
    "https://us.gcr.io": {}
  },
  "credsStore": "wincred"
}
Removing the "credsStore": "wincred" line fixed the problem!
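The same edit can be scripted. A sketch using a temp copy of the file (on Windows the real path is %USERPROFILE%\.docker\config.json, on macOS/Linux ~/.docker/config.json; the sed cleanup of the trailing comma assumes credsStore is the last key, as in the file above):

```shell
# Drop the "credsStore" entry so Docker stops delegating to the
# platform credential helper and stores auth in config.json itself.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "auths": {
    "https://gcr.io": {}
  },
  "credsStore": "wincred"
}
EOF
grep -v '"credsStore"' "$cfg" | sed 's/},$/}/' > "$cfg.tmp"
mv "$cfg.tmp" "$cfg"
cat "$cfg"
```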
If you're using a GCE instance, you need to make sure it has the right Cloud API access scope.
Since you can't edit the scopes on a running instance, you can create a new instance using your current disk.
To do that:
Go to your instance page and click Edit
Uncheck Delete boot disk when instance is deleted and click Save
Create a new instance using your previous disk, this time with write permissions on Storage.
I was getting this same error because I was accidentally using the project name rather than the auto-generated id. The PROJECT_ID can be found via:
$ gcloud info
as well as in the Google Cloud dashboard: https://console.cloud.google.com/home/dashboard
Silly, I realize, but I can imagine others making the same mistake :)
Ensure you are authenticated with Google Cloud.
$ gcloud auth application-default login
Double-check gcloud is pointing to your current project.
$ gcloud config set project PROJECT_ID
If you still have trouble, run gcloud info and take a look at the Last Log File. Note: gcloud auth login no longer writes application default credentials.
In https://stackoverflow.com/a/39996807/598513 I answered a similar question; the fix was switching the active user/account:
gcloud auth list
gcloud config set account example@gmail.com
Edit: This worked for me months ago. New versions of Kubernetes might not have this problem, or this solution might not solve it :)
Ok, after struggling for hours with this, I finally managed to push to the gcr.io registry by changing my tag from an image:version notation to image/version, like this:
gcloud docker push gcr.io/<PROJECT ID>/hello-node/v1
after reading another guide from Kubernetes' documentation: https://cloud.google.com/container-registry/docs/pushing#pushing_to_the_registry
Hope this helps!
For me, having the same error, I found I had missed the "gcloud" at the beginning. That was because the previous two commands started with docker and I just glanced over the changes after docker.
wrong:
~/gs-spring-boot/complete$ docker -- push gcr.io/kubernetes-codelab-1xxxxx/hello-java:v1
correct:
~/gs-spring-boot/complete$ gcloud docker -- push gcr.io/kubernetes-codelab-1xxxxx/hello-java:v1
Run gcloud init and check whether you are logged in to the correct account. I once had this error because I was trying to push an image from a different Google account.
When using docker-credential-helpers to store docker credentials in the OSX Keychain, gcloud docker -- push $registry/$project_id/<image>:<tag> fails as well.
The solution for me was to revert ~/.docker/config.json so it no longer stores credentials in the keychain.
See also: https://github.com/GoogleCloudPlatform/gcloud-common/issues/198
What do you use as a project id? It shouldn't be "my-kubernetes-codelab", it should be "my-kubernetes-codelab-234231" or whatever your numbered version is. This was my problem.

when pushing docker image to private docker registry, having trouble marking it 'public' via my script (but can do via web ui)

I am pushing a docker image to a private docker registry, and am having trouble marking it 'public' via a script.
For this discussion, I'm guessing the content of the Dockerfile doesn't matter... so let's assume I have the following in my current working directory:
Dockerfile
FROM ubuntu
RUN touch /tmp/foo
I build like this:
docker build -t my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04 .
Then, I am doing my push like this:
docker push my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04
Next, I navigate to the web site that allows me to manage my private registry (at the URL http://my.private.docker.registry.com).
I look at my image and see it has a padlock icon next to it, indicating that it is private. I can manually unlock it from the web UI, but I'd like to know whether there are any options to docker's push command that would let me mark the image as 'public' without manual intervention.
One thing I tried was setting global settings for my namespace so that all new repos would be readable/writable by all users. Specifically, I went into the Docker web UI for my private registry and, for the namespace 'foo', tried adding default permissions (for any newly created repos) so that all users would have 'write' access to any new repo pushed under the 'foo' namespace.
However, even after doing the above, when I pushed a new image to my private registry under namespace foo, that image was still marked with the padlock. I looked up the command-line options for 'docker push', and I did not see any option that looked like it would affect the visibility of the image at push time.
thanks in advance for your help !
-chris
So, according to the folks who manage the Docker registry at the company I'm at now: there is no command-line way to enable permissions for users other than the repository creator to have write access to that repo. You have to go to the web UI and manually mark the repo 'public', and you have to add permissions for each user (although it is possible to have groups of users and add a whole group; this is still clunky because new employees have to be added to the group manually).
I find it hard to believe that there's no command-line way, but this is what our experts say. If there are other experts out there who have a better idea, please chime in! Otherwise I will do it manually through the web UI (grrrrRRrr).

Error deploying rails app to heroku

I am following the rails tutorial, and I am at a point where it instructs to deploy the app to heroku for the second time. I have successfully deployed an app in the past, but it will not work now.
I get this error: Permission denied (publickey)
fatal: Could not read from remote repository.
The remote exists and is correct, and when I use "heroku keys" my key appears. I can add a new stack to heroku as well. I also tried re-adding the key, and that did not work.
Very confused, all the solutions I have found have not worked.
Sounds like you need to configure your ssh keys (usually located at ~/.ssh). Are you using github? If so, your ssh keys should already be set up (you won't be able to push to github.com without setting those up).
If you haven't already set up your ssh keys, follow these instructions from github to do so.
Once your ssh keys are set up, performing the command 'git push heroku' should do the trick. Make sure Heroku is set up correctly by following the instructions from the tutorial
You are probably not deploying as the same user you deployed the first app as. If you are in a Linux environment, this probably means you deployed as root one time and as a regular user the other time; maybe you used sudo.
Or possibly you deleted your ssh public keys... or maybe you changed the permissions on your ssh keys.
I am not high enough rated to comment, so: please navigate to ~/.ssh and type "ls -l" so I can see your permissions. Then navigate one directory up to ~/ and type "ls -la" so I can see the permissions on the .ssh folder itself.
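For reference, these are the modes ssh expects. The snippet demonstrates them on a throwaway directory rather than touching a real ~/.ssh; apply the same chmod values to yours:

```shell
# ssh ignores private keys that are readable by the group or world.
d=$(mktemp -d)/.ssh
mkdir -p "$d"
touch "$d/id_rsa" "$d/id_rsa.pub"
chmod 700 "$d"             # the .ssh directory: owner only
chmod 600 "$d/id_rsa"      # private key: owner read/write only
chmod 644 "$d/id_rsa.pub"  # public key: world-readable is fine
```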
