Kubernetes: Unable to create repository - docker

I'm following Kubernetes' getting started guide. Everything went smoothly until I ran
$ gcloud docker push gcr.io/<PROJECT ID>/hello-node:v1
(where <PROJECT ID> is, well, my project ID). For some reason, Kubernetes is not able to push to the registry. This is what I get:
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
The push refers to a repository [gcr.io/kubernetes-poc-1320/hello-node]
18465c0e312f: Preparing
5f70bf18a086: Preparing
9f7afc4ce40e: Preparing
828b3885b7b1: Preparing
5dce5ebb917f: Preparing
8befcf623ce4: Waiting
3d5a262d6929: Waiting
6eb35183d3b8: Waiting
denied: Unable to create the repository, please check that you have access to do so.
Any ideas on what I might be doing wrong? Note that I have run $ gcloud init, so I'm logged in.
Thanks in advance!

This solved it in my case:
Short version:
Press Enable billing on the Container Engine screen in the Google Cloud console (https://console.cloud.google.com).
Long version:
In my case I got the error because of an issue with setting up billing in the Google Cloud Platform console.
Although I had entered all my credit card information, and the Container Engine screen in the console said "Container Engine is getting ready. This may take a minute or more.", it didn't work until I pressed Enable billing on that same screen. Then the gcloud docker push command finally worked.
Oddly enough, after later returning to the Container Engine screen, it shows me "Container Engine is getting ready. This may take a minute or more." and the Enable billing button again... must be a bug in the console.
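For what it's worth, the billing state can also be checked from the CLI; this is a sketch assuming the beta billing commands are available in your gcloud version (PROJECT_ID is a placeholder):
$ gcloud beta billing projects describe PROJECT_ID
Look for billingEnabled: true in the output.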

None of the above solutions worked for me, and I finally found a solution. I'm using Windows 10; I looked at my C:/Users/<username>/.docker/config.json file and it looked like this:
{
  "auths": {
    "https://appengine.gcr.io": {},
    "https://asia.gcr.io": {},
    "https://b.gcr.io": {},
    "https://bucket.gcr.io": {},
    "https://eu.gcr.io": {},
    "https://gcr.io": {},
    "https://gcr.kubernetes.io": {},
    "https://us.gcr.io": {}
  },
  "credsStore": "wincred"
}
Removing the "credsStore": "wincred" line fixed the problem!
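For reference, after removing that line the file looks like this (the auths list is unchanged):
{
  "auths": {
    "https://appengine.gcr.io": {},
    "https://asia.gcr.io": {},
    "https://b.gcr.io": {},
    "https://bucket.gcr.io": {},
    "https://eu.gcr.io": {},
    "https://gcr.io": {},
    "https://gcr.kubernetes.io": {},
    "https://us.gcr.io": {}
  }
}
With no credsStore entry, Docker falls back to storing the credentials (base64-encoded) directly in this file on the next login.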

If you're using a GCE instance, you need to make sure it has the right Cloud API access scope.
Since you can't edit the scopes on running instances, you can create a new instance using your current disk.
To do that, do the following:
Go to your instance page and click Edit
Uncheck Delete boot disk when instance is deleted and click Save
Create a new instance using your previous disk, with write permissions on Storage.
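As a sketch, this is how you can inspect the scopes and, on newer gcloud versions, change them on a stopped instance instead of recreating it (INSTANCE_NAME and ZONE are placeholders):
$ gcloud compute instances describe INSTANCE_NAME --zone ZONE --format="value(serviceAccounts[].scopes)"
$ gcloud compute instances stop INSTANCE_NAME --zone ZONE
$ gcloud compute instances set-service-account INSTANCE_NAME --zone ZONE --scopes storage-rw
$ gcloud compute instances start INSTANCE_NAME --zone ZONE
The storage-rw scope is what gives the instance write access to Google Cloud Storage, which is what gcr.io pushes need.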

I was getting this same error because I was accidentally using the project name rather than the auto-generated project ID. The PROJECT_ID can be found via:
$ gcloud info
as well as in the Google Cloud dashboard: https://console.cloud.google.com/home/dashboard
Silly, I realize, but I can imagine others making the same mistake :)
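Both values can be seen side by side with gcloud projects list (output illustrative; your values will differ):
$ gcloud projects list
PROJECT_ID                     NAME                   PROJECT_NUMBER
my-kubernetes-codelab-234231   my-kubernetes-codelab  123456789012
The PROJECT_ID column is what belongs in the push command.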

Ensure you are authenticated with Google Cloud.
$ gcloud auth application-default login
Double-check gcloud is pointing to your current project.
$ gcloud config set project PROJECT_ID
If you still have trouble, run gcloud info and take a look at the Last Log File. Note: gcloud auth login no longer writes application default credentials.
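Putting those checks together, a minimal sanity-check sequence might look like this (PROJECT_ID is a placeholder; --show-log prints the contents of the last log file mentioned above):
$ gcloud auth login
$ gcloud auth application-default login
$ gcloud config set project PROJECT_ID
$ gcloud info --show-log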

In https://stackoverflow.com/a/39996807/598513 I answered a similar question; the fix was switching the user/account:
gcloud auth list
gcloud config set account example@gmail.com

Edit: This worked for me months ago. New versions of Kubernetes might not have this problem, or this solution might not solve it :)
Ok, after struggling for hours with this, I finally managed to push it to the gcr.io registry by changing my tag from an image:version notation to image/version, like this:
gcloud docker push gcr.io/<PROJECT ID>/hello-node/v1
after reading another guide from Kubernetes' documentation: https://cloud.google.com/container-registry/docs/pushing#pushing_to_the_registry
Hope this helps!

For me, having the same error, it turned out I had missed the gcloud at the beginning. That was because the previous two commands started with docker, and I just glanced over the changes after docker.
incorrect:
~/gs-spring-boot/complete$ docker -- push gcr.io/kubernetes-codelab-1xxxxx/hello-java:v1
correct:
~/gs-spring-boot/complete$ gcloud docker -- push gcr.io/kubernetes-codelab-1xxxxx/hello-java:v1

Run gcloud init and see whether you have logged in to the correct account. I once had this error because I was trying to push an image from a different Google account.

When using docker-credential-helpers to store Docker credentials in the OSX Keychain, gcloud docker -- push $registry/$project_id/<image>:<tag> fails as well.
The solution for me was to revert ~/.docker/config.json so that it no longer stores credentials securely with the keychain.
See also: https://github.com/GoogleCloudPlatform/gcloud-common/issues/198
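A minimal sketch of that workaround, mirroring the Windows answer above (back up the file first; the line to delete is the credsStore entry):
$ cp ~/.docker/config.json ~/.docker/config.json.bak
$ # then edit ~/.docker/config.json and remove the line: "credsStore": "osxkeychain"
After that, re-run gcloud docker -- push.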

What do you use as a project id? It shouldn't be "my-kubernetes-codelab", it should be "my-kubernetes-codelab-234231" or whatever your numbered version is. This was my problem.

Related

GoogleCloudPlatform/gcr-cleaner cannot delete the stale images

I am trying to use gcr-cleaner, which is recommended by Google to clean up stale images. However, it does not delete anything, even though it reports that it executed successfully.
I have granted the Browser, Cloud Run Admin, Service Account User, and Storage Admin roles to the service account. The Docker configuration is successful as well. I have tried both GitHub Actions and Cloud Run; neither of them works.
Even if I give it a wrong repo name, it will show
Deleting refs older than 2022-10-25T20:08:16Z on 1 repo(s)...
gcr.io/project-id/my-repo
✗ no refs were deleted
But there are a bunch of images older than that timestamp.
Has anyone had the same issue before? How should I solve it?
I just had this issue right now. My problem was that without the flag -tag-filter-any it does not delete tagged images, and all the images I wanted to delete were tagged. What solved it for me was setting this flag with the regex for my tags:
docker run -v "${HOME}/.config/gcloud:/.config/gcloud" -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli -grace 720h -keep 5 -repo gcr.io/[MY-PROJECT]/[MY-REPO] -tag-filter-any "^(\d+).(\d+).(\d+)$"

Puppet Code Manager setup issue with Bitbucket

I have just installed Puppet Enterprise server and successfully added a few nodes, and I have some custom modules running as well. I now want to move to Code Manager before we get too deep into it.
I have followed the instructions for creating an empty Bitbucket repo here and initializing it with one single file environment.conf on a production branch as described in that link.
I have then followed the steps here to configure Code Manager, but when I get to the Test the control repository section and test the connection with puppet-code deploy --dry-run, I get the following error:
--dry-run implies --all.
--dry-run implies --wait.
Dry-run deploying all environments.
2021/12/21 20:21:12 ERROR - [POST /deploys][500] Errors while collecting a list of environments to deploy (exit code: 1).
"/opt/puppetlabs/puppet/lib/ruby/gems/2.7.0/gems/rugged-0.27.7/lib/rugged/repository.rb:258: warning: Using the last argument as keyword parameters is deprecated\nERROR\t -\u003e Unable to determine current branches for Git source 'puppet' (/etc/puppetlabs/code-staging/environments)\nOriginal exception:\nFailed to authenticate SSH session: Unable to send userauth-publickey request at /opt/puppetlabs/server/data/code-manager/git/git#git.company.com-1234-in-puppet-control-repo.git\n"
I have added the puppet server's SSH pub key to the bitbucket repo's access tokens.
There are a few things in that error message I'm not fully understanding.
Unable to determine current branches for Git source 'puppet' - what is meant by source 'puppet'? My repo is called puppet-control-repo...
Failed to authenticate SSH session: Unable to send userauth-publickey request - my Puppet master's SSH keys are in the token list for that repo, so I'm confused here also.
Any guidance would be appreciated.
UPDATE (13-01-2022):
I can successfully clone on puppet server using command
git clone ssh://git@git.example.com:1234/project/puppet-control-repo.git --config core.sshCommand="ssh -i /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa"
Not sure why puppet is still returning:
Failed to authenticate SSH session: Unable to send userauth-publickey request
I don't know if you saw the instructions here https://puppet.com/docs/pe/2021.4/control_repo.html#managing_environments_with_a_control_repository but you can run
puppet infrastructure configure
which makes sure the files have the right permissions.
I would also test that a clone with the key works outside of Code Manager:
GIT_SSH_COMMAND="ssh -i /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa" git clone your_git_url
If this works, it may be worth being aware of an issue we experienced with GitHub https://puppet.com/blog/how-githubs-protocol-changes-impact-your-puppet-code-deployments/ which, depending on Bitbucket's approach to the protocol, may be having a similar effect.
We are updating the docs to recommend the use of more secure ed25519 keys, created as per the article.
If a manual clone doesn't work, it suggests Bitbucket doesn't have your public key set up correctly.
Also, a more complete debugging command is
runuser -u pe-puppet -- /opt/puppetlabs/puppet/bin/r10k -c /opt/puppetlabs/server/data/code-manager/r10k.yaml deploy environment production --puppetfile --verbose debug2
FOLLOWUP
On investigation we found https://support.puppet.com/hc/en-us/articles/227829007 which showed that ssh:// is required at the start of r10k_remote, making an example remote ssh://git@bitbucket.org:davidsandilands/control-repo.git
I have requested updates to https://support.puppet.com/hc/en-us/articles/227829007 to highlight that this is not a version-confined issue, and I have asked for the Puppet Code Manager configuration docs to be updated to reflect that this may be required.
I see that you have a .pub file in the ssh directory. I believe it's expecting a private key there.
Also, do you have the PE Master class set up to point to your repo inside of the Puppet Enterprise web UI?
You'll want to set the following parameters on that class.
code_manager_auto_configure = true
r10k_private_key = $PRIVATE_KEY_IN_SSH_FOLDER_ABSOLUTE_PATH
r10k_remote = Your git URL
The PE Master class can be found in the PE web UI under Node Groups -> PE Infrastructure -> PE Master.
Thanks to @david-sandilands for helping me resolve this and guiding me to this article via the Puppet community Slack. Top guy!
EDIT 1:
The solution was documented here: https://support.puppet.com/hc/en-us/articles/227829007-Fix-your-Bitbucket-Stash-Code-Manager-configuration-in-Puppet-Enterprise-2015-3-to-2017-2
However, the documentation was out of date, as the issue affected version 2021.4 also.
In short:
r10k_remote = "ssh://git@git.company.com:1234/project/control-repo.git"
Not
r10k_remote = "git@git.company.com:1234/project/control-repo.git"
When working with Bitbucket Server.
EDIT 2:
Puppet have since updated their documentation:
https://puppet.com/docs/pe/2021.5/code_mgr_config.html#code_mgr_enable

Unable to delete gcloud composer environment

I'm trying to delete Cloud Composer environments. One did not create successfully (it has no associated Airflow or bucket) and one did. When I attempt to delete, I get an error message (after a really long time) of RPC Skipped due to required preoperation not finished yet. The logs don't provide any valuable information, and I wasn't able to find anything wrong in the cluster. The only solution I have found so far is to delete the entire project, but I would prefer not to. Any suggestions would be greatly appreciated!
Follow the steps below to delete the environment's resources manually (CLI commands for the first two steps are sketched after the list):
Delete the GKE cluster that corresponds to the environment
Delete the Google Cloud Storage bucket used by the environment
Delete the related deployment with:
gcloud deployment-manager deployments delete <DEPLOYMENT_NAME> --delete-policy=ABANDON
Then try again to delete the Composer environment with:
gcloud composer environments delete <ENVIRONMENT_NAME> --location <LOCATION>
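For those first two steps, the CLI equivalents look roughly like this (CLUSTER_NAME, ZONE, and BUCKET_NAME are placeholders you can read off the environment's details page in the console):
$ gcloud container clusters delete CLUSTER_NAME --zone ZONE
$ gsutil -m rm -r gs://BUCKET_NAME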
I would like to share what worked for me in case someone else runs into this problem, as I followed all the steps above and still could not delete the Composer environment.
My gcloud composer environments list command was returning 0 environments, but I could still see my environment in the console view, and when I tried to delete it, I would get the same error message as honlicious. Additionally, I ran gcloud projects add-iam-policy-binding to try to give my Compute Engine service account the composer.serviceAgent role, but this still did not resolve my issue. What eventually worked was disabling the Cloud Composer API and then re-enabling it. This removed my old environment that I was unable to delete previously.
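If you prefer the CLI for that API toggle, a sketch (this assumes the service name composer.googleapis.com, which gcloud services list can confirm; note that disabling the API tears down Composer resources in the project, which is exactly what removed the stuck environment here):
$ gcloud services disable composer.googleapis.com
$ gcloud services enable composer.googleapis.com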
I got this issue when I tried to create and delete Cloud Composer with Terraform.
I had created a Service Account separately from the Composer environment, and during a terraform destroy operation the Service Account was deleted first, before the environment.
So the correct order is:
Delete Composer environment
Delete Composer’s Service Account
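If reordering the resources isn't practical, a targeted destroy can enforce that order manually; google_composer_environment.example below is a hypothetical resource address, so substitute your own:
$ terraform destroy -target=google_composer_environment.example
$ terraform destroy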

when pushing docker image to private docker registry, having trouble marking it 'public' via my script (but can do via web ui)

I am pushing a docker image to a private docker registry, and am having trouble marking it 'public' via a script.
For this discussion, I'm guessing the content of the Dockerfile doesn't matter... so let's assume I have the following in my current working directory:
Dockerfile
FROM ubuntu
RUN touch /tmp/foo
I build like this:
docker build -t my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04 .
Then, I am doing my push like this:
docker push my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04
Next, I navigate to the web site that allows me to manage my private registry (at the URL http://my.private.docker.registry.com).
I look at my image, and I see it has a padlock icon next to it, indicating that it is private. I can manually unlock it from the web UI, but I'd like to know if there are any options to docker's 'push' command that will allow me to mark the image as 'public' without manual intervention.
One thing I tried was setting global settings for my namespace such that all new repos would be readable/writable by all users. Specifically: I went into the Docker web UI for my private registry and, for the namespace 'foo', tried adding default permissions (for any newly created repos) such that all users will have 'write' access to any new repo pushed under the 'foo' namespace.
However, even after doing the above, when I pushed a new image to my private registry under namespace 'foo', that image was still marked with the padlock. I looked up the command line options for 'docker push', and I did not see any option that looked like it would affect the visibility of the image at the time of push.
Thanks in advance for your help!
-chris
So, according to the folks who manage the Docker registry at the company I'm at now: there is no command-line way to grant users other than the repository creator write access to a repo. You have to go to the web UI and manually mark the repo 'public', and you have to add permissions for each user (although it is possible to have groups of users and add a whole group; this is still clunky because new employees have to be manually added to the group).
I find it hard to believe that there's no command-line way... but this is what our experts say. If there are other experts out there who have a better idea, please chime in! Otherwise I will do it manually through the web UI (grrrrRRrr).

Error deploying rails app to heroku

I am following the Rails tutorial, and I am at the point where it instructs me to deploy the app to Heroku for the second time. I have successfully deployed an app in the past, but it will not work now.
I get this error: Permission denied (publickey)
fatal: could not read from remote repository.
The remote exists and is correct, and when I use heroku keys my key appears. I can add a new stack to Heroku as well. I also tried re-adding the key, and that did not work.
I'm very confused; all the solutions I have found have not worked.
Sounds like you need to configure your ssh keys (usually located at ~/.ssh). Are you using github? If so, your ssh keys should already be set up (you won't be able to push to github.com without setting those up).
If you haven't already set up your ssh keys, follow these instructions from github to do so.
Once your ssh keys are set up, performing the command 'git push heroku' should do the trick. Make sure Heroku is set up correctly by following the instructions from the tutorial.
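If the keys turn out not to be set up, a typical sequence looks like this (this assumes the default key location and the git-based Heroku deploy the tutorial uses):
$ ssh-keygen -t rsa
$ heroku keys:add ~/.ssh/id_rsa.pub
$ git push heroku master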
You are probably not deploying as the same user you deployed the first app as. If you are in a Linux environment, this probably means you deployed as root one time and tried to as a user the other time; maybe you used sudo.
Or possibly you deleted your SSH public keys... or maybe you changed the permissions of your SSH keys.
I am not high enough rated to comment, so please navigate to ~/.ssh and type "ls -l" so I can see your permissions. Then navigate one directory up to ~/ and type "ls -la" so I can see the permissions on the actual .ssh folder, and post the output of both.
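For reference, these are the permissions SSH normally expects (wrong modes on the key or the folder are a classic cause of publickey failures; adjust the file names if yours differ):
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/id_rsa
$ chmod 644 ~/.ssh/id_rsa.pub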
