Unable to push Docker image to Google Container Registry - serverless

I'm trying to push my first image to GCR (Google Container Registry) from my local bash shell, but somehow I can't, even though I added my current user as 'owner' on the project. The last request that failed returned the following error:
{"errors":[{"code":"UNAUTHORIZED","message":"Unauthorized access."}]}
Also, the IP of the Ubuntu distribution I use on WSL2 was banned by Google on the grounds that I made too many attempts. This is the second problem I need to solve.
I ran into the first problem through PowerShell on my local computer as well.
What should I do in this case?

The refusal to connect to GCP might be related to the IP ban that you mentioned; was there any specified length to the ban? Usually, an email is sent with more details about the ban. Otherwise, there is specific documentation on authenticating to Container Registry. The documentation lists several authentication methods:
gcloud credential helper
Standalone credential helper
Access token
JSON key file
Which of these methods are you having issues with? The documentation describes the procedure for authenticating properly with each of them. Is the correct account configured? It could be that a different account, or a service account, is being used instead.
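For reference, here is a minimal sketch of two of those flows from a local shell; the key file, project ID and image name are placeholders, and it assumes the gcloud CLI is installed and the IP ban has expired:

gcloud auth login                   # authenticate gcloud as the user that owns the project
gcloud auth configure-docker        # registers gcloud as the Docker credential helper for gcr.io

cat key.json | docker login -u _json_key --password-stdin https://gcr.io   # JSON key file variant, using a service account key

docker tag my-image gcr.io/my-project/my-image
docker push gcr.io/my-project/my-image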

Related

Use multiple logins in the same docker-compose.yml file

I am trying to pull images from the same Artifactory repo using 2 different access tokens. This is because one image is available to one user, and another one is accessible by another user.
I tried using docker login, but I can log in only once per registry. Is there a way to specify in the docker-compose.yml file a user and token that Compose should use in order to pull each image?
The docker-compose file specification does not support providing credentials per service / image.
But putting this technicality aside, the described use case clearly indicates there is a user who needs access to both images...
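For context, docker login stores one credential per registry host in ~/.docker/config.json, so a second login against the same Artifactory host simply overwrites the first. A rough illustration (host and user names are placeholders):

docker login my-company.jfrog.io -u user-a     # stores a token for that host
docker login my-company.jfrog.io -u user-b     # replaces user-a's entry for the same host

# ~/.docker/config.json now holds, roughly, a single entry for that host:
# { "auths": { "my-company.jfrog.io": { "auth": "<base64 token for user-b>" } } }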

Nexus 3 Docker Content Selector selects too many images

I am using Nexus 3 as a docker repository and want to create a user that has read-only access to a specific docker image (and its related tags).
For this I created a Content Selector with the following query (the name of the image is test for demonstration purposes):
format == "docker" and path =~ "^(/v2/|/v2/library/)?(test(/.*)?)?$".
Then I created a Privilege with the action read, bound that to a role and added it to the user.
All is well: when I use the limited user I can pull the image but not push.
However, I can still pull images I should not be able to pull.
Consider the following: I create an image called testaaa:1 on the docker registry. Afterwards I docker login to the registry using my user with read-only access. I am suddenly able to run docker pull hub.my-registry.com/testaaa:1 even though, according to the query, I should not be able to.
I tested the query in a Java regex tester, and the query would not select testaaa. Am I missing something? I am having a hard time finding clues on this topic.
EDIT: Some more testing reveals that my user is actually able to pull all images from this registry. The Content Selector query I used is exactly the one suggested by the Sonatype documentation "Content Selectors and Docker - REST API vs Docker Client".
I have figured it out. The issue was not the Content Selector query, but a capability that I had previously added. The capability granted any authenticated user the role nx-anonymous, which lets anyone view any repository in Nexus. This meant that any authenticated user was allowed to read/pull any image from the repository.
This error was entirely on my part. In case anyone has similar issues, go have a look at Nexus Settings -> System -> Capabilities and check whether there are any capabilities that give your users unwanted roles.
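As a side note, the path expression itself really does not select testaaa. A quick check with grep -E (standing in for the Java regex tester, using illustrative registry request paths) shows this:

echo "/v2/test/manifests/latest" | grep -E '^(/v2/|/v2/library/)?(test(/.*)?)?$'   # prints the line: match
echo "/v2/testaaa/manifests/1"   | grep -E '^(/v2/|/v2/library/)?(test(/.*)?)?$'   # prints nothing: no match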

mailu docker - how to include container id to let's encrypt certificate?

I've been searching the internet for hours now but have found nothing that suits my case.
I have Mailu installed in Docker on my server and I want to send emails from my Meteor application through this container.
I set my MAIL_URL variable like process.env.MAIL_URL = 'smtps://USERNAME:PASSWORD@DOCKER-IP:465'; and this works when I also set the global variable NODE_TLS_REJECT_UNAUTHORIZED = 0, but I don't want to use that, for security reasons.
When I send emails from my Meteor app on my laptop and use my email server's hostname mail.foo.com instead of the Docker IP, like smtps://USERNAME:PASSWORD@mail.foo.com:465, it also works. So from outside I have no problem, but when I'm on the server I can't use localhost like smtps://USERNAME:PASSWORD@localhost:465 or smtps://USERNAME:PASSWORD@mail.foo.com:465.
As @natevw said in Node.js Hostname/IP doesn't match certificate's altnames:
It would be better to first diagnose why the certificate is not authorizing and see if that could be fixed instead.
I would say my problem is that the internal Docker IP address is not included in the certificate.
So, in my view, I have two options:
I could somehow add the IP address to the certificate
I could somehow use localhost or the domain name instead of the internal container IP
But sadly I don't know how to achieve either of them.
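The only rough idea I have for the second option (completely untested, and the address below is just a placeholder for the container's IP) is to make mail.foo.com resolve to the mail container locally, so the name in MAIL_URL matches the certificate:

echo "172.18.0.2  mail.foo.com" | sudo tee -a /etc/hosts   # placeholder address, untested idea

and then keep MAIL_URL in the smtps://USERNAME:PASSWORD@mail.foo.com:465 form, but I don't know whether that is the right approach.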
If you need some configs or something like that please comment and I will edit this post.
Thanks in advance,
Michael

Getting "ECONNREFUSED" error when trying to upload to Wolkenkit Blob Server

I'm currently developing a Wolkenkit application which is run on my local machine.
I want to upload a file from the Wolkenkit app to the blob server (as documented here).
When sending a POST request from the server to https://local.wolkenkit.io:3001/, Node.js gives me the error ECONNREFUSED.
I've tested the POST request with another program and it works there. Any idea why it doesn't work from the wolkenkit application itself?
Thanks!
The Storing files sample you linked to shows code that is to be run in the browser, not in the backend itself. Of course, both should work, but there are a few minor differences you need to watch out for.
Fixing the host name
First, I suppose that local.wolkenkit.io in your case maps to 127.0.0.1, which is the default for wolkenkit. That means that when you try to connect to this domain from within a Docker container, the container does not call out to the blob storage container, but stays within itself. So, the first thing that needs to be fixed is the host name.
Basically, there are two options for this. You can either set up local.wolkenkit.io so that it resolves to the external IP address of your machine; this would work, but is pretty cumbersome. The other option is to directly address the container that is responsible for blob storage by its internal name. The internal name is <name-of-your-app>-depot-file. So you need to replace https://local.wolkenkit.io:3001/ with https://<...>-depot-file.wolkenkit.io:3001/.
Fixing the port
Second, the port is wrong. This is because the blob storage service is internally running on port 3000, externally on 3001. So instead of https://<...>-depot-file.wolkenkit.io:3001/ you need to use https://<...>-depot-file.wolkenkit.io:3000/.
Once you have done this you should not get any more errors like ECONNREFUSED, since now the service can be found.
Fixing SSL issues
Third, since you are now connecting to the blob storage service using a different domain name, the SSL certificate doesn't match any more, since it was issued for local.wolkenkit.io. As a result, you will get SSL errors when trying to connect.
The simplest way to get around this is to disable any SSL checks (albeit this is also the most insecure way to handle this!). How to do this depends on the HTTP client module you are using. E.g., in request there is an option called strictSSL that you can set to false.
Of course, what you actually should do is either use a custom certificate that includes this domain name as well, or write a function that handles the certificate check and accepts the presented certificate specifically in this case.
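Putting the three fixes together, a rough sketch using the request module could look like this; the app-name placeholder, file name and response handling are assumptions for illustration, not taken from your code:

const fs = require('fs');
const request = require('request');

request.post({
  url: 'https://<name-of-your-app>-depot-file.wolkenkit.io:3000/',  // internal host name, internal port
  strictSSL: false,                                                 // skips the certificate check; insecure, local use only
  body: fs.createReadStream('some-file.txt')                        // placeholder payload
}, (err, response) => {
  if (err) { return console.error(err); }
  console.log(response.statusCode);
});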
If you do all of this, things should work :-)
PS: I am one of the authors of wolkenkit. Thanks a lot for bringing up this issue, and we will take care of this in the future, to make storing blobs easier.

changing gerrit's canonical web url

I have had an issue with setting up my gerrit server. The machine has Ubuntu 12.04 LTS Server 64-bit installed on it. I am setting up git and gerrit as a way to manage source code and code review.
I require internal and external access to it. I set up a DNS entry that works externally. However, during the initial setup, I left canonicalWebUrl at its default value, which usually takes the machine's hostname (in this case vmserver).
The issue I was running into is exactly as explained here https://stackoverflow.com/questions/14702198/the-requested-url-openid-was-not-found-on-this-server, where, after trying to sign in/register an account with OpenID, it said the URL was not found.
For some reason, it was changing the URL in the address bar from the DNS name I set up to the canonicalWebUrl.
I tried to change the canonical web URL in the gerrit.config file found in the etc directory of the gerrit site. After restarting the server, the git project files were still present as they should be, but the account that was administrator seemed to no longer be registered and none of the projects were visible through gerrit.
I was wondering if there is a special procedure for changing the canonical web URL in gerrit without disrupting access to the server?
Any help or information on canonical URLs would be much appreciated, as I cannot find much information on them.
Edit:
Looking deeper, I found some information that is way over my head regarding "submodules".
I do not understand if this is what I am looking for or not.
https://gerrit-review.googlesource.com/#/c/36190/
The canonical web url must be set, and it sounds like you have done that correctly.
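For reference, the setting lives in the [gerrit] section of $site_path/etc/gerrit.config, and gerrit has to be restarted after changing it; the URL below is only an example:

[gerrit]
        canonicalWebUrl = http://gerrit.example.com/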
I suspect the issue you are seeing is caused by changing the canonical web url - some OpenID providers (Google being the big one) will return a different user ID based on the URL of the request. This is a privacy thing and cannot be changed. So previous users will now show up as new users and won't be in their old groups (Administrators group in this case).
If you don't have many users, it might be easiest to migrate them by hand. You can modify the database to map the new user ID to the old user account.
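As a very rough sketch only (this assumes a 2.x-era ReviewDB; the table and column names should be verified against your schema, the IDs are placeholders, and the database should be backed up first), the remapping could look something like:

UPDATE account_external_ids
SET account_id = 1000001   -- the old account's id (placeholder)
WHERE external_id = 'https://www.google.com/accounts/o8/id?id=NEW-OPENID-SUFFIX';   -- placeholder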

Resources