Following is the docker-compose.yml file for my private Docker registry, hosted at registry.MY-DOMAIN.com:
version: "3.9"
services:
  registry:
    image: registry:latest
    environment:
      REGISTRY_HTTP_SECRET: b8f62d22-9a3f-4c73-bf5e-e0864b400bc8
      # S3 bucket as Docker storage
      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_ACCESSKEY: XXXXXXXXX
      REGISTRY_STORAGE_S3_SECRETKEY: XXXXXXXXX
      REGISTRY_STORAGE_S3_REGION: us-east-1
      REGISTRY_STORAGE_S3_BUCKET: docker-registry
      # Docker token-based authentication
      REGISTRY_AUTH: token
      REGISTRY_AUTH_TOKEN_REALM: "https://api.MY-DOMAIN.com/api/developer-auth/login"
      REGISTRY_AUTH_TOKEN_SERVICE: Authentication
      REGISTRY_AUTH_TOKEN_ISSUER: "Let's Encrypt"
      REGISTRY_AUTH_TOKEN_AUTOREDIRECT: "false"
      # Let's Encrypt certificate
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: "/certs/live/registry.MY-DOMAIN.com/fullchain.pem"
      REGISTRY_HTTP_TLS_CERTIFICATE: "/certs/live/registry.MY-DOMAIN.com/fullchain.pem"
      REGISTRY_HTTP_TLS_KEY: "/certs/live/registry.MY-DOMAIN.com/privkey.pem"
    ports:
      - 5000:5000
    restart: always
    volumes:
      - "/etc/letsencrypt:/certs"
When I try to log in, it returns the following error:
❯ docker login registry.MY-DOMAIN.com
Username: Elda86@yahoo.com
Password:
Error response from daemon: login attempt to https://api.MY-DOMAIN.com/v2/ failed with status: 400 Bad Request
I don't have a username field in my NodeJS API that talks to the MongoDB database. Can I pass the email instead of a username?
I want to implement Docker Registry token authentication with my custom API, written in NodeJS (ExpressJS), so that users can run "docker login registry.mydomain.com" and push images once authenticated. I want the same experience as DockerHub; I am building something similar to DockerHub for my product, which acts as a Docker store.
May I know how can I fix the issue?
It looks like your token auth service is not correctly implemented.
You should implement it according to the specification.
See
Token Authentication Specification
Token Authentication Implementation
Token Scope Documentation
OAuth2 Token Authentication
for more information.
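The flow those documents describe is: docker login first requests GET /v2/, receives a 401 with a WWW-Authenticate challenge pointing at the realm URL, then calls the realm via GET with HTTP Basic credentials plus "service" and "scope" query parameters. Below is a minimal sketch of parsing that request in Node (the function names are my own, not from the spec). Note that the Basic username is whatever was typed at the "Username:" prompt, so accepting an email address there is fine:

```javascript
// Parse the HTTP Basic Authorization header that `docker login` sends to
// the token realm. Docker passes the typed "Username:" value verbatim,
// so it can just as well be an email address looked up in MongoDB.
function parseBasicAuth(header) {
  if (!header || !header.startsWith('Basic ')) return null;
  const decoded = Buffer.from(header.slice(6), 'base64').toString('utf8');
  const sep = decoded.indexOf(':');
  if (sep < 0) return null;
  return { account: decoded.slice(0, sep), password: decoded.slice(sep + 1) };
}

// Parse one "scope" query parameter, e.g. "repository:myorg/myimage:push,pull".
// The middle (name) part may itself contain colons, so split from both ends.
function parseScope(scope) {
  const parts = scope.split(':');
  const actions = parts.pop().split(',');
  const type = parts.shift();
  return { type, name: parts.join(':'), actions };
}
```

One common cause of a 400 Bad Request like the one above is worth checking first: the registry calls the realm with GET, so a login route that only accepts POSTed JSON will reject it.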
I would also recommend looking at actual existing implementations, such as:
https://github.com/cesanta/docker_auth
https://github.com/adigunhammedolalekan/registry-auth
https://github.com/twosigma/docker-repo-auth-demo
Related
I am using Keycloak to implement the OAuth2 authorization code flow in a Kubernetes cluster governed by the Ambassador API gateway. I am using the Istio service mesh to add traceability and mTLS features to my cluster. One of its components is Jaeger, which requires all services to forward the x-request-id header in order to link spans into a specific trace.
When a request is sent, Istio's proxy attached to Ambassador generates the x-request-id and forwards the request to Keycloak for authorization. When the result is sent back to Ambassador, the header is dropped, and therefore the Istio proxy of Keycloak generates a new x-request-id. (A screenshot of the trace where the x-request-id is lost was attached here.)
Is there a way I can force Keycloak to forward the x-request-id header if it is passed to it?
Update
Here are the environment variables (ConfigMap) associated with Keycloak:
kind: ConfigMap
apiVersion: v1
metadata:
  name: keycloak-envars
data:
  KEYCLOAK_ADMIN: "admin"
  KC_PROXY: "edge"
  KC_DB: "postgres"
  KC_DB_USERNAME: "test"
  KC_DB_DATABASE: "keycloak"
  PROXY_ADDRESS_FORWARDING: "true"
You may need to restart your Keycloak docker container with the environment variable PROXY_ADDRESS_FORWARDING=true, e.g.:
docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak
I am learning Kubernetes on GCP. To deploy the cluster to Google Cloud, I used Skaffold. Here is the YAML file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  # local:
  #   push: false
  googleCloudBuild:
    projectId: ticketing-dev-368011
  artifacts:
    - image: gcr.io/{project-id}/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
In the Google Cloud CLI, when I run the command skaffold dev, an error pops up saying:
getting cloudbuild client: google: could not find default credentials.
Then I ran the command gcloud auth application-default login in my local terminal. I was prompted for consent in the browser, gave consent, and was redirected to a "successful authentication" page. But when I look at my terminal, there is an error message showing:
Error saving Application Default Credentials: Unable to write file [C:\Users\Admin\AppData\Roaming\gcloud\application_default_credentials.json]: [Errno 9] Bad file descriptor
And I found that no such file was created in the above directory. Can someone please tell me what I have done wrong?
I'd like to push images from my Docker client in WSL2 to my self-hosted Docker registry, which is running on Kubernetes in my local network.
After setting up the registry, I get a "no basic auth credentials" error when trying to push to or pull from the registry. On my client, I have configured Docker to use pass via the docker-credential-pass helper (v0.6.4) as the credential store.
Here is an example of a failed attempt trying to connect and push to the registry:
ubuntu@X:~$ cat .docker/config.json
{
"auths": {},
"credsStore": "pass"
}
ubuntu@X:~$ docker login kubernetesmaster:30000
Username: ubuntu
Password:
Login Succeeded
ubuntu@X:~$ cat .docker/config.json
{
"auths": {
"kubernetesmaster:30000": {}
},
"credsStore": "pass"
}
ubuntu@X:~$ docker-credential-pass list
{"kubernetesmaster:30000":"ubuntu"}
ubuntu@X:~$ docker push kubernetesmaster:30000/postgres:14.1-alpine3.15
The push refers to repository [kubernetesmaster:30000/postgres]
84c1bdf77e22: Preparing
176b9203da6e: Preparing
efb18f6577c9: Preparing
6c651825e7c4: Preparing
be6c168b4af5: Preparing
b737c2580132: Waiting
6cab14f8a434: Preparing
8d3ac3489996: Preparing
ERRO[2021-12-12T09:39:07.873814200+01:00] Upload failed: no basic auth credentials
ERRO[2021-12-12T09:39:07.873821500+01:00] Upload failed: no basic auth credentials
ERRO[2021-12-12T09:39:07.873796800+01:00] Upload failed: no basic auth credentials
INFO[2021-12-12T09:39:07.874817900+01:00] Attempting next endpoint for push after error: no basic auth credentials
6cab14f8a434: Waiting
no basic auth credentials
ubuntu@X:~$ docker pull kubernetesmaster:30000/postgres:14.1-alpine3.15
INFO[2021-12-12T09:39:15.293957000+01:00] Attempting next endpoint for pull after error: Head "https://kubernetesmaster:30000/v2/postgres/manifests/14.1-alpine3.15": no basic auth credentials
ERRO[2021-12-12T09:39:15.301905100+01:00] Handler for POST /v1.41/images/create returned error: Head "https://kubernetesmaster:30000/v2/postgres/manifests/14.1-alpine3.15": no basic auth credentials
Error response from daemon: Head "https://kubernetesmaster:30000/v2/postgres/manifests/14.1-alpine3.15": no basic auth credentials
ubuntu@X:~$ docker --version
Docker version 20.10.11, build dea9396
Note: The login works with docker login; pushing and pulling just seem to use some bad credentials. I'm a bit stuck here. Help would be much appreciated!
I followed the docker manuals for setting up a private registry, and acquired a Let's Encrypt certificate. This is my docker-compose.yml:
version: '2'
services:
  registry:
    restart: always
    image: registry:2.3.1
    ports:
      - 5000:5000
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/live/git.xxxx.com/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/live/git.xxxx.com/privkey.pem
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
    volumes:
      - ./data:/var/lib/registry
      - /etc/letsencrypt:/certs
      - ./auth:/auth
This is my curl command and result:
curl https://git.xxxx.com:5000/v2/
<htpasswd auth succeeds>
{}
Chrome and Firefox also show the connection as secure and can reach this URL without certificate errors.
But docker login keeps failing.
docker login https://git.xxxx.com:5000/v2/
Username: raarts
Password:
Email:
Error response from daemon: invalid registry endpoint https://git.xxxx.com:5000/v2/: Get https://git.xxxx.com:5000/v2/: x509: certificate signed by unknown authority. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry git.xxxx.com:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/git.xxxx.com:5000/ca.crt
Using docker 1.10.3
I fixed the problem, and it's embarrassing; I'd rather not talk about it if it weren't for the stupid and confusing error message I got.
On my own laptop, I had pointed git.xxxx.com at another IP, so docker could not actually reach the registry server; connections were refused.
But the error message really pointed me in the wrong direction and cost me several hours of my time.
I am using the ice command-line interface for IBM Container Services, and I am seeing a couple of different problems from a couple of different boxes I am testing with. Here is one example:
[root@cds-legacy-monitor ~]# ice --verbose login --org chrisr@ca.ibm.com --space dev --user chrisr@ca.ibm.com --registry registry-ice.ng.bluemix.net
#2015-11-26 01:38:26.092288 - Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org='chrisr@ca.ibm.com', psswd=None, reg_host='registry-ice.ng.bluemix.net', skip_docker=False, space='dev', subparser_name='login', user='chrisr@ca.ibm.com', verbose=True)
#2015-11-26 01:38:26.092417 - Executing: cf login -u chrisr@ca.ibm.com -o chrisr@ca.ibm.com -s dev -a https://api.ng.bluemix.net
API endpoint: https://api.ng.bluemix.net
Password>
Authenticating...
OK
Targeted org chrisr@ca.ibm.com
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.40.0)
User: chrisr@ca.ibm.com
Org: chrisr@ca.ibm.com
Space: dev
#2015-11-26 01:38:32.186204 - cf exit level: 0
#2015-11-26 01:38:32.186340 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.186640 - Bearer: <long string omitted>
#2015-11-26 01:38:32.186697 - cf login succeeded. Can access: https://api-ice.ng.bluemix.net/v3/containers
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v3/containers completed successfully
You can issue commands now to the container service
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
#2015-11-26 01:38:32.187317 - using bearer token
#2015-11-26 01:38:32.187350 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.187489 - Bearer: <long pw string omitted>
#2015-11-26 01:38:32.187517 - Org Guid: dae00d7c-1c3d-4bfd-a207-57a35a2fb42b
#2015-11-26 01:38:32.187551 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
FATA[0012] Error response from daemon: </html>
#2015-11-26 01:38:44.689721 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:38:44.689842 - Exit err level = 2
On the other box, it also fails, but the final error is slightly different.
#2015-11-26 01:44:48.916034 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
Error response from daemon: Unexpected status code [502] : <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
#2015-11-26 01:45:02.582753 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:45:02.582868 - Exit err level = 2
Any thoughts on what might be causing these issues?
The errors refer to the same problem: ice isn't finding any Docker environment locally.
This doesn't prevent working remotely on Bluemix, but without a local Docker environment ice cannot work with local containers.