Gitlab Registry: login inconsistency - docker

I have an on-prem instance of GitLab CE 13.0.5 running, using the official GitLab Docker image.
I've enabled the integrated container registry.
Testing login and push to the registry with a personal access token works, both on the command line and within a CI script.
Using the CI job token in a CI script, the docker login succeeds, but the docker push fails.
Using a group access token (with the read and write registry privileges), the login fails, and therefore of course the push fails as well. Testing the group access token manually on the command line, the login step also fails.
I've checked the log file of the registry; I only see the access denied message, no further hint about what might be wrong.
I've taken care to tag the image with the correct hierarchy of group and project name.
Does anyone have an idea where I should continue searching?
Thanks and cheers
Wolfgang

Finally, I found it!
If there is a port number in the registry name in the login command, exactly the same name, including the port number, has to be used when tagging and pushing an image.
So if the GitLab configuration variable gitlab_rails['registry_port'] = "443" mentions the default port 443, the port appears in the variable $CI_REGISTRY and you have to use it in the tag and push commands.
Setting gitlab_rails['registry_port'] = "" to an empty string lets the system still use port 443, since it is the default port; however, the port is then removed from the registry name.
To be honest, I was a bit surprised.
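A minimal sketch of the working sequence in a CI script, assuming $CI_REGISTRY carries the port (e.g. registry.example.com:443); the mygroup/myproject/myimage path is illustrative:

```shell
# Log in; $CI_REGISTRY may include the port, e.g. registry.example.com:443
echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin "$CI_REGISTRY"

# Tag and push with EXACTLY the same registry name (including the port) used at login
docker build -t "$CI_REGISTRY/mygroup/myproject/myimage:latest" .
docker push "$CI_REGISTRY/mygroup/myproject/myimage:latest"
```

If the tag drops the port while the login used it (or vice versa), the push targets what docker considers a different registry name and is denied.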

Related

How to connect via http instead of default https on nifi docker container

I am currently running the latest versions of NiFi and PostgreSQL via docker compose.
As of the 1.14 version update of NiFi, when you access the UI in a browser it connects via HTTPS, and thus asks you for an ID and password every time you log in. It's too cumbersome to go to the nifi-app.log file and look for the credentials every time I access the UI. I know that you can change the setting that keeps HTTPS as the default method, but I am not sure how to do that in a Docker container. Can anyone help me with this?
You could use one of the environment variables, like AUTH, described in the documentation.
You can find the full explanation here.
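A hedged sketch of two approaches, assuming the apache/nifi image honors the environment variables documented for it (verify against the image README for your version):

```shell
# Option 1: serve the UI over plain HTTP instead of HTTPS
# (assumes the image's documented NIFI_WEB_HTTP_PORT variable)
docker run -d --name nifi \
  -p 8080:8080 \
  -e NIFI_WEB_HTTP_PORT=8080 \
  apache/nifi:latest

# Option 2: keep HTTPS but pin fixed single-user credentials, so you no longer
# have to dig the generated ones out of nifi-app.log
# (password must be at least 12 characters)
docker run -d --name nifi-secure \
  -p 8443:8443 \
  -e SINGLE_USER_CREDENTIALS_USERNAME=admin \
  -e SINGLE_USER_CREDENTIALS_PASSWORD=supersecret1234 \
  apache/nifi:latest
```

The same variables can go under `environment:` in a docker compose service definition.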

action: push: unauthorized to access repository docker harbor registry

I'm trying to push to a Harbor registry 2.2.
It works with SSL, and the storage is on a locally mounted NFS share.
The error I get is: unauthorized to access repository: test/flask, action: push: unauthorized to access repository: test/flask, action push.
I tried to push with the admin user to the project I created it with.
I tried to change the permissions of the NFS share, and it didn't work.
The registry runs on Compose, not on Kubernetes.
Had the same inexplicable issue, just started happening one day after several months with no issues. Required me to explicitly logout of Harbor registry and then login.
docker logout registry.example.com
docker login registry.example.com
After this sequence, the "unauthorized to access" went away, and pushes began working again.
I had a similar problem, and the solution was docker login registry.example.com.
I had the same issue. In my case, the problem was that the username and password used in the GitLab pipeline were protected variables, meaning they are only shared with pipelines on a protected branch, such as master. Since I was testing my changes in a pipeline on a feature branch, all I had to do was go to the variable settings and uncheck the protected flag for the Harbor user and password, so they would also be shared with pipelines running from feature branches.
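A minimal .gitlab-ci.yml sketch of that login-and-push step; the variable names HARBOR_USER/HARBOR_PASSWORD, the registry host, and the image path are illustrative:

```yaml
# Hypothetical job; HARBOR_USER and HARBOR_PASSWORD are CI/CD variables that
# must NOT be marked "protected" if this job runs on unprotected feature branches
push-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo "$HARBOR_PASSWORD" | docker login -u "$HARBOR_USER" --password-stdin registry.example.com
    - docker build -t registry.example.com/test/flask:latest .
    - docker push registry.example.com/test/flask:latest
```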

unable to docker push images in artifactory

I'm having problems pushing images to my Docker repo in Artifactory. Pulling images works as expected, but pushing them gives me an error. I can see the progress bar pushing the image, but somehow it times out with an "i/o timeout".
My setup consists of an Artifactory instance running in my k8s cluster, with an F5 in front of it for SSL offloading. I followed these instructions for using the repository path method.
On the HTTP settings I've tried using the nginx/HTTP reverse proxy as well as the embedded Tomcat. I either get the "i/o timeout" or a "503 Service Unavailable" (when using the embedded Tomcat).
I know that network-wise everything is OK, since I can push other items, i.e. files, npm etc. It's a bit frustrating that I'm able to pull but not push. Has anyone seen this before?
Run the docker push command again with the Artifactory UI open (Admin -> System Logs -> Request Log).
You should see a few requests coming in with '/api/docker' in the path. What return code and full path show up in the request log?
A docker registry push requires a docker login first. You may need to get credentials for the Docker registry so that you can push. Say you have saved the password in a file:
cat ~/password.txt | docker login --username=yourhubusername --password-stdin
And then try the push.

Login Issue with Weblogic in Docker

I created a Weblogic generic container for version 12.1.3 based on the official Docker images from Oracle at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles
Command: buildDockerImage.sh -g -s -v 12.1.3
This creates the image oracle/weblogic:12.1.3-generic
Using a modified version of sample dockerfile for 1213-domain, I built the Weblogic container.
Note: changed the base image to be generic, instead of developer
docker build -t 1213-domain --build-arg ADMIN_PASSWORD="admin123" -f myDockerfile .
Pushed the built image to Amazon ECR and ran the container using AWS ECS. I configured the port mapping as 0:7001, set the memory soft limit to 1024, and changed nothing else in the default ECS settings. I have an application load balancer in front, which receives traffic on port 443 and forwards it to the containers. In the browser I get a login page for WebLogic, but when I enter the username weblogic and the password admin123, I get the error:
Authentication Denied
Interestingly, when I go into the container and connect to WebLogic using WLST, it works fine.
[ec2-user@ip-10-99-103-141 ~]$ docker exec -it 458 bash
[oracle@4580238db23f mydomain]$ /u01/oracle/oracle_common/common/bin/wlst.sh
Initializing WebLogic Scripting Tool (WLST) ...
Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away.
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline> connect("weblogic","admin123","t3://localhost:7001")
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "mydomain".
Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.
wls:/mydomain/serverConfig>
Any hints on what can be going wrong?
Very interesting indeed. :) Are you sure there are no special characters when you enter the username and password? Try typing them in if you are copying and pasting.
Also, as a backup, since you are able to log in via WLST, you can try two options:
resetting the current password of weblogic, or adding a new username and password.
The links below will help:
http://middlewarebuzz.blogspot.com/2013/06/weblogic-password-reset.html
or
http://middlewaremagic.com/weblogic/?p=4962
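Since the WLST connection works, here is a hedged WLST (Jython) sketch of the second option, adding a backup admin user through the default authenticator. The MBean names assume a stock domain's default security realm; verify them against your own realm, and note the new password must satisfy the domain's password policy:

```python
# WLST sketch; run inside the container with wlst.sh.
# 'weblogic2' / 'welcome1#' are illustrative placeholder credentials.
connect('weblogic', 'admin123', 't3://localhost:7001')

# Look up the default authentication provider of the default realm
realm = cmo.getSecurityConfiguration().getDefaultRealm()
authenticator = realm.lookupAuthenticationProvider('DefaultAuthenticator')

# Create a new user and put it in the Administrators group
authenticator.createUser('weblogic2', 'welcome1#', 'backup admin user')
authenticator.addMemberToGroup('Administrators', 'weblogic2')

disconnect()
```

If the new user can log in to the console while weblogic cannot, the stored weblogic credentials (not the console itself) are the problem.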

Deploy to IBM Containers without cf/ice CLI

I currently have a workflow that goes like this: Bitbucket -> Wercker.
Wercker correctly builds my app, but when it comes to deploying I am lost. I am attempting to deploy to my IBM Containers registry on Bluemix (recently out of beta).
Running docker login registry.ng.bluemix.net with my IBM account credentials returns a 401: bad credentials on my local machine (boot2docker on OSX). It does the same on Wercker in my deploy step.
Here is my deploy step:
deploy:
  box:
    id: node
    tag: 0.12.6-slim
  steps:
    - internal/docker-push:
        username: $USERNAME
        password: $PASSWORD
        tag: main
        entrypoint: node bundle/main.js
        repository: <my namespace>/<my container name> (removed for this post)
        registry: registry.ng.bluemix.net
As you can see, I have the username and password passed in as environment variables, as per the Wercker docs (and I have tested that they are passed in correctly).
Basically: how do you push containers to an IBM registry WITHOUT using the ice/cf CLI? I have a feeling that I'm missing something obvious. I just can't find it.
You need to use either the Containers plugin for cf or the ICE tool to log in.
Documentation
Cloud Foundry plug-in:
cf ic login
ICE:
ice login
Can you create a custom script that logs in first? If the environment already has cf with the containers extension:
- script:
    name: Custom login for Bluemix Containers
    code: cf login -u <username> -p <password> -o <org> -s <space>
Excuse my wercker newb.
The problem is that the authentication with the registry uses a token rather than your userID and password. ice login and cf ic login take care of that but unfortunately a straight up docker login won't work.
Some scripts for initializing, building and cleaning up images are also available here: https://github.com/Osthanes/docker_builder. These are used in the DevOps Services delivery pipeline which is likely a similar flow to what you are building.
Turns out it's very possible.
Basically:
Install the cf CLI
cf login -a https://api.ng.bluemix.net
Extract the token from ~/.cf/config.json (the text after "bearer " in AccessToken, then "|", then OrganizationFields.Guid)
It depends what you want to do with it. I have a very detailed write-up here on Github.
You can use the token as the password, passing 'bearer' as the username.
#mods: Is this enough for me to link to another site? I really hate to duplicate stuff like this...
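A hedged sketch of that extraction using jq, with the field names described above; the registry host is the one from the question:

```shell
# Pull the OAuth token and org GUID out of the cf CLI config
TOKEN=$(jq -r '.AccessToken' ~/.cf/config.json | sed 's/^bearer //')
GUID=$(jq -r '.OrganizationFields.Guid' ~/.cf/config.json)

# The username is the literal string 'bearer'; the password is token|guid
docker login -u bearer -p "${TOKEN}|${GUID}" registry.ng.bluemix.net
```

Note that this token expires with the cf session, so the login has to be refreshed after a new cf login.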
You can now generate tokens to access the IBM Bluemix Container Registry using the container-registry plugin for the bx command.
These tokens can be read-only or read-write, and either non-expiring (unless revoked) or expiring after 24 hours.
The tokens can be used directly with docker login.
Read the docs here.
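A sketch of that flow, assuming the container-registry plugin's token-add command; the exact flags may differ between plugin versions, so check the built-in help:

```shell
# Create a non-expiring, read-write token (flags per the plugin's help output)
bx cr token-add --non-expiring --readwrite --description "ci push token"

# Use the returned token value with docker login;
# the username is the literal string 'token'
docker login -u token -p <token-value> registry.ng.bluemix.net
```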
