Does anyone know how to specify username / password for pulling images from a private registry during a docker stack deploy? I want to be able to do this in one command:
docker stack deploy --compose-file docker-compose.yml --username <user> --password <pass> mystack
Without this, I first have to do a
docker login -u <user> -p <pass> <registry-url>
Can this be done in one command?
This worked for me; I had a private repo on Docker Hub:
docker login -u <<UserName>> -p <<Password>> registry.hub.docker.com/<<Repo_Name>> && docker stack deploy -c docker-swarm.yml mystack --with-registry-auth
The key is to pass the username and password along with the registry name, and then follow it up with the --with-registry-auth flag.
Here is the link which provides step-by-step information.
I don't think it can be done in a single command as such, though if you insist you could configure the private registry to allow clients from specific IPs without authentication.
Or you can simply chain the two commands on one line:
$ docker login -u <user> -p <pass> <registry-url> && docker stack deploy --compose-file docker-compose.yml mystack
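Note that on a multi-node swarm the login alone is not enough: the manager has to forward the credentials to the agents on the other nodes, which is what the --with-registry-auth flag mentioned above does. A sketch of the combined one-liner (user, pass and registry URL are placeholders):

```shell
# Log in, then deploy; --with-registry-auth sends the registry credentials
# to the swarm agents so that worker nodes can also pull the private image.
docker login -u <user> -p <pass> <registry-url> && \
  docker stack deploy --with-registry-auth --compose-file docker-compose.yml mystack
```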
The issue Use SSH pub key in order to allow access to a repository #531 addresses the ability to connect to a repository using SSH and keys. That is the best way to maintain security and privacy.
Until then, you can create a read-only user to perform the pull. If you automate the process, you can recreate the user or change the password; either way, pick whichever workaround solves it for you.
Related
I am trying to write a bash script to automate the setup of a multi-container environment.
Each container is built from images pulled from a private, protected repository.
The problem is that when the script calls docker-compose up for the first time, access to the repository is denied, as if it did not know that I had properly done docker login before running the script.
If I docker pull an image manually, that very image is no longer a problem when the script builds its container. But when the script has to docker pull on its own from a Dockerfile definition, it gets access denied.
Considering that I would like this script to be portable to other devs' environments, how can I get it to access the repository using the credentials each dev will already have set on their computer with docker login?
You can do something like:
#!/bin/bash
cat ~/pwd.txt | docker login <servername> -u <username> --password-stdin
docker pull <servername>/<image>
This reads the password from pwd.txt and logs in to the specified server.
In case you have multiple servers you want to log in you can try:
#!/bin/bash
serverlist="server1.com server2.com"
for server in $serverlist; do
    cat ~/${server}_pwd.txt | docker login $server -u <username> --password-stdin
done
docker pull <servername>/<image>
This reads the passwords from files like server1.com_pwd.txt.
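The loop above can be wrapped in a small reusable function. This is a hypothetical sketch: the names login_all, DRY_RUN and DOCKER_USER are mine, not docker's. Setting DRY_RUN=1 prints each command instead of executing it, which is handy for checking the script on a machine where docker is not installed.

```shell
#!/bin/sh
# Hypothetical helper: log in to every registry passed as an argument,
# reading the password for <server> from ~/<server>_pwd.txt and feeding
# it to docker via --password-stdin (keeps it out of the process list).
# DRY_RUN=1 prints each command instead of executing it.
login_all() {
    for server in "$@"; do
        pwfile="$HOME/${server}_pwd.txt"
        if [ ! -f "$pwfile" ]; then
            echo "missing password file: $pwfile" >&2
            continue
        fi
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "docker login $server -u $DOCKER_USER --password-stdin < $pwfile"
        else
            docker login "$server" -u "$DOCKER_USER" --password-stdin < "$pwfile"
        fi
    done
}
```

Usage would be something like `DOCKER_USER=alice login_all server1.com server2.com`.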
When running docker build on my Dockerfile, I pull the most up to date code from a private GitLab repo using a FROM statement. I am getting an access forbidden error because I have not given my credentials. How do you give your credentials so that I can pull from this private repo?
(Assuming you are talking about Gitlab Container Registry)
To be able to pull docker images from private registries, you need to first run this at the command line:
$ docker login -u $DOCKER_USER -p $DOCKER_PASS
If you are running this in a CI environment, you should set these as secret environment variables.
With Gitlab, I believe it is something along these lines:
$ docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.example.com
See the above linked page (search for "login") to see more examples and instructions.
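In a GitLab CI pipeline this login usually lives in the job script itself. A hypothetical .gitlab-ci.yml fragment (the job name build is mine; CI_BUILD_TOKEN, CI_REGISTRY and CI_REGISTRY_IMAGE are variables GitLab injects into each job, so no secrets need to be configured by hand):

```yaml
build:
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
```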
Is it possible to add an --insecure-registry=docker.my-registry to the current session only with environment variables or similar?
I'd like to do some testing without changing my current Docker setup (for example I might not be able to restart the service).
Or any similar idea?
Sounds like a bad idea from a security point of view. If that were possible, you (or any user) would be able to download images from an insecure registry that is not allowed by Docker's sysadmin.
There is no concept of per-session images in Docker; any downloaded image will be available to all users.
edit:
And to answer your question: No, it is not possible.
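For completeness: the supported way to allow an insecure registry is a daemon-level setting in /etc/docker/daemon.json, which requires restarting the daemon, exactly what the question wanted to avoid:

```json
{
  "insecure-registries": ["docker.my-registry"]
}
```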
I was able to solve this issue by using the docker:18.02.0-dind Docker image (Docker in Docker).
I start the DID container:
$ docker run -d --name did --privileged docker:18.02.0-dind --insecure-registry=my.insecure.reg
Then I go into the running container:
$ docker exec -it did /bin/sh
And inside the running container I login to my insecure registry:
/ # docker login -u me -p mypass my.insecure.reg
Login Succeeded
In the running container I can now do some tests against my insecure registry.
I'm trying to execute docker commands inside of a Docker container (don't ask why). To do so I start up a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the docker commands (pull, login, images, etc) but when I try to push to my remote (Gitlab) registry I get denied access. Yes, I did do a docker login and was able to successfully log in.
When looking at the Gitlab logs I see an error telling me no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote url and a string of random characters (my credentials in base64, I believe). I'm using an access token as my password because I have MFA enabled on my Gitlab server.
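For reference, that string in ~/.docker/config.json is indeed just base64 of user:password (or user:token), not an encrypted value, so it can be reproduced by hand to verify what docker login actually stored. A minimal sketch, assuming a hypothetical user me with access token mytoken:

```shell
# The "auth" field docker login writes is base64("username:password").
user=me          # hypothetical username
token=mytoken    # hypothetical access token
auth=$(printf '%s:%s' "$user" "$token" | base64)
echo "$auth"     # bWU6bXl0b2tlbg==
```

Comparing this value against the one in config.json confirms which credentials the docker CLI is really sending with the push.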
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.
I would like to pull a Docker image that was built inside an OpenShift Container Platform 3.9 cluster out of that cluster. To this end I try the following:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u $username -p $api_token my-cluster:443
image=$(oc get is/my-is -o jsonpath='{.status.tags[0].items[0].dockerImageReference}')
docker pull $image
Now docker login works, but docker pull produces the error message
lookup docker-registry.default.svc on 1.2.3.4: no such host
where 1.2.3.4 is a placeholder for my local nameserver according to /etc/resolv.conf and $image is of the form docker-registry.default.svc:5000/registry/my-is#sha256:my-id.
Am I doing something wrong, or could it be that the cluster administrator must first expose the registry (but should it not be exposed by default)? If I try oc get svc -n default as suggested here, I get this error message:
User "my-user" cannot list services in project "default"
So what steps are needed (preferably without intervention by the cluster's administrator) for me successfully pulling out that image? Would the situation change if the pull occurred in a container also executing inside the OpenShift cluster?
The lead provided in a comment was the right one (thanks!). The following script now works; no intervention by a cluster admin was required:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u $username -p $api_token my-cluster:443
docker pull my-cluster:443/my-project/my-is
docker images