Docker login failing (at most 1 argument) - docker

I am failing to login to a remote docker registry using a command of the form:
docker login –u my-username –p my-password registry.myclient.com
The error I get is the following:
"docker login" requires at most 1 argument.
See 'docker login --help'.
Usage: docker login [OPTIONS] [SERVER]
How can I log in to the remote registry?

Those aren't real hyphens in front of your options; it's some other dash-like character (an en dash). Try this instead:
docker login -u my-username -p my-password registry.myclient.com
While it looks similar, -u and -p are not the same as –u and –p.
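A quick way to see the difference, assuming a Linux shell with hexdump available, is to dump the bytes:
printf '%s' '–u -u' | hexdump -C
This prints something like 'e2 80 93 75 20 2d 75': the en dash (–) is the three bytes e2 80 93, while a real hyphen (-) is the single byte 2d, so docker treats –u as a positional argument rather than an option.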

This one worked for me when a CI environment is in play:
echo ${MVN_PASSWORD} | docker login -u ${MVN_USER} --password-stdin ${MVN_URL}
These variables need to be set up via Settings > CI/CD > Variables (GitLab CI example).
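For context, a minimal .gitlab-ci.yml job using those variables could look something like the sketch below; the job name, image, and image tag are made up, and depending on your runner you may also need the usual docker-in-docker variables:
docker-push:
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo "${MVN_PASSWORD}" | docker login -u "${MVN_USER}" --password-stdin "${MVN_URL}"
    - docker build -t "${MVN_URL}/my-app:latest" .
    - docker push "${MVN_URL}/my-app:latest"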

Here is what worked for me:
I saved the password in a file called my_password.txt.
Then, I ran the following command:
cat ~/my_password.txt | docker login -u AWS --password-stdin https://{YOUR_AWS_ACCOUNT_ID}.dkr.ecr.{YOUR_AWS_REGION}.amazonaws.com
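If the AWS CLI (v2) is installed, you can also skip the password file and pipe the token straight from aws ecr get-login-password; the region and account ID below are the same placeholders as above:
aws ecr get-login-password --region {YOUR_AWS_REGION} | docker login --username AWS --password-stdin {YOUR_AWS_ACCOUNT_ID}.dkr.ecr.{YOUR_AWS_REGION}.amazonaws.com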

Related

Non-interactive docker login in GitHub Actions

I am trying to run this command inside a GitHub Actions step running on a Windows agent:
echo ${ACR_SERVICE_PRINCIPAL_PASSWORD} | docker login -u ${ACR_SERVICE_PRINCIPAL} --password-stdin spetestregistry.azurecr.io
But it returns this error:
Error: Cannot perform an interactive login from a non TTY device
Then I tried this command:
echo ${ACR_SERVICE_PRINCIPAL_PASSWORD} | winpty docker login -u ${ACR_SERVICE_PRINCIPAL} --password-stdin spetestregistry.azurecr.io
Which produced this error:
stdin is not a tty
Does anyone know how I can do this?
I finally figured out that I was using the GitHub secrets variable incorrectly. This works:
echo ${{ secrets.ACR_SERVICE_PRINCIPAL_PASSWORD }} | docker login --username ${{ secrets.ACR_SERVICE_PRINCIPAL }} --password-stdin spetestregistry.azurecr.io
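For reference, a full workflow step around that command might look like the sketch below (the step name is made up); forcing shell: bash matters on a Windows agent, where the default shell is PowerShell:
- name: Docker login to ACR
  shell: bash
  run: echo "${{ secrets.ACR_SERVICE_PRINCIPAL_PASSWORD }}" | docker login --username ${{ secrets.ACR_SERVICE_PRINCIPAL }} --password-stdin spetestregistry.azurecr.io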

Build and push image to DockerHub from CircleCI

I'm new to CI/CD and I'm trying to use CircleCI to build my app and push it to Docker Hub.
I researched a few things on the internet and tried them, without success.
I'm getting this error:
#!/bin/bash -eo pipefail
sudo docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
sudo docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
sudo docker push $DOCKER_LOGIN/$HUB_NAME
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Exited with code 1
The part of my config.yml where I am having trouble:
# run tests!
- run: mvn integration-test
- setup_remote_docker
- run:
    name: Build and deploy docker images
    command: |
      docker build -t $HUB_NAME:latest .
- deploy:
    name: Push application Docker image
    command: |
      sudo docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
      sudo docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
      sudo docker push $DOCKER_LOGIN/$HUB_NAME
It seems to me you should not be using sudo docker in your login, tag and push commands.
Just use docker login, docker tag and docker push without sudo and you should be good to go.
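Concretely, the deploy step from the config above would become something like this (the same commands, just without sudo):
- deploy:
    name: Push application Docker image
    command: |
      docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
      docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
      docker push $DOCKER_LOGIN/$HUB_NAME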
Explanation
The whole point of the setup_remote_docker step, which you are using in your configuration, is to set the environment variables that let the docker command, run as the current user, reach a remote Docker engine.
In your pipeline output, if you open the step labelled Setup a remote Docker engine, you'll likely see output like:
Allocating a remote Docker Engine
[ ... skip some output ...]
Remote Docker engine created. Using VM '...'
Created container accessible with:
DOCKER_CERT_PATH=/tmp/docker-certs(...)
DOCKER_HOST=tcp://XXX.XXX.XXX.XXX:YYYY
DOCKER_MACHINE_NAME=ZZZZ
DOCKER_TLS_VERIFY=1
NO_PROXY=127.0.0.1,localhost,circleci-internal-outer-build-agent,XXX.XXX.XXX.XXX:YYYY
[ ... some more output ...]
If you sudo into another user, you'll be missing those environment variables, and the docker command will attempt to connect to the standard docker unix socket on the local machine, which is why you see:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Check the Building Docker Images documentation to see that they don't use sudo anywhere.
You probably copied those sudo commands from your own environment where your local machine restricts access to the docker unix socket.

Connecting to Docker Hub in Travis CI

I tried to connect to my Docker Hub account via Travis CI:
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
but I got this error:
Error: Cannot perform an interactive login from a non TTY device
Try this instead:
docker login --username "$DOCKER_USERNAME" --password "$DOCKER_PASSWORD"
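As a sketch, assuming DOCKER_USERNAME and DOCKER_PASSWORD are defined as hidden variables in the Travis repository settings, the login could sit in .travis.yml like this:
services:
  - docker
before_deploy:
  - docker login --username "$DOCKER_USERNAME" --password "$DOCKER_PASSWORD"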

How to avoid password when using sudo in gitlab-ci.yml?

I have a private docker registry where I store my build images.
I copied my registry certificates and updated my /etc/hosts file so that I can authenticate against the registry from my local machine.
I can log in to the registry with 'sudo docker login -u xxx -p xxx registry-name:port'.
But when I try the same docker login command from a gitlab-ci stage, it fails with this error:
sudo: no tty present and no askpass program specified.
This is how I'm trying to achieve it:
ssh manohara@${DEPLOY_SERVER_IP} "sudo docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}"
I also tried adding 'gitlab-runner ALL=(ALL) NOPASSWD: ALL' at the bottom of the /etc/sudoers file, but no luck.
Where am I going wrong?
According to this source, you can use:
ssh -t remotehost "sudo ./binary"
The -t allocates a pseudo-tty.
Or in your example:
ssh -t manohara@${DEPLOY_SERVER_IP} "sudo docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}"
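Wired into a .gitlab-ci.yml deploy job, that might look roughly like the sketch below (the job and stage names are assumptions); note that when the job itself has no terminal attached, you may need -tt to force pseudo-tty allocation:
deploy:
  stage: deploy
  script:
    - ssh -tt manohara@${DEPLOY_SERVER_IP} "sudo docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}"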

Using the gcloud docker client on CoreOS

Can I use the 'gcloud docker' client on CoreOS? I want to pull a container, but when I do
gcloud docker pull
I get
WARNING: 'docker' was not discovered on the path. Credentials have been stored, but are not guaranteed to work with the 1.11 Docker client if an external credential store is configured.
Can I install a full-fledged gcloud client? And where is gcloud anyway? I can run it, but 'which gcloud' comes back empty-handed.
You have to use this command:
$ docker login -e 1234@5678.com -u _token -p "$(gcloud auth print-access-token)" https://gcr.io
You can also change the https://gcr.io to e.g.: https://us.gcr.io if your image is stored somewhere else.
If this does not work, try the JSON keyfile method; it is more reliable.
docker login -e 1234@5678.com -u _json_key -p "$(cat keyfile.json)" https://gcr.io
It is also documented here:
https://cloud.google.com/container-registry/docs/advanced-authentication
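Note that newer Docker clients dropped the -e/--email flag, so if that form is rejected, the keyfile method still works with --password-stdin, roughly like this:
cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io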
