SSH works from terminal but Jenkins can't connect - docker

- there are 2 containers in the docker compose setup:
- jenkins
- remote_host
- there are also 2 keys:
- id_rsa3
- id_rsa3.pub
- when remote_host is built, id_rsa3.pub is copied into /home/a12/.ssh/authorized_keys
- when I connect to the jenkins container via:
- docker exec -it jenkins bash
and try:
- ssh -i id_rsa3 a12@remote_host -> it connects without a password
- but when I configure the same SSH connection in Jenkins with this id_rsa3 key:
- Can't connect to server
The key was generated with: ssh-keygen -t rsa -m PEM -f id_rsa3
The Jenkins logs show -> Auth fail and Can't connect to server
I tried: ssh-keygen -t rsa -m PEM -f id_rsa3 instead of ssh-keygen -f id_rsa3.
I tried: restarting the Jenkins service.
I tried: connecting from the Jenkins container terminal to remote_host (it's working).
I tried: reinstalling SSH in Jenkins.

SOLVED
The solution was related to this topic: Ubuntu 22.04 SSH the RSA key isn't working since upgrading from 20.04.
How I solved it:
I checked the logs on the remote_host server and saw this error:
"key type ssh-rsa not in PubkeyAcceptedAlgorithms"
Then I generated the ssh key this way:
"ssh-keygen -t ecdsa -m PEM -f id_rsa3"
instead of the old way:
"ssh-keygen -t rsa -m PEM -f id_rsa3".
After that the SSH connection started working.
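A minimal sketch of the fix (the key path is a placeholder, and the server-side alternative assumes OpenSSH 8.5+ on remote_host):

```shell
# Newer OpenSSH servers (Ubuntu 22.04 ships >= 8.9) refuse the legacy
# ssh-rsa (SHA-1) signature that older clients such as Jenkins' SSH
# library still offer, so generate an ECDSA key instead:
ssh-keygen -t ecdsa -m PEM -f ./id_rsa3 -N ""

# Server-side alternative on remote_host: re-accept ssh-rsa signatures
# instead of regenerating keys (weakens security, use with care):
#   echo 'PubkeyAcceptedAlgorithms +ssh-rsa' | sudo tee -a /etc/ssh/sshd_config
#   sudo systemctl restart sshd
```

Regenerating the key as ECDSA is the cleaner option, since it avoids re-enabling SHA-1 signatures server-wide.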

Related

How do I specify my SSH key when connecting to a remote docker server through ssh?

Generally you can execute remotely using
docker -H ${DOCKER_HOST} ssh://ubuntu@${EC2_INSTANCE} run -it container
but how do I specify my SSH key? The equivalent of ssh -i:
ssh -i ${AWS_ACCESS_KEY} ubuntu@${EC2_INSTANCE}
Edit:
ssh-add -k ${SSH_KEY_LOCATION}
will work, but is there an equivalent to doing ssh -i?
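One way to get the effect of ssh -i here: docker's ssh:// transport shells out to the local ssh binary, so a Host entry in ~/.ssh/config is picked up automatically. A sketch, where the host alias and key path are placeholders:

```shell
# docker -H ssh://... runs plain ssh under the hood, so per-host
# keys can be pinned in ~/.ssh/config instead of using ssh-add:
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host my-ec2
    HostName ec2-1-2-3-4.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/aws_key.pem
EOF
# Then the key is used without an agent:
#   docker -H ssh://my-ec2 run -it container
```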

docker buildkit mount ssh when using remote agent forwarding

I use the --ssh docker buildkit feature and it works fine locally.
I want to build the Docker image on a remote server, and for that I use the -A flag to forward my local GitHub key, like:
ssh -i "server.pem" -A <user>@<server-ip>
Then in server terminal I run:
ssh -T git#github.com
And I get the "Hello user" message, which means the key forwarding works fine.
(In the server, $SSH_AUTH_SOCK is indeed set, and I can git clone)
Now, when building locally I use:
DOCKER_BUILDKIT=1 docker build --ssh default=~/.ssh/id_rsa -t myimage:latest .
Which works fine.
But on the server the private key does not exist at ~/.ssh/id_rsa. So how can I forward it to docker build?
Tried this in the server:
DOCKER_BUILDKIT=1 docker build --ssh default=$SSH_AUTH_SOCK -t myimage:latest .
But it does not work. The error is:
could not parse ssh: [default]: invalid empty ssh agent socket, make sure SSH_AUTH_SOCK is set
even though SSH_AUTH_SOCK is set.
Docker version: 19.03
I had a similar issue and it was fixed quite simply: I wrapped SSH_AUTH_SOCK in curly braces, ${SSH_AUTH_SOCK}:
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa
DOCKER_BUILDKIT=1 docker build -t myimage:latest --ssh default=${SSH_AUTH_SOCK} .
In the Dockerfile, I have the appropriate RUN instruction for the command that needs the key; note that on Docker 19.03 the --mount flag requires the experimental syntax directive as the first line of the Dockerfile:
# syntax=docker/dockerfile:experimental
RUN --mount=type=ssh \
mkdir vendor && composer install
You need to have ssh-agent running on your machine and the key added to it with ssh-add or use ssh -A -o AddKeysToAgent=true when logging in. SSH will not automatically forward the key specified with -i if you set -A afaik. After logging in you can run ssh-add -L to make sure your keys were forwarded and if you see records there then docker build --ssh default . should work fine now.
eval `ssh-agent`
ssh-add server.pem
ssh -A <user>@<server-ip>

I need to remotely connect to my docker swarm to create a service from my ci / cd pipeline using a shell script

I am using Docker for AWS; I have a cluster and I need to create a service from a GitHub Actions pipeline:
ssh -t -o StrictHostKeyChecking=no -i "${SSH_KEY_PATH}" "${DOCKER_REMOTE_HOST}" "\"${COMMAND}\""
I have tried a lot of things; now I get this error:
time="2019-11-08T15:07:50Z" level=debug msg="commandconn: starting ssh with [-l docker ****.****.compute.amazonaws.com -- docker system dial-stdio]"
time="2019-11-08T15:07:50Z" level=debug msg="commandconn (ssh):Host key verification failed.\r\n"
when doing something like this:
SSH_HOST=${DOCKER_REMOTE_HOST#"ssh://"}
SSH_HOST=${SSH_HOST#*#}
echo "Registering SSH keys..."
# Save private key to a file and register it with the agent.
mkdir -p "$HOME/.ssh"
printf '%s' "$INPUT_DOCKER_SSH_PRIVATE_KEY" >"$HOME/.ssh/docker"
chmod 600 "$HOME/.ssh/docker"
eval $(ssh-agent)
ssh-add "$HOME/.ssh/docker"
# eval $(ssh-agent)
# ssh-add "${SSH_KEY_PATH}"
# Add public key to known hosts.
printf '%s %s\n' "$SSH_HOST" "$INPUT_DOCKER_SSH_PUBLIC_KEY" >>/etc/ssh/ssh_known_hosts
printf '%s %s\n' "$SSH_HOST" "$INPUT_DOCKER_SSH_PUBLIC_KEY" >>~/.ssh/known_hosts
docker --log-level debug --host "$DOCKER_REMOTE_HOST" service create -d -q my-image
I am using openssh-client to connect over SSH to my Amazon EC2 instance and create a service in my docker swarm.
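For the "Host key verification failed" part specifically, the entry appended to known_hosts has to be the server's host key, not the user's public key. A hedged sketch with ssh-keyscan (the host below is a placeholder):

```shell
# Derive the bare host name from the ssh:// URL, as the script above does:
DOCKER_REMOTE_HOST="ssh://docker@example.compute.amazonaws.com"  # placeholder
SSH_HOST=${DOCKER_REMOTE_HOST#ssh://}
SSH_HOST=${SSH_HOST#*@}

# Trust the server's *host* keys (a network call; -H hashes the entries):
mkdir -p "$HOME/.ssh"
ssh-keyscan -H "$SSH_HOST" >> "$HOME/.ssh/known_hosts" 2>/dev/null || true
```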

BitBucket Pipeline cannot find container after ssh into DigitalOcean Droplet

Here is my code
- step:
    name: SSH to Digital Ocean and update docker image
    script:
      - head ~/.ssh/config
      - ssh -i ~/.ssh/config root@XXX.XXX.XXX.XXX
      - docker ps
      - docker rm -f gvcontainer
      - docker image rm -f myrepo/myimage:tag
      - docker pull myrepo/myimage:tag
      - docker run --name gvcontainer -p 12345:80 -d=true --restart=always myrepo/myimage:tag
    services:
      - docker
Here I can see that the Pipeline SSHes into my DO droplet successfully, but for some reason it could not find the container (my guess was that "docker ps" ran too quickly and should have waited a few seconds, but I didn't know how to delay it).
So I manually ssh into my droplet and checked, the gvcontainer is there.
Please enlighten me with any possible reasons.
Thanks
The commands listed after your SSH session are not being run on the remote system - they're being run in Pipelines. Since the Pipelines container doesn't have a gvcontainer to remove, it returns that error.
You have several options, one of which I outlined in answering your other question (pass the commands as arguments to SSH, as in ssh -i /path/to/key user@host "command1 && command2"). Another option would be to put a script on the droplet that does all the things you want, and have Pipelines execute it via SSH (ssh -i /path/to/key user@host "./do-all-the-things.sh").
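Applied to the step above, the first option could look like this (key path, container, and image names are placeholders):

```shell
# Pass the whole update as one command string so that every docker
# command executes on the droplet, not in the Pipelines container:
REMOTE_CMD='docker rm -f gvcontainer || true
docker pull myrepo/myimage:tag
docker run --name gvcontainer -p 12345:80 -d --restart=always myrepo/myimage:tag'

# In the pipeline script step:
#   ssh -i ~/.ssh/do_key root@XXX.XXX.XXX.XXX "$REMOTE_CMD"
```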

How to config docker registry mirror?

I have learned from that tutorial. I created a docker mirror with this command:
docker run -d -p 5555:5000 -e STORAGE_PATH=/mirror -e STANDALONE=false -e MIRROR_SOURCE=https://registry-1.docker.io -e MIRROR_SOURCE_INDEX=https://index.docker.io -v /Users/v11/Documents/docker-mirror:/mirror --restart=always --name mirror registry
And it succeeded. Then I started my docker daemon with this command:
docker --insecure-registry 192.168.59.103:5555 --registry-mirror=http://192.168.59.103:5555 -d
Then I used this command to pull an image:
docker pull hello-world
Then it threw an error in the log; the details are:
ERRO[0012] Unable to create endpoint for http://192.168.59.103:5555/:
invalid registry endpoint https://192.168.59.103:5555/v0/: unable to
ping registry endpoint https://192.168.59.103:5555/v0/ v2 ping attempt
failed with error: Get https://192.168.59.103:5555/v2/: EOF v1 ping
attempt failed with error: Get https://192.168.59.103:5555/v1/_ping:
EOF. If this private registry supports only HTTP or HTTPS with an
unknown CA certificate, please add --insecure-registry
192.168.59.103:5555 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the
flag; simply place the CA certificate at
/etc/docker/certs.d/192.168.59.103:5555/ca.crt
As you can see, it tells me to add '--insecure-registry 192.168.59.103:5555', but I had already added it when starting the docker daemon. Does anyone have an idea?
You're probably using boot2docker?
Could you try this:
$ boot2docker init
$ boot2docker up
$ boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry <YOUR INSECURE HOST>\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"
Taken from here
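On current Docker versions the same two settings go in the daemon config file rather than command-line flags. A sketch (written to the working directory here; the real location is /etc/docker/daemon.json):

```shell
# Equivalent of --insecure-registry and --registry-mirror as daemon
# config; install as /etc/docker/daemon.json and restart the daemon:
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.59.103:5555"],
  "registry-mirrors": ["http://192.168.59.103:5555"]
}
EOF
```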
