My GitLab CI script fails and exits with this error:
Permission denied, please try again.
Permission denied, please try again.
$SSH_USER@$IPADDRESS: Permission denied (publickey,password).
This is my CI script:
image: alpine

before_script:
  - echo "Before script"
  - apk add --no-cache rsync openssh openssh-client
  - mkdir -p ~/.ssh
  - eval $(ssh-agent -s)
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - echo "${SSH_PRIVATE_KEY}" | tr -d '\r' | ssh-add - > /dev/null
  - ssh -o 'StrictHostKeyChecking no' $SSH_USER@$IPADDRESS
  - cd /var/www/preview.hidden.nl/test

building:
  stage: build
  script:
    - git reset --hard
    - git pull origin develop
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - php artisan migrate --seed
    - php artisan cache:clear
    - php artisan config:clear
    - php artisan storage:link
    - sudo chown -R deployer:www-data /var/www/preview.hidden.nl/test/
    - find /var/www/preview.hidden.nl/test -type f -exec chmod 664 {} \;
    - find /var/www/preview.hidden.nl/test -type d -exec chmod 775 {} \;
    - chgrp -R www-data storage bootstrap/cache
    - chmod -R ug+rwx storage bootstrap/cache
My setup is as follows.
Gitlab server > Gitlab.com
Gitlab runner > Hetzner server
Laravel Project > Same Hetzner server
I generated a new SSH key pair (without a passphrase) for this runner, named gitlab and gitlab.pub. I added the content of gitlab.pub to the $KNOWN_HOST variable, and the content of gitlab to the $SSH_PRIVATE_KEY variable.
The problem is, I don't really know what's going on. What I think is happening is the following: the GitLab CI job is its own separate container, and a container can't just ssh to a remote server, so the private key of my Hetzner server needs to be known to the docker container (task: - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts).
Because the key is then known, the docker container should not ask for a password. Yet it prompts for a password and also reports that the password is incorrect. I also have my own key pair besides the GitLab key pair; I'm not sure if that causes the issue, but removing my own key would block access to my server, so I did not test that.
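For reference, the roles of the files involved can be sketched like this (the /tmp paths and key names are only for the demo; assumes ssh-keygen is installed):

```shell
# Generate a throwaway key pair (no passphrase), like the gitlab/gitlab.pub pair
ssh-keygen -t ed25519 -N '' -f /tmp/gitlab_demo_key -q

# 1) The PRIVATE key is what goes into the CI variable (e.g. $SSH_PRIVATE_KEY)
#    and is loaded into the agent with ssh-add inside the job.
head -n1 /tmp/gitlab_demo_key          # -----BEGIN OPENSSH PRIVATE KEY-----

# 2) The PUBLIC key must be appended to ~/.ssh/authorized_keys ON THE SERVER;
#    this is what enables passwordless login, not known_hosts.
cat /tmp/gitlab_demo_key.pub

# 3) known_hosts stores the SERVER's host key (the output of ssh-keyscan), so
#    the client can verify the server -- it plays no role in authenticating you.
```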
Could someone help me with this? I've been working on it for two weeks and I can't even deploy a simple hello-world.txt file yet! :-)
I have a problem with an SSH connection while deploying in GitLab CI.
Error:
ssh: connect to host 207.154.196.22 port 22: Connection refused
Error: exit status 255
deployment_stage:
  stage: deployment_stage
  only:
    - main
  script:
    - export SSH_PRIVATE_KEY=~/.ssh/id_rsa
    - export SSH_PUBLIC_KEY=~/.ssh/id_rsa.pub
    - export DROPLET_IPV4=$(doctl compute droplet get board-api-backend --template={{.PublicIPv4}} --access-token $DIGITALOCEAN_API_TOKEN)
    - export DROPLET_NAME=board-api-backend
    - export CLONE_LINK=https://oauth2:$GITLAB_ACCESS_TOKEN@gitlab.com/$CI_PROJECT_PATH
    - export SSH_GIT_CLONE_COMMAND="git clone $CLONE_LINK ~/app"
    - export SSH_VAR_INIT_COMMAND="export DIGITALOCEAN_API_TOKEN=$(echo $DIGITALOCEAN_API_TOKEN)"
    - export SSH_COMMAND="./deployment/scripts/production/do-deploy.sh"
    - echo "Deployment stage"
    - mkdir ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$PRODUCTION_SSH_PRIVATE_KEY" > $SSH_PRIVATE_KEY
    - cat $SSH_PRIVATE_KEY
    - chmod 400 $SSH_PRIVATE_KEY
    - ssh-keyscan -H $DROPLET_IPV4 >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - cat ~/.ssh/known_hosts
    - eval `ssh-agent -s`
    - ssh-add $SSH_PRIVATE_KEY
    - echo $SSH_VAR_INIT_COMMAND
    - echo $SSH_GIT_CLONE_COMMAND
    - echo $SSH_COMMAND
    - doctl compute ssh $DROPLET_NAME --ssh-command "$SSH_VAR_INIT_COMMAND && rm -R ~/app || true && $SSH_GIT_CLONE_COMMAND" --ssh-key-path $SSH_PRIVATE_KEY --verbose --access-token $DIGITALOCEAN_API_TOKEN
    - doctl compute ssh $DROPLET_NAME --ssh-command "cd ~/app && $SSH_COMMAND" --access-token $DIGITALOCEAN_API_TOKEN
The command
- cat $SSH_PRIVATE_KEY
displays the private key in the correct format:
---header---
key without spaces
---footer---
How can I troubleshoot this?
It was working a few days ago (maybe about two weeks), but today something went wrong, even though nothing about the SSH deployment was changed.
What could it be?
I have created an image where I run some tasks.
I want to be able to push some files to a remote server that runs Windows Server 2022.
The gitlab-runner runs on an Ubuntu machine.
I managed to do that using shell executors. But now I want to do the same inside a docker container.
Using the following guide
https://docs.gitlab.com/ee/ci/ssh_keys/#ssh-keys-when-using-the-docker-executor
I don't understand which user the keys should be created as.
In the shell executor case I used the gitlab-runner user, in which I created a pair of keys. I added the public key to the server that I want to push files to, and it worked.
This time, I added the same private key to the GitLab CI/CD variable as the guide suggests.
Then inside the job I added the following:
before_script:
  - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
script:
  - scp -P <port> myfile.txt username@ip:remote_path
But the job fails with these errors:
Host key verification failed.
lost connection
Should I use the same private key from the gitlab-runner user?
PS: The echo "$SSH_PRIVATE_KEY" works; I can see the key I added in the GitLab CI/CD variable.
I used something similar in my CI process and it works like a charm. I recall I used a base64-encoded runner key because of some formatting errors:
- echo $GITLAB_RUNNER_SSH_KEY | base64 -d > $HOME/.ssh/runner_key
- chmod -R 600 ~/.ssh
- eval $(ssh-agent -s)
- ssh-add $HOME/.ssh/runner_key
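If you hit the same formatting errors, the variable can be prepared like this (a sketch; the /tmp paths stand in for a real key file, and base64 -w0 is GNU coreutils):

```shell
# One time, on your machine: encode the key file into a single line and store
# the output in the GITLAB_RUNNER_SSH_KEY CI/CD variable.
printf 'demo-key-material\n' > /tmp/runner_key          # stand-in for a real key
base64 -w0 < /tmp/runner_key > /tmp/runner_key.b64

# In the job: decode it back to a file the agent can load.
base64 -d < /tmp/runner_key.b64 > /tmp/runner_key.decoded
cmp /tmp/runner_key /tmp/runner_key.decoded && echo "round-trip OK"
```

Base64 avoids any trouble with newlines or carriage returns in the CI variable value.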
Just recently I stumbled on an SSH issue and cannot figure out what is missing. We use GitLab CI to build and deploy the project to one of our remote servers. As part of an upgrade plan, we need to replace a degrading Debian 6 server with a new RHEL 7 server. I cannot get passwordless SSH to work from the GitLab Runner to the remote machine.
I created a reproducible example in a Dockerfile; the IP of the remote server and the user are replaced with non-sensitive data.
FROM centos:7
RUN yum install -y epel-release
RUN yum update -y
RUN yum install -y openssh-clients
RUN useradd -m joe
RUN mkdir -p /home/joe/.ssh
COPY id_rsa_shared /home/joe/.ssh/id_rsa
RUN echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
RUN ssh-keyscan 10.x.x.x >> /home/joe/.ssh/known_hosts
RUN chown -R joe:joe /home/joe/.ssh
USER joe
CMD ["/bin/bash"]
The file id_rsa_shared was created on the local machine with the following commands:
ssh-keygen -t rsa -b 2048 -f ./id_rsa_shared
ssh-copy-id -i ./id_rsa_shared joe@10.x.x.x
This works locally. A simple ssh joe@10.x.x.x uname -a in the docker container outputs the following:
Linux newweb01p.company.local 3.10.0-1160.25.1.el7.x86_64 #1 SMP Tue Apr 13 18:55:45 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
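For context, ssh-copy-id essentially appends the public key to the remote user's ~/.ssh/authorized_keys and fixes permissions; here is what it does, simulated locally (the /tmp paths are only for the demo):

```shell
# Generate a demo key pair like id_rsa_shared
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/id_rsa_shared_demo -q

# What ssh-copy-id does on the remote side, done by hand:
mkdir -p /tmp/fake_remote_home/.ssh
cat /tmp/id_rsa_shared_demo.pub >> /tmp/fake_remote_home/.ssh/authorized_keys
chmod 700 /tmp/fake_remote_home/.ssh
chmod 600 /tmp/fake_remote_home/.ssh/authorized_keys
```

This is the server-side half that makes the passwordless login work; the Dockerfile above only sets up the client side.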
However, if I commit this to a branch as a GitLab CI script, as shown:
image: centos:7

stages:
  - deploy

dev-www:
  stage: deploy
  tags:
    - docker
  environment:
    name: dev-www
    url: http://dev-www.company.local
  variables:
    DEV_HOST: 10.x.x.x
    APP_ENV: dev
    DEV_USER: joe
  script:
    - whoami
    - yum install -y epel-release
    - yum update -y
    - yum install -y openssh-clients
    - useradd -m joe
    - mkdir -p /home/joe/.ssh
    - cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
    - echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
    - echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
    - chown -R joe:joe /home/joe/.ssh/
    - chmod 600 /home/joe/.ssh/*
    - chmod 700 /home/joe/.ssh
    - ls -Fsal /home/joe/.ssh
    - su - joe
    - ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
  when: manual
The pipeline will fail authentication as shown:
Running with gitlab-runner 13.12.0 (7a6612da)
on docker.hqgitrunner01d.company.local K47w1s77
Preparing the "docker" executor
Using Docker executor with image centos:7 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image centos:7 ...
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxxx ...
Preparing environment
Running on runner-k47w1s77-project-93-concurrent-0 via hqgitrunner01d.company.local...
Getting source from Git repository
Fetching changes...
Reinitialized existing Git repository in /builds/webversion3/API/.git/
Checking out 6a7c193b as tdr/psr4-composer...
Updating/initializing submodules recursively...
Executing "step_script" stage of the job script
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxx ...
$ whoami
root
$ useradd -m joe
$ mkdir -p /home/joe/.ssh
$ cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
$ echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
$ echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
$ chown -R joe:joe /home/joe/.ssh/*
$ chmod 600 /home/joe/.ssh/*
$ chmod 700 /home/joe/.ssh
$ ls -Fsal /home/joe/.ssh
total 16
0 drwx------ 2 root root 53 Apr 1 15:19 ./
0 drwx------ 3 joe joe 74 Apr 1 15:19 ../
4 -rw------- 1 joe joe 37 Apr 1 15:19 config
4 -rw------- 1 joe joe 3414 Apr 1 15:19 id_rsa
8 -rw------- 1 joe joe 6241 Apr 1 15:19 known_hosts
$ su - joe
$ ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
Warning: Permanently added '10.x.x.x' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Cleaning up file based variables
ERROR: Job failed: exit code 1
Maybe there's a step I missed, because I get a 'Permission denied, please try again' message. How do I get the Docker executor to use passwordless SSH to a remote server?
The solution was really simple and straightforward; the important part is understanding SSH.
The solution works. Here is a snippet from the .gitlab-ci.yml for those who have the same problem as I did.
...
- mkdir -p ~/.ssh
- touch ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- chmod 600 ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- echo "$OPENSSH_KEY" >> ~/.ssh/id_rsa
- echo "Host *\n\tStrictHostKeyChecking no" >> ~/.ssh/config
- ssh-keyscan ${DEV_HOST} >> ~/.ssh/known_hosts
Just inline all your ssh options. Use -i to specify your key file. You can also use -o UserKnownHostsFile to specify your known hosts file; you don't need to copy all of that into an ssh configuration.
This should be enough to ssh successfully:
# ...
- echo "$DEV_USER_OPENSSH_KEY" > "${CI_PROJECT_DIR}/id_rsa.key"
- chmod 600 "${CI_PROJECT_DIR}/id_rsa.key"
- |
ssh -i "${CI_PROJECT_DIR}/id_rsa.key" \
-o IdentitiesOnly=yes \
-o UserKnownHostsFile="${CI_PROJECT_DIR}/gitlab/known_hosts" \
-o StrictHostKeyChecking=no \
user#host ...
Also, since you're disabling StrictHostKeyChecking, you can just use /dev/null for your UserKnownHostsFile. If you want host key checking, omit the StrictHostKeyChecking=no option.
When I try to build the Dockerfile from AWS CodeBuild, I get this error:
"Error response from daemon: failed to parse Dockerfile: Syntax error - can't find = in "RUN". Must be of the form: name=value"
In CodeBuild I'm using Ubuntu as the environment with the aws/codebuild/standard 5.0 runtime version.
Here is my Dockerfile and buildspec.yml:
FROM alipine:3.12
ARG SONARQUBE_VERSION=8.9.0.43852
COPY ./sonarqube-8.9.0.43852.zip /opt/'sonarqube-${SONARQUBE_VERSION}.zip'
RUN chmod 777 /opt/sonarqube-8.9.0.43852.zip
---------------
--------------
COPY --chown=sonarqube:sonarqube run.sh sonar.sh ${SONARQUBE_HOME}/bin/
USER sonarqube
RUN chmod 777 ${SONARQUBE_HOME}/bin/run.sh; \
RUN chmod 777 ${SONARQUBE_HOME}/bin/sonar.sh
VOLUME $SONARQUBE_HOME
WORKDIR ${SONARQUBE_HOME}
EXPOSE 9000
ENTRYPOINT ["/opt/sonarqube/bin/run.sh"]
CMD ["/opt/sonarqube/bin/sonar.sh"]
And the below is my buildspec.yml
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - echo Logging in to Amazon ECR.....
      - aws --version
      - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin <MyAccountId>.dkr.ecr.eu-west-1.amazonaws.com/sonarqube
  build:
    commands:
      - aws s3 cp s3://ravitej/SonarQube/sonarqube-8.9.0.43852.zip .
      - chmod +x sonarqube-8.9.0.43852.zip
      - echo $(pwd)
      - echo Build started on `date`
      - echo Building the Docker image...
      - echo $(docker version)
      - REPOSITORY_URI=<MyAccountID>.dkr.ecr.eu-west-1.amazonaws.com/sonarqube
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"sonarqube","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - echo imagedefinitions.json
artifacts:
  files: imagedefinitions.json
I've been banging my head against the table for the past two days.
Please help.
This is the error from CodeBuild (screenshot not included).
I just had a similar problem which led me here, and the issue was a trailing \ in my Dockerfile. In your case I guess it is this line:
RUN chmod 777 ${SONARQUBE_HOME}/bin/run.sh; \
which causes problems for the next line.
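One possible fix is to chain both chmod calls into a single RUN instruction, so that no instruction ends with a stray backslash before the next one (a sketch based on the Dockerfile quoted above):

```dockerfile
RUN chmod 777 ${SONARQUBE_HOME}/bin/run.sh \
 && chmod 777 ${SONARQUBE_HOME}/bin/sonar.sh
```

This also saves a layer compared to two separate RUN instructions.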
This also happens when you try to add a comment with a hash at the end of a line, e.g. like # OK here:
$ cat Dockerfile
[..]
ENV RSTUDIO_VER=1.3.1073 # OK
[..]
The docker build parser won't like it:
$ docker build -t mirekphd/ml-cpu-r40-rs-cust .
Sending build context to Docker daemon 142.2MB
Error response from daemon: failed to parse Dockerfile: Syntax error - can't find = in "#". Must be of the form: name=value
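Moving the comment onto its own line keeps the parser happy, since Dockerfile comments must start at the beginning of a line:

```dockerfile
# RStudio version -- OK here
ENV RSTUDIO_VER=1.3.1073
```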
I have the weirdest error in GitHub Actions that I have been trying to resolve for multiple hours now and I am all out of ideas.
I currently use a very simple GitHub Action. The end goal is to run specific bash commands via ssh in other workflows.
Dockerfile:
FROM ubuntu:latest
COPY entrypoint.sh /entrypoint.sh
RUN apt update && apt install openssh-client -y
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
mkdir -p ~/.ssh/
echo "$1" > ~/.ssh/private.key
chmod 600 ~/.ssh/private.key
echo "$2" > ~/.ssh/known_hosts
echo "ssh-keygen"
ssh-keygen -y -e -f ~/.ssh/private.key
echo "ssh-keyscan"
ssh-keyscan <IP>
ssh -i ~/.ssh/private.key -tt <USER>@<IP> "echo test > testfile1"
echo "known hosts"
cat ~/.ssh/known_hosts
wc -m ~/.ssh/known_hosts
action.yml
name: "SSH Runner"
description: "Runs bash commands in remote server via SSH"
inputs:
ssh_key:
description: 'SSH Key'
known_hosts:
description: 'Known Hosts'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.ssh_key }}
- ${{ inputs.known_hosts }}
current workflow file in the same repo:
on: [push]

jobs:
  try-ssh-commands:
    runs-on: ubuntu-latest
    name: SSH MY_TEST
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: test_ssh
        uses: ./
        with:
          ssh_key: ${{secrets.SSH_PRIVATE_KEY}}
          known_hosts: ${{secrets.SSH_KNOWN_HOSTS}}
In the GitHub Actions online console I get the following output:
ssh-keygen
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "2048-bit RSA, converted by root#844d5e361d21 from OpenSSH"
AAAAB3NzaC1yc2EAAAADAQABAAABAQDaj/9Guq4M9V/jEdMWFrnUOzArj2AhneV3I97R6y
<...>
9f/7rCMTJwae65z5fTvfecjIaUEzpE3aen7fR5Umk4MS925/1amm0GKKSa2OOEQnWg2Enp
Od9V75pph54v0+cYfJcbab
---- END SSH2 PUBLIC KEY ----
ssh-keyscan
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
<IP> ssh-ed25519 AAAAC3NzaC1lZD<...>9r5SNohBUitk
<IP> ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRNWiDWO65SKQnYZafcnkVhWKyxxi5r+/uUS2zgYdXvuZ9UIREw5sumR95kbNY1V90<...>
qWXryZYaMqMiWlTi6ffIC5ZoPcgGHjwJRXVmz+jdOmdx8eg2llYatRQbH7vGDYr4zSztXGM77G4o4pJsaMA/
***
Host key verification failed.
known hosts
***
175 /github/home/.ssh/known_hosts
As far as I understand, *** is used to mask GitHub secrets, which in my case is the key of the known host. Getting *** as the result of both ssh-keyscan and cat known_hosts should mean that the known_hosts file is correct and a connection should be possible, because in both cases the console output is successfully censored by GitHub. And since the file contains 175 characters, I can assume it contains the actual key. But as one can see, the script fails with Host key verification failed.
When I do the same steps manually in another workflow with the exact same input data, I succeed. The same goes for ssh from my local computer with the same private_key and known_hosts files.
This, for example, works with the exact same secrets:
- name: Create SSH key
  run: |
    mkdir -p ~/.ssh/
    echo "$SSH_PRIVATE_KEY" > ../private.key
    sudo chmod 600 ../private.key
    echo "$SSH_KNOWN_HOSTS_PROD" > ~/.ssh/known_hosts
  shell: bash
  env:
    SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
    SSH_KNOWN_HOSTS: ${{secrets.SSH_KNOWN_HOSTS}}
- name: SSH into DO and run
  run: >
    ssh -i ../private.key -tt ${SSH_USERNAME}@${SERVER_IP}
    "
    < commands >
    "
Using the -o "StrictHostKeyChecking no" flag on the ssh command in entrypoint.sh also works, but I would like to avoid that for security reasons.
I have been trying to solve this issue for hours, but I seem to miss a critical detail. Has someone encountered a similar issue or knows what I am doing wrong?
So after hours of searching I found out what the issue was.
When force-accepting all host keys with the -o "StrictHostKeyChecking no" option, no ~/.ssh/known_hosts file is created, meaning that the openssh-client I installed in the container does not seem to read from that file.
So telling the ssh command where to look for the file solved the issue:
ssh -i ~/.ssh/private.key -o UserKnownHostsFile=/github/home/.ssh/known_hosts -tt <USER>@<IP> "echo test > testfile1"
Apparently one can also change the location of the known_hosts file within the ssh_config permanently (see here).
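A sketch of such an ssh_config entry (the host alias, user, and IP are hypothetical; the known_hosts path matches the one used above):

```
# ~/.ssh/config
Host deploy-target
    HostName 203.0.113.10
    User deploy
    IdentityFile ~/.ssh/private.key
    UserKnownHostsFile /github/home/.ssh/known_hosts
```

With this in place, a plain `ssh deploy-target` picks up the key and known_hosts locations automatically.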
Hope this helps someone at some point.
First, add a chmod 600 ~/.ssh/known_hosts as well in your entrypoint.
For testing, I would check whether options to ssh-keyscan make any difference:
ssh-keyscan -H <IP>
# or
ssh-keyscan -t rsa -H <IP>
Check that your key was generated using the default RSA public-key cryptosystem.
The HostKeyAlgorithms used might be set differently, in which case:
ssh-keyscan -H -t ecdsa-sha2-nistp256 <IP>