Using SSH in gitlab job to restart docker-container

I am trying to add some continuous deployment for a TypeScript API built with Node and MongoDB. I would like to do so via the GitLab instance that I already have.
Runner config (/etc/gitlab-runner/config.toml):
[[runners]]
name = "runner"
url = "https://git.[DOMAIN].[EXT]"
token = "[ID]"
executor = "docker"
[runners.docker]
tls_verify = false
image = "mhart/alpine-node:6.5"
privileged = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
[runners.cache]
So my deploy job looks as follows:
Deployment_preprod:
stage: Deploy
before_script:
# https://docs.gitlab.com/ee/ci/ssh_keys/
- 'which ssh-agent || ( apk add --no-cache openssh-client )'
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p ~/.ssh
- echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
- chmod 700 ~/.ssh
script:
- scp -r dist user@[IP]:/home/[user]/preprod-back
- ssh -tt user@[IP] cd /home/[user]/preprod-back && yarn run doc && docker-compose restart
environment:
name: preprod
url: https://preprod.api.[DOMAIN].[EXT]
only:
- develop
Question :
This job fails on /bin/sh: eval: line 91: docker-compose: not found, which surprises me, since running docker-compose [whatever] works fine server-side when I log in to the server via ssh.

The unquoted && operators are tripping you up: the local shell splits the command at each &&, so only the cd is sent over SSH and the rest runs in the CI container. You should quote the entire remote command:
script:
- scp -r dist user@[IP]:/home/[user]/preprod-back
- ssh -tt user@[IP] "cd /home/[user]/preprod-back && yarn run doc && docker-compose restart"
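To see why the quoting matters without needing a server, here is a standalone sketch that uses a `remote` shell function as a stand-in for `ssh host` (the function just prints whatever command line it receives, the way `ssh` would forward it):

```shell
# Stand-in for "ssh host": prints the command line it would send to the server.
remote() { printf 'remote gets: %s\n' "$*"; }

# Unquoted: the LOCAL shell splits at each &&, so only "cd /tmp" reaches "ssh".
remote cd /tmp && echo "this part ran locally"
# → remote gets: cd /tmp
# → this part ran locally

# Quoted: the whole command list is passed through as one remote command.
remote "cd /tmp && echo this part would run remotely"
# → remote gets: cd /tmp && echo this part would run remotely
```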

Gitlab DigitalOcean SSH Connection Refused

I have a problem with the SSH connection while deploying in GitLab CI.
Error:
ssh: connect to host 207.154.196.22 port 22: Connection refused
Error: exit status 255
deployment_stage:
stage: deployment_stage
only:
- main
script:
- export SSH_PRIVATE_KEY=~/.ssh/id_rsa
- export SSH_PUBLIC_KEY=~/.ssh/id_rsa.pub
- export DROPLET_IPV4=$(doctl compute droplet get board-api-backend --template={{.PublicIPv4}} --access-token $DIGITALOCEAN_API_TOKEN)
- export DROPLET_NAME=board-api-backend
- export CLONE_LINK=https://oauth2:$GITLAB_ACCESS_TOKEN@gitlab.com/$CI_PROJECT_PATH
- export SSH_GIT_CLONE_COMMAND="git clone $CLONE_LINK ~/app"
- export SSH_VAR_INIT_COMMAND="export DIGITALOCEAN_API_TOKEN=$(echo $DIGITALOCEAN_API_TOKEN)"
- export SSH_COMMAND="./deployment/scripts/production/do-deploy.sh"
- echo "Deployment stage"
- mkdir ~/.ssh
- chmod 700 ~/.ssh
- echo "$PRODUCTION_SSH_PRIVATE_KEY" > $SSH_PRIVATE_KEY
- cat $SSH_PRIVATE_KEY
- chmod 400 $SSH_PRIVATE_KEY
- ssh-keyscan -H $DROPLET_IPV4 >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
- cat ~/.ssh/known_hosts
- eval `ssh-agent -s`
- ssh-add $SSH_PRIVATE_KEY
- echo $SSH_VAR_INIT_COMMAND
- echo $SSH_GIT_CLONE_COMMAND
- echo $SSH_COMMAND
- doctl compute ssh $DROPLET_NAME --ssh-command "$SSH_VAR_INIT_COMMAND && rm -R ~/app || true && $SSH_GIT_CLONE_COMMAND" --ssh-key-path $SSH_PRIVATE_KEY --verbose --access-token $DIGITALOCEAN_API_TOKEN
- doctl compute ssh $DROPLET_NAME --ssh-command "cd ~/app && $SSH_COMMAND" --access-token $DIGITALOCEAN_API_TOKEN
The command cat $SSH_PRIVATE_KEY displays the private key in the correct format:
---header---
key without spaces
---footer---
How can I troubleshoot it?
It was working some days ago (maybe about two weeks back), but today something went wrong; nothing was changed about the ssh deployment.
What can it be?
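Not specific to DigitalOcean, but a generic first step for a "Connection refused" is to check whether anything is listening on port 22 at all before debugging keys. A minimal sketch using bash's built-in /dev/tcp (the IP is the one from the error message; `check_ssh_port` is a hypothetical helper name, not a real tool):

```shell
# Report whether a TCP port accepts connections at all.
check_ssh_port() {
  host=$1; port=${2:-22}
  if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "port $port open on $host - likely an auth/key problem, not a network one"
  else
    echo "port $port unreachable on $host - check sshd, the firewall, or whether the droplet IP changed"
  fi
}

# usage: check_ssh_port 207.154.196.22
```

If the port is open, the next step is `ssh -v` to see where the handshake stops; if it is closed, no amount of key debugging will help.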

Instance deployment: The Docker container unexpectedly ended after it was started

Hi, I'm not sure what I'm doing wrong here, but whenever I upload my project's Docker image to Elastic Beanstalk I get this error: Instance deployment: The Docker container unexpectedly ended after it was started. I am new to this and not sure why this happens; please help if you can.
Dockerfile:
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --force
COPY . .
ENV APP_PORT 8080
EXPOSE 8080
CMD [ "node", "app.js" ]
.gitlab-ci.yml file
stages:
- build
- run
variables:
APP_NAME: ${CI_PROJECT_NAME}
APP_VERSION: "1.0.0"
S3_BUCKET: "${S3_BUCKET}"
AWS_ID: ${MY_AWS_ID}
AWS_ACCESS_KEY_ID: ${MY_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${MY_AWS_SECRET_ACCESS_KEY}
AWS_REGION: us-east-1
AWS_PLATFORM: Docker
create_eb_version:
stage: build
image: python:latest
allow_failure: false
script: |
pip install awscli #Install awscli tools
echo "Creating zip file ${APP_NAME}"
python zip.py ${APP_NAME}
echo "Creating AWS Version Label"
AWS_VERSION_LABEL=${APP_NAME}-${APP_VERSION}-${CI_PIPELINE_ID}
S3_KEY="$AWS_VERSION_LABEL.zip"
echo "Uploading to S3"
aws s3 cp ${APP_NAME}.zip s3://${S3_BUCKET}/${S3_KEY} --region ${AWS_REGION}
echo "Creating app version"
aws elasticbeanstalk create-application-version \
--application-name ${APP_NAME} \
--version-label $AWS_VERSION_LABEL \
--region ${AWS_REGION} \
--source-bundle S3Bucket=${S3_BUCKET},S3Key=${S3_KEY} \
--description "${CI_COMMIT_DESCRIPTION}" \
--auto-create-application
only:
refs:
- main
deploy_aws_eb:
stage: run
image: coxauto/aws-ebcli
when: manual
script: |
AWS_VERSION_LABEL=${APP_NAME}-${APP_VERSION}-${CI_PIPELINE_ID}
echo "Deploying app to tf test"
eb init -i ${APP_NAME} -p ${AWS_PLATFORM} -k ${AWS_ID} --region ${AWS_REGION}
echo "Deploying to environment"
eb deploy ${APP_ENVIROMENT_NAME} --version ${AWS_VERSION_LABEL}
echo "done"
only:
refs:
- main
I got it to work: the Node version in my Dockerfile was set to 10 instead of 16. I changed the Node version to 16 and also replaced APP_PORT with PORT, because that's what I named it.
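Putting that fix together, the top of the Dockerfile would look something like this (a sketch: the question did not show its FROM line, so `node:16` here is inferred from the answer's description, not copied from the original):

```dockerfile
FROM node:16
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --force
COPY . .
# The app reads PORT (not APP_PORT), per the fix described above.
ENV PORT 8080
EXPOSE 8080
CMD [ "node", "app.js" ]
```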

Permission denied (publickey,password). Gitlab CI/CD

My GitLab CI script fails and exits with this error:
Permission denied, please try again.
Permission denied, please try again.
$SSH_USER@$IPADDRESS: Permission denied (publickey,password).
This is my CI script:
image: alpine
before_script:
- echo "Before script"
- apk add --no-cache rsync openssh openssh-client
- mkdir -p ~/.ssh
- eval $(ssh-agent -s)
- echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
- echo "${SSH_PRIVATE_KEY}" | tr -d '\r' | ssh-add - > /dev/null
- ssh -o 'StrictHostKeyChecking no' $SSH_USER@$IPADDRESS
- cd /var/www/preview.hidden.nl/test
building:
stage: build
script:
- git reset --hard
- git pull origin develop
- composer install
- cp .env.example .env
- php artisan key:generate
- php artisan migrate --seed
- php artisan cache:clear
- php artisan config:clear
- php artisan storage:link
- sudo chown -R deployer:www-data /var/www/preview.hidden.nl/test/
- find /var/www/preview.hidden.nl/test -type f -exec chmod 664 {} \;
- find /var/www/preview.hidden.nl/test -type d -exec chmod 775 {} \;
- chgrp -R www-data storage bootstrap/cache
- chmod -R ug+rwx storage bootstrap/cache
My setup is as follows.
Gitlab server > Gitlab.com
Gitlab runner > Hetzner server
Laravel Project > Same Hetzner server
I generated a new SSH key pair (without a password) for this runner, named gitlab & gitlab.pub. I added the content of "gitlab.pub" to the $KNOWN_HOST variable and the content of "gitlab" to the $SSH_PRIVATE_KEY variable.
The problem is, I don't really know what's going on. What I think is happening is the following: the GitLab CI job is its own separate container, and a container can't just SSH to a remote server, so the private key of my Hetzner server needs to be known to the docker container (task: - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts).
Because the key is then known, the docker container should not ask for a password. Yet it prompts for a password and also reports that the password is incorrect. I also have my own key pair besides the GitLab key pair; I'm not sure if that causes the issue, but removing my own key would block access to my server, so I did not test that.
Could someone help me in this manner? I've been working on this for two weeks and I can't even deploy a simple hello-world.txt file yet! :-)
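As background for the setup described above: `known_hosts` and `authorized_keys` play different roles in the handshake, and mixing them up produces exactly a "Permission denied (publickey,password)". A minimal sketch with a throwaway key pair (the paths and the `deployer` user are illustrative, not taken from the question):

```shell
tmp=$(mktemp -d)
# Generate a deploy key pair like "gitlab"/"gitlab.pub" (no passphrase).
ssh-keygen -t ed25519 -N '' -f "$tmp/gitlab" -q

# PRIVATE half -> CI variable (e.g. SSH_PRIVATE_KEY), loaded in the job via ssh-add.
head -1 "$tmp/gitlab"               # → -----BEGIN OPENSSH PRIVATE KEY-----

# PUBLIC half -> ~/.ssh/authorized_keys ON THE SERVER; this is what lets the
# CI container log in. It does NOT belong in a known_hosts variable:
#   cat gitlab.pub >> /home/deployer/.ssh/authorized_keys
awk '{print $1}' "$tmp/gitlab.pub"  # → ssh-ed25519

# known_hosts (on the CLIENT side) instead stores the SERVER's own host key:
#   ssh-keyscan <server-ip> >> ~/.ssh/known_hosts
rm -r "$tmp"
```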

Access GitLab CI docker container for testing purpose

I am currently trying to deploy an application using GitLab CI and Docker.
Spoiler: my pipeline fails every time, so I dug into it a bit, but I do not understand everything that's happening.
My GitLab runner is a Google Cloud Platform instance. This instance runs the GitLab CI file, which calls docker builds (so far so good?). One of the docker builds fails because of a python/gensim version incompatibility.
What I don't understand :
This is my GitLab CI :
image: "ubuntu:bionic"
default:
before_script:
- apt-get update -y
- apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
- apt-key fingerprint *KEY*
- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- apt-get update -y
- apt-get install -y docker-ce docker-ce-cli containerd.io
api:
script:
- "pwd"
- "cp scraping/src/variables.env /srv/variables.env"
- "cp scraping/src/crontab /etc/crontab"
- "cat /etc/crontab"
- "docker build -t app-api api/"
- "docker rm -f app-api || :"
- "docker run --restart always -d --env-file=/srv/variables.env --name app-api app-api"
- "docker ps"
As you can see, the runner starts by printing its current working directory and copying some files.
The problem is: when I access the GCP instance and go into the runner's docker container (with docker exec -ti docker-ID /bin/bash), the current working directory isn't the same: builds/username/myapp is printed in the pipeline, but / is printed whenever I connect with SSH. And I can't manage to find the "builds" directory or any of my app's directories/files.
My assumption is that when called, the gitlab runner runs a docker container with my project copied inside, and this container builds the other containers.
Am I right? If yes, how could I access this container for testing purposes?
As you can see, I'm a bit lost about what's happening between the GitLab CI runner and the docker containers. If someone could explain who's calling what from where, that'd be great!
EDIT :
config.toml
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "gcp"
url = "https://gitlab.com/"
token = *TOKEN*
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.docker]
tls_verify = false
image = "ubuntu:latest"
privileged = true
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache","/var/run/docker.sock:/var/run/docker.sock","/srv:/srv","/var/spool/cron/crontabs/:/crons"]
shm_size = 0

How do I set docker-credential-ecr-login in my PATH before anything else in GitLab CI

I'm using AWS ECR to host a private Dockerfile image, and I would like to use it in GitLab CI.
According to the documentation I need to set docker-credential-ecr-login to fetch the private image, but I have no idea how to do that before anything else. This is my .gitlab-ci file:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest
tests:
stage: test
before_script:
- echo "before_script"
- apt install amazon-ecr-credential-helper
- apk add --no-cache curl jq python py-pip
- pip install awscli
script:
- echo "script"
- bundle install
- bundle exec rspec
allow_failure: true # for now as we do not have tests
Thank you.
I confirm the feature at stake is not yet available in GitLab CI; however, I've recently seen that it is possible to implement a generic workaround to run a dedicated CI script within a container taken from a private Docker image.
The template file .gitlab-ci.yml below is adapted from the OP's example, using the Docker-in-Docker approach I suggested in this other SO answer, itself inspired by the GitLab CI doc dealing with dind:
stages:
- test
variables:
IMAGE: "0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest"
REGION: "ap-northeast-1"
tests:
stage: test
image: docker:latest
services:
- docker:dind
variables:
# GIT_STRATEGY: none # uncomment if "git clone" is unneeded for this job
before_script:
- ': before_script'
- apt install amazon-ecr-credential-helper
- apk add --no-cache curl jq python py-pip
- pip install awscli
- $(aws ecr get-login --no-include-email --region "$REGION")
- docker pull "$IMAGE"
script:
- ': script'
- |
docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
export PS4='+ \e[33;1m($CI_JOB_NAME # line \$LINENO) \$\e[0m ' # optional
set -ex
## TODO insert your multi-line shell script here ##
echo \"One comment\" # quotes must be escaped here
: A better comment
echo $PWD # interpolated outside the container
echo \$PWD # interpolated inside the container
bundle install
bundle exec rspec
## (cont'd) ##
"
- ': done'
allow_failure: true # for now as we do not have tests
This example assumes the Docker $IMAGE contains the /bin/bash binary, and relies on the so-called block style of YAML.
The above template already contains comments, but to be self-contained:
You need to escape double quotes if your Bash commands contain them, because the whole code is surrounded by docker run … " and ";
You also need to escape local Bash variables (cf. the \$PWD above); otherwise these variables will be resolved prior to running the docker run … "$IMAGE" /bin/bash -c "…" command itself.
I replaced the echo "stuff" commands with their more effective colon counterpart:
set -x
: stuff
: note that these three shell commands do nothing
: but printing their args thanks to the -x option.
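The two expansion rules (escaped vs. unescaped variables) can also be checked in isolation, outside any Docker context:

```shell
OUTER=outside
sh -c "
  INNER=inside
  echo $OUTER     # already expanded by the OUTER shell before sh -c runs
  echo \$INNER    # escaped, so expanded by the INNER shell
"
# → outside
# → inside
```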
[Feedback is welcome, as I can't directly test this config (I'm not an AWS ECR user), but I'm puzzled by the fact that the OP's example contained both apt and apk commands…]
Related remark on a pitfall of set -e
Beware that the following script is buggy:
set -e
command1 && command2
command3
Namely, write instead:
set -e
command1 ; command2
command3
or:
set -e
( command1 && command2 )
command3
To be convinced of this, you can try running:
bash -e -c 'false && true; echo $?; echo this should not be run'
→ 1
→ this should not be run
bash -e -c 'false; true; echo $?; echo this should not be run'
→ (no output: the shell exits at the first false)
bash -e -c '( false && true ); echo $?; echo this should not be run'
→ (no output: the subshell's failure triggers the exit)
From the GitLab documentation: in order to interact with your AWS account, GitLab CI/CD pipelines require both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab settings under Settings > CI/CD > Variables. Then add this to your before_script:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest
tests:
stage: test
before_script:
- echo "before_script"
- apt install amazon-ecr-credential-helper
- apk add --no-cache curl jq python py-pip
- pip install awscli
- $( aws ecr get-login --no-include-email )
script:
- echo "script"
- bundle install
- bundle exec rspec
allow_failure: true # for now as we do not have tests
Also, you had a typo: it's awscli, not awsclir. Then add the build, test, and push steps accordingly.
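One caveat (not in the original answer): `aws ecr get-login` only exists in AWS CLI v1 and was removed in v2. If the job ends up installing v2, the equivalent login line (reusing the registry host from the example above) would be:

```yaml
before_script:
  - pip install awscli
  - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 0222822883.dkr.ecr.us-east-1.amazonaws.com
```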
I think you have a logic error here: image in the build configuration is the image the CI script runs in, not the image you build and deploy.
You don't have to use your application image there, since the runner image only needs the utilities and connectivity for GitLab CI; it normally shouldn't carry any of your project's dependencies.
Please check examples like this one https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c to see how to do it the correct way.
I faced the same problem using the docker executor mode of gitlab runner.
SSHing into the EC2 instance showed that docker-credential-ecr-login was present in /usr/bin/. To make it available to the job containers, I had to mount it into the gitlab runner container:
gitlab-runner register -n \
--url '${gitlab_url}' \
--registration-token '${registration_token}' \
--template-config /tmp/gitlab_runner.template.toml \
--executor docker \
--tag-list '${runner_name}' \
--description 'gitlab runner for ${runner_name}' \
--docker-privileged \
--docker-image "alpine" \
--docker-disable-cache=true \
--docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
--docker-volumes "/cache" \
--docker-volumes "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login" \
--docker-volumes "/home/gitlab-runner/.docker:/root/.docker"
More information on this thread as well: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1583#note_375018948
We have a similar setup where we need to run CI jobs based on an image hosted on ECR.
Steps to follow:
Follow this guide: https://github.com/awslabs/amazon-ecr-credential-helper
The gist of that link, if you are on "Amazon Linux 2", is:
sudo amazon-linux-extras enable docker
sudo yum install amazon-ecr-credential-helper
Then open ~/.docker/config.json on your gitlab runner and paste this code:
{
"credHelpers":
{
"aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
}
}
source ~/.bashrc
systemctl restart docker
Also remove any references to DOCKER_AUTH_CONFIG from your GitLab >> CI/CD >> Variables.
That's it
