run job on server using ssh key - docker

I want to deploy to this server: serverBNP-prod1.
I tried to write the code below. Using this code, where should I add my local SSH key, please?
Thank you
job_deploy_prod:
  stage: deploy
  only:
    - master
    - tags
  when: manual
  environment:
    name: prod
  variables:
    SERVER: serverBNP-prod1
    SSH_OPTS: -p 22 -l udoc -o BatchMode=true -o StrictHostKeyChecking=no
  script:
    - export VERSION=$(fgrep -m 1 -w version pom.xml | sed -re 's/^.*>(.*)<.*$/\1/')
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com"
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker rm -f proj"
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker pull registry.gitlab.com/bnp/proj:$VERSION"
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker run -d -p 8080:8080 -e 'SPRING_PROFILES_ACTIVE=prod' -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone --name proj registry.gitlab.com/bnp/proj:$VERSION"
  tags:
    - prod

You can either:
- use a Settings / CI-CD / Variable of type File, in which you put your private key data, or
- if you only have a username and password for your server, use the sshpass command and provide an SSHPASS environment variable, still in the CI-CD variables section.
More details on how to use the GitLab CI/CD environment variables (File type, security, etc.) can be found here:
https://docs.gitlab.com/ce/ci/variables/
You can mask variables, but be aware that, contrary to Jenkins, there is no way to remove the "Reveal value" button when a user has sufficient rights in GitLab to view/edit the settings. You will therefore have to select your project rights carefully, e.g. by giving other people the Developer role but not the Maintainer one (which allows editing the settings).
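For the File-type variable approach, a minimal sketch (assuming you name the variable SSH_PRIVATE_KEY; GitLab exposes a File-type variable to the job as the path of a temporary file holding its content):

  script:
    # $SSH_PRIVATE_KEY is a file path here, not the key data itself
    - chmod 600 "$SSH_PRIVATE_KEY"
    - ssh $SSH_OPTS -i "$SSH_PRIVATE_KEY" $SERVER "docker pull registry.gitlab.com/bnp/proj:$VERSION"

For the sshpass approach (assuming an SSHPASS variable set in the CI-CD settings; sshpass -e reads the password from that environment variable; drop BatchMode=true, since it disables password authentication):

  script:
    - sshpass -e ssh -p 22 -l udoc -o StrictHostKeyChecking=no $SERVER "docker pull registry.gitlab.com/bnp/proj:$VERSION"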

Related

How to pass environment variable to docker run in gitlab ci cd

I am trying to pass environment variables to my Node.js Docker image when running it, as shown below:
stages:
  - publish
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - touch env.txt
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST

deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - echo "AWS_ACCESS_KEY_ID"=$AWS_ACCESS_KEY_ID >> "env.txt"
    - echo "AWS_S3_BUCKET"=$AWS_S3_BUCKET >> "env.txt"
    - echo "AWS_S3_REGION"=$AWS_S3_REGION >> "env.txt"
    - echo "AWS_SECRET_ACCESS_KEY"=$AWS_SECRET_ACCESS_KEY >> "env.txt"
    - echo "DB_URL"=$DB_URL >> "env.txt"
    - echo "JWT_EXPIRES_IN"=$JWT_EXPIRES_IN >> "env.txt"
    - echo "OTP_EXPIRE_TIME_SECONDS"=$OTP_EXPIRE_TIME_SECONDS >> "env.txt"
    - echo "TWILIO_ACCOUNT_SID"=$TWILIO_ACCOUNT_SID >> "env.txt"
    - echo "TWILIO_AUTH_TOKEN"=$TWILIO_AUTH_TOKEN >> "env.txt"
    - echo "TWILLIO_SENDER"=$TWILLIO_SENDER >> "env.txt"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"
  environment:
    name: development
    url: 90900
  only:
    - master
I am running the command docker run --env-file env.txt, but it gives me the error docker: open env.txt: no such file or directory.
How can I solve this issue and pass multiple variables to my docker run command?
Which job is failing? In your deploy job, you create env.txt locally and use SSH to run docker on the remote host, but you never scp your local env.txt to $SERVER_USER@$SERVER_IP for the remote process to pick it up.
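A minimal sketch of the missing step, reusing the question's $ID_RSA, $SERVER_USER, and $SERVER_IP variables (copy the file over before the remote docker run references it):

    # copy the locally generated env file to the deploy host
    - scp -i $ID_RSA -o StrictHostKeyChecking=no env.txt $SERVER_USER@$SERVER_IP:~/env.txt
    # then point --env-file at the remote copy
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file ~/env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"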
I had the same issue using GitLab CI/CD, i.e. trying to inject env vars that were referenced in the project's .env file via the runner (docker executor) into the output Docker container.
We don't want to commit any sensitive info into git, so one option is to save the values in a file on the server and include it via the --env-file flag. But the GitLab runner creates a new container for every run, so this doesn't work: the host running the YAML script is ephemeral and not the actual server the GitLab runner was installed on.
The suggestion by @dmoonfire to scp the file over sounded like a good solution, but I couldn't get it to work for copying a file from outside into the GitLab runner: I'd need to copy the public key from the executor to the GitLab runner server, but the docker executor is ephemeral.
I found the simplest solution is to use the GitLab CI/CD variable settings. It's possible to mask variables and restrict them to protected branches or protected tags, etc. These get injected into the job container so that your .env file can access them; see the sketch below.
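A minimal sketch of that variant, assuming DB_URL and the other values are defined as masked variables in Settings > CI/CD > Variables; the runner injects them into the job environment, so they can be forwarded to the remote docker run with -e flags instead of a file (values containing quotes would need extra escaping):

    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 8080:8080 -e DB_URL='$DB_URL' -e JWT_EXPIRES_IN='$JWT_EXPIRES_IN' --name my-app $TAG_COMMIT"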

CI/CD script for build & deploy docker image in aws EC2

Can I build, push (to the GitLab registry), and deploy the image (to AWS EC2) using this CI/CD configuration?
stages:
  - build
  - deploy

build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
  # Run this job in a branch where a Dockerfile exists

deploy:
  stage: deploy
  before_script:
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@18.0.0.82 "sudo docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; sudo docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; sudo docker-compose up -d"
  after_script:
    - sudo docker logout
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
The build succeeds, but deploy fails.
(screenshots: build succeeded, deploy failed)
The configuration must build and deploy the image.
There are a couple of errors, but the overall pipeline seems good.
You cannot use ssh-add without the agent running.
Why do you create the .ssh folder manually if afterwards you're explicitly ignoring the host key that would be stored under known_hosts?
Using StrictHostKeyChecking=no is dangerous and totally unrecommended.
In the before_script, add the following:

before_script:
  - eval `ssh-agent`
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - ssh-keyscan -H 18.0.0.82 >> ~/.ssh/known_hosts
Also, don't use sudo with your ubuntu user; better to add it to the docker group, or connect through SSH as a user that is already in the docker group.
In case you don't already have a docker group on your EC2 instance, now is a good moment to configure it.
Access your EC2 instance and create the docker group:
$ sudo groupadd docker
Add the ubuntu user to the docker group:
$ sudo usermod -aG docker ubuntu
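Note that group membership is only evaluated at login, so the change applies to new SSH sessions. A quick sanity check from the runner (assuming the key is already loaded in the agent):

$ ssh ubuntu@18.0.0.82 "docker info"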
Now change your script to:

script:
  - echo $CI_REGISTRY_PASSWORD > docker_password
  - scp docker_password ubuntu@18.0.0.82:~/tmp/docker_password
  - ssh ubuntu@18.0.0.82 "cat ~/tmp/docker_password | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; docker-compose up -d; docker logout; rm -f ~/tmp/docker_password"
Also, remember that in the after_script you aren't on the EC2 instance but inside the runner image, so you don't need to log out; it would be good to kill the SSH agent, though.
Final job:

deploy:
  stage: deploy
  before_script:
    - eval `ssh-agent`
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -H 18.0.0.82 >> ~/.ssh/known_hosts
  script:
    - echo $CI_REGISTRY_PASSWORD > docker_password
    - scp docker_password ubuntu@18.0.0.82:~/tmp/docker_password
    - ssh ubuntu@18.0.0.82 "cat ~/tmp/docker_password | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; docker-compose up -d; docker logout; rm -f ~/tmp/docker_password"
  after_script:
    - kill $SSH_AGENT_PID
    - rm docker_password
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile

Gitlab runner: failure to log in to GitLab Container Registry

After setting up gitlab-runner as a Docker container with a docker executor, I fail to run any builds. The displayed log reads as follows:
Running with gitlab-runner 11.4.2 (cf91d5e1)
on <hostname> 9f1c1a0d
Using Docker executor with image docker:stable-git ...
Starting service docker:stable-dind ...
Pulling docker image docker:stable-dind ...
Using docker image sha256:acfec978837639b4230111b35a775a67ccbc2b08b442c1ae2cca4e95c3e6d08a for docker:stable-dind ...
Waiting for services to be up and running...
Pulling docker image docker:stable-git ...
Using docker image sha256:a8a2d0da40bc37344c35ab723d4081a5ef6122d466bf0a0409f742ffc09c43b9 for docker:stable-git ...
Running on runner-9f1c1a0d-project-1-concurrent-0 via a7b6a57c58f8...
Fetching changes...
HEAD is now at 5430a3d <Commit message>
Checking out 5430a3d8 as master...
Skipping Git submodules setup
$ # Auto DevOps variables and functions # collapsed multi-line command
$ setup_docker
$ build
Logging to GitLab Container Registry with CI credentials...
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
ERROR: Job failed: exit code 1
Note the attempt to log in to Docker Hub (I guess) and the credentials error. But I neither desire nor configured a username/password to access Docker Hub. Any suggestion what is wrong here, or how to go on debugging this?
The runner was registered with the following command (which also dictates the contents of the configuration file):
docker run --rm -ti \
  -v <config-volume>:/etc/gitlab-runner \
  -v $(pwd)/self-signed-server.crt:/etc/ssl/certs/server.crt \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner register \
    --tls-ca-file /etc/ssl/certs/server.crt \
    --url https://my.server.url/gitlab/ --registration-token <token> \
    --name myserver --tag-list "" \
    --executor docker --docker-privileged --docker-image debian \
    --non-interactive
I used --docker-privileged because I originally had the same problem discussed here (thanks, wendellmva). I just can't run the gitlab-runner container itself as privileged, but I don't see that link-failure problem even though I don't.
To get past this point, one needs to overwrite the CI_REGISTRY_USER variable in the project's Settings -> CI / CD -> Variables section. Assigning an empty value will get past this point.
Background: by exporting the project and then parsing the JSON settings with jq, one can get the preconfigured list of commands that run:
jq -r .pipelines[0].stages[0].statuses[0].commands project.json
# ...
function registry_login() {
  if [[ -n "$CI_REGISTRY_USER" ]]; then
    echo "Logging to GitLab Container Registry with CI credentials..."
    docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    echo ""
  fi
}
# ...
So there is apparently some non-empty string preloaded into CI_REGISTRY_USER, but with an invalid CI_REGISTRY_PASSWORD.
What I haven't found yet is where to make such settings globally for all projects, or how to edit the Auto DevOps pipeline.
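As an aside, the "WARNING! Using --password via the CLI is insecure" line in the log comes from passing -p on the command line; in your own .gitlab-ci.yml jobs (the generated Auto DevOps function itself cannot be edited per project) it can be avoided by piping the password instead:

printf '%s' "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"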

How to avoid password when using sudo in gitlab-ci.yml?

I have a private Docker registry where I store my build images.
I copied my registry certificates and updated my /etc/hosts file to authenticate the registry from my local machine.
I can log in to the registry with sudo docker login -u xxx -p xxx registry-name:port.
But when I try the same docker login command from a gitlab-ci stage, it fails with this error:
sudo: no tty present and no askpass program specified
This is how I'm trying to achieve it:
ssh manohara@${DEPLOY_SERVER_IP} "sudo docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}"
I also tried adding gitlab-runner ALL=(ALL) NOPASSWD: ALL at the bottom of the /etc/sudoers file, but no luck.
Where am I going wrong?
According to this source you can use:
ssh -t remotehost "sudo ./binary"
The -t allocates a pseudo-tty.
Or, in your example:
ssh -t manohara@${DEPLOY_SERVER_IP} "sudo docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}"
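Note also that the sudoers rule you added is for the gitlab-runner user, while the SSH login above is the manohara user, so that rule never matches. If you prefer passwordless sudo over -t, a sketch of a scoped rule on the deploy server (adjust the docker path to your system):

# /etc/sudoers.d/manohara-docker on the deploy server
manohara ALL=(ALL) NOPASSWD: /usr/bin/docker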

Puppet docker-modules does not work for Jenkins slave(node)

First of all, thanks for spending time reading this.
I am trying to achieve the following:
- Installing Puppet on all my instances (master, agent1, agent2, etc.). DONE
- From the Puppet master, installing puppetlabs/docker; now I have Docker on all my instances. DONE
- Putting all my instances in Docker swarm manager mode. DONE
- On the master, installing Jenkins: docker service create --name jenkins-master -p 50000:50000 -p 80:8080 jenkins, and in Jenkins installing the self-organizing swarm plugin. DONE
- Creating a Docker secret for all instances: echo "-master http://35.23... -password admin -username admin" | docker secret create jenkins-v1 - DONE
- When trying to create a Jenkins node: FAIL, nothing happens.
docker service create \
  --mode=global \
  --name jenkins-swarm-agent \
  -e LABELS=docker-test \
  --mount "type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock" \
  --mount "type=bind,source=/tmp/,target=/tmp/" \
  --secret source=jenkins-v1,target=jenkins \
  vipconsult/jenkins-swarm-agent
I read before that the Puppet module doesn't work with Docker swarm mode.
Do you know any alternative way to use Puppet > Docker > swarm > Jenkins > slave nodes?
Please advise!
Done!
echo "-master http://35.23... -password admin -username admin" | docker secret create jenkins-v1 -
The password and username should be exactly the same as the Jenkins user login!
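If the agent service still doesn't come up, standard Docker swarm commands on a manager node help narrow it down:

docker secret ls                          # is jenkins-v1 present?
docker service ps jenkins-swarm-agent     # task state per node
docker service logs jenkins-swarm-agent   # agent output, e.g. auth errors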
