I've created my Jenkinsfile for building my project in production and the pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
cd ${SOURCE_FOLDER}/project
git pull
git status
EOF'''
            }
        }
        stage('Composer') {
            parallel {
                stage('Composer') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
EOF'''
                    }
                }
                stage('Composer 2') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
EOF'''
                    }
                }
            }
        }
    }
}
Is there a way to run all the stages over one single SSH connection, in order to minimise the overhead and the number of connections?
I've done all the SSH setup manually by creating the keys and pasting the public key on the production machine.
You can create a function for the connection and pass SSH_USER and SERVER_ADDRESS as input parameters to that function, then call this function from all your stages.
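A minimal sketch of such a helper, assuming SSH_USER and SERVER_ADDRESS are defined in the pipeline's environment block (remoteSh is a name chosen here, not an existing step, and must be defined outside the pipeline block):

// Hypothetical helper: runs a batch of commands over one SSH connection.
// SSH_USER and SERVER_ADDRESS are assumed to come from the environment block.
def remoteSh(String commands) {
    sh """ssh ${env.SSH_USER}@${env.SERVER_ADDRESS} <<EOF
${commands}
EOF"""
}

Since the helper takes an arbitrary batch of commands, you can also collapse the pull and both composer runs into one call, which gives you literally one SSH connection for the whole deployment:

stage('Deploy') {
    steps {
        // SOURCE_FOLDER is assumed to come from the environment block.
        remoteSh """
            cd ${SOURCE_FOLDER}/project
            git pull
            docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
            docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
        """
    }
}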
I want to connect to my server, clone the project from my repo onto that server, and do a docker build. Could you show me a template? With my own code it only does the build on Jenkins; it doesn't connect to the server to do the pull.
pipeline {
    agent any
    stages {
        stage('Pulling our project') {
            steps {
                withCredentials([gitUsernamePassword(credentialsId: 'GitlabCred')]) {
                    sh 'git pull origin jks'
                }
            }
        }
        stage('Building our project') {
            agent any
            steps {
                sh 'docker compose up -d --build'
            }
        }
    }
}
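One way to make the pull and build happen on the server rather than on the Jenkins node is to wrap the commands in an ssh call, for example via the SSH Agent plugin. A rough sketch, where the credential ID 'ServerCred', the host and the path are placeholders, and the git credentials are assumed to already be configured on the server:

pipeline {
    agent any
    stages {
        stage('Pull and build on the server') {
            steps {
                // 'ServerCred' is a placeholder for an SSH key credential in Jenkins;
                // user@server.example.com and /srv/project are placeholders too.
                sshagent(credentials: ['ServerCred']) {
                    sh '''ssh -o StrictHostKeyChecking=no user@server.example.com <<EOF
cd /srv/project
git pull origin jks
docker compose up -d --build
EOF'''
                }
            }
        }
    }
}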
I need to build a nodeJS application which connects to a MySQL DB and retrieves the string ‘hello_world’ from the database. The infrastructure is built on a Minikube cluster running on macOS. Docker Desktop is also installed.
There are two environments, dev and prod (each environment has its own namespace). There is already a MySQL DB pod set up in the dev namespace.
I have this working pipeline but wish to incorporate a testing stage. As of now, it just builds the nodeJS application inside a Docker image which is pushed to DockerHub. Then a stage creates a temporary Kubernetes pod to pull and install the image (as a pod) in the ‘dev’ namespace via helm.
I see two options for where I can insert this test stage:
1. In the Docker environment, after the Docker image is built and started. This means there has to be a MySQL DB already set up inside the Docker environment, or else the nodeJS application will fail.
2. After the image is set up as a pod inside Minikube, the testing stage can then start in the dev namespace.
The testing stage should check whether a specific string is returned when the web page is loaded. If not, the pipeline should fail and stop executing.
My question is, based on real-world use cases: at which point should I start the testing stage, and how do I achieve that to meet the above requirements?
Thanks.
pipeline {
    agent { label 'slave' }
    environment {
        DOCKERHUB_CREDENTIALS = credentials('DOCKERHUB_LOGIN')
    }
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'main', name: 'BRANCH', type: 'PT_BRANCH'
    }
    tools {
        nodejs "nodejs"
    }
    stages {
        stage('Clone Code Repository') {
            steps {
                git branch: "${params.BRANCH}", url: "${params.GITHUB_REPO}"
            }
        }
        stage('Download docker binary') {
            steps {
                script {
                    def dockerHome = tool 'docker'
                    env.PATH = "${dockerHome}/bin:${env.PATH}"
                }
            }
        }
        stage('Install NPM application') {
            steps {
                sh 'npm install'
            }
        }
        stage('Docker Build and Tag') {
            steps {
                script {
                    docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN') {
                        sh "docker build -t ${params.DOCKER_IMAGE} ."
                    }
                }
            }
        }
        stage('DockerHub Login') {
            steps {
                script {
                    docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN') {
                        sh 'echo $DOCKERHUB_CREDENTIALS_PSW | docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
                    }
                }
            }
        }
        stage('Push Image to DockerHub') {
            steps {
                script {
                    docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN') {
                        sh "docker push ${params.DOCKER_IMAGE}"
                    }
                }
            }
        }
        stage("Deploy to Dev") {
            agent {
                kubernetes {
                    inheritFrom "helm-dev"
                    cloud "kubernetes-dev"
                }
            }
            steps {
                container("helm-dev") {
                    git branch: "${params.BRANCH}", url: "${params.GITHUB_REPO}"
                    sh "cd Development; helm upgrade -i dev-webapps ./dev-webapps"
                }
            }
        }
    }
    post {
        always {
            script {
                docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN') {
                    sh 'docker logout'
                }
            }
        }
    }
}
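A stage like the following, inserted after Deploy to Dev, would exercise the app together with the real MySQL pod in the dev namespace, which is usually where a real-world smoke test runs. The service hostname, port, and the availability of curl inside the helm-dev container are assumptions:

stage('Smoke test in dev') {
    agent {
        kubernetes {
            inheritFrom "helm-dev"
            cloud "kubernetes-dev"
        }
    }
    steps {
        container("helm-dev") {
            // Service name and port are placeholders; curl must be available
            // in the container. grep -q exits non-zero (failing the build)
            // if the expected string is missing from the page.
            sh '''
                RESPONSE=$(curl -sf http://dev-webapps.dev.svc.cluster.local:8080/)
                echo "$RESPONSE" | grep -q "hello_world"
            '''
        }
    }
}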
I got stuck in a Jenkins pipeline with an ssh command. The error is:
+ ssh
/var/lib/jenkins/workspace/test-docker-jenkins#tmp/durable-2c3c7fb4/script.sh: line 1: ssh: not found
script returned exit code 127
My Jenkinsfile is:
pipeline {
    agent {
        docker {
            image 'node:15.12.0-alpine'
        }
    }
    stages {
        stage("Prepare") {
            steps {
                sh "yarn"
            }
        }
        stage("Build") {
            steps {
                sh "yarn build"
            }
        }
        stage("Deploy") {
            steps {
                sh "ssh"
            }
        }
    }
}
Does anyone know how to resolve this problem? Or is there any way to ssh to a remote server in a Jenkins pipeline? Thanks in advance. Have a good day!
You are trying to ssh from a Docker container of the image node:15.12.0-alpine, and it doesn't contain ssh. From Jenkins you can of course do SSH; here is the SSH Steps plugin for Jenkins and the relevant documentation: https://www.jenkins.io/doc/pipeline/steps/ssh-steps/
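With that plugin installed, the Deploy stage could look roughly like this; host, user, password and the remote command are placeholders, and in a real pipeline the password should come from Jenkins credentials rather than being inlined:

stage("Deploy") {
    steps {
        script {
            // sshCommand is provided by the SSH Steps plugin.
            // Host, user and password below are placeholders.
            def remote = [name: 'prod', host: 'server.example.com',
                          user: 'deploy', password: 'secret', allowAnyHosts: true]
            sshCommand remote: remote, command: 'cd /srv/app && ./deploy.sh'
        }
    }
}

Alternatively, since the root cause is only that the alpine image lacks an ssh client, building a custom image that adds one (apk add openssh-client) would also work.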
I am using Docker to simulate a Postgres database for my app. I have been testing it with Cypress for some time, and it works fine. I want to set up Jenkins for further testing, but I seem stuck.
On my machine, I would use the commands
docker create -e POSTGRES_DB=myDB -p 127.0.0.1:5432:5432 --name myDB postgres
docker start myDB
to create and start it. How can I reproduce this in a Jenkins pipeline? I need the DB for the app to work.
I use a Dockerfile as my agent, and I have tried putting the ENV variables there, but it does not work. Docker is not installed inside the pipeline's agent.
The way I see it, the options are:
1. Create an image by using a Dockerfile
2. Somehow install docker inside the pipeline and use the same commands
3. Maybe use master/slave nodes? I don't understand them well yet.
This might be a use case for the sidecar pattern, one of Jenkins Pipeline's advanced features.
For example (from the above site):
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
The above example uses the object exposed by withRun, which has the
running container’s ID available via the id property. Using the
container’s ID, the Pipeline can create a link by passing custom
Docker arguments to the inside() method.
The best thing is that the containers are automatically stopped and removed when the work is done.
EDIT:
To use a docker network instead, you can do the following (there is an open Jira issue for supporting this OOTB). First, the helper function:
def withDockerNetwork(Closure inner) {
    try {
        networkId = UUID.randomUUID().toString()
        sh "docker network create ${networkId}"
        inner.call(networkId)
    } finally {
        sh "docker network rm ${networkId}"
    }
}
Actual usage:
withDockerNetwork { n ->
    docker.image('sidecar').withRun("--network ${n} --name sidecar") { c ->
        docker.image('main').inside("--network ${n}") {
            // do something with host "sidecar"
        }
    }
}
For declarative pipelines:
pipeline {
    agent any
    environment {
        POSTGRES_HOST = 'localhost'
        POSTGRES_USER = 'myuser'
    }
    stages {
        stage('run!') {
            steps {
                script {
                    docker.image('postgres:9.6').withRun(
                        "-h ${env.POSTGRES_HOST} -e POSTGRES_USER=${env.POSTGRES_USER}"
                    ) { db ->
                        // You can use your own image here, but psql needs to be installed inside
                        docker.image('postgres:9.6').inside("--link ${db.id}:db") {
                            sh '''
                                psql --version
                                RETRIES=10  # number of attempts before giving up (value assumed; missing in the original)
                                until psql -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
                                    echo "Waiting for postgres server, $((RETRIES-=1)) remaining attempts..."
                                    sleep 1
                                done
                            '''
                            sh 'echo "your commands here"'
                        }
                    }
                }
            }
        }
    }
}
Related: Docker wait for postgresql to be running
I'm in the process of migrating from chained freestyle jobs to a pipeline defined in a Jenkinsfile.
My current pipeline executes two jobs in parallel: one creates a tunnel to the database (with a randomly generated port), and the next job needs to get this port number, so I perform a curl command that reads the console of the create-db-tunnel job and stores the port number. The create-db-tunnel job needs to keep running, as the follow-up job connects to the database and takes a DB dump. This is the curl command which I run in the second job and which returns the randomly generated port number from the established DB tunnel:
Port=$(curl -u ${USERNAME}:${TOKEN} http://myjenkinsurl.com/job/create-db-tunnel/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
I wonder if there is anything similar I can use in a Jenkinsfile. I currently have the two jobs triggered in parallel, but since create-db-tunnel is no longer a freestyle job, I'm not sure whether I can still get the port number. I can confirm that the console log for the db_tunnel stage has the port number in it; I'm just not sure how I can query that console. Here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        APTIBLE_LOGIN = credentials('aptible')
    }
    stages {
        stage('Setup') {
            parallel {
                // run db_tunnel and get_port in parallel
                stage('db_tunnel') {
                    steps {
                        sh """
                            export PATH=$PATH:/usr/local/bin
                            aptible login --email=$APTIBLE_LOGIN_USR --password=$APTIBLE_LOGIN_PSW
                            aptible db:tunnel postgres-prod & sleep 30s
                        """
                    }
                }
                stage('get_port') {
                    steps {
                        sh """
                            sleep 15s
                            # this will not work
                            Port=$(curl -u ${USERNAME}:${TOKEN} http://myjenkinsurl.com/job/db_tunnel/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
                            echo "Port=$Port" > port.txt
                        """
                    }
                }
            }
        }
    }
}
Actually, I found a solution to my question: it was a very similar curl command I had to run, and I'm now getting the desired port number. The key changes were escaping the shell substitutions as \$ so that Groovy does not try to interpolate them, and using a Jenkins credential for the API token. Here is the Jenkinsfile if someone is interested:
pipeline {
    agent any
    environment {
        APTIBLE_LOGIN = credentials('aptible')
        JENKINS_TOKEN = credentials('jenkins')
    }
    stages {
        stage('Setup') {
            parallel {
                // run db_tunnel and get_port in parallel
                stage('db_tunnel') {
                    steps {
                        sh """
                            export PATH=$PATH:/usr/local/bin
                            aptible login --email=$APTIBLE_LOGIN_USR --password=$APTIBLE_LOGIN_PSW
                            aptible db:tunnel postgres-prod & sleep 30s
                        """
                    }
                }
                stage('get_port') {
                    steps {
                        sh """
                            sleep 20
                            Port=\$(curl -u $JENKINS_TOKEN_USR:$JENKINS_TOKEN_PSW http://myjenkinsurl.com/job/schema-archive-jenkinsfile/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
                            echo "Port=\$Port" > port.txt
                        """
                    }
                }
            }
        }
    }
}
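A later stage on the same node can then read the captured port back into the build, for example (a sketch; readFile is a standard pipeline step, and DB_PORT is an arbitrary variable name):

script {
    // Strip the "Port=" prefix written by the get_port stage.
    env.DB_PORT = readFile('port.txt').trim().replace('Port=', '')
}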