I have two AWS Ubuntu instances: 1st-server and 2nd-server.
Below is my Jenkins pipeline script, which builds a Docker image, runs a container on 1st-server, and pushes the image to a Docker Hub repo. That part works fine.
I want to pull the image and deploy it on 2nd-server.
When I ssh to the 2nd server through the pipeline script below, it logs in to 1st-server instead, even though the ssh credential ('my-ssh-key') belongs to 2nd-server. I'm confused about how it ends up on 1st-server; I verified with a touch command that the file is created on 1st-server.
pipeline {
    environment {
        registry = "docker-user/docker-repo"
        registryCredential = 'docker-cred'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git url: 'https://github.com/git-user/jenkins-flask-tutorial.git/'
            }
        }
        stage('Building image') {
            steps {
                script {
                    sh "sudo docker build -t flask-app-one ."
                    sh "sudo docker run -p 5000:5000 --name flask-app-one -d flask-app-one"
                    sh "docker tag flask-app-one:latest docker-user/docker-repo:flask-app-push-test"
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        sh "docker push docker-user/docker-repo:flask-app-push-test"
                        sshagent(['my-ssh-key']) {
                            sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver && cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test'
                        }
                    }
                }
            }
        }
    }
}
My question is: how do I log in to the 2nd server and pull the Docker image onto it through the Jenkins pipeline script? Help me out with where I'm going wrong.
This is more of an alternative than a solution. You can pass the remote commands as part of the ssh invocation. This executes the command on the remote server and then disconnects.
ssh name@ip "ls -la /home/ubuntu/"
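Applied to the pipeline above, the pull can be sent to 2nd-server in one quoted remote command (a sketch; the <2nd-server-ip> placeholder is an assumption, and sudo mirrors the question's usage):
sshagent(['my-ssh-key']) {
    // everything inside the double quotes runs on 2nd-server;
    // anything chained with && outside them would run on the local agent
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@<2nd-server-ip> "cd /home/ubuntu && sudo docker pull docker-user/docker-repo:flask-app-push-test"'
}
In the original script only the bare ssh ran against 2nd-server; once it returned, the && chain (touch, docker pull) executed in the local shell on 1st-server, which is why the test file appeared there.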
I need to build a Docker image using a docker-compose.yaml file and then push it to Docker Hub.
I used the following commands to build from a Dockerfile and push the image to Docker Hub with the Jenkins Docker plugin.
stage('Building image and Push') {
    steps {
        script {
            customImage = docker.build("my-image:${env.BUILD_ID}")
            customImage.push()
        }
    }
}
Is there a similar method for writing this kind of command to build and push with a docker-compose file?
PS: I heard about a plugin called Docker Compose Build Step. Can I do the same as above with this plugin?
This does not seem to be supported in a scripted pipeline.
Using purely a declarative pipeline on an agent based on the docker-compose image (so with docker-compose preinstalled), I suppose you can shell out to those commands directly:
stage('Build Docker Image') {
    steps {
        sh 'docker-compose build'
        echo 'Docker-compose-build Build Image Completed'
    }
}
And then log in to Docker Hub and push, as described in "Simple Jenkins Declarative Pipeline to Push Docker Image To Docker Hub":
stage('Login to Docker Hub') {
    steps {
        sh 'echo $DOCKERHUB_CREDENTIALS_PSW | sudo docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
        echo 'Login Completed'
    }
}
stage('Push Image to Docker Hub') {
    steps {
        sh 'sudo docker push <dockerhubusername>/<dockerhubreponame>:$BUILD_NUMBER'
        echo 'Push Image Completed'
    }
}
This is a manual approach, since a docker-compose agent is not directly available (there are docker, dockerfile, and kubernetes agent types, but no docker-compose option).
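Stitched together, the whole declarative pipeline could look like this (a sketch; agent any assumes docker and docker-compose are installed on the agent, and the 'dockerhub' credential ID is an assumption):
pipeline {
    // assumption: the agent host has docker and docker-compose preinstalled
    agent any
    environment {
        // assumption: a Docker Hub username/password credential with ID 'dockerhub';
        // credentials() exposes it as ..._USR and ..._PSW variables
        DOCKERHUB_CREDENTIALS = credentials('dockerhub')
    }
    stages {
        stage('Build Docker Image') {
            steps {
                sh 'sudo docker-compose build'
            }
        }
        stage('Login to Docker Hub') {
            steps {
                sh 'echo $DOCKERHUB_CREDENTIALS_PSW | sudo docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
            }
        }
        // note: the image name pushed here must match the image: field in docker-compose.yaml
        stage('Push Image to Docker Hub') {
            steps {
                sh 'sudo docker push <dockerhubusername>/<dockerhubreponame>:$BUILD_NUMBER'
            }
        }
    }
}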
I am working on a CI/CD pipeline where I have a DEV, a TEST, and a PROD server. With a Jenkins pipeline I deploy my newest image onto my DEV server. Now I want to take the image from my DEV server by reading out its sha256 ID and deploy it on my TEST server.
I have a Jenkins Pipeline for that:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [cannot show the environment variables I pass] --restart always angular:latest'
                }
            }
        }
    }
}
As you can see, I currently use the :latest tag, but I want something like this:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [cannot show the environment variables I pass] --restart always \$imageofDev'
                }
            }
        }
    }
}
$imageofDev = docker inspect mycontainerdev | grep -o 'sha256:[^"]*' // this command works and gives me back the raw sha256 ID
So that it uses the actual sha256 ID of my dev image.
I don't know how I can define this variable and later use its value in this Jenkins pipeline. How can I do this?
When you build the image, choose an image name and tag that you can remember later. Jenkins provides several environment variables that you can use to construct this, such as BUILD_NUMBER; in a multibranch pipeline you also have access to BRANCH_NAME and CHANGE_ID; or you can directly run git in your pipeline code.
def shortCommitID = sh script: 'git rev-parse --short HEAD', returnStdout: true
def dockerImage = "project:${shortCommitID.trim()}"
def registry = 'registry.example.com'
def fullDockerImage = "${registry}/${dockerImage}"
Now that you know the Docker image name you're going to use everywhere, you can just use it; you never need to go off and look up the image ID. Using the scripted pipeline Docker integration, for example:
docker.withRegistry("https://${registry}") {
def image = docker.build(dockerImage)
image.push
}
Since you know the registry/image:tag name, you can just use it in other Jenkins directives too:
def containerName = 'container'
// stop an old container, if any
sh "docker stop ${containerName} || true"
sh "docker rm ${containerName} || true"
// start a new container that outlives this pipeline
sh "docker run -d --name ${containerName} ${fullDockerImage}"
I use Jenkins to build my Maven Java app, then create a Docker image and push it. After all of that I have a try-catch where I try to stop and remove the container if it's already running; if not, it should just skip that and run the new image. It works, but it always marks the build as failed. I tried to change the build status, but apparently that is not possible.
Here is my pipeline:
node {
    stage('Clone repository') {
        git branch: 'main', credentialsId: 'realsnack-git', url: 'https://github.com/Realsnack/Java-rest-api.git'
    }
    stage('Build maven project') {
        sh './mvnw clean package'
    }
    stage('Build docker image') {
        sh 'docker build -t 192.168.1.27:49153/java-restapi:latest .'
    }
    stage('Push image') {
        sh 'docker push 192.168.1.27:49153/java-restapi:latest'
    }
    try {
        stage('Remove old container') {
            sh 'docker stop java-rest_api && docker rm java-rest_api'
        }
    } catch (all) {
        echo 'No container to remove - running it anyway'
    } finally {
        stage('Run image') {
            sh 'docker run -d --name java-rest_api -p 8081:8081 192.168.1.27:49153/java-restapi:latest'
        }
    }
}
docker stop fails if there is no running container with that name to stop.
You can solve the issue in one of the two following ways:
Check that there is a running container before attempting to stop it:
sh "if [[ docker ps -a | grep java-rest_api ]]; docker stop java-rest_api; fi"
Ignore the docker error:
sh "docker stop java-rest_api || true"
I created a pipeline in Jenkins which takes an app from GitHub, builds it, builds an image from it, and finally runs that image.
The Dockerfile is:
FROM javastreets/mule:latest
COPY ./target/jenkins-demo-api-1.0.0-1.0.0-SNAPSHOT-mule-application.jar /opt/mule/apps/
CMD [ "/opt/mule/bin/mule"]
Here jenkins-demo-api-1.0.0-1.0.0-SNAPSHOT-mule-application.jar is the app built in Jenkins from GitHub.
The pipeline script is as follows:
pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('git pull') {
            steps {
                git branch: 'master', credentialsId: '025fbee3-18cc-4298-ac9b-adac*****', url: 'https://github.com/treadston-e/mule-jenkins.git'
            }
        }
        stage('Build') {
            steps {
                bat "mvn clean package"
            }
        }
        stage('build image') {
            steps {
                bat 'docker build -t docker-demo .'
            }
        }
        stage('run image') {
            steps {
                bat 'docker run -d -p 127.0.0.1:8081:8081 docker-demo'
            }
        }
    }
}
The pipeline executes successfully, but when I try to hit http://localhost:8081, the response I receive is "This page isn't working".
What should I do?
The localhost you are referring to is the localhost of the Docker container, which is not the same as your client's. Try specifying the network in your docker run command.
docker run -d --network host docker-demo   # published ports (-p) are discarded in host networking mode, so the mapping is dropped here
If you would like to check which IP address the bridge network is using, you can inspect it as follows:
docker network inspect bridge
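Alternatively, note that the run command in the question binds the published port to 127.0.0.1 on the Docker host only; if you are hitting the server from another machine, publishing on all interfaces may be enough (a sketch, keeping the default bridge network):
docker run -d -p 8081:8081 docker-demo
With this form the app is reachable at http://<docker-host-ip>:8081 from other machines, and at http://localhost:8081 on the host itself.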
I am running a Jenkins Docker image like this:
docker run \
--rm \
-u root \
-p 8080:8080 \
-v /home/ec2-user/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$HOME":/home \
jenkins/jenkins:lts
My Jenkins server is up, but when I try to run a Docker image build as below:
pipeline {
    environment {
        registry = "leexha/node_demo"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    tools {
        nodejs "node"
    }
    stages {
        stage('Git clone') {
            steps {
                git 'https://github.com/leeadh/node-jenkins-app-example.git'
            }
        }
        stage('Installing Node') {
            steps {
                sh 'npm install'
            }
        }
        stage('Conducting Unit test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Pushing to Docker Hub') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
it keeps telling me that docker is not found.
I already exposed the Docker daemon to the container via -v /var/run/docker.sock:/var/run/docker.sock, so I'm pretty confused about what's going on.
Any help?
You need to install the Docker client inside the Jenkins container: mounting /var/run/docker.sock exposes the daemon, but the jenkins/jenkins image does not ship the docker CLI binary. You should also install and configure the Docker plugin on your Jenkins server.
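A minimal sketch of a custom image that adds the Docker CLI on top of the official one (assuming the Debian-based jenkins/jenkins:lts base, where docker.io is the distro package for Docker):
FROM jenkins/jenkins:lts
USER root
# Install the Docker client so sh 'docker ...' steps can talk to the
# daemon exposed through the mounted /var/run/docker.sock
RUN apt-get update && \
    apt-get install -y docker.io && \
    rm -rf /var/lib/apt/lists/*
USER jenkins
Build it with docker build -t jenkins-with-docker . and substitute jenkins-with-docker for jenkins/jenkins:lts in the docker run command above; the rest of the flags, including the socket mount, stay the same.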