terraform plan fails when run from Jenkins docker container

I don't know what possessed me to do this. I'm just a tinkerer, so I started following this tutorial: https://www.jenkins.io/doc/book/pipeline/docker/ and got it to work. Then I wanted to create a pipeline that would download a Terraform docker image and use it to create a docker container. Here's my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-i --entrypoint='
        }
    }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World from Github.'
            }
        }
        stage('Test') {
            steps {
                sh 'ls -al'
                sh 'terraform --version'
                sh 'terraform init'
                sh 'terraform plan'
                sh 'ls -al'
            }
        }
    }
}
Here is my main.tf:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.13.0"
    }
  }
}

// This didn't work either:
// provider "docker" {}
provider "docker" {
  host = "tcp://127.0.0.1:2375/"
}

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "tutorial"
  ports {
    internal = 80
    external = 8000
  }
}
Here is the error message from my Jenkins console output:
[Pipeline] sh
+ terraform plan
╷
│ Error: Error pinging Docker server:
│ Cannot connect to the Docker daemon at tcp://127.0.0.1:2375/.
│ Is the docker daemon running?
│
│   with provider["registry.terraform.io/kreuzwerker/docker"],
│   on main.tf line 10, in provider "docker":
│   10: provider "docker" {
│
╵
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Jenkins is running in a docker container on my Mac laptop:
red@Reds-MacBook-Pro ~ % docker ps | grep jenk
8401359dae3e myjenkins-blueocean:2.346.2-1 "/usr/bin/tini -- /u…" 19 hours ago Up 19 hours 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp jenkins-blueocean
507eb2212eb6 docker:dind "dockerd-entrypoint.…" 19 hours ago Up 19 hours 2375/tcp, 0.0.0.0:2376->2376/tcp jenkins-docker
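Inside the hashicorp/terraform agent container, 127.0.0.1 is the Terraform container itself, not the docker:dind daemon, which is why the ping fails. A hedged sketch of a provider block, assuming the dind container is reachable under the network alias `docker` with TLS on 2376 and the client certificates mounted at /certs/client (as in the jenkins.io tutorial setup; the host and cert path are assumptions to adjust to your own network):

```hcl
provider "docker" {
  # "docker" is the dind container's network alias in the tutorial setup
  host      = "tcp://docker:2376/"
  # client certs generated by docker:dind; this path is an assumption
  cert_path = "/certs/client"
}
```

For this to work, the Terraform agent container also has to be attached to the same Docker network and have the cert volume mounted, e.g. via additional `args` on the docker agent.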

Related

Stuck with Jenkins not building my docker images

I'm running Jenkins as a container and for some reason I'm having issues :D.
After the pipeline runs docker build -t testwebapp:latest . I get docker: Exec format error on the Build image stage.
The pipeline command docker.build seems to do what it should, so something is wrong with my environment?
The Jenkins docker-compose includes docker.sock, so the running Jenkins should be allowed to piggyback off the host Docker?
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Pipeline script defined in Jenkins:
pipeline {
    agent any
    stages {
        stage('Initialize Docker') {
            steps {
                script {
                    def dockerHome = tool 'myDocker'
                    env.PATH = "${dockerHome}/bin:${env.PATH}"
                }
            }
        }
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'github url'
            }
        }
        stage('Build image') {
            steps {
                script {
                    docker.build("testwebapp:latest")
                }
            }
        }
    }
    post {
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
        }
    }
}
The global tool configuration is pretty standard:
Jenkins global tool config
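An "Exec format error" generally means the kernel was asked to run a binary in a format it doesn't understand — most often a docker binary built for a different OS or CPU architecture than the agent (for example, a wrong download URL configured for the 'myDocker' tool). A small diagnostic sketch, assuming the binary the tool installed is on PATH:

```shell
# "Exec format error" usually means the binary doesn't match the agent's
# OS/CPU architecture; compare the two outputs below.
uname -m                                  # agent architecture, e.g. x86_64 or aarch64
docker_bin=$(command -v docker || true)   # empty if docker is not on PATH
if [ -n "$docker_bin" ]; then
  file "$docker_bin"                      # should report an executable for the arch above
else
  echo "docker not found on PATH"
fi
```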

Error when transferring Docker Image to Docker Hub using Jenkins Pipeline

Below is a portion of a Jenkins Pipeline script that I am using to build a Docker image and deploy it to Docker Hub. The problem I am having is that after executing the pipeline, the Docker image is not transferred to Docker Hub and the local Docker image (created during the process) is not erased.
pipeline {
    environment {
        registry = "<my_docker_hub userid>/object"
        registryCredential = 'dockerhub'
    }
    agent { label 'ubuntu16.04-slave-two' }
    stages {
        stage('Cloning Git') {
            steps {
                ...
            }
        }
        stage('Building image') {
            steps {
                sh "/usr/local/bin/docker-compose -p $registry:$BUILD_NUMBER build "
            }
        }
        stage('Deploy Image') {
            steps {
                sh "docker push $registry:$BUILD_NUMBER"
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
Even though I get a SUCCESS message when building the image:
Successfully built fe86784636c2
Successfully tagged <docker_hub_id>object44_website:latest
The image is not transferred over to the Docker Hub.
Below is the log I got when running the Pipeline code:
Started by user Jenkins Admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on ubuntu16.04-slave-two in /var/jenkins/workspace/oracle-client-with-flask
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Cloning Git)
[... snip ...]
Building authorize-service
Step 1/11 : FROM oraclelinux:7
---> a803b2474b20
[... snip ...]
Step 3/4 : COPY . /var/www/html/
---> Using cache
---> e0b4cd5713c0
Step 4/4 : EXPOSE 80
---> Using cache
---> fe86784636c2
Successfully built fe86784636c2
Successfully tagged <docker_hub_id>object44_website:latest
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy Image)
[Pipeline] sh
+ docker push <docker_hub_id>/object:44
The push refers to a repository [docker.io/<docker_hub_id>/object]
An image does not exist locally with the tag: <docker_hub_id>/object
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Remove Unused docker image)
Stage "Remove Unused docker image" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
How can I get the image successfully to Docker Hub?
TIA
EDIT: Here is the docker-compose.yml file
version: '3'
services:
  authorize-service:
    build: ./authorizations
    container_name: authorize-service
    environment:
      - DB_IP=XXX.XX.XX.XX
      - DB_PORT=1521
      - DB_SID=TEST
      - DB_USER=xxxxxxx
      - DB_PASS=xxxxxxx
    ports:
      - 2700:5000
    networks:
      - kdcnetwork
  testtab-service:
    build: ./testtab
    container_name: testtab-service
    environment:
      - DB_IP=XXX.XX.XX.XX
      - DB_PORT=1521
      - DB_SID=TEST
      - DB_USER=xxxxx
      - DB_PASS=xxxxx
    ports:
      - 2800:5000
    networks:
      - kdcnetwork
  website:
    build: ./website
    container_name: testtab-website
    links:
      - testtab-service
    volumes:
      - ./website:/var/www/html
    ports:
      - 5000:80
    networks:
      - kdcnetwork
    depends_on:
      - testtab-service
networks:
  kdcnetwork:
    driver: bridge
You didn't provide the docker-compose file, so I cannot give a very accurate answer, but I can see that the tag that was built:
Successfully tagged <docker_hub_id>object44_website:latest
differs from what you are trying to push:
docker push <docker_hub_id>/object:44
Those two names must be the same.
Edit:
So you must change the website section in your docker-compose to the following:
website:
  build: ./website
  image: <docker_hub_id>/object:44
so docker-compose will build the image <docker_hub_id>/object:44 and your docker push command should be able to push it.

Jenkins Kubernetes Plugin doesn't execute entrypoint of Docker image

I'm fairly new to the Jenkins Kubernetes Plugin and Kubernetes in general - https://github.com/jenkinsci/kubernetes-plugin
I want to use the plugin for E2E test setup inside my CI.
Inside my Jenkinsfile I have a podTemplate which looks like, and is used as, follows:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    command:
    - cat
    tty: true
    ports:
    - containerPort: 3000
  - name: cypress
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
    image: ${CYPRESS_IMAGE_PATH}
    command:
    - cat
    tty: true
"""
pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Prepare') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine = docker.build("${WEBSITE_IMAGE_PATH}")
                    }
                }
            }
        }
        stage('Build') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine.inside("-u root") {
                            sh "yarn build"
                        }
                    }
                }
            }
            post {
                success {
                    timeout(time: 15) {
                        script {
                            docker.withRegistry("https://${REGISTRY}", REGISTRY_CREDENTIALS) {
                                integrationImage = docker.build("${WEBSITE_INTEGRATION_IMAGE_PATH}")
                                integrationImage.push()
                            }
                        }
                    }
                }
            }
        }
        stage('Browser Tests') {
            agent {
                kubernetes {
                    label "${KUBERNETES_LABEL}"
                    yaml podTemplate
                }
            }
            steps {
                timeout(time: 5, unit: 'MINUTES') {
                    container("website") {
                        sh "yarn start"
                    }
                    container("cypress") {
                        sh "yarn test:e2e"
                    }
                }
            }
        }
    }
}
In the Dockerfile that builds the image I added an ENTRYPOINT:
ENTRYPOINT ["bash", "./docker-entrypoint.sh"]
However, it seems that it's not executed by the Kubernetes plugin.
Am I missing something?
As per Define a Command and Arguments for a Container docs:
The command and arguments that you define in the configuration file
override the default command and arguments provided by the container
image.
This table summarizes the field names used by Docker and Kubernetes:
| Docker field name | K8s field name |
|------------------:|:--------------:|
| ENTRYPOINT | command |
| CMD | args |
Defining a command implies ignoring your Dockerfile ENTRYPOINT:
When you override the default ENTRYPOINT and CMD, these rules apply:
If you supply a command but no args for a Container, only the supplied command is used. The default ENTRYPOINT and the default CMD defined in the Docker image are ignored.
If you supply only args for a Container, the default ENTRYPOINT
defined in the Docker image is run with the args that you supplied.
So you need to replace `command` in your pod template with `args`, which will preserve your Dockerfile ENTRYPOINT (the `args` then act like a Dockerfile CMD).
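Applied to the podTemplate above, that means dropping `command:` so the image's ENTRYPOINT runs, and optionally supplying `args:`, which then replaces only the Dockerfile CMD. A sketch, with a placeholder image name and a hypothetical argument:

```yaml
spec:
  containers:
  - name: website
    image: my-registry/website:latest   # placeholder for ${WEBSITE_INTEGRATION_IMAGE_PATH}
    # no `command:` here, so the Dockerfile ENTRYPOINT still runs;
    # `args` (optional) replaces only the Dockerfile CMD
    args: ["serve"]                     # hypothetical argument; omit to keep the image's CMD
    ports:
    - containerPort: 3000
```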

Jenkins - sshagent plugin doesn't work with Kubernetes plugin

Our env: Jenkins version: 2.138.3
Kubernetes plugin: 1.13.5
Sshagent plugin: 1.17
I have a job that runs OK on an AWS machine (sshagent works as it should), but when I run the same job on our Kubernetes cluster, it fails with an ssh error.
Attached the working pipeline:
pipeline {
    agent {
        label 'deploy-test'
    }
    stages {
        stage('sshagent') {
            steps {
                script {
                    sshagent(['deploy_user']) {
                        sh 'ssh -o StrictHostKeyChecking=no 99.99.999.99 ls'
                    }
                }
            }
        }
    }
}
If I change the label to 'k8s-slave', it fails with:
+ ssh -o StrictHostKeyChecking=no 99.99.999.99 ls
Warning: Permanently added '99.99.999.99' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Any idea?
I just added my Kubernetes configuration in Jenkins.

Cannot specify flags when using variables for Docker agent args?

I am attempting to mount a volume for my Docker agent with a Jenkins pipeline. The following is my Jenkinsfile:
pipeline {
    agent none
    environment {
        DOCKER_ARGS = '-v /tmp/my-cache:/home/my-cache'
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-image:latest'
                    args '$DOCKER_ARGS'
                }
            }
            steps {
                sh 'ls -la /home'
            }
        }
    }
}
Sadly it fails to run, and the following can be seen from the pipeline.log file.
java.io.IOException: Failed to run image 'my-image:latest'. Error: docker: Error response from daemon: create /tmp/my-cache: " /tmp/my-cache" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
However, the following Jenkinsfile does work:
pipeline {
    agent none
    environment {
        DOCKER_ARGS = '/tmp/my-cache:/home/my-cache'
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-image:latest'
                    args '-v $DOCKER_ARGS'
                }
            }
            steps {
                sh 'ls -la /home'
            }
        }
    }
}
The only difference is the -v flag is hardcoded outside of the environment variable.
I am new to Jenkins, so I have struggled to find any documentation on this behaviour. Could somebody please explain why I can't define my Docker agent args entirely in an environment variable?
