Use Nomad to build and push docker image

Currently I use nomad with the docker driver for services and batch jobs.
I have a project for which I can't simply use github/gitlab/circleci/etc to build the image, because the build requires access to network resources that are otherwise private (i.e. no access from 3rd-party platforms).
Is there a way to build and push Docker images using batch jobs?
The things I've tried and the issues I've run into:
exec: its use of isolation primitives means it is not able to access the running Docker daemon.
docker: via Docker-in-Docker, I was unable to get the container to access the host machine's daemon.

It turns out raw_exec is the solution. Here's an example task:
task "worker-image" {
driver = "raw_exec"
artifact {
source = "git::git#my-org/my-repo.git"
destination = "local/path"
options {
ref = var.branch
sshkey = var.ssh_key
}
}
env {
ENV = var.env
BUILD_DOCKERFILE = "local/path/Dockerfile"
BUILD_IMAGE_NAME = var.image_name
BUILD_CONTEXT = "local/path/."
}
config {
command = "/bin/bash"
args = [
"-xc",
"docker build ${BUILD_CONTEXT} -f ${BUILD_DOCKERFILE} -t ${BUILD_IMAGE_NAME} --build-arg ENV=${ENV} && docker push ${BUILD_IMAGE_NAME}"
]
}
}
Note that I tried using a bash script (so command = "myscript.sh") but it didn't work: I kept getting a "docker build" requires exactly 1 argument. error, even though I passed the args and options via env vars (the same ones the working task example above uses; they work inline but not from the script file).
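For anyone hitting the same error with the script approach, one likely culprit is quoting: if the variable expansions are unquoted or empty, docker build no longer receives exactly one context argument. A minimal sketch of what such a script might look like (hypothetical; it assumes the same BUILD_* and ENV variables from the env stanza above):

#!/usr/bin/env bash
# Echo commands and fail fast, mirroring the bash -xc flags used in the inline example.
set -euxo pipefail

# All of these are expected to be injected by the task's env stanza.
: "${BUILD_CONTEXT:?}" "${BUILD_DOCKERFILE:?}" "${BUILD_IMAGE_NAME:?}" "${ENV:?}"

# Quote every expansion so docker build still sees exactly one context argument.
docker build "${BUILD_CONTEXT}" \
  -f "${BUILD_DOCKERFILE}" \
  -t "${BUILD_IMAGE_NAME}" \
  --build-arg ENV="${ENV}"

docker push "${BUILD_IMAGE_NAME}"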

Related

How to pass docker run arguments in Jenkins?

I am trying to set up my Jenkins pipeline using this Docker image. It needs to be executed as follows:
docker run --rm \
  -v $PROJECT_DIR:/input \
  -v $PROJECT_DIR:/output \
  -e PLATFORM_ID=$PLATFORM_ID \
  particle/buildpack-particle-firmware:$VERSION
The implementation in my Jenkins pipeline looks like this:
stage('build firmware') {
    agent {
        docker {
            image 'particle/buildpack-particle-firmware:4.0.2-tracker'
            args '-v application:/input -v application:/output -e PLATFORM_ID=26 particle/buildpack-particle-firmware:4.0.2-tracker'
        }
    }
    steps {
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}
Executing this on my PC system works just fine.
Upon executing the Jenkins pipeline, I am eventually getting this error:
java.io.IOException: Failed to run image 'particle/buildpack-particle-firmware:4.0.2-tracker'. Error: docker: Error response from daemon: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "-w": executable file not found in $PATH: unknown.
I read through the documentation of Jenkins + Docker, but I couldn't find out how to use such an image. All the guides usually explain how to run a docker image and execute shell commands.
If I get it right, this Dockerfile is the layout for the said docker image.
How do I get around this issue and call a docker container with run arguments?
The agent mode is intended for when you want to run Jenkins build steps inside a container; in your example, it would run the archiveArtifacts step instead of the thing the container normally does. You can imagine putting an image that only contains a build tool, like golang or one of the Java images, in the agent { docker { image } } line, and Jenkins will inject several lines of docker command-line options so that it runs against the workspace tree.
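To make that concrete, a minimal sketch of that intended usage (the image and commands are purely illustrative, not from the question):

stage('unit test') {
    agent {
        // Jenkins wraps this image with its own docker run options (workspace mount, -u, -w).
        docker { image 'golang:1.21' }
    }
    steps {
        // This step runs inside the golang container, against the checked-out workspace.
        sh 'go test ./...'
    }
}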
The Jenkins Docker interface may not have a built-in way to wait for a container to complete. Instead, you can launch it as a "sidecar" container, then run docker wait, still from outside the container, to wait for it to finish. This would roughly look like:
stage('build firmware') {
    steps {
        script {
            docker.image('particle/buildpack-particle-firmware:4.0.2-tracker')
                .withRun('-v application:/input -v application:/output -e PLATFORM_ID=26') { c ->
                    sh "docker wait ${c.id}"
                }
        }
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}
In the end, it is up to Jenkins how the docker run command is executed and which entrypoint is used. Unfortunately, I can't change the settings of the Jenkins server, so I had to find a workaround.
The solution for me is similar to my initial approach and looks like this:
agent {
    docker {
        image 'particle/buildpack-hal'
    }
}
environment {
    APPDIR = "$WORKSPACE/tracker-edge"
    DEVICE_OS_PATH = "$WORKSPACE/device-os"
    PLATFORM_ID = "26"
}
steps {
    sh 'make sanitize -s'
}
One guess is that calling the Docker container the way it is normally invoked doesn't work on my Jenkins server. It has to be run as the agent, with the shell commands executed from within it.

How to use docker image in Docker Desktop for terraform kubernetes

I have built my Docker image in Docker Desktop, but I do not know how to configure things so that the Terraform Kubernetes provider can refer to the local image (it gets stuck while creating the pod).
Here is what my tf file looks like:
....
provider "kubernetes" {
config_path = "~/.kube/config"
}
resource "kubernetes_pod" "test" {
metadata {
name = "backend-api"
labels = {
app = "MyNodeJsApp"
}
}
spec {
container {
image = "backendnodejs:0.0.1"
name = "backendnodejs-container"
# I think it keep pulling from Docker Hub
port {
container_port = 5000
}
}
}
}
resource "kubernetes_service" "test" {
metadata {
name = "backendnodejs-service"
}
spec {
selector = {
app = kubernetes_pod.test.metadata.0.labels.app
}
port {
port = 5000
target_port = 5000
}
type = "LoadBalancer"
}
}
After hours of researching how to deploy to minikube (installed from the minikube website, not Docker Desktop's Kubernetes), I found out that minikube runs as a container itself inside Docker Desktop, which is why you cannot use images from Docker Desktop to deploy into minikube.
Here are the links that cover this:
Pushing images from minikube
How to use local docker images with Minikube?
So here is what you need to do before using Terraform to deploy to minikube:
Pushing directly to the in-cluster Docker daemon (docker-env)
Windows
PowerShell
PS> & minikube -p minikube docker-env --shell powershell | Invoke-Expression
CMD
CMD> @FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i
Linux/MacOS
> eval $(minikube docker-env)
Build the Docker image again (in the same terminal where the command above was entered):
docker build -t your_image_tag your_docker_file
Run your Terraform files as normal (in the same terminal).
This link also explains the same approach as above.
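Condensed, the Linux/macOS flow might look roughly like this (a sketch; the image tag matches the Terraform config above, and the build context path is a placeholder):

# Point this shell's docker CLI at minikube's in-cluster Docker daemon.
eval $(minikube -p minikube docker-env)

# Rebuild the image so it is stored inside the minikube node rather than Docker Desktop.
docker build -t backendnodejs:0.0.1 .

# Apply the Terraform config from the same shell.
terraform apply

Depending on the provider and cluster setup, you may also need image_pull_policy = "Never" (or "IfNotPresent") on the container block so Kubernetes uses the locally built image instead of trying to pull it from Docker Hub; that detail is an assumption on my part, not something from the answer above.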

Jenkins: Connect to a Docker container from a stage that is run with an agent (another Docker container)

I am in the process of reworking a pipeline to use Declarative Pipelines approach so that I will be able to use Docker images on each stage.
At the moment I have the following working code which performs integration tests connecting to a DB which is run in a Docker container.
node {
    // checkout, build, test stages...
    stage('Integration Tests') {
        docker.image('mongo:3.4').withRun(' -p 27017:27017') { c ->
            sh "./gradlew integrationTest"
        }
    }
}
Now with Declarative Pipelines the same code would look something like this:
pipeline {
    agent none
    stages {
        // checkout, build, test stages...
        stage('Integration Test') {
            agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }
            steps {
                script {
                    docker.image('mongo:3.4').withRun(' -p 27017:27017') { c ->
                        sh "./gradlew integrationTest"
                    }
                }
            }
        }
    }
}
Problem: The stage is now run inside a Docker container, and running docker.image() leads to a docker: not found error in the stage (it is looking for docker inside the openjdk image, which is now being used).
Question: How to start a DB container and connect to it from a stage in Declarative Pipelines?
What you are essentially trying to use is DinD (Docker-in-Docker).
You are using a Jenkins agent that is created from the Docker image in agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }.
Once that container is running you try to execute a docker command. The docker: not found error is expected, as there is no Docker CLI installed in that image. You need to update the Dockerfile / create a custom image based on openjdk:11.0.4-jdk-stretch with the Docker client installed.
Once the client is installed you need to volume mount /var/run/docker.sock so that the client inside the container talks to the host's Docker daemon via the socket.
The user should be root or a privileged user to avoid permission denied issues.
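Sketched as a declarative agent block (the image name is a placeholder for a custom image built on openjdk:11.0.4-jdk-stretch with the Docker CLI added):

agent {
    docker {
        // Placeholder: openjdk 11 plus the docker CLI baked in.
        image 'my-registry/openjdk11-docker-cli'
        // Mount the host's Docker socket and run as root so the CLI can use it.
        args '-v /var/run/docker.sock:/var/run/docker.sock -u root'
    }
}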
So if I get this correctly, your tests need two things:
Java Environment
DB Connection
In this case, have you tried a different approach like Docker-in-Docker (DinD)?
You can have a custom image that uses docker:dind as a base image and contains your Java environment, use it in the agent section, and then the rest of the pipeline steps will be able to use the docker command as you expected.
In your example you are trying to run a container inside openjdk:11.0.4-jdk-stretch. If this image does not have Docker installed you will not be able to execute docker, and even if it did, you would be running Docker inside Docker, which you should avoid.
So it depends on what you want.
Using multiple containers:
In this case you can combine multiple Docker images, but they are not dependent on each other:
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
Running "sidecar" containers:
This example shows how to use two containers simultaneously, which are able to interact with each other:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
Please refer to the official documentation -> https://jenkins.io/doc/book/pipeline/docker/
I hope it will help you.
I have had a similar problem, where I wanted to be able to use an off-the-shelf Maven Docker image to run my builds in, while also being able to build a Docker image containing the application.
I accomplished this by first starting the Maven container in which the build is to be run, giving it access to the host's Docker endpoint.
Partial example:
docker run -v /var/run/docker.sock:/var/run/docker.sock maven:3.6.1-jdk-11
Then, inside the build-container, I download the Docker binaries and set the Docker host:
export DOCKER_HOST=unix:///var/run/docker.sock
wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
tar -xvzf docker-*.tgz
cp docker/docker /usr/local/bin
Now I can run the docker command inside my build-container.
As a (for me positive) side effect, any Docker image built inside a container in one step of the build will be available to subsequent steps of the build, also running in containers, since the images are retained on the host.

Terraform, docker, Debian 8

I am a beginner with Terraform and I am looking for help. I tried Google, but I could not find a solution.
I have a Debian 8 server. I installed Docker and Terraform successfully. Now I need to create a Docker container running Ubuntu and set up SSH access to this container with Terraform. My Terraform config creates the Docker container and sets the image and provider, but I cannot find how to set up SSH access to it or install additional software.
Terraform config:
# Configure the Docker provider
provider "docker" {
  host = "tcp://127.0.0.1:2376/"
}

# Definition of ubuntu image
resource "docker_image" "ubuntu" {
  name = "ubuntu:latest"
}

# Create a container
resource "docker_container" "Ubn_Con" {
  image = "${docker_image.ubuntu.latest}"
  name  = "Ubn_Con"
}
Thank you for any help.
The docker_container resource has an attribute called network_data, which contains ip_address. That is the IP address of your container, so you could use it with SSH.
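For reference, reading that attribute might look roughly like this (a sketch; the exact attribute path can vary between provider versions):

output "ubn_con_ip" {
  # IP address on the first network the container is attached to.
  value = docker_container.Ubn_Con.network_data[0].ip_address
}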
However, Jan Mesarc is correct: you do not need to SSH into a container to set it up with software (or ever, actually, but that's a longer story). Instead, you create an image for the container to be brought up from, using a Dockerfile.
For example:
FROM ubuntu:latest
RUN apt-get update && \
apt-get install -y curl
Then you build that image with docker build . -t ubuntu-curl:0.0.1 and push it to Docker Hub. If you want to use another registry, just change the value of -t to include the full URL.
Then you can use that image in your docker_image resource:
resource "docker_image" "ubuntu" {
name = "ubuntu-curl:0.0.1"
}

Use git in jenkins pipeline with docker agent

When I try to run the pipeline below, it fails with this error:
Cloning into '/go/src/github.com/gorilla/websocket'...
fatal: unable to look up current user in the passwd file: no such user
package github.com/gorilla/websocket: exit status 128
As far as I understand, the issue is that Jenkins starts the container with the user ID of the jenkins user to keep the file system permissions right (docker run -t -d -u 108:113 ....), but how can I use git then?
pipeline {
    agent none
    stages {
        ...
        stage('Build Code') {
            agent {
                docker {
                    image 'xxx.de/go_build_container'
                    args '-v=$WORKSPACE:/go/src/bitbucket.org/xxx/service_donation'
                }
            }
            environment {
                HOME = "."
            }
            steps {
                sh 'cd /go/src/bitbucket.org/xxx/service_donation && go get github.com/gorilla/websocket'
            }
        }
    }
}
A lot of programs won't work when running with a user ID that does not exist in /etc/passwd, and git is one of those programs.
You must specify a correct -u argument for your Docker container, that is, one that exists in your xxx.de/go_build_container image; root (or 0:0) is one of them.
So put something like args '-v=$WORKSPACE:/go/src/bitbucket.org/xxx/service_donation -u 0:0' and it will work.
Then you will face another problem: the files created in your volume will belong to the UID you are using in the container, so you may need to add some chown commands if you want to reuse those files later in your pipeline.
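Put together, the agent block might look roughly like this (a sketch based on the answer above; the image and mount path come from the question):

agent {
    docker {
        image 'xxx.de/go_build_container'
        // Run as root so the build uses a UID that exists in /etc/passwd and git can look up the current user.
        args '-v=$WORKSPACE:/go/src/bitbucket.org/xxx/service_donation -u 0:0'
    }
}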
Another option I found is mounting /etc/passwd to the container.
agent {
    docker {
        image 'xxxx'
        args '-v=/etc/passwd:/etc/passwd'
    }
}
I'm not sure if that has any other problems, but it seems to work, and you don't have the problem with wrong permissions.
