ConcourseCI - docker-image resource issue; mount: permission denied (are you root?) - docker

I have concourse 3.8.0 running on my workstation which is Ubuntu 17.04 and
here is my pipeline definition:
---
jobs:
- name: job-docker-image-resource
  public: true
  plan:
  - get: "golang_tools_docker_image"
  - task: docker-image-resource
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      run:
        path: echo
        args: [docker-image-resource]

resources:
- name: "golang_tools_docker_image"
  type: docker-image
  source:
    repository: "golang"
    tag: "1.9.2-alpine3.7"

resource_types:
- name: docker-image
  type: docker-image
  source:
    repository: concourse/docker-image-resource
    tag: docker-1.12.6
And here is the output, which ends with the error from the title:

mount: permission denied (are you root?)
This works fine in concourse 2.7.7. I haven't tried any versions between 2.7.7 and 3.8.0 yet.

You need privileged: true on the resource type definition:
resource_types:
- name: docker-image
  privileged: true
  type: docker-image
  source:
    repository: concourse/docker-image-resource
    tag: latest
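A quick local sanity check (just a sketch) before re-applying the pipeline: confirm the resource_types entry actually carries privileged: true.

```shell
# Write the corrected resource_types stanza to a file (here a minimal
# pipeline.yml containing only the fragment from the answer above).
cat > pipeline.yml <<'EOF'
resource_types:
- name: docker-image
  privileged: true
  type: docker-image
  source:
    repository: concourse/docker-image-resource
    tag: latest
EOF

# The line right after the resource type's name should be privileged: true.
grep -A1 'name: docker-image' pipeline.yml | grep 'privileged: true'
```

You would then push the fixed config with something like `fly -t <target> set-pipeline -p <pipeline> -c pipeline.yml` (target and pipeline names are yours to fill in).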


Drone Pipeline : Drone Cache mount path for Maven Repository not able to resolve

I'm new to Drone pipelines and am interested in using them in my current project for CI/CD.
My project tech stack is as follows:
Java
Spring Boot
Maven
I have created a sample Drone pipeline, but I am not able to cache the Maven dependencies that are downloaded and stored in the .m2 folder.
It always says the mount path is not available or not found. Please find the screenshot for the same:
Drone mount path issue
I am not sure of the path to provide here. Can someone help me understand the mount path we need to provide so that all the dependencies in the .m2 path are cached?
Adding the pipeline information below:
kind: pipeline
type: docker
name: config-server

steps:
- name: restore-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    restore: true
    cache_key: "volume"
    archive_format: "gzip"
    mount:
    - ./target
    - /root/.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache
- name: build
  image: maven:3.8.3-openjdk-17
  pull: if-not-exists
  environment:
    M2_HOME: /usr/share/maven
    MAVEN_CONFIG: /root/.m2
  commands:
  - mvn clean install -DskipTests=true -B -V
  volumes:
  - name: cache
    path: /tmp/cache
- name: rebuild-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    rebuild: true
    cache_key: "volume"
    archive_format: "gzip"
    mount:
    - ./target
    - /root/.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache

trigger:
  branch:
  - main
  event:
  - push

volumes:
- name: cache
  host:
    path: /var/lib/cache
Thanks in advance.
Resolved the issue. Please find the solution and the working Drone pipeline below.
kind: pipeline
type: docker
name: data-importer

steps:
- name: restore-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    restore: true
    ttl: 1
    cache_key: "volume"
    archive_format: "gzip"
    mount:
    - ./.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache
- name: maven-build
  image: maven:3.8.6-amazoncorretto-11
  pull: if-not-exists
  commands:
  - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V
  volumes:
  - name: cache
    path: /tmp/cache
- name: rebuild-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    rebuild: true
    cache_key: "volume"
    archive_format: "gzip"
    ttl: 1
    mount:
    - ./.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache

trigger:
  branch:
  - main
  - feature/*
  event:
  - push

volumes:
- name: cache
  host:
    path: /var/lib/cache
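The key change is `-Dmaven.repo.local=.m2/repository`: the local repository now lives inside the build workspace, so it falls under the plugin's `mount` path. Conceptually, drone-cache's filesystem backend archives each mounted path into the cache volume on rebuild and unpacks it on restore; a minimal shell sketch of that round trip (all paths here are illustrative stand-ins, not what the plugin literally runs):

```shell
CACHE_DIR=$(mktemp -d)    # stands in for the host volume mounted at /tmp/cache
WORKSPACE=$(mktemp -d)    # stands in for the build workspace

# Pretend the maven-build step populated the workspace-local repository.
mkdir -p "$WORKSPACE/.m2/repository"
echo "artifact" > "$WORKSPACE/.m2/repository/dep.jar"

# rebuild-cache step: archive the mount path into the cache volume
# under the cache key ("volume"), gzip-compressed per archive_format.
tar -czf "$CACHE_DIR/volume.tar.gz" -C "$WORKSPACE" .m2/repository

# restore-cache step on a later build, starting from a clean workspace.
FRESH=$(mktemp -d)
tar -xzf "$CACHE_DIR/volume.tar.gz" -C "$FRESH"
ls "$FRESH/.m2/repository"
```

This is why a path outside the workspace (like /root/.m2 in the original attempt) is fragile: the mount must name something the cache step can see relative to the build.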

How to create a Tekton task for building with docker-slim?

Has anyone tried to create a Tekton task for building with docker-slim?
It turns out we can re-use the "dind-sidecar" sample from the Tekton Pipelines repository:
https://github.com/tektoncd/pipeline/blob/main/examples/v1alpha1/taskruns/dind-sidecar.yaml
I got it working using the following:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: docker-build-dockerslim
spec:
  params:
  - default: docker.io/dslim/docker-slim:latest
    description: The location of the docker-slim builder image.
    name: builderimage
    type: string
  - default: docker.io/docker:stable
    description: The location of the Docker builder image.
    name: pushimage
    type: string
  - default: "registry.default.svc.cluster.local:5000"
    description: When using an insecure registry (for push/pull to a non-TLS registry), set its name here. Don't set an empty string; remove the option from the task or set it to a dummy value if not required.
    name: insecure
    type: string
  - default: docker.io/docker:dind
    description: The location of the Docker-in-Docker image.
    name: dindimage
    type: string
  - default: Dockerfile
    description: The name of the Dockerfile.
    name: dockerfile
    type: string
  - default: .
    description: Parent directory for your Dockerfile.
    name: dockerroot
    type: string
  resources:
    inputs:
    - name: source
      type: git
    outputs:
    - name: image
      type: image
  steps:
  - args:
    - --state-path
    - /dslim-state
    - --in-container
    - build
    - --http-probe=false
    - --dockerfile
    - $(inputs.params.dockerfile)
    - --dockerfile-context
    - $(inputs.params.dockerroot)
    - $(outputs.resources.image.url)
    env:
    - name: DOCKER_HOST
      value: tcp://127.0.0.1:2376
    - name: DOCKER_TLS_VERIFY
      value: '1'
    - name: DOCKER_CERT_PATH
      value: /certs/client
    image: $(inputs.params.builderimage)
    name: build
    resources: {}
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /dslim-state
      name: state
    - mountPath: /certs/client
      name: dind-certs
    workingDir: /workspace/source
  - command:
    - /bin/sh
    - -c
    - |
      SLIM_IMAGE=$(docker images | awk '/docker-slim.*[0-9]*\.slim/{print $1;exit 0;}')
      docker tag "$SLIM_IMAGE" $(outputs.resources.image.url)
      docker push $(outputs.resources.image.url)
    name: push
    image: $(params.pushimage)
    env:
    - name: DOCKER_HOST
      value: tcp://127.0.0.1:2376
    - name: DOCKER_TLS_VERIFY
      value: '1'
    - name: DOCKER_CERT_PATH
      value: /certs/client
    volumeMounts:
    - mountPath: /certs/client
      name: dind-certs
  sidecars:
  - args:
    - --storage-driver=vfs
    - --userland-proxy=false
    - --debug
    - --insecure-registry=$(inputs.params.insecure)
    env:
    - name: DOCKER_TLS_CERTDIR
      value: /certs
    image: $(inputs.params.dindimage)
    name: dind
    readinessProbe:
      periodSeconds: 1
      exec:
        command:
        - ls
        - /certs/client/ca.pem
    resources: {}
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /certs/client
      name: dind-certs
  volumes:
  - name: dind-certs
    emptyDir: {}
  - emptyDir: {}
    name: state
For some reason, the resulting image doesn't appear with the name I expected it to. I also tried setting a "--target" argument, though neither this nor the default "last-arg-is-image-name" behavior documented in their README seems to work (https://github.com/docker-slim/docker-slim).
However, listing images, I did find the following:
docker-slim-tmp-fat-image.12.20211205135050.slim latest 0037ff15e1f5 2 seconds ago 13.8MB
docker-slim-empty-image latest 9dfd57fb50a8 35 seconds ago 0B
docker-slim-tmp-fat-image.12.20211205135050 latest 9ad36dd5e3f3 39 seconds ago 211MB
<Dockerfiles-FROM-image> master f11e63190556 3 months ago 211MB
Thus, I do a "docker images | awk ..." and then "docker tag xxx.slim the-target-name-I-wanted" before my "docker push".
The resulting image is indeed smaller. I'd like to test this with other images, and make sure it doesn't introduce any regression, ... still, that's interesting.
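The awk filter from the push step can be reproduced offline against the sample `docker images` output above, to confirm it picks out the ".slim" image name:

```shell
# Sample lines copied from the `docker images` listing in the answer.
images='docker-slim-tmp-fat-image.12.20211205135050.slim latest 0037ff15e1f5 2 seconds ago 13.8MB
docker-slim-empty-image latest 9dfd57fb50a8 35 seconds ago 0B
docker-slim-tmp-fat-image.12.20211205135050 latest 9ad36dd5e3f3 39 seconds ago 211MB'

# Same expression as in the Task's push step: match the first line whose
# name contains "docker-slim...<digits>.slim" and print column 1.
SLIM_IMAGE=$(printf '%s\n' "$images" | awk '/docker-slim.*[0-9]*\.slim/{print $1; exit 0;}')
echo "$SLIM_IMAGE"   # -> docker-slim-tmp-fat-image.12.20211205135050.slim
```

The `exit 0` matters: without it, awk would also print the non-slim fat image on the third line.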

Fixing 'invalid reference format' error in docker-image-resource put

I'm currently trying to build and push Docker images; the issue is that I'm receiving this message from Concourse during the wordpress-release put step:
waiting for docker to come up...
invalid reference format
Here's the important bit of the Concourse Pipeline.
resources:
- name: wordpress-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/wordpress-release
    aws_access_key_id: #############
    aws_secret_access_key: #############
- name: mysql-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/mysql-release
    aws_access_key_id: #############
    aws_secret_access_key: #############

jobs:
- name: job-hello-world
  plan:
  - get: wordpress-website
  - task: write-release-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: { repository: alpine/git }
      inputs:
      - name: wordpress-website
      outputs:
      - name: tags
      run:
        dir: wordpress-website
        path: sh
        args:
        - -exc
        - |
          printf $(basename $(git remote get-url origin) | sed 's/\.[^.]*$/-/')$(git tag --points-at HEAD) > ../tags/release-tag
  - put: wordpress-release
    params:
      build: ./wordpress-website/.
      dockerfile: wordpress-website/shared-wordpress-images/wordpress/wordpress-release/Dockerfile
      tag_file: tags/release-tag
  - put: mysql-release
    params:
      build: ./wordpress-website/
      dockerfile: wordpress-website/shared-wordpress-images/db/mysql-release/Dockerfile
      tag_file: tags/release-tag
Those images contain FROM #############.dkr.ecr.eu-west-1.amazonaws.com/shared-mysql (and shared-wordpress); could this be an issue?
The tag_file: tags/release-tag doesn't seem to be the issue, as this still happens even without it.
This is Concourse 5.0 running on top of Docker in Windows 10.
Any thoughts?
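For reference, the value the write-release-tag task computes can be reproduced offline; the remote URL and tag below are made-up illustrative values, not taken from the pipeline:

```shell
# Stand-ins for what the task reads from git inside the workspace:
remote_url="https://github.com/example/wordpress-website.git"  # git remote get-url origin
head_tag="v1.0.0"                                              # git tag --points-at HEAD

# Same pipeline as in the task: strip the trailing extension (".git")
# from the repo name, replace it with "-", and append the tag at HEAD.
release_tag="$(basename "$remote_url" | sed 's/\.[^.]*$/-/')${head_tag}"
echo "$release_tag"   # -> wordpress-website-v1.0.0
```

Inspecting the actual contents of tags/release-tag (for example when no tag, or more than one tag, points at HEAD) may help narrow down what docker considers an invalid reference.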

Jenkins installation automation

Old Question
Is it possible to automate the Jenkins installation (Jenkins binaries, plugins, credentials) using a configuration management tool such as Ansible?
Edited
After asking this question, I learned about and found many ways to automate a Jenkins installation. I found docker-compose an interesting way to achieve it. So my question is: is there a better way to automate the Jenkins installation than what I am doing, and is there any risk in the way I am handling this automation?
I have taken advantage of the Jenkins Docker image and done the automation with docker-compose.
Dockerfile
FROM jenkinsci/blueocean
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code blueocean matrix-auth
docker-compose.yaml
version: '3.7'
services:
  dind:
    image: docker:dind
    privileged: true
    networks:
      jenkins:
        aliases:
        - docker
    expose:
    - "2376"
    environment:
    - DOCKER_TLS_CERTDIR=/certs
    volumes:
    - type: volume
      source: jenkins-home
      target: /var/jenkins_home
    - type: volume
      source: jenkins-docker-certs
      target: /certs/client
  jcac:
    image: nginx:latest
    volumes:
    - type: bind
      source: ./jcac.yml
      target: /usr/share/nginx/html/jcac.yml
    networks:
    - jenkins
  jenkins:
    build: .
    ports:
    - "8080:8080"
    - "50000:50000"
    environment:
    - DOCKER_HOST=tcp://docker:2376
    - DOCKER_CERT_PATH=/certs/client
    - DOCKER_TLS_VERIFY=1
    - JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
    - CASC_JENKINS_CONFIG=http://jcac/jcac.yml
    - GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN:-fake}
    - GITHUB_USERNAME=${GITHUB_USERNAME:-fake}
    volumes:
    - type: volume
      source: jenkins-home
      target: /var/jenkins_home
    - type: volume
      source: jenkins-docker-certs
      target: /certs/client
      read_only: true
    networks:
    - jenkins

volumes:
  jenkins-home:
  jenkins-docker-certs:

networks:
  jenkins:
jcac.yml
credentials:
  system:
    domainCredentials:
    - credentials:
      - usernamePassword:
          id: "github"
          password: ${GITHUB_PASSWORD:-fake}
          scope: GLOBAL
          username: ${GITHUB_USERNAME:-fake}
      - usernamePassword:
          id: "slave"
          password: ${SSH_PASSWORD:-fake}
          username: ${SSH_USERNAME:-fake}
jenkins:
  globalNodeProperties:
  - envVars:
      env:
      - key: "BRANCH"
        value: "hello"
  systemMessage: "Welcome to (one click) Jenkins Automation!"
  agentProtocols:
  - "JNLP4-connect"
  - "Ping"
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: true
  disableRememberMe: false
  markupFormatter: "plainText"
  mode: NORMAL
  myViewsTabBar: "standard"
  numExecutors: 4
  # nodes:
  # - permanent:
  #     labelString: "slave01"
  #     launcher:
  #       ssh:
  #         credentialsId: "slave"
  #         host: "worker"
  #         port: 22
  #         sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
  #     name: "slave01"
  #     nodeDescription: "SSH Slave 01"
  #     numExecutors: 3
  #     remoteFS: "/home/jenkins/workspace"
  #     retentionStrategy: "always"
  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
      - id: "admin"
        password: "${ADMIN_PASSWORD:-admin123}"
      - id: "user"
        password: "${DEFAULTUSER_PASSWORD:-user123}"
  authorizationStrategy:
    globalMatrix:
      permissions:
      - "Agent/Build:user"
      - "Job/Build:user"
      - "Job/Cancel:user"
      - "Job/Read:user"
      - "Overall/Read:user"
      - "View/Read:user"
      - "Overall/Read:anonymous"
      - "Overall/Administer:admin"
      - "Overall/Administer:root"
unclassified:
  globalLibraries:
    libraries:
    - defaultVersion: "master"
      implicit: false
      name: "jenkins-shared-library"
      retriever:
        modernSCM:
          scm:
            git:
              remote: "https://github.com/samitkumarpatel/jenkins-shared-libs.git"
              traits:
              - "gitBranchDiscovery"
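One thing worth knowing when reading jcac.yml: JCasC resolves ${VAR:-default} placeholders much like shell parameter expansion, so defaults such as :-fake and :-admin123 apply only when the variable is not set. In shell terms:

```shell
# Default used when the variable is unset...
unset ADMIN_PASSWORD
echo "${ADMIN_PASSWORD:-admin123}"   # -> admin123

# ...and ignored once the variable is exported by docker-compose.
ADMIN_PASSWORD=s3cret
echo "${ADMIN_PASSWORD:-admin123}"   # -> s3cret
```

This is how the compose file above can run with "fake" credentials out of the box and pick up real ones from the environment.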
The commands to start and stop Jenkins are:
# start Jenkins
docker-compose up -d
# stop Jenkins
docker-compose down
Sure it is :) For Ansible, you can always check Ansible Galaxy whenever you want to automate the installation of something. Here is the most popular role for installing Jenkins, and here is its GitHub repo.
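For reference, the Galaxy role referred to above is presumably geerlingguy.jenkins; a minimal playbook using it might look like the following sketch (variable names come from the role's README, and the port/plugin values are illustrative; check the README for the full list):

```yaml
# site.yml -- install the roles first:
#   ansible-galaxy install geerlingguy.java geerlingguy.jenkins
# then run:
#   ansible-playbook -i inventory site.yml
- hosts: jenkins
  become: true
  vars:
    jenkins_http_port: 8080
    jenkins_plugins:
      - configuration-as-code
      - workflow-aggregator
      - git
  roles:
    - geerlingguy.java      # the Jenkins role expects Java to be present
    - geerlingguy.jenkins
```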

Docker build inside kubernetes pod fails with "could not find bridge docker0"

I moved our build agents into Kubernetes / Container Engine. They used to run on a container vm (version container-vm-v20160321) and mount docker.sock into the Docker container so we could run docker build from inside the container.
This used the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: gocd-agent
spec:
  containers:
  - name: gocd-agent
    image: travix/gocd-agent:16.8.0
    imagePullPolicy: Always
    volumeMounts:
    - name: ssh-keys
      mountPath: /var/go/.ssh
      readOnly: true
    - name: gcloud-keys
      mountPath: /var/go/.gcloud
      readOnly: true
    - name: docker-sock
      mountPath: /var/run/docker.sock
    - name: docker-bin
      mountPath: /usr/bin/docker
    env:
    - name: "GO_SERVER_URL"
      value: "https://server:8154/go"
    - name: "AGENT_KEY"
      value: "***"
    - name: "AGENT_RESOURCES"
      value: "docker"
    - name: "DOCKER_GID_ON_HOST"
      value: "107"
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
  - name: ssh-keys
    gcePersistentDisk:
      pdName: sh-keys
      fsType: ext4
      readOnly: true
  - name: gcloud-keys
    gcePersistentDisk:
      pdName: gcloud-keys
      fsType: ext4
      readOnly: true
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: docker-bin
    hostPath:
      path: /usr/bin/docker
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
Now, after moving it into a full-blown Container Engine cluster (version 1.3.5) with the following manifest, it fails.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gocd-agent
spec:
  replicas: 2
  strategy:
    type: Recreate
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: gocd-agent
  template:
    metadata:
      labels:
        app: gocd-agent
    spec:
      containers:
      - name: gocd-agent
        image: travix/gocd-agent:16.8.0
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - name: ssh-keys
          mountPath: /k8s-ssh-secret
        - name: gcloud-keys
          mountPath: /var/go/.gcloud
        - name: docker-sock
          mountPath: /var/run/docker.sock
        - name: docker-bin
          mountPath: /usr/bin/docker
        env:
        - name: "GO_SERVER_URL"
          value: "https://server:8154/go"
        - name: "AGENT_KEY"
          value: "***"
        - name: "AGENT_RESOURCES"
          value: "docker"
        - name: "DOCKER_GID_ON_HOST"
          value: "107"
      volumes:
      - name: ssh-keys
        secret:
          secretName: ssh-keys
      - name: gcloud-keys
        secret:
          secretName: gcloud-keys
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: docker-bin
        hostPath:
          path: /usr/bin/docker
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
It seems to start building just fine, but eventually it fails with a "no such network interface" error:
Executing "docker build --force-rm=true --no-cache=true --file=target/docker/Dockerfile --tag=****:1.0.258 ."
Sending build context to Docker daemon 557.1 kB
...
Sending build context to Docker daemon 78.04 MB
Step 1 : FROM travix/base-debian-jre8
---> a130b5e1b4d4
Step 2 : ADD ***-1.0.258.jar ***.jar
---> 8d53e68e93a0
Removing intermediate container d1a758c9baeb
Step 3 : ADD target/newrelic newrelic
---> 9dbbb1c1db58
Removing intermediate container 461e66978c53
Step 4 : RUN bash -c "touch /***.jar"
---> Running in 6a28f48c9fd1
Removing intermediate container 6a28f48c9fd1
failed to create endpoint stupefied_shockley on network bridge: adding interface veth095b905 to bridge docker0 failed: could not find bridge docker0: route ip+net: no such network interface
Is it impossible to run docker build inside a pod due to Kubernetes networking or do I need to configure the pod differently? Or is it a bug in the particular docker version on the host?
Client:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:20:08 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:20:08 2016
 OS/Arch:      linux/amd64
The bridge actually seems to exist on the host:
$ sudo brctl show
bridge name   bridge id           STP enabled   interfaces
cbr0          8000.063c847a631e   no            veth0a58740b
                                                veth1f558898
                                                veth8797ea93
                                                vethb11a7490
                                                vethc576cc01
docker0       8000.02428db6a46e   no
And docker info for completeness
$ sudo docker info
Containers: 15
 Running: 14
 Paused: 0
 Stopped: 1
Images: 67
Server Version: 1.11.2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 148
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 25.57 GiB
Name: gke-tooling-default-pool-1fa283a6-8ufa
ID: JBQ2:Q3AR:TFJG:ILTX:KMHV:M67A:NYEM:NK4G:R43J:K5PS:26HY:Q57S
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
And
$ uname -a
Linux gke-tooling-default-pool-1fa283a6-8ufa 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
