Docker build inside kubernetes pod fails with "could not find bridge docker0"

I moved our build agents into Kubernetes / Container Engine. They used to run on a container VM (version container-vm-v20160321) with docker.sock mounted into the agent container, so we could run docker build from inside the container.
This used the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: gocd-agent
spec:
  containers:
  - name: gocd-agent
    image: travix/gocd-agent:16.8.0
    imagePullPolicy: Always
    volumeMounts:
    - name: ssh-keys
      mountPath: /var/go/.ssh
      readOnly: true
    - name: gcloud-keys
      mountPath: /var/go/.gcloud
      readOnly: true
    - name: docker-sock
      mountPath: /var/run/docker.sock
    - name: docker-bin
      mountPath: /usr/bin/docker
    env:
    - name: "GO_SERVER_URL"
      value: "https://server:8154/go"
    - name: "AGENT_KEY"
      value: "***"
    - name: "AGENT_RESOURCES"
      value: "docker"
    - name: "DOCKER_GID_ON_HOST"
      value: "107"
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
  - name: ssh-keys
    gcePersistentDisk:
      pdName: sh-keys
      fsType: ext4
      readOnly: true
  - name: gcloud-keys
    gcePersistentDisk:
      pdName: gcloud-keys
      fsType: ext4
      readOnly: true
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: docker-bin
    hostPath:
      path: /usr/bin/docker
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
After moving it into a full-blown Container Engine cluster (version 1.3.5), it fails with the following manifest.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gocd-agent
spec:
  replicas: 2
  strategy:
    type: Recreate
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: gocd-agent
  template:
    metadata:
      labels:
        app: gocd-agent
    spec:
      containers:
      - name: gocd-agent
        image: travix/gocd-agent:16.8.0
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - name: ssh-keys
          mountPath: /k8s-ssh-secret
        - name: gcloud-keys
          mountPath: /var/go/.gcloud
        - name: docker-sock
          mountPath: /var/run/docker.sock
        - name: docker-bin
          mountPath: /usr/bin/docker
        env:
        - name: "GO_SERVER_URL"
          value: "https://server:8154/go"
        - name: "AGENT_KEY"
          value: "***"
        - name: "AGENT_RESOURCES"
          value: "docker"
        - name: "DOCKER_GID_ON_HOST"
          value: "107"
      volumes:
      - name: ssh-keys
        secret:
          secretName: ssh-keys
      - name: gcloud-keys
        secret:
          secretName: gcloud-keys
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: docker-bin
        hostPath:
          path: /usr/bin/docker
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
It seems to start building just fine, but eventually it fails with a "no such network interface" error:
Executing "docker build --force-rm=true --no-cache=true --file=target/docker/Dockerfile --tag=****:1.0.258 ."
Sending build context to Docker daemon 557.1 kB
...
Sending build context to Docker daemon 78.04 MB
Step 1 : FROM travix/base-debian-jre8
---> a130b5e1b4d4
Step 2 : ADD ***-1.0.258.jar ***.jar
---> 8d53e68e93a0
Removing intermediate container d1a758c9baeb
Step 3 : ADD target/newrelic newrelic
---> 9dbbb1c1db58
Removing intermediate container 461e66978c53
Step 4 : RUN bash -c "touch /***.jar"
---> Running in 6a28f48c9fd1
Removing intermediate container 6a28f48c9fd1
failed to create endpoint stupefied_shockley on network bridge: adding interface veth095b905 to bridge docker0 failed: could not find bridge docker0: route ip+net: no such network interface
Is it impossible to run docker build inside a pod due to Kubernetes networking, or do I need to configure the pod differently? Or is it a bug in this particular Docker version on the host?
Client:
 Version: 1.11.2
 API version: 1.23
 Go version: go1.5.4
 Git commit: b9f10c9
 Built: Wed Jun 1 21:20:08 2016
 OS/Arch: linux/amd64

Server:
 Version: 1.11.2
 API version: 1.23
 Go version: go1.5.4
 Git commit: b9f10c9
 Built: Wed Jun 1 21:20:08 2016
 OS/Arch: linux/amd64
The bridge actually seems to exist on the host:
$ sudo brctl show
bridge name bridge id STP enabled interfaces
cbr0 8000.063c847a631e no veth0a58740b
veth1f558898
veth8797ea93
vethb11a7490
vethc576cc01
docker0 8000.02428db6a46e no
And docker info, for completeness:
$ sudo docker info
Containers: 15
 Running: 14
 Paused: 0
 Stopped: 1
Images: 67
Server Version: 1.11.2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 148
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 25.57 GiB
Name: gke-tooling-default-pool-1fa283a6-8ufa
ID: JBQ2:Q3AR:TFJG:ILTX:KMHV:M67A:NYEM:NK4G:R43J:K5PS:26HY:Q57S
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
And
$ uname -a
Linux gke-tooling-default-pool-1fa283a6-8ufa 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux

Related

jenkins operator v0.4.0 deployed on kubernetes but we can't persist our jobs and pipelines

We face an issue with Jenkins installed on Kubernetes using the Jenkins operator: we can't persist created jobs, because after restarting the pods we lose them. These are the configurations used to start it:
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: jenkins
  namespace: integration
spec:
  configurationAsCode:
    configurations:
  groovyScripts:
    configurations:
  backup:
    containerName: backup
    action:
      exec:
        command:
        - /home/user/bin/backup.sh
    interval: 30
    makeBackupBeforePodDeletion: true
  restore:
    containerName: backup
    action:
      exec:
        command:
        - /home/user/bin/restore.sh
  master:
    basePlugins:
    - name: kubernetes
      version: 1.25.2
    - name: workflow-job
      version: "2.39"
    - name: workflow-aggregator
      version: "2.6"
    - name: git
      version: 4.2.2
    - name: job-dsl
      version: "1.77"
    - name: configuration-as-code
      version: "1.38"
    - name: kubernetes-credentials-provider
      version: "0.13"
    plugins:
    - name: maven-plugin
      version: "3.8"
    - name: ansible
      version: "1.1"
    - name: bitbucket
      version: 1.1.27
    - name: bitbucket-build-status-notifier
      version: 1.4.2
    - name: docker-plugin
      version: 1.2.1
    - name: generic-webhook-trigger
      version: "1.72"
    - name: github-pullrequest
      version: 0.2.8
    - name: job-import-plugin
      version: "3.4"
    - name: msbuild
      version: "1.29"
    - name: nexus-artifact-uploader
      version: "2.13"
    - name: pipeline-npm
      version: 0.9.2
    - name: pipeline-utility-steps
      version: 2.6.1
    - name: pollscm
      version: 1.3.1
    - name: postbuild-task
      version: "1.9"
    - name: ranorex-integration
      version: 1.0.2
    - name: sidebar-link
      version: 1.11.0
    - name: sonarqube-generic-coverage
      version: "1.0"
    - name: sonar
      version: "2.13"
    - name: simple-theme-plugin
      version: "0.6"
    priorityClassName:
    disableCSRFProtection: false
    containers:
    - name: jenkins-master
      image: jenkins/jenkins:lts
      imagePullPolicy: Always
      livenessProbe:
        failureThreshold: 12
        httpGet:
          path: /login
          port: http
          scheme: HTTP
        initialDelaySeconds: 80
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 5
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /login
          port: http
          scheme: HTTP
        initialDelaySeconds: 30
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        limits:
          cpu: 1500m
          memory: 3Gi
        requests:
          cpu: 1
          memory: 500Mi
    - name: backup
      image: virtuslab/jenkins-operator-backup-pvc:v0.0.8
      imagePullPolicy: IfNotPresent
      env:
      - name: BACKUP_DIR
        value: /backup
      - name: JENKINS_HOME
        value: /jenkins-home
      - name: BACKUP_COUNT
        value: "3"
      volumeMounts:
      - mountPath: /jenkins-home
        name: jenkins-home
      - mountPath: /backup
        name: backup
    volumes:
    - name: backup
      persistentVolumeClaim:
        claimName: jenkins-backup
    - name: jenkins-home
      persistentVolumeClaim:
        claimName: jenkins-home
    securityContext:
      fsGroup: 1000
      runAsUser: 1000
  seedJobs:
  - description: Jenkins Operator repository
    id: jenkins-operator
    repositoryBranch: master
    repositoryUrl: https://github.com/jenkinsci/kubernetes-operator.git
    targets: cicd/jobs/*.jenkins
The operator has two scripts, backup and restore. What we've seen is that our pre-configured jobs are persisted, but newly created ones (via the GUI) aren't. Any ideas about this problem? Or does the Jenkins operator not support this kind of persistence?
From the docs (https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/configuring-backup-and-restore/):
Because of Jenkins Operator’s architecture, the configuration of Jenkins should be done using ConfigurationAsCode or GroovyScripts and jobs should be defined as SeedJobs. It means that there is no point in backing up any job configuration. Therefore, the backup script makes a copy of jobs history only.
So yes, this is the intended behaviour: you should create new jobs as SeedJobs, not in the GUI.
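For example (the id, repository URL, and targets path below are placeholders for your own Job DSL repository), a new job source would be declared next to the existing entry under spec.seedJobs rather than created in the UI:

  seedJobs:
  - id: my-team-jobs
    description: Team pipelines
    repositoryBranch: master
    repositoryUrl: https://git.example.com/team/jenkins-jobs.git
    targets: jobs/*.jenkins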

convert docker-compose.yml file to kubernetes

I am converting a docker-compose file to Kubernetes using kompose, running the following command:
$ kompose convert -f docker-compose.yml -o kubernetes_image.yaml
After the command finishes, the output is the following:
WARN Volume mount on the host "/usr/docker/adpater/dbdata" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.crt" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.key" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/server.xml" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
To push the converted file to Kubernetes I run the following command:
$ kubectl apply -f kubernetes_image.yaml
The pods then end up in this state:
NAME READY STATUS RESTARTS AGE
mysql-557dd849c8-bsdq7 1/1 Running 1 17h
tomcat-7cd65d4556-spjbl 0/1 CrashLoopBackOff 76 18h
If I run:
$ kubectl describe pod tomcat-7cd65d4556-spjbl
I get the following message:
Last State: Terminated
Reason: ContainerCannotRun
Message: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/usr/docker/adapter/server.xml\\\" to rootfs \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged\\\" at \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged/usr/local/tomcat/conf/server.xml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Exit Code: 127
Started: Sun, 31 May 2020 13:35:00 +0100
Finished: Sun, 31 May 2020 13:35:00 +0100
Ready: False
Restart Count: 75
Environment: <none>
Mounts:
/run/secrets/rji_license.json from tomcat-hostpath0 (rw)
/usr/local/tomcat/conf/server.xml from tomcat-hostpath3 (rw)
/usr/local/tomcat/conf/ssl.crt from tomcat-hostpath1 (rw)
/usr/local/tomcat/conf/ssl.key from tomcat-hostpath2 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8dhnk (ro)
This is my docker-compose.yml file:
version: '3.6'
networks:
  integration:
services:
  mysql:
    environment:
      MYSQL_USER: 'integrationdb'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    image: db:poc
    networks:
    - integration
    ports:
    - '3306:3306'
    restart: always
    volumes:
    - ./dbdata:/var/lib/mysql
  tomcat:
    image: adapter:poc
    networks:
    - integration
    ports:
    - '8080:8080'
    - '8443:8443'
    restart: always
    volumes:
    - ./license.json:/run/secrets/rji_license.json
    - ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
    - ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
    - ./server.xml:/usr/local/tomcat/conf/server.xml
Versions of the tools:
kompose: 1.21.0 (992df58d8)
docker: 19.03.9
kubectl: Major:"1", Minor:"18"
I think my challenge here is with this type of volumes or files; I don't know how to migrate or convert them to Kubernetes so that the tomcat pod runs fine.
Could someone give me a hand?
volumes:
- ./license.json:/run/secrets/rji_license.json
- ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
- ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
- ./server.xml:/usr/local/tomcat/conf/server.xml
Thanks in advance.
When Kompose warns you:
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
It means that it can't translate this fragment of the docker-compose.yml file into Kubernetes syntax:
volumes:
- ./license.json:/run/secrets/rji_license.json
In native Kubernetes, you'd need to provide this content in ConfigMap or Secret objects, and then mount the file into the pod. You can't directly access content on the system from which you're launching the containers.
You can't really get around directly working with the Kubernetes YAML files here. You could run kompose convert to generate the skeleton files, but then you'll need to edit those to add the ConfigMaps, PersistentVolumeClaims (for the database storage), and relevant volume and mount declarations, and then run kubectl apply -f to actually run them. I'd check the Kubernetes YAML files into source control, and maintain them in parallel with your Docker Compose setup.
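As a minimal sketch of that approach (the ConfigMap name here is a placeholder, only server.xml is shown, and the TLS key would go into a Secret in the same way), you could create the object with kubectl create configmap tomcat-config --from-file=server.xml, or declare it directly:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tomcat-config
data:
  server.xml: |
    <!-- contents of your server.xml go here -->

and then mount just that key over the file path in the tomcat Deployment's pod spec, using subPath so a single file (not a directory) is mounted:

      containers:
      - name: tomcat
        volumeMounts:
        - name: tomcat-config
          mountPath: /usr/local/tomcat/conf/server.xml
          subPath: server.xml
      volumes:
      - name: tomcat-config
        configMap:
          name: tomcat-config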
Move2Kube (which also supports docker-compose translation) can handle this case, and tries to convert the volumes by interacting with you.
? 6. [] What type of container registry login do you want to use?
Hints:
[Docker login from config mode, will use the default config from your local machine.]
No authentication
? 7. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/dbdata]?:
Hints:
[Use PVC for persistent storage wherever applicable]
Yes
? 8. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/license.json]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 9. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 10. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 11. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/server.xml]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 12. Which storage class to use for persistent volume claim [vol17655897939759777588] used by [mysql]
Hints:
[If you have a custom cluster, you can use collect to get storage classes from it.]
default
? 13. Provide the ingress host domain
Hints:
[Ingress host domain is part of service URL]
myproject.com
? 14. Provide the TLS secret for ingress
Hints:
[Enter TLS secret name]
If the above choices were made, Move2Kube creates the following artifacts:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    move2kube.konveyor.io/service.expose: "true"
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/network/integration: "true"
    move2kube.konveyor.io/service: tomcat
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      move2kube.konveyor.io/service: tomcat
  strategy: {}
  template:
    metadata:
      annotations:
        move2kube.konveyor.io/service.expose: "true"
      creationTimestamp: null
      labels:
        move2kube.konveyor.io/network/integration: "true"
        move2kube.konveyor.io/service: tomcat
      name: tomcat
    spec:
      containers:
      - image: adapter:poc
        imagePullPolicy: Always
        name: tomcat
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        resources: {}
        volumeMounts:
        - mountPath: /run/secrets/rji_license.json
          name: vol16871681589659214643
        - mountPath: /usr/local/tomcat/conf/ssl.crt
          name: vol12635587774184387470
        - mountPath: /usr/local/tomcat/conf/ssl.key
          name: vol7446232639477381794
        - mountPath: /usr/local/tomcat/conf/server.xml
          name: vol4920239289720818926
      restartPolicy: Always
      volumes:
      - hostPath:
          path: /Users/ashok/wksps/hc/temp/test2/src/license.json
        name: vol16871681589659214643
      - hostPath:
          path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt
        name: vol12635587774184387470
      - hostPath:
          path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key
        name: vol7446232639477381794
      - hostPath:
          path: /Users/ashok/wksps/hc/temp/test2/src/server.xml
        name: vol4920239289720818926
status: {}
and
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    move2kube.konveyor.io/service.expose: "true"
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/network/integration: "true"
    move2kube.konveyor.io/service: mysql
  name: mysql
spec:
  replicas: 2
  selector:
    matchLabels:
      move2kube.konveyor.io/service: mysql
  strategy: {}
  template:
    metadata:
      annotations:
        move2kube.konveyor.io/service.expose: "true"
      creationTimestamp: null
      labels:
        move2kube.konveyor.io/network/integration: "true"
        move2kube.konveyor.io/service: mysql
      name: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_USER
          value: integrationdb
        - name: MYSQL_PASSWORD
          value: password
        - name: MYSQL_ROOT_PASSWORD
          value: password
        image: db:poc
        imagePullPolicy: Always
        name: mysql
        ports:
        - containerPort: 3306
          protocol: TCP
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: vol17655897939759777588
      restartPolicy: Always
      volumes:
      - name: vol17655897939759777588
        persistentVolumeClaim:
          claimName: vol17655897939759777588
status: {}
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: vol17655897939759777588
spec:
  resources:
    requests:
      storage: 100Mi
  storageClassName: default
  volumeName: vol17655897939759777588
status: {}
Essentially, depending on your choices, Move2Kube will create the appropriate artifacts for you.
You can check out how it works at https://konveyor.github.io/move2kube/tutorials/docker-compose/.

Kubernetes deploy failed jenkins deployment "failed to create containerd task"

While deploying a Jenkins pod in our Kubernetes cluster, Kubernetes returns the following error:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/run/docker.sock\\\" to rootfs \\\"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/jenkins/rootfs\\\" at \\\"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/jenkins/rootfs/run\\\" caused \\\"not a directory\\\"\"": unknown Back-off restarting failed container
My Deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      imagePullSecrets:
      - name: my-secret-key
      containers:
      - name: jenkins
        image: image-repo-url
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-sock
          mountPath: /var/run/
        - name: docker-storage
          mountPath: /var/lib/docker
        securityContext:
          privileged: true
      volumes:
      - name: jenkins-home
        emptyDir: {}
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: docker-storage
        emptyDir: {}
For the docker-sock volume, I tried:
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
    type: file

--- and ---

- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
    type: Socket
But it doesn't work. Actually, this configuration used to work; it just doesn't any more.
I also tried these volume mounts:
volumeMounts:
- name: jenkins-home
  mountPath: /var/jenkins_home
- name: docker-sock
  mountPath: /var/run/docker.sock
The Deployment was created, but Docker didn't work.
We are using IBM Cloud Kubernetes Service.
Cluster Version:
1.15.11_1533
Kubernetes Api Version:
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
batch/v2alpha1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11+IKS", GitCommit:"0562ba8a2dfdd05f7f8721ab4952c02fe1605860", GitTreeState:"clean", BuildDate:"2020-03-13T14:45:42Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Newer IKS clusters don't have Docker installed; they use containerd to run containers.
If you still want to run Docker from Jenkins, you can either use the Kubernetes plugin with pods that include dind containers, or rebuild your own Jenkins image based on dind, something like this: https://hub.docker.com/r/vixns/jenkins-dind/
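A minimal sketch of the first option (the pod and container names here are illustrative; docker:dind and jenkins/inbound-agent are stock images): the agent container points DOCKER_HOST at a privileged dind sidecar instead of at the host's Docker socket:

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent-dind
spec:
  containers:
  - name: jnlp
    image: jenkins/inbound-agent   # standard Jenkins agent image
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375  # talk to the dind sidecar
  - name: dind
    image: docker:dind
    securityContext:
      privileged: true             # dind requires privileged mode
    env:
    - name: DOCKER_TLS_CERTDIR     # empty value disables TLS, so 2375 is plain TCP
      value: ""
    volumeMounts:
    - name: docker-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-storage
    emptyDir: {}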

Kubernetes DaemonSet Permission Denied on mounted Volume - Docker in Docker dind

I tried running a simple DaemonSet on a kube cluster. The idea was that other kube pods would connect to that container's Docker daemon (dockerd) and execute commands on it. (The other pods are Jenkins slaves and would just have the env var DOCKER_HOST point to 'tcp://localhost:2375'.) In short, the config looks like this:
dind.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dind
spec:
  selector:
    matchLabels:
      name: dind
  template:
    metadata:
      labels:
        name: dind
    spec:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      containers:
      - name: dind
        image: docker:18.05-dind
        resources:
          limits:
            memory: 2000Mi
          requests:
            cpu: 100m
            memory: 500Mi
        volumeMounts:
        - name: dind-storage
          mountPath: /var/lib/docker
      volumes:
      - name: dind-storage
        emptyDir: {}
Error message when running:
mount: mounting none on /sys/kernel/security failed: Permission denied
Could not mount /sys/kernel/security.
AppArmor detection and --privileged mode might break.
mount: mounting none on /tmp failed: Permission denied
I took the idea from a Medium post that didn't describe it fully: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25, which describes Docker outside of Docker, Docker in Docker, and Kaniko.
I found the solution: the dind daemon has to run with securityContext.privileged: true, which the DaemonSet above was missing.
apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
  - name: jenkins-slave
    image: gcr.io/<my-project>/myimg # it has docker installed on it
    command: ['docker', 'run', '-p', '80:80', 'httpd:latest']
    resources:
      requests:
        cpu: 10m
        memory: 256Mi
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
  - name: dind-daemon
    image: docker:18.05-dind
    resources:
      requests:
        cpu: 20m
        memory: 512Mi
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-graph-storage
    emptyDir: {}
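With this pod running, anything executed in the jenkins-slave container inherits DOCKER_HOST and talks to the sidecar daemon, so a quick smoke test (hypothetical invocation) would be:

$ kubectl exec dind -c jenkins-slave -- docker version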

kubernetes set list of all physical nodes

I've set up Kubernetes with the steps below. Everything looks fine, but it's running on a single node/server.
Now I want to take the next step and run on multiple nodes. I wonder where I should configure my physical servers' IPs so I could create the pod on more than one physical server.
I run:
hack/local-up-cluster.sh
then (in another terminal):
cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
And:
cluster/kubectl.sh create -f run-aii.yaml
my run-aii.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
      - name: aii
        image: localhost:5000/dev/aii
        ports:
        - containerPort: 5144
        env:
        - name: KAFKA_IP
          value: kafka
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /home/aii/core
          name: core-aii
          readOnly: false
        - mountPath: /home/aii/genome
          name: genome-aii
          readOnly: true
        - mountPath: /home/aii/main
          name: main-aii
          readOnly: false
      - name: kafka
        image: localhost:5000/dev/kafkazoo
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /root/config
          name: config-data
          readOnly: true
      volumes:
      - name: scripts-data
        hostPath:
          path: /home/aii/general/infra/script
      - name: config-data
        hostPath:
          path: /home/aii/general/infra/config
      - name: core-aii
        hostPath:
          path: /home/aii/general/core
      - name: genome-aii
        hostPath:
          path: /home/aii/general/genome
      - name: main-aii
        hostPath:
          path: /home/aii/general/main
Additional info:
[aii@localhost kubernetes]$ cluster/kubectl.sh get pod
NAME READY STATUS RESTARTS AGE
aii-3934754246-yilg3 2/2 Running 0 59s
[aii@localhost kubernetes]$ cluster/kubectl.sh describe pod aii-3934754246-yilg3
Name: aii-3934754246-yilg3
Namespace: default
Node: 127.0.0.1/127.0.0.1
Start Time: Sun, 29 May 2016 16:58:20 +0300
Labels: pod-template-hash=3934754246,run=aii
Status: Running
IP: 172.17.0.4
Controllers: ReplicaSet/aii-3934754246
Containers:
aii:
Container ID: docker://71609cfd8e33c01a81a36770d12d884443a12b4c2969b95e3042d9dee4fb455b
Image: localhost:5000/dev/aii
Image ID: docker://sha256:7e70fbb724962b2f23c9439a1c00356deb551fd96ffd27a8afa6340fc903e735
Port: 5144/TCP
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Running
Started: Sun, 29 May 2016 16:58:23 +0300
Ready: True
Restart Count: 0
Environment Variables:
KAFKA_IP: kafka
kafka:
Container ID: docker://6eb891e5968cf1106b26a9f3f7db881683a8e15dd59b1858435715580c90656c
Image: localhost:5000/dev/kafkazoo
Image ID: docker://sha256:b78e60582cbc8d3c4946807baf59552d110c7802c8204157e6fba509b96bc11c
Port:
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Running
Started: Sun, 29 May 2016 16:58:24 +0300
Ready: True
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready True
Volumes:
scripts-data:
Type: HostPath (bare host directory volume)
Path: /home/aii/general/infra/script
config-data:
Type: HostPath (bare host directory volume)
Path: /home/aii/general/infra/config
core-aii:
Type: HostPath (bare host directory volume)
Path: /home/aii/general/core
genome-aii:
Type: HostPath (bare host directory volume)
Path: /home/aii/general/genome
main-aii:
Type: HostPath (bare host directory volume)
Path: /home/aii/general/main
default-token-5z9rd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5z9rd
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned aii-3934754246-yilg3 to 127.0.0.1
1m 1m 1 {kubelet 127.0.0.1} spec.containers{aii} Normal Pulling pulling image "localhost:5000/dev/aii"
1m 1m 1 {kubelet 127.0.0.1} spec.containers{aii} Normal Pulled Successfully pulled image "localhost:5000/dev/aii"
1m 1m 1 {kubelet 127.0.0.1} spec.containers{aii} Normal Created Created container with docker id 71609cfd8e33
1m 1m 1 {kubelet 127.0.0.1} spec.containers{aii} Normal Started Started container with docker id 71609cfd8e33
1m 1m 1 {kubelet 127.0.0.1} spec.containers{kafka} Normal Pulling pulling image "localhost:5000/dev/kafkazoo"
1m 1m 1 {kubelet 127.0.0.1} spec.containers{kafka} Normal Pulled Successfully pulled image "localhost:5000/dev/kafkazoo"
1m 1m 1 {kubelet 127.0.0.1} spec.containers{kafka} Normal Created Created container with docker id 6eb891e5968c
1m 1m 1 {kubelet 127.0.0.1} spec.containers{kafka} Normal Started Started container with docker id 6eb891e5968c
Sounds like you want to set up a multi-node cluster; there are numerous ways to do that (http://kubernetes.io/docs/getting-started-guides/).
If you want a local solution on your machine, the Vagrant way or the Docker way is pretty straightforward.
If you're looking for a cloud, then GCE is the next easiest (https://cloud.google.com/container-engine/).
Once you have your multi-node cluster set up, when you deploy your pod it will be scheduled to a node in the cluster.
The only gotcha, given your manifest above, is that you are using HostPath for all your volume mounts. This is fine when you know what machine the pod will run on; however, you should be abstracting yourself away from that.
To solve this properly you'll need to look into persistent volumes that are not host-specific. BUT for now, you can get it working. =)
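For illustration, a minimal sketch of swapping one of the hostPath volumes for a PersistentVolumeClaim (the claim name, access mode, and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scripts-data
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi

Then, in the pod spec, the hostPath volume becomes:

      volumes:
      - name: scripts-data
        persistentVolumeClaim:
          claimName: scripts-data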
