Chaincode Build Failed in Hyperledger Fabric on Kubernetes - docker

Deploying Hyperledger Fabric v2.0 in Kubernetes
I am trying to deploy a sample chaincode in a private Kubernetes cluster running in Azure. After creating the nodes, the install chaincode operation fails and throws the error below. I am only using a single Kubernetes cluster.
Error:
chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build chaincode: docker build failed: docker image inspection failed: cannot connect to Docker endpoint
command terminated with exit code 1
Below is the peer configuration template for Deployment, Service & ConfigMap
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ${PEER}
  name: ${PEER}
  namespace: ${ORG}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${PEER}
  strategy: {}
  template:
    metadata:
      labels:
        app: ${PEER}
    spec:
      containers:
        - name: couchdb
          image: blockchainpractice.azurecr.io/hyperledger/fabric-couchdb
          env:
            - name: COUCHDB_USER
              value: couchdb
            - name: COUCHDB_PASSWORD
              value: couchdb
          ports:
            - containerPort: 5984
        - name: fabric-peer
          image: blockchainpractice.azurecr.io/hyperledger/fabric-peer:2.0
          resources: {}
          envFrom:
            - configMapRef:
                name: ${PEER}
          volumeMounts:
            - name: dockersocket
              mountPath: "/host/var/run/docker.sock"
            - name: ${PEER}
              mountPath: "/etc/hyperledger/fabric-peer"
            - name: client-root-tlscas
              mountPath: "/etc/hyperledger/fabric-peer/client-root-tlscas"
      volumes:
        - name: dockersocket
          hostPath:
            path: "/var/run/docker.sock"
        - name: ${PEER}
          secret:
            secretName: ${PEER}
            items:
              - key: key.pem
                path: msp/keystore/key.pem
              - key: cert.pem
                path: msp/signcerts/cert.pem
              - key: tlsca-cert.pem
                path: msp/tlsca/tlsca-cert.pem
              - key: ca-cert.pem
                path: msp/cacerts/ca-cert.pem
              - key: config.yaml
                path: msp/config.yaml
              - key: tls.crt
                path: tls/tls.crt
              - key: tls.key
                path: tls/tls.key
              - key: orderer-tlsca-cert.pem
                path: orderer-tlsca-cert.pem
              - key: core.yaml
                path: core.yaml
        - name: client-root-tlscas
          secret:
            secretName: client-root-tlscas
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: ${PEER}
  namespace: ${ORG}
data:
  CORE_PEER_ADDRESSAUTODETECT: "true"
  CORE_PEER_ID: ${PEER}
  CORE_PEER_LISTENADDRESS: 0.0.0.0:7051
  CORE_PEER_PROFILE_ENABLED: "true"
  CORE_PEER_LOCALMSPID: ${ORG_MSP}
  CORE_PEER_MSPCONFIGPATH: /etc/hyperledger/fabric-peer/msp
  # Gossip
  CORE_PEER_GOSSIP_BOOTSTRAP: peer0.${ORG}:7051
  CORE_PEER_GOSSIP_EXTERNALENDPOINT: "${PEER}.${ORG}:7051"
  CORE_PEER_GOSSIP_ORGLEADER: "false"
  CORE_PEER_GOSSIP_USELEADERELECTION: "true"
  # TLS
  CORE_PEER_TLS_ENABLED: "true"
  CORE_PEER_TLS_CERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
  CORE_PEER_TLS_KEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
  CORE_PEER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/msp/tlsca/tlsca-cert.pem"
  CORE_PEER_TLS_CLIENTAUTHREQUIRED: "false"
  ORDERER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/orderer-tlsca-cert.pem"
  CORE_PEER_TLS_CLIENTROOTCAS_FILES: "/etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.${ORG}-cert.pem"
  CORE_PEER_TLS_CLIENTCERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
  CORE_PEER_TLS_CLIENTKEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
  # Docker
  CORE_PEER_NETWORKID: ${ORG}-fabnet
  CORE_VM_ENDPOINT: unix:///host/var/run/docker.sock
  CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: "bridge"
  # CouchDB
  CORE_LEDGER_STATE_STATEDATABASE: CouchDB
  CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: 0.0.0.0:5984
  CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME: couchdb
  CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD: couchdb
  # Logging
  CORE_LOGGING_PEER: "info"
  CORE_LOGGING_CAUTHDSL: "info"
  CORE_LOGGING_GOSSIP: "info"
  CORE_LOGGING_LEDGER: "info"
  CORE_LOGGING_MSP: "info"
  CORE_LOGGING_POLICIES: "debug"
  CORE_LOGGING_GRPC: "info"
  GODEBUG: "netdns=go"
---
apiVersion: v1
kind: Service
metadata:
  name: ${PEER}
  namespace: ${ORG}
spec:
  selector:
    app: ${PEER}
  ports:
    - name: request
      port: 7051
      targetPort: 7051
    - name: event
      port: 7053
      targetPort: 7053
  type: LoadBalancer
Can anyone help me out? Thanks in advance.

I'd suggest looking at the Kubernetes test network deployment in fabric-samples (https://github.com/hyperledger/fabric-samples/tree/main/test-network-k8s).
Note that the classic way the peer builds chaincode is to create a new Docker container via the Docker daemon. This really doesn't sit well with Kubernetes, so the chaincode-as-a-service approach is strongly recommended.
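For illustration, with chaincode-as-a-service the peer never talks to a Docker daemon; an external builder declared in the peer's core.yaml hands the peer a chaincode endpoint, and the chaincode itself runs as an ordinary Kubernetes deployment. A rough sketch of the relevant core.yaml fragment, assuming a peer image that ships the ccaas builder under /opt/hyperledger/ccaas_builder (recent Fabric 2.4+ images do; older images need the builder added), looks like this:
chaincode:
  externalBuilders:
    # illustrative entry; the path depends on where the ccaas builder lives in your peer image
    - name: ccaas_builder
      path: /opt/hyperledger/ccaas_builder
      propagateEnvironment:
        - CHAINCODE_AS_A_SERVICE_BUILDER_CONFIG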

Related

Syslog from AKS cluster nodes - Using App Armor & Seccomp

How can I retrieve /var/log/syslog logs from AKS cluster nodes?
Azure recommends the use of the Linux security features AppArmor and Seccomp. Both of them produce log entries in /var/log/syslog on each cluster node where a running workload has a profile attached.
I've run a test with an nginx container using both, and I can see the corresponding log entries on my node:
Apr 29 10:37:45 aks-agentpool-31529777-vmss000000 kernel: [ 6248.505152] audit: type=1326 audit(1651228665.090:23650): auid=4294967295 uid=1001 gid=2000 ses=4294967295 pid=2016 comm="docker-entrypoi" exe="/bin/dash" sig=0 arch=c000003e syscall=3 compat=0 ip=0x7f90c03fda67 code=0x7ffc0000
Apr 29 10:37:45 aks-agentpool-31529777-vmss000000 kernel: [ 6248.505154] audit: type=1326 audit(1651228665.090:23651): auid=4294967295 uid=1001 gid=2000 ses=4294967295 pid=2016 comm="docker-entrypoi" exe="/bin/dash" sig=0 arch=c000003e syscall=257 compat=0 ip=0x7f90c03fdb84 code=0x7ffc0000
Apr 29 10:42:46 aks-agentpool-31529777-vmss000000 kernel: [ 6550.189592] audit: type=1326 audit(1651228966.775:25032): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=9028 comm="runc:[2:INIT]" exe="/" sig=0 arch=c000003e syscall=257 compat=0 ip=0x561a3e78fa2a code=0x7ffc0000
Syslog entries are supposed to be retrievable by configuring the monitoring agent at the destination Log Analytics workspace and enabling the syslog facility (Agents Configuration).
I've also enabled Container Insights in the cluster and Diagnostic Settings for every log category available. Still, I have no way to see those logs without directly connecting to the node, and the node VM appears as a not-connected source for the Log Analytics workspace (Workspace Data Sources).
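For reference, one stop-gap for reading those node files without SSH-ing in is a pod pinned to the node that mounts /var/log via hostPath. This is only an illustrative sketch (pod name and namespace are placeholders), not the Log Analytics route being asked about:
apiVersion: v1
kind: Pod
metadata:
  name: syslog-reader        # placeholder name
  namespace: testing
spec:
  nodeName: aks-agentpool-31529777-vmss000000   # pin to the node you want to inspect
  containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "tail -f /host-var-log/syslog"]
      volumeMounts:
        - name: var-log
          mountPath: /host-var-log
          readOnly: true
  volumes:
    - name: var-log
      hostPath:
        path: /var/log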
Env:
Kubernetes v1.21.9
Node: Linux aks-agentpool-31529777-vmss000000 5.4.0-1074-azure #77~18.04.1-Ubuntu SMP
1 pod with one nginx container + two daemonsets and a configMap to deploy the profiles.
apiVersion: v1
kind: Namespace
metadata:
  name: testing
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: seccomp-profiles-map
  namespace: testing
data:
  mynginx: |-
    {
      "defaultAction": "SCMP_ACT_LOG"
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: apparmor-profiles
  namespace: testing
data:
  mynginx: |-
    # vim:syntax=apparmor
    #include <tunables/global>
    profile mynginx flags=(complain) {
      #include <abstractions/base>
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: seccomp-profiles-loader
  # Namespace must match that of the ConfigMap.
  namespace: testing
  labels:
    daemon: seccomp-profiles-loader
spec:
  selector:
    matchLabels:
      daemon: seccomp-profiles-loader
  template:
    metadata:
      name: seccomp-profiles-loader
      labels:
        daemon: seccomp-profiles-loader
    spec:
      automountServiceAccountToken: false
      initContainers:
        - name: seccomp-profile-loader
          image: busybox
          command: ["/bin/sh"]
          # args: ["/profiles/*", "/var/lib/kubelet/seccomp/profiles/"]
          args: ["-c", "cp /profiles/* /var/lib/kubelet/seccomp/profiles/"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: node-profiles-folder
              mountPath: /var/lib/kubelet/seccomp/profiles
              readOnly: false
            - name: profiles
              mountPath: /profiles
              readOnly: true
      containers:
        - name: seccomp-profiles-pause
          # https://github.com/kubernetes/kubernetes/tree/master/build/pause
          image: gcr.io/google_containers/pause
          securityContext:
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
      volumes:
        - name: node-profiles-folder
          hostPath:
            path: /var/lib/kubelet/seccomp/profiles
        - name: profiles
          configMap:
            name: seccomp-profiles-map
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
  # Namespace must match that of the ConfigMap.
  namespace: testing
  labels:
    daemon: apparmor-loader
spec:
  selector:
    matchLabels:
      daemon: apparmor-loader
  template:
    metadata:
      name: apparmor-loader
      labels:
        daemon: apparmor-loader
    spec:
      containers:
        - name: apparmor-loader
          image: google/apparmor-loader:latest
          args:
            # Tell the loader to pull the /profiles directory every 30 seconds.
            - -poll
            - 60s
            - /profiles
          securityContext:
            # The loader requires root permissions to actually load the profiles.
            privileged: true
          volumeMounts:
            - name: sys
              mountPath: /sys
              readOnly: true
            - name: apparmor-includes
              mountPath: /etc/apparmor.d
              readOnly: true
            - name: profiles
              mountPath: /profiles
              readOnly: true
      volumes:
        # The /sys directory must be mounted to interact with the AppArmor module.
        - name: sys
          hostPath:
            path: /sys
        # The /etc/apparmor.d directory is required for most apparmor include templates.
        - name: apparmor-includes
          hostPath:
            path: /etc/apparmor.d
        # Map in the profile data.
        - name: profiles
          configMap:
            name: apparmor-profiles
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        container.apparmor.security.beta.kubernetes.io/nginx: localhost/mynginx
    spec:
      automountServiceAccountToken: false
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: nginx
          securityContext:
            seccompProfile:
              type: Localhost
              localhostProfile: profiles/mynginx
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          env:
            - name: PORT
              value: '3000'
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: nginx

Registry UI (joxit) does not return stored images

I have set up a private registry on Kubernetes using the following configuration, based on this repo: https://github.com/sleighzy/k8s-docker-registry
Create the password file; see the Apache htpasswd documentation for more information on this command.
htpasswd -b -c -B htpasswd docker-registry registry-password!
Adding password for user docker-registry
Create namespace
kubectl create namespace registry
Add the generated password file as a Kubernetes secret.
kubectl create secret generic basic-auth --from-file=./htpasswd -n registry
secret/basic-auth created
registry-secrets.yaml
---
# https://kubernetes.io/docs/concepts/configuration/secret/
apiVersion: v1
kind: Secret
metadata:
  name: s3
  namespace: registry
data:
  REGISTRY_STORAGE_S3_ACCESSKEY: Y2hlc0FjY2Vzc2tleU1pbmlv
  REGISTRY_STORAGE_S3_SECRETKEY: Y2hlc1NlY3JldGtleQ==
registry-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: registry
spec:
  ports:
    - protocol: TCP
      name: registry
      port: 5000
  selector:
    app: registry
I am using my MinIO (already deployed and running).
registry-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - name: registry
              containerPort: 5000
          volumeMounts:
            - name: credentials
              mountPath: /auth
              readOnly: true
          env:
            - name: REGISTRY_LOG_ACCESSLOG_DISABLED
              value: "true"
            - name: REGISTRY_HTTP_HOST
              value: "https://registry.mydomain.io:5000"
            - name: REGISTRY_LOG_LEVEL
              value: info
            - name: REGISTRY_HTTP_SECRET
              value: registry-http-secret
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: homelab
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: /auth/htpasswd
            - name: REGISTRY_STORAGE
              value: s3
            - name: REGISTRY_STORAGE_S3_REGION
              value: ignored-cos-minio
            - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
              value: charity.api.com # -> This is the valid MinIO API
            - name: REGISTRY_STORAGE_S3_BUCKET
              value: "charitybucket"
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: "true"
            - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
              value: "false"
            - name: REGISTRY_STORAGE_S3_ACCESSKEY
              valueFrom:
                secretKeyRef:
                  name: s3
                  key: REGISTRY_STORAGE_S3_ACCESSKEY
            - name: REGISTRY_STORAGE_S3_SECRETKEY
              valueFrom:
                secretKeyRef:
                  name: s3
                  key: REGISTRY_STORAGE_S3_SECRETKEY
      volumes:
        - name: credentials
          secret:
            secretName: basic-auth
I have created an entry in /etc/hosts
192.168.xx.xx registry.mydomain.io
registry-IngressRoute.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`)
      kind: Rule
      services:
        - name: registry
          port: 5000
  tls:
    certResolver: tlsresolver
I have access to the private registry at http://registry.mydomain.io:5000/ and it obviously returns a blank page.
I have already pushed some images and http://registry.mydomain.io:5000/v2/_catalog returns:
{"repositories":["console-image","hello-world","hello-world-2","hello-world-ha","myfirstimage","ubuntu-my"]}
The above configuration seems to work.
Then I tried to add the registry UI provided by joxit with the following configuration:
registry-ui-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  namespace: registry
spec:
  ports:
    - protocol: TCP
      name: registry-ui
      port: 80
  selector:
    app: registry-ui
registry-ui-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry-ui
  labels:
    app: registry-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-ui
  template:
    metadata:
      labels:
        app: registry-ui
    spec:
      containers:
        - name: registry-ui
          image: joxit/docker-registry-ui:1.5-static
          ports:
            - name: registry-ui
              containerPort: 80
          env:
            - name: REGISTRY_URL
              value: https://registry.mydomain.io
            - name: SINGLE_REGISTRY
              value: "true"
            - name: REGISTRY_TITLE
              value: "CHARITY Registry UI"
            - name: DELETE_IMAGES
              value: "true"
registry-ui-ingress-route.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  tls:
    certResolver: tlsresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: stripprefix
  namespace: registry
spec:
  stripPrefix:
    prefixes:
      - /ui/
I have access to the browser UI at https://registry.mydomain.io/ui/, but it returns nothing.
Am I missing something here?
As the owner of that repository, I think there may be something missing here. Your IngressRoute rule has an entryPoint of websecure and a certResolver of tlsresolver. This is intended to be the https entrypoint for Traefik; see my other repository https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd and the associated Traefik documentation on which this Docker Registry repo is based.
Can you review your Traefik deployment to ensure that you have this entrypoint, and that you also have this certificate resolver along with a generated https certificate for it to use? Can you also check the Traefik logs for any errors during startup (e.g. missing certs) and for any access log entries that may indicate why this is not routing there?
If you don't have these items set up, you could help narrow this down further by changing the IngressRoute config to use just the web entrypoint and removing the tls section from your registry-ui-ingress-route.yaml manifest, then reapplying it. This will let you access the UI over http and at least rule out any https issues.
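For example, a plain-http troubleshooting variant of registry-ui-ingress-route.yaml could look roughly like this (assuming your Traefik static configuration defines the conventional web entrypoint for http):
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  # no tls section, so the route is served over plain http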

Deploy Dind and secure Docker Registry on Kubernetes (colon issues)

I have an issue with one of my projects. Here is what I want to do:
Have a private Docker registry on my Kubernetes cluster
Have a Docker daemon running so that I can pull/push and build images directly inside the cluster
For this project I'm using certificates to secure all those interactions.
1. How to reproduce:
Note: I'm working on a Linux-based system
Here are the files that I'm using:
Deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: docker-graph-storage
              mountPath: /var/lib/docker
            - name: dind-registry-cert
              mountPath: >-
                /etc/docker/certs.d/registry:5000/ca.crt
          ports:
            - containerPort: 2376
      volumes:
        - name: docker-graph-storage
          emptyDir: {}
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
        - name: init-reg-vol
          secret:
            secretName: init-reg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          env:
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: REGISTRY_HTTP_TLS_KEY
              value: /certs/registry.pem
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: dind-registry-cert
              mountPath: /certs/
            - name: registry-data
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: registry
        - name: registry-data
          persistentVolumeClaim:
            claimName: registry-data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: docker
          command: ['sleep','200']
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          env:
            - name: DOCKER_HOST
              value: tcp://docker:2376
            - name: DOCKER_TLS_VERIFY
              value: '1'
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: DOCKER_CERT_PATH
              value: /certs/client
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: true
            - name: dind-registry-cert
              mountPath: /usr/local/share/ca-certificate/ca.crt
              readOnly: true
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
Services.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: docker
spec:
  selector:
    app: docker
  ports:
    - name: docker
      protocol: TCP
      port: 2376
      targetPort: 2376
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
    - name: registry
      protocol: TCP
      port: 5000
      targetPort: 5000
Pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certs-client
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    limits:
      storage: 50Gi
    requests:
      storage: 2Gi
status: {}
For the cert files I have the folders certs/, certs/client/, and certs.d/registry:5000/, and I use these command lines to generate the certs:
openssl req -newkey rsa:4096 -nodes -keyout ./certs/registry.pem -x509 -days 365 -out ./certs/registry.crt -subj "/C=''/ST=''/L=''/O=''/OU=''/CN=registry"
cp ./certs/registry.crt ./certs.d/registry\:5000/ca.crt
Then I use secrets to pass those certs into the pods:
kubectl create secret generic registry --from-file=certs/registry.crt --from-file=certs/registry.pem
kubectl create secret generic ca.crt --from-file=certs/registry.crt
Then, to launch the project, the following line is used:
kubectl apply -f pvc.yaml,deployment.yaml,service.yaml
2. My issues
I have a problem with my docker pods, which show this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/727d0f2a-bef6-4217-a292-427c5d76e071/volumes/kubernetes.io~secret/dind-registry-cert:/etc/docker/certs.d/registry:5000/ca.crt:ro
So the problem seems to come from the colon in the path name. Then I tried to escape the colon and got this error:
error: error parsing deployment.yaml: error converting YAML to JSON: yaml: line 34: found unknown escape character
The real problem here is that if the folder is not named 'registry:5000' the certificate is not recognised as correct, and I get an x509 error when trying to push an image from the client.
For the overall project I know that it can work like this, since I already managed to deploy it locally with docker-compose (here is the link to the github project if any of you are curious).
So I looked into it a bit and found out that it's a recurring problem with Docker (I mean with Docker Desktop, for mounting volumes into containers), but I can't find anything about the same issue on Kubernetes.
Do any of you have any lead / suggestion / workaround on this matter?
As always, thanks for your time :)
------------------------------- EDIT following #HelloWorld answer -------------------------------
Thanks to the symlink workaround, the ca.crt is now correctly mounted inside. However, since I was mounting it in the deployment used to run the Docker daemon, the entrypoint of the docker:dind container was overwritten by the command. For future readers, here is the solution that I found: getting the dockerd-entrypoint.sh and running it manually.
Here is the deployment as I write these lines:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /random/registry.crt /etc/docker/certs.d/registry:5000/ca.crt && wget https://raw.githubusercontent.com/docker-library/docker/a73d96e731e2dd5d6822c99a9af4dcbfbbedb2be/19.03/dind/dockerd-entrypoint.sh && chmod +x dockerd-entrypoint.sh && ./dockerd-entrypoint.sh']
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: false
            - name: dind-registry-cert
              mountPath: /random/
              readOnly: false
          ports:
            - containerPort: 2376
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
I hope it will be useful for someone in the future :)
The only thing I came up with is using symlinks. I tested it and it works. I also tried searching for a better solution but didn't find anything satisfying.
Have a look at this example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: centos:7
      command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /some/random/path/ca.crt /etc/docker/certs.d/registry:5000/ca.crt && exec sleep 10000']
      volumeMounts:
        - mountPath: '/some/random/path'
          name: registry-cert
  volumes:
    - name: registry-cert
      secret:
        secretName: my-secret
And here is the template secret I used:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  ca.crt: <<< some_random_Data >>>
I have mounted this secret into the /some/random/path location (without a colon, so it wouldn't throw errors) and created a symlink between /some/random/path/ca.crt and /etc/docker/certs.d/registry:5000/ca.crt.
Of course you also need to create the directory structure before running ln -s ..., which is why I run mkdir -p ... first.
Let me know if you have any further questions. I'd be happy to answer them.
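A variation on the same idea, if you prefer copying over symlinking, is to stage the certificate into an emptyDir with an init container. This is only a sketch built on the pod above (all names are the same illustrative ones):
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-copy        # illustrative variant of the pod above
spec:
  initContainers:
    - name: install-ca
      image: busybox
      # create the colon-containing directory inside the shared emptyDir and copy the cert into it
      command: ['sh', '-c', 'mkdir -p "/certs.d/registry:5000" && cp /some/random/path/ca.crt "/certs.d/registry:5000/ca.crt"']
      volumeMounts:
        - name: registry-cert
          mountPath: /some/random/path
        - name: certs-d
          mountPath: /certs.d
  containers:
    - name: myapp-container
      image: centos:7
      command: ['sleep', '10000']
      volumeMounts:
        - name: certs-d
          mountPath: /etc/docker/certs.d   # no colon in the mountPath itself
  volumes:
    - name: registry-cert
      secret:
        secretName: my-secret
    - name: certs-d
      emptyDir: {}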

Can't enable images deletion from a private docker registry

Hi all!
I'm deploying a private registry within a K8S cluster with the following yaml file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: registry
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/registry/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    app: registry
spec:
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30400
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  labels:
    app: registry
spec:
  ports:
    - port: 8080
      targetPort: 8080
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
        - name: registryui
          image: hyper/docker-registry-web:latest
          ports:
            - containerPort: 8080
          env:
            - name: REGISTRY_URL
              value: http://localhost:5000/v2
            - name: REGISTRY_NAME
              value: cluster-registry
      volumes:
        - name: docker
          hostPath:
            path: /var/run/docker.sock
        - name: registry-persistent-storage
          persistentVolumeClaim:
            claimName: registry-claim
I'm just wondering why there is no option to delete docker images after pushing them to the local registry. I found how it is supposed to work here: https://github.com/byrnedo/docker-reg-tool. I can list docker images inside the local repository and see all tags via the command line, but I am unable to delete them. After reading the docker registry documentation, I found that the registry container needs to be run with the following env: REGISTRY_STORAGE_DELETE_ENABLED=true.
I tried to add this variable to the yaml file:
.........
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
          env:
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: true
But applying this yaml file with the command kubectl apply -f manifests/registry.yaml returns the following error message:
Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry:2","name":"registry","port|...
Afterwards I found another suggestion:
The registry accepts configuration settings either via a file or via environment variables. So the environment variable REGISTRY_STORAGE_DELETE_ENABLED=true is equivalent to this in your config file:
storage:
  delete:
    enabled: true
I've tried this option as well in my yaml file, but still no luck...
Any suggestions on how to enable docker image deletion in my yaml file are highly appreciated.
The value true in YAML is parsed as a boolean data type, and the syntax calls for a string. You'll need to explicitly quote it:
value: "true"

Kubernetes: port-forwarding automatically for services

I have a rails project that uses a postgres database. I want to build a database server using Kubernetes, and the rails server will connect to this database.
For example, here is my postgres.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: hades_dev
            - name: POSTGRES_PASSWORD
              value: "1234"
          name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
          resources: {}
          stdin: true
          tty: true
          volumeMounts:
            - mountPath: /var/lib/postgresql/data/
              name: database-hades-volume
      restartPolicy: Always
      volumes:
        - name: database-hades-volume
          persistentVolumeClaim:
            claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-hades-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
I run this with the following command: kubectl run -f postgres.yml.
But when I try to run the rails server, I always get the following exception:
PG::Error
invalid encoding name: utf8
I tried port-forwarding, and the rails server successfully connects to the database server:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-3681891707-8ch4l 1/1 Running 0 1m
Then I run the following command:
kubectl port-forward postgres-3681891707-8ch4l 5432:5432
I don't think this solution is good. How can I define things in my postgres.yml so that I don't need to port-forward manually as above?
Thanks
You can try exposing your service as a NodePort service and then accessing it on that port.
Check here https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
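For illustration, a NodePort variant of the Service from the question would look roughly like this (the nodePort value is just an example; it has to fall within the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
      nodePort: 30432   # example value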
