Kubernetes deployment failed with Skaffold - Docker

I'm working on a microservices app with Docker, Kubernetes, and Skaffold,
and this is my Skaffold config file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: mohamedl3zb/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
    - image: mohamedl3zb/tickets
      context: tickets
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
I get this error:
exiting dev mode because first deploy failed: kubectl create: running [kubectl --context docker-desktop create --dry-run=client -oyaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\auth-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\auth-mongo-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\ingress-srv.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\nats-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\tickets-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\tickets-mongo.depl.yaml]
- stdout: "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: auth-depl\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: auth\n template:\n metadata:\n labels:\n app: auth\n spec:\n containers:\n - env:\n - name: MONGO_URI\n
value: mongodb://auth-mongo-srv:27017/auth\n - name: JWT_KEY\n valueFrom:\n secretKeyRef:\n key: JWT_KEY\n name: jwt-secret-key\n image: mohamedl3zb/auth\n name: auth\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: auth-srv\n namespace: default\nspec:\n ports:\n - name: auth\n port: 3000\n protocol: TCP\n targetPort: 3000\n selector:\n app: auth\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: auth-mongo-depl\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: auth-mongo\n template:\n metadata:\n labels:\n app: auth-mongo\n spec:\n containers:\n - image: mongo\n name: auth-mongo\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: auth-mongo-srv\n namespace: default\nspec:\n ports:\n - name: db\n port: 27017\n protocol: TCP\n targetPort: 27017\n selector:\n app: auth-mongo\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n annotations:\n kubernetes.io/ingress.class: nginx\n nginx.ingress.kubernetes.io/use-regex: \"true\"\n name: ingress-service\n namespace: default\nspec:\n rules:\n - host: ticketing.dev\n http:\n paths:\n - backend:\n serviceName: auth-srv\n servicePort: 3000\n path: /api/users/?(.*)\n - backend:\n serviceName: tickets-srv\n servicePort: 3000\n path: /api/tickets/?(.*)\n - backend:\n serviceName: orders-srv\n servicePort: 3000\n path: /api/orders/?(.*)\n - backend:\n serviceName: payments-srv\n servicePort: 3000\n path: /api/payments/?(.*)\n - backend:\n serviceName: client-srv\n servicePort: 3000\n path: /?(.*)\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nats-srv\n namespace: default\nspec:\n ports:\n - name: clients\n port: 4222\n protocol: TCP\n
targetPort: 4222\n - name: monitoring\n port: 8222\n protocol: TCP\n targetPort: 8222\n selector:\n app: nats\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: tickets-mongo-depl\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: tickets-mongo\n template:\n metadata:\n labels:\n app: tickets-mongo\n spec:\n containers:\n - image: mongo\n name: tickets-mongo\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: tickets-mongo-srv\n namespace: default\nspec:\n ports:\n - name: db\n port: 27017\n protocol: TCP\n targetPort: 27017\n selector:\n app: tickets-mongo\n"
- stderr: "unable to recognize \"C:\\\\Users\\\\Mohamed Salah\\\\Desktop\\\\ticket-app\\\\infra\\\\k8s\\\\nats-depl.yaml\": no matches for kind \"Deplyment\" in version \"apps/v1\"\nerror validating \"C:\\\\Users\\\\Mohamed Salah\\\\Desktop\\\\ticket-app\\\\infra\\\\k8s\\\\tickets-depl.yaml\": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].env[4].valueFrom): unknown field \"fielsRef\" in io.k8s.api.core.v1.EnvVarSource; if you choose to ignore these errors, turn validation off with --validate=false\n"
- cause: exit status 1
I'm working with Docker Desktop and its built-in Kubernetes.

You've got two typos in your manifests that kubectl's validation is rejecting during the client-side dry run.
The first is in nats-depl.yaml, where Deployment is misspelled as "Deplyment", so kubectl can't match the kind.
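For reference, the corrected fragments would look roughly like this (the env entry name below is a placeholder, since that part of tickets-depl.yaml isn't shown in the question):

In infra/k8s/nats-depl.yaml:

kind: Deployment            # was "Deplyment"

In infra/k8s/tickets-depl.yaml, assuming the field intended under valueFrom was the downward-API fieldRef:

env:
  - name: POD_NAME          # placeholder for env[4] in your file
    valueFrom:
      fieldRef:             # was "fielsRef"
        fieldPath: metadata.name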
As for the second typo, you mentioned in a comment that you've already fixed it, but I'll leave the relevant part of the error here for completeness:
"C:\\\\Users\\\\Mohamed Salah\\\\Desktop\\\\ticket-app\\\\infra\\\\k8s\\\\tickets-depl.yaml\": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].env[4].valueFrom): unknown field \"fielsRef\" in io.k8s.api.core.v1.EnvVarSource

The error seems to be this:
"unknown field "fielsRef" in io.k8s.api.core.v1.EnvVarSource"

Related

How to use embedded Redis in a Kubernetes cluster?

I want to run embedded Redis with a Spring Boot application in a Kubernetes cluster, but it is not creating any container with embedded Redis.
I'm getting a CrashLoopBackOff error.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-registration
spec:
  selector:
    matchLabels:
      app: user-registration
  replicas: 1
  template:
    metadata:
      labels:
        app: user-registration
    spec:
      containers:
        - name: user-registration
          image: user-registration:1.0
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOST
              value: mysql
            - name: DB_NAME
              value: user_registration
            - name: DB_USERNAME
              value: root
            - name: DB_PASSWORD
              value: root
        - image: mysql:5.7
          args:
            - "--ignore-db-dir=lost+found"
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_DATABASE
              value: user_registration
          ports:
            - containerPort: 3306
              name: mysql
        - name: redis
          image: "docker.io/redis:6.0.5"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
              name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: user-registration
spec:
  selector:
    app: user-registration
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
      nodePort: 30163
  type: NodePort
It works fine when I comment out the Redis config, but not when it is enabled.

Kubernetes: Golang application container does not start

I have a simple Golang application which talks to a Postgres database. The Postgres container is working as expected; however, my Golang application does not start.
In my config.go the environment variables are specified as:
type Config struct {
    Port        uint   `env:"PORT" envDefault:"8000"`
    PostgresURL string `env:"POSTGRES_URL" envDefault:"postgres://user:pass#127.0.0.1/simple-service"`
}
My Golang application's Service and Deployment YAML looks like this:
apiVersion: v1
kind: Service
metadata:
  name: simple-service-webapp-service
  labels:
    app: simple-service-webapp
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: simple-service-webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-service-webapp-v1
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-service-webapp
      version: v1
  template:
    metadata:
      labels:
        app: simple-service-webapp
        version: v1
    spec:
      containers:
        - name: simple-service-webapp
          image: docker.io/225517/simple-service-webapp:v1
          resources:
            requests:
              cpu: 100m
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_HOST
              value: postgresdb
            - name: POSTGRES_PORT
              value: "5432"
            - name: POSTGRES_DB
              value: simple-service
            - name: POSTGRES_USER
              value: user
            - name: POSTGRES_PASSWORD
              value: pass
          readinessProbe:
            httpGet:
              path: /live
              port: 8080
---
Below is the Service and Deployment YAML file for the Postgres service:
apiVersion: v1
kind: Service
metadata:
  name: postgresdb
  labels:
    app: postgresdb
spec:
  ports:
    - port: 5432
      name: tcp
  selector:
    app: postgresdb
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresdb-v1
  labels:
    app: postgresdb
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresdb
      version: v1
  template:
    metadata:
      labels:
        app: postgresdb
        version: v1
    spec:
      containers:
        - name: postgresdb
          image: postgres
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: simple-service
            - name: POSTGRES_USER
              value: user
            - name: POSTGRES_PASSWORD
              value: pass
          readinessProbe:
            exec:
              command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
            initialDelaySeconds: 15
            timeoutSeconds: 2
          livenessProbe:
            exec:
              command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
            initialDelaySeconds: 45
            timeoutSeconds: 2
---
Do I need to change anything in config.go? Could you please help? Thanks!
This is a community wiki answer based on the info from the comments; feel free to expand on it. I'm placing this here for better visibility for the community.
The question was solved by setting the appropriate Postgres DB port (5432) and rewriting the config so that it looks for the correct variables set on the deployment.
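As an illustrative sketch of the other direction, you could also leave config.go untouched and instead set the one variable it already reads, pointing it at the postgresdb Service and the correct port 5432 (the exact URL format and credentials below are assumptions based on the Deployment env shown above):

env:
  - name: POSTGRES_URL
    # hypothetical value: Service name "postgresdb", port 5432, credentials from the Postgres Deployment
    value: "postgres://user:pass@postgresdb:5432/simple-service"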

Kubernetes: Modeling Jobs/Cron tasks for Postgres + Tomcat application

I work on an open source system that consists of a Postgres database and a Tomcat server. I have Docker images for each component. We currently use docker-compose to test the application.
I am attempting to model this application with Kubernetes.
Here is my first attempt.
apiVersion: v1
kind: Pod
metadata:
  name: dspace-pod
spec:
  volumes:
    - name: "pgdata-vol"
      emptyDir: {}
    - name: "assetstore"
      emptyDir: {}
    - name: my-local-config-map
      configMap:
        name: local-config-map
  containers:
    - image: dspace/dspace:dspace-6_x
      name: dspace
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/dspace/assetstore"
          name: "assetstore"
        - mountPath: "/dspace/config/local.cfg"
          name: "my-local-config-map"
          subPath: local.cfg
    #
    - image: dspace/dspace-postgres-pgcrypto
      name: dspacedb
      ports:
        - containerPort: 5432
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/pgdata"
          name: "pgdata-vol"
      env:
        - name: PGDATA
          value: /pgdata
I have a configMap that is setting the hostname to the name of the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspace-pod:5432/dspace
    dspace.hostname = dspace-pod
    dspace.baseUrl = http://dspace-pod:8080
    solr.server=http://dspace-pod:8080/solr
This application has a number of tasks that are run from the command line.
I have created a third Docker image that contains the JARs that are needed on the command line.
I am interested in modeling these command-line tasks as Jobs in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a Job should run within a Pod that is already running?
Here is my first attempt at defining a job.
apiVersion: batch/v1
kind: Job
#https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test#test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
The following configuration has allowed me to start my services (Tomcat and Postgres) as I hoped.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  # example of a simple property defined using --from-literal
  #example.property.1: hello
  #example.property.2: world
  # example of a complex property defined using --from-file
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspacedb-service:5432/dspace
    dspace.hostname = dspace-service
    dspace.baseUrl = http://dspace-service:8080
    solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
  name: dspacedb-service
  labels:
    app: dspacedb-app
spec:
  type: NodePort
  selector:
    app: dspacedb-app
  ports:
    - protocol: TCP
      port: 5432
      # targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspacedb-deploy
  labels:
    app: dspacedb-app
spec:
  selector:
    matchLabels:
      app: dspacedb-app
  template:
    metadata:
      labels:
        app: dspacedb-app
    spec:
      volumes:
        - name: "pgdata-vol"
          emptyDir: {}
      containers:
        - image: dspace/dspace-postgres-pgcrypto
          name: dspacedb
          ports:
            - containerPort: 5432
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/pgdata"
              name: "pgdata-vol"
          env:
            - name: PGDATA
              value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
  name: dspace-service
  labels:
    app: dspace-app
spec:
  type: NodePort
  selector:
    app: dspace-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspace-deploy
  labels:
    app: dspace-app
spec:
  selector:
    matchLabels:
      app: dspace-app
  template:
    metadata:
      labels:
        app: dspace-app
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - image: dspace/dspace:dspace-6_x-jdk8-test
          name: dspace
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
After applying the configuration above, I have the following results.
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10
I was pleased to see that the service name can be used for port forwarding.
$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
I am also able to run the following job using the defined service names in the configMap.
apiVersion: batch/v1
kind: Job
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test#test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
Results
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I still have some work to do on persisting the volumes.
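For that remaining step, one option (a sketch, not tested against this setup) is to replace the emptyDir volume in dspacedb-deploy with a PersistentVolumeClaim; the claim name, size, and default storage class below are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# then, in dspacedb-deploy, swap the emptyDir volume for the claim:
volumes:
  - name: "pgdata-vol"
    persistentVolumeClaim:
      claimName: pgdata-pvc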

Docker - Unable to start Webpack Dev Server on Kubernetes Cluster

I am trying to start Webpack Dev Server in a Rails app on a DigitalOcean Kubernetes cluster. I can't figure out the correct command. The app works in Docker locally.
I keep getting errors like this:
Error: failed to start container "exactpos-webpack": Error response from
daemon: OCI runtime create failed: container_linux.go:348: starting
container process caused "exec: \"bin/bash\": stat bin/bash: no such
file or directory": unknown Back-off restarting failed container
Here is the code from my docker-compose.yml file:
webpack_dev_server:
  build: .
  command: ./bin/webpack-dev-server
  ports:
    - 3035:3035
  volumes:
    - .:/usr/src/app
    - gem_cache:/gems
  env_file:
    - .env/development/web
    - .env/development/database
  environment:
    - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
Here is my Kubernetes Deployment YAML file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exactpos-webpack
spec:
  replicas: 1
  minReadySeconds: 150
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: exactpos-webpack
      tier: webpack
  template:
    metadata:
      name: exactpos-webppack
      labels:
        app: exactpos-webpack
        tier: webpack
    spec:
      imagePullSecrets:
        - name: dockerhub-cred
      containers:
        - name: exactpos-webpack
          image: index.docker.io/markhorrocks/exactpos_webpacker_dev_server:prod
          command: ["bin/bash","-c", "./bin/webpack_dev_server"]
          imagePullPolicy: Always
          ports:
            - containerPort: 3500
          readinessProbe:
            tcpSocket:
              port: 3500
            initialDelaySeconds: 150
            timeoutSeconds: 5
            periodSeconds: 5
            failureThreshold: 10
          livenessProbe:
            tcpSocket:
              port: 3500
            initialDelaySeconds: 120
            timeoutSeconds: 5
            periodSeconds: 5
            failureThreshold: 10
          env:
            - name: RAILS_LOG_TO_STDOUT
              value: "true"
            - name: RAILS_SERVE_STATIC_FILES
              value: "true"
            - name: DATABASE_NAME
              value: "exactpos_production"
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_URL
              value: "postgres"
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: "db-user"
                  key: "username"
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "db-user-pass"
                  key: "password"
            - name: REDIS_URL
              value: "redis"
            - name: REDIS_PORT
              value: "6379"
            - name: RAILS_ENV
              value: "production"
            - name: WEBPACKER_DEV_SERVER_HOST
              value: "0.0.0.0"
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: "secret-key-base"
                  key: "secret-key-base"
Your deployment.yaml has the following:
command: ["bin/bash","-c", "./bin/webpack_dev_server"]
bin/bash is a relative path, so there is no such file where the container looks for it; the shell lives at /bin/bash. It should be:
command: ["/bin/bash","-c", "./bin/webpack_dev_server"]
I believe it will resolve your problem.
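If you want to double-check what the image actually contains before redeploying, something like this should work, assuming you can pull the image locally:

docker run --rm --entrypoint ls index.docker.io/markhorrocks/exactpos_webpacker_dev_server:prod -la /bin/bash bin/

That lists /bin/bash and the contents of the app's bin/ directory, so you can also confirm the script name; note that your docker-compose.yml runs ./bin/webpack-dev-server while the Deployment runs ./bin/webpack_dev_server.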
EDIT 1: I looked at your docker-compose.yml file again and noticed the following lines:
volumes:
  - .:/usr/src/app
This means that at run time Docker Compose mounted everything from the . directory over /usr/src/app. Was config/webpacker.yml present at that location, or did you overlay /usr/src/app with whatever was in the . directory at run time? Could you please share your app's Dockerfile as well for a better understanding?

Kubernetes deployment database connection error

I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I'm encountering an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
        - name: pv-storage-glpi
          persistentVolumeClaim:
            claimName: pv-claim-glpi
      containers:
        - name: mariadb
          image: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "glpi"
            - name: MYSQL_DATABASE
              value: "glpi"
            - name: MYSQL_USER
              value: "glpi"
            - name: MYSQL_PASSWORD
              value: "glpi"
            - name: GLPI_SOURCE_URL
              value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - mountPath: /var/lib/mariadb/
              name: pv-storage-glpi
              subPath: mariadb
        - name: glpi
          image: driket54/glpi
          ports:
            - containerPort: 80
              name: http
            - containerPort: 8090
              name: https
          volumeMounts:
            - mountPath: /var/glpidata
              name: pv-storage-glpi
              subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: http
      name: http
    - protocol: "TCP"
      port: 8090
      targetPort: https
      name: https
    - protocol: "TCP"
      port: 3306
      targetPort: mariadb
      name: mariadb
  type: NodePort
---
The Docker image is properly deployed, but in my test phase, during setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer since I don't have the Kubernetes knowledge expected, but I can't add a comment yet :(
What you should alter first is your GLPI version.
Use this link. It's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you may use the CLI tools to set up the database.
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(I'm not sure about the host in your environment, but you get the idea.)
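One Kubernetes-specific note on the --host value: since the mariadb container runs in the same pod as the glpi container, the two share a network namespace, so from inside the glpi container the database should be reachable at 127.0.0.1:3306 rather than through the Service name. Under that assumption, the same install command (taken from the linked docs) would become:

php scripts/cliinstall.php --host=127.0.0.1 --db=glpi --user=glpi --pass=glpi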
