I am trying to start Webpack Dev Server in a Rails app on a Digital Ocean Kubernetes Cluster. I can't figure out the correct command. The app works in Docker locally.
I keep getting errors like this:
Error: failed to start container "exactpos-webpack": Error response from
daemon: OCI runtime create failed: container_linux.go:348: starting
container process caused "exec: \"bin/bash\": stat bin/bash: no such
file or directory": unknown Back-off restarting failed container
Here is the code from my docker-compose.yml file:
webpack_dev_server:
  build: .
  command: ./bin/webpack-dev-server
  ports:
    - 3035:3035
  volumes:
    - .:/usr/src/app
    - gem_cache:/gems
  env_file:
    - .env/development/web
    - .env/development/database
  environment:
    - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
Here is my Kubernetes deployment YAML file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exactpos-webpack
spec:
  replicas: 1
  minReadySeconds: 150
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: exactpos-webpack
      tier: webpack
  template:
    metadata:
      name: exactpos-webppack
      labels:
        app: exactpos-webpack
        tier: webpack
    spec:
      imagePullSecrets:
        - name: dockerhub-cred
      containers:
        - name: exactpos-webpack
          image: index.docker.io/markhorrocks/exactpos_webpacker_dev_server:prod
          command: ["bin/bash","-c", "./bin/webpack_dev_server"]
          imagePullPolicy: Always
          ports:
            - containerPort: 3500
          readinessProbe:
            tcpSocket:
              port: 3500
            initialDelaySeconds: 150
            timeoutSeconds: 5
            periodSeconds: 5
            failureThreshold: 10
          livenessProbe:
            tcpSocket:
              port: 3500
            initialDelaySeconds: 120
            timeoutSeconds: 5
            periodSeconds: 5
            failureThreshold: 10
          env:
            - name: RAILS_LOG_TO_STDOUT
              value: "true"
            - name: RAILS_SERVE_STATIC_FILES
              value: "true"
            - name: DATABASE_NAME
              value: "exactpos_production"
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_URL
              value: "postgres"
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: "db-user"
                  key: "username"
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "db-user-pass"
                  key: "password"
            - name: REDIS_URL
              value: "redis"
            - name: REDIS_PORT
              value: "6379"
            - name: RAILS_ENV
              value: "production"
            - name: WEBPACKER_DEV_SERVER_HOST
              value: "0.0.0.0"
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: "secret-key-base"
                  key: "secret-key-base"
Your deployment.yaml has the following:
command: ["bin/bash","-c", "./bin/webpack_dev_server"]
There is no bin/bash (a relative path) in the Docker image; the shell path needs to be absolute. It should be:
command: ["/bin/bash","-c", "./bin/webpack_dev_server"]
I believe this will resolve your problem.
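For reference, here is a minimal sketch of the corrected container spec. Note also that the docker-compose file runs ./bin/webpack-dev-server (hyphens) while the deployment calls ./bin/webpack_dev_server (underscores), so double-check the script name against what actually exists in the image:
containers:
  - name: exactpos-webpack
    image: index.docker.io/markhorrocks/exactpos_webpacker_dev_server:prod
    # Absolute path to the shell; if the image is a slim/alpine variant
    # without bash, use /bin/sh instead.
    command: ["/bin/bash", "-c", "./bin/webpack-dev-server"]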
EDIT1: I looked at your docker-compose.yml file again and noticed the following lines:
volumes:
- .:/usr/src/app
This means that at run time you bind-mounted the current directory (.) over /usr/src/app, overlaying whatever the image had there. Was config/webpacker.yml present at that location in the image itself? Could you please share your app's Dockerfile as well for better understanding?
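For what it's worth, one way to check is to run the image directly and list the files in question (this assumes the image's WORKDIR is the app root):
docker run --rm index.docker.io/markhorrocks/exactpos_webpacker_dev_server:prod \
  ls -l /bin/bash bin/webpack-dev-server config/webpacker.yml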
Related
I'm working on a microservices app with Docker, Kubernetes, and Skaffold, and this is my Skaffold config file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: mohamedl3zb/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
    - image: mohamedl3zb/tickets
      context: tickets
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
I get this error:
exiting dev mode because first deploy failed: kubectl create: running [kubectl --context docker-desktop create --dry-run=client -oyaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\auth-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\auth-mongo-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\ingress-srv.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\nats-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\tickets-depl.yaml -f C:\Users\Mohamed Salah\Desktop\ticket-app\infra\k8s\tickets-mongo.depl.yaml]
- stdout: "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: auth-depl\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: auth\n template:\n metadata:\n labels:\n app: auth\n spec:\n containers:\n - env:\n - name: MONGO_URI\n
value: mongodb://auth-mongo-srv:27017/auth\n - name: JWT_KEY\n valueFrom:\n secretKeyRef:\n key: JWT_KEY\n name: jwt-secret-key\n image: mohamedl3zb/auth\n name: auth\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: auth-srv\n namespace: default\nspec:\n ports:\n - name: auth\n port: 3000\n protocol: TCP\n targetPort: 3000\n selector:\n app: auth\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: auth-mongo-depl\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: auth-mongo\n template:\n metadata:\n labels:\n app: auth-mongo\n spec:\n containers:\n - image: mongo\n name: auth-mongo\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: auth-mongo-srv\n namespace: default\nspec:\n ports:\n - name: db\n port: 27017\n protocol: TCP\n targetPort: 27017\n selector:\n app: auth-mongo\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n annotations:\n kubernetes.io/ingress.class: nginx\n nginx.ingress.kubernetes.io/use-regex: \"true\"\n name: ingress-service\n namespace: default\nspec:\n rules:\n - host: ticketing.dev\n http:\n paths:\n - backend:\n serviceName: auth-srv\n servicePort: 3000\n path: /api/users/?(.*)\n - backend:\n serviceName: tickets-srv\n servicePort: 3000\n path: /api/tickets/?(.*)\n - backend:\n serviceName: orders-srv\n servicePort: 3000\n path: /api/orders/?(.*)\n - backend:\n serviceName: payments-srv\n servicePort: 3000\n path: /api/payments/?(.*)\n - backend:\n serviceName: client-srv\n servicePort: 3000\n path: /?(.*)\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nats-srv\n namespace: default\nspec:\n ports:\n - name: clients\n port: 4222\n protocol: TCP\n
targetPort: 4222\n - name: monitoring\n port: 8222\n protocol: TCP\n targetPort: 8222\n selector:\n app: nats\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: tickets-mongo-depl\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: tickets-mongo\n template:\n metadata:\n labels:\n app: tickets-mongo\n spec:\n containers:\n - image: mongo\n name: tickets-mongo\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: tickets-mongo-srv\n namespace: default\nspec:\n ports:\n - name: db\n port: 27017\n protocol: TCP\n targetPort: 27017\n selector:\n app: tickets-mongo\n"
- stderr: "unable to recognize \"C:\\\\Users\\\\Mohamed Salah\\\\Desktop\\\\ticket-app\\\\infra\\\\k8s\\\\nats-depl.yaml\": no matches for kind \"Deplyment\" in version \"apps/v1\"\nerror validating \"C:\\\\Users\\\\Mohamed Salah\\\\Desktop\\\\ticket-app\\\\infra\\\\k8s\\\\tickets-depl.yaml\": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].env[4].valueFrom): unknown field \"fielsRef\" in io.k8s.api.core.v1.EnvVarSource; if you choose to ignore these errors, turn validation off with --validate=false\n"
- cause: exit status 1
I'm working with Docker Desktop and Kubernetes.
You've got two typos in your manifests that kubectl's validation is rejecting.
The first is nats-depl.yaml where Deployment is misspelled.
The second you mentioned already fixing in a comment, but I'll leave the relevant part of the error here for completeness:
"C:\\\\Users\\\\Mohamed Salah\\\\Desktop\\\\ticket-app\\\\infra\\\\k8s\\\\tickets-depl.yaml\": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].env[4].valueFrom): unknown field \"fielsRef\" in io.k8s.api.core.v1.EnvVarSource
The error seems to be this:
"unknown field "fielsRef" in io.k8s.api.core.v1.EnvVarSource"
That is, fieldRef is misspelled as fielsRef.
I have a simple Golang application which talks to a Postgres database. The Postgres container is working as expected; however, my Golang application does not start.
In my config.go the environment variables are specified as:
type Config struct {
    Port        uint   `env:"PORT" envDefault:"8000"`
    PostgresURL string `env:"POSTGRES_URL" envDefault:"postgres://user:pass@127.0.0.1/simple-service"`
}
My Golang application's service and deployment YAML looks like:
apiVersion: v1
kind: Service
metadata:
  name: simple-service-webapp-service
  labels:
    app: simple-service-webapp
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: simple-service-webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-service-webapp-v1
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-service-webapp
      version: v1
  template:
    metadata:
      labels:
        app: simple-service-webapp
        version: v1
    spec:
      containers:
        - name: simple-service-webapp
          image: docker.io/225517/simple-service-webapp:v1
          resources:
            requests:
              cpu: 100m
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_HOST
              value: postgresdb
            - name: POSTGRES_PORT
              value: "5432"
            - name: POSTGRES_DB
              value: simple-service
            - name: POSTGRES_USER
              value: user
            - name: POSTGRES_PASSWORD
              value: pass
          readinessProbe:
            httpGet:
              path: /live
              port: 8080
---
Below is the service and deployment YAML file for the Postgres service:
apiVersion: v1
kind: Service
metadata:
  name: postgresdb
  labels:
    app: postgresdb
spec:
  ports:
    - port: 5432
      name: tcp
  selector:
    app: postgresdb
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresdb-v1
  labels:
    app: postgresdb
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresdb
      version: v1
  template:
    metadata:
      labels:
        app: postgresdb
        version: v1
    spec:
      containers:
        - name: postgresdb
          image: postgres
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: simple-service
            - name: POSTGRES_USER
              value: user
            - name: POSTGRES_PASSWORD
              value: pass
          readinessProbe:
            exec:
              command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
            initialDelaySeconds: 15
            timeoutSeconds: 2
          livenessProbe:
            exec:
              command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
            initialDelaySeconds: 45
            timeoutSeconds: 2
---
Do I need to change anything in config.go? Could you please help? Thanks!
This is a community wiki answer based on the info from the comments; feel free to expand on it. I'm placing this for better visibility for the community.
The question was solved by setting the appropriate Postgres port (5432) and rewriting the config so it looks for the variables that are actually set on the Deployment.
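As a rough sketch (not the exact change from the comments): the names the Go config reads have to match what the Deployment sets, and inside the cluster the database is reachable through the postgresdb Service on port 5432. A single-URL variant of the env block could look like:
env:
  - name: POSTGRES_URL   # hypothetical; must match the env tag in config.go
    value: postgres://user:pass@postgresdb:5432/simple-service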
I have a private GKE cluster. It contains 3 nodes (each with 2 CPUs and 7.5GB of memory) and 3 replicas of my pod (a .NET Core application). I've noticed that my containers sometimes restart with "error 139 (SIGSEGV)", which indicates a problem with accessing memory:
This occurs when a program attempts to access a memory location that it’s not allowed to access, or attempts to access a memory location in a way that’s not allowed.
I don't have application logs with the error from before the container restarts, so it's impossible to debug it.
I've added a property (set to false) in the application, but it didn't solve the problem.
How can I fix this problem?
Manifest:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stage-deployment
  namespace: stage
spec:
  replicas: 3
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: stage
  template:
    metadata:
      labels:
        app: stage
    spec:
      containers:
        - name: stage-container
          image: my.registry/stage/core:latest
          imagePullPolicy: "Always"
          ports:
            - containerPort: 5000
              name: http
            - containerPort: 22
              name: ssh
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 20
          env:
            - name: POSTGRES_DB_HOST
              value: 127.0.0.1:5432
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: dbname
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=my-instance:us-west1:dbserver=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: instance-credentials
          secret:
            secretName: instance-credentials
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: stage-service
  namespace: stage
spec:
  type: NodePort
  selector:
    app: stage
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
      name: https
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 300m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-buffers-number: 4 256k
    nginx.org/client-max-body-size: 1000m
  name: ingress
  namespace: stage
spec:
  rules:
    - host: my.site.com
      http:
        paths:
          - backend:
              serviceName: stage-service
              servicePort: 80
  tls:
    - hosts:
        - my.site.com
      secretName: my-certs
I am trying to create a pod with both phpMyAdmin and Adminer in it. I have the Dockerfile created but I am not sure of the entrypoint needed.
Has anyone accomplished this before? I have everything figured out but the entrypoint...
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
------UPDATE 1 ----------
After reading some comments I split my Dockerfiles and will create a YAML file for the Kubernetes pod.
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
container 2
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
I am still not sure what the entrypoint script should be.
Since you are not modifying anything in the images, you don't need to create a custom Docker image for this; you can simply run two Deployments in Kubernetes, passing the environment variables using a Kubernetes Secret.
See this example of how to deploy both applications on Kubernetes:
Create a Kubernetes secret with your connection details:
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: database-conn
  literals:
  - MYSQL_DATABASE=${MYSQL_DATABASE}
  - MYSQL_USER=${MYSQL_USER}
  - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
  - MYSQL_PORT=${MYSQL_PORT}
  - POSTGRES_DB=${POSTGRES_DB}
  - POSTGRES_USER=${POSTGRES_USER}
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EOF
Apply the generated file:
kubectl apply -k .
secret/database-conn-mm8ck2296m created
Deploy phpMyAdmin and Adminer:
You need to create two Deployments, the first for phpMyAdmin and the other for Adminer, using the secret created above in the containers, for example:
Create a file called phpmyadmin-deploy.yaml:
Note: Change the secret name from database-conn-mm8ck2296m to the name generated by the command above.
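If you are not sure which name was generated, you can list it with:
kubectl get secrets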
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
spec:
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_DATABASE
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_USER
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_PORT
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_PORT
            - name: PMA_HOST
              value: mysql.host
            - name: PMA_USER
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_USER
            - name: PMA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_ROOT_PASSWORD
            - name: PMA_PORT
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_PORT
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-svc
spec:
  selector:
    app: phpmyadmin
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Adminer:
Create another file named adminer-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
        - name: adminer
          image: adminer:4
          env:
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: POSTGRES_DB
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: POSTGRES_PASSWORD
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: adminer-svc
spec:
  selector:
    app: adminer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
Deploy the YAML files with kubectl apply -f *-deploy.yaml; after a few seconds, run kubectl get pods && kubectl get svc to verify that everything is OK.
Note: Both Services will be created as ClusterIP, which means they are only accessible internally. If you are using a cloud provider, you can use Service type LoadBalancer to get an external IP, or you can use the kubectl port-forward command (see the reference below) to access your services from your computer.
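For example, a sketch of the phpMyAdmin Service exposed with a LoadBalancer (only where your provider supplies external load balancers):
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-svc
spec:
  type: LoadBalancer   # provisions an external IP on most cloud providers
  selector:
    app: phpmyadmin
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80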
Access application using port-forward:
phpMyAdmin:
# This command will map port 8080 on your localhost to the phpMyAdmin application:
kubectl port-forward svc/phpmyadmin-svc 8080:80
Adminer:
# This command will map port 8181 on your localhost to the Adminer application:
kubectl port-forward svc/adminer-svc 8181:8080
And try to access:
http://localhost:8080 <= phpMyAdmin
http://localhost:8181 <= Adminer
References:
Kubernetes Secrets
Kubernetes Environment variables
Kubernetes port forward
You can't combine two docker images like that. What you've created is a multi-stage build and only the last stage is what ends up in the final image. And even if you used multi-stage copies to carefully fold both into one image, you would need to think through how you will run both things simultaneously. The upstream adminer image uses php -S under the hood.
You'd almost always run this in two separate Deployments. Since the only thing you're doing in this custom Dockerfile is setting environment variables, you don't even need a custom image; you can use the env: part of the pod spec to define environment variables at deploy time.
image: adminer:4 # without PHPMyAdmin
env:
- name: POSTGRES_DB
value: [...] # fixed value in pod spec
# valueFrom: ... # or get it from a ConfigMap or Secret
Run two Deployments, with one container in each, and a matching Service for each. (Don't run bare pods, and don't be tempted to put both containers in a single deployment.) If the databases are inside Kubernetes too, use their Services' names and ports; I'd usually expect these to be the "normal" 3306/5432 ports.
I have a Rails project that uses a Postgres database. I want to run the database server on Kubernetes, and the Rails server will connect to this database.
For example, here is my postgres.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: hades_dev
            - name: POSTGRES_PASSWORD
              value: "1234"
          name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
          resources: {}
          stdin: true
          tty: true
          volumeMounts:
            - mountPath: /var/lib/postgresql/data/
              name: database-hades-volume
      restartPolicy: Always
      volumes:
        - name: database-hades-volume
          persistentVolumeClaim:
            claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-hades-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
I run this with the following command: kubectl run -f postgres.yml.
But when I try to run the Rails server, I always get the following exception:
PG::Error
invalid encoding name: utf8
I tried port-forwarding, and the Rails server successfully connects to the database server:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-3681891707-8ch4l 1/1 Running 0 1m
Then I run the following command:
kubectl port-forward postgres-3681891707-8ch4l 5432:5432
I don't think this solution is good. What can I define in my postgres.yml so that I don't need to port-forward manually as above?
Thanks
You can try exposing your Service as a NodePort and then accessing the service on that port.
Check here https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
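For example, a sketch of the same postgres Service exposed as a NodePort (the nodePort value is illustrative and must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
      nodePort: 30432   # illustrative; pick any free port in the range
Note that if the Rails server itself runs inside the cluster, it can instead reach the database directly at the Service name postgres on port 5432, with no NodePort or port-forward needed.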