First off, I'm pretty sure I know why this isn't working: I'm pulling the Docker postgres:11-alpine image and modifying it, but then trying to set env: in the k8s deployment.yaml on that custom image. I think that is the issue.
Basically, I'm trying to accomplish this per the Docker postgres docs:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' postgres:11-alpine
This is what I have:
Dockerfile.dev
FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
postgres.yaml (secrets will be moved after I'm done playing with this)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
        - name: postgres
          image: testproject/postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "test_dev"
            - name: POSTGRES_USER
              value: "bar"
            - name: POSTGRES_PASSWORD
              value: "foo"
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
When I use Skaffold to spin the cluster up locally, however, the env: values "don't take": I can still access the DB using the defaults POSTGRES_USER=postgres and POSTGRES_PASSWORD=''.
I bet if I did image: postgres then the env: would work, but then I'm not sure how to do the equivalent of this line from the Dockerfile:
COPY ./db/*.sql /docker-entrypoint-initdb.d/
Any suggestions?
Here is the skaffold.yaml if that is helpful too:
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: testproject/postgres
      docker:
        dockerfile: ./db/Dockerfile.dev
      sync:
        manual:
          - src: "***/*.sql"
            dest: .
    - image: testproject/server
      docker:
        dockerfile: ./server/Dockerfile.dev
      sync:
        manual:
          - src: "***/*.py"
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/ingress.yaml
      - k8s/postgres.yaml
      - k8s/server.yaml
The Docker postgres docs mention the following:
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
Are you sure that you're starting your deployment with an empty data directory? Could it be that PostgreSQL starts and lets you log in with the credentials that were in effect the first time you started it with that persistent volume?
If that's not it, have a look at the environment variables of the running pod. kubectl describe pod <pod-name> should tell you which environment variables are actually passed through to the container. Maybe something in your Skaffold setup overwrites them? You can also exec into the pod and run env to check. And don't forget the logs: the PostgreSQL container logs which user account it creates during startup.
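For reference, a quick way to check all of this from the command line (the pod name is a placeholder, substitute your own):
kubectl get pods
kubectl describe pod <postgres-pod-name>                      # env: values Kubernetes actually set
kubectl exec -it <postgres-pod-name> -- env | grep POSTGRES   # what the container really sees
kubectl logs <postgres-pod-name>                              # initdb output, only printed on a first run
If the persistent volume turns out to be the culprit, deleting the PVC (and with it the old data) before redeploying forces a fresh initdb that picks up the current credentials:
kubectl delete pvc postgres-storage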
Related
I'm working on a microservice architecture project in which I use rq workers. I have used a docker-compose file to start and connect the rq worker with Redis successfully, but I'm not sure how to replicate it in Kubernetes. No matter what I try with command and args, the pod ends up in CrashLoopBackOff. Please guide me as to what I'm missing. Below are my docker-compose and rq-worker deployment files.
rq-worker and redis container config:
...
rq-worker:
  build: ./simba-app
  command: rq worker --url redis://redis:6379 queue
  depends_on:
    - redis
  volumes:
    - sharedvolume:/simba-app/app/docs
redis:
  image: redis:4.0.6-alpine
  ports:
    - "6379:6379"
  volumes:
    - ./redis:/data
...
rq-worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-worker
  labels:
    app: rq-worker
spec:
  selector:
    matchLabels:
      app: rq-worker
  template:
    metadata:
      labels:
        app: rq-worker
    spec:
      containers:
        - name: rq-worker
          image: some-image
          command: ["/bin/sh", "-c"]
          #command: ["rqworker", "--url", "redis://redis:6379", "queue"]
          args:
            - rqworker
            - --url
            - redis://redis:6379
            - queue
      imagePullSecrets:
        - name: regcred
---
Thanks in advance!
Edit:
I checked the logs using kubectl logs and found the following logs:
Error 99 connecting to localhost:6379. Cannot assign requested address.
First of all, I'm using the service name, not 'localhost', in my code to connect rq to Redis. No idea why I'm seeing 'localhost' in my logs.
(Note: the Kubernetes service name for redis is the same as the one used in my docker-compose file.)
redis-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:4.0.6-alpine
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
You do not need the /bin/sh -c wrapper here. With that wrapper, the shell takes only the first args: word, rqworker, parses it as a shell command, and executes it; the remaining words are lost. That is why the worker falls back to its default of localhost:6379 instead of the --url you passed, which is the "Error 99 connecting to localhost:6379" you see in the logs.
The most straightforward thing to do is to make your command, split into words as-is, as the Kubernetes command:
containers:
  - name: rq-worker
    image: some-image
    command:
      - rqworker
      - --url
      - redis://redis:6379
      - queue
(This matches the commented-out string in your example.)
A common Docker pattern is to use an ENTRYPOINT to do first-time setup and to make CMD be a complete shell command that's run at the end of the setup script. In Kubernetes, command: overrides the Docker ENTRYPOINT; if your image has this pattern, then you should not set command: at all, and instead put this command, exactly as you have it, in args:.
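For example, if your image follows that ENTRYPOINT-plus-CMD pattern, the spec would look like this (a sketch; it assumes the image's entrypoint script ends with exec "$@"):
containers:
  - name: rq-worker
    image: some-image
    # no command: here, so the image's ENTRYPOINT still runs first
    args:
      - rqworker
      - --url
      - redis://redis:6379
      - queue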
The only time you do need an sh -c wrapper is in unusual cases where you need to run multiple commands, expand environment variables, or otherwise use shell-only features. In that case the whole command must be a single word (a single list item) in command: or args:.
command:
  - /bin/sh
  - -c
  - rqworker --url redis://redis:6379 queue
I am trying to create a pod with both phpmyadmin and adminer in it. I have the Dockerfile created but I am not sure of the entrypoint needed.
Has anyone accomplished this before? I have everything figured out but the entrypoint...
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
------UPDATE 1 ----------
After reading some comments I split my Dockerfiles and will create a yml file for the kube pod:
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
container 2
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
I am still not sure what the entrypoint script should be
Since you are not modifying anything in the image, you don't need to create a custom Docker image for this; you can simply run two Deployments in Kubernetes and pass the environment variables using a Kubernetes Secret.
See this example of how to deploy both applications on Kubernetes:
Create a Kubernetes secret with your connection details:
cat <<EOF >./kustomization.yaml
secretGenerator:
  - name: database-conn
    literals:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_PORT=${MYSQL_PORT}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EOF
Apply the generated file:
kubectl apply -k .
secret/database-conn-mm8ck2296m created
Deploy phpMyAdmin and Adminer:
You need to create two Deployments, the first for phpMyAdmin and the other for Adminer, using the Secret created above in the containers, for example:
Create a file called phpmyadmin-deploy.yaml:
Note: Change the secret name from database-conn-mm8ck2296m to the generated name in the command above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
spec:
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_DATABASE
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_USER
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_PORT
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_PORT
            - name: PMA_HOST
              value: mysql.host
            - name: PMA_USER
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_USER
            - name: PMA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_ROOT_PASSWORD
            - name: PMA_PORT
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: MYSQL_PORT
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-svc
spec:
  selector:
    app: phpmyadmin
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Adminer:
Create another file named adminer-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
        - name: adminer
          image: adminer:4
          env:
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: POSTGRES_DB
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-conn-mm8ck2296m
                  key: POSTGRES_PASSWORD
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: adminer-svc
spec:
  selector:
    app: adminer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
Deploy the yaml files with kubectl apply -f *-deploy.yaml; after a few seconds run kubectl get pods && kubectl get svc to verify that everything is ok.
Note: Both services will be created as ClusterIP, which means they are only accessible from inside the cluster. If you are using a cloud provider, you can use the service type LoadBalancer to get an external IP, or you can use the kubectl port-forward command (see here) to access your services from your computer.
Access application using port-forward:
phpMyAdmin:
# This command will map port 8080 on your localhost to the phpMyAdmin application:
kubectl port-forward svc/phpmyadmin-svc 8080:80
Adminer
# This command will map the port 8181 from your localhost to Adminer application:
kubectl port-forward svc/adminer-svc 8181:8080
And try to access:
http://localhost:8080 <= phpMyAdmin
http://localhost:8181 <= Adminer
References:
Kubernetes Secrets
Kubernetes Environment variables
Kubernetes port forward
You can't combine two docker images like that. What you've created is a multi-stage build and only the last stage is what ends up in the final image. And even if you used multi-stage copies to carefully fold both into one image, you would need to think through how you will run both things simultaneously. The upstream adminer image uses php -S under the hood.
You'd almost always run this in two separate Deployments. Since the only thing you're doing in this custom Dockerfile is setting environment variables, you don't even need a custom image; you can use the env: part of the pod spec to define environment variables at deploy time.
image: adminer:4 # without phpMyAdmin
env:
  - name: POSTGRES_DB
    value: [...] # fixed value in pod spec
    # valueFrom: ... # or get it from a ConfigMap or Secret
Run two Deployments, with one container in each, and a matching Service for each. (Don't run bare pods, and don't be tempted to put both containers in a single deployment.) If the databases are inside Kubernetes too, use their Services' names and ports; I'd usually expect these to be the "normal" 3306/5432 ports.
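For instance, if MySQL also runs in the cluster behind a Service (the name mysql below is only an assumption for illustration), phpMyAdmin can point straight at it through its environment variables:
env:
  - name: PMA_HOST
    value: mysql        # in-cluster Service name (assumed)
  - name: PMA_PORT
    value: "3306"       # the "normal" MySQL port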
I have a containerized microservice built with Java. The application searches the default /config-volume directory for property files.
Previously I manually deployed via Dockerfile, and now I'm looking to automate this process with Kubernetes.
The container image starts the microservice immediately, so the property files need to be in the config-volume folder before startup. I accomplished this in Docker with this simple Dockerfile:
FROM ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
RUN mkdir /config-volume
COPY path/to/my.properties /config-volume
I'm trying to replicate this type of behavior in a kubernetes deployment.yaml but I have found no way to do it.
I've tried performing a kubectl cp command immediately after applying the deployment and it sometimes works, but it can result in a race condition which causes the microservice to fail at startup.
(I've redacted unnecessary parts)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  template:
    spec:
      containers:
        - env:
          image: ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
          name: my-service
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /config-volume
              name: config-volume
      volumes:
        - name: config-volume
          emptyDir: {}
status: {}
Is there a way to copy files into a volume inside the deployment.yaml?
You are trying to emulate a ConfigMap using volumes. Instead, put your configuration into a ConfigMap and mount that into your Deployment. The documentation is here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Once you have your configuration as a ConfigMap, mount it using something like this:
...
containers:
  - name: mycontainer
    volumeMounts:
      - name: config-volume
        mountPath: /config-volume
volumes:
  - name: config-volume
    configMap:
      name: nameOfConfigMap
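To create the ConfigMap itself from the existing properties file, something like this should do (my-service-config is just an example name; it has to match the name: under configMap: above, so replace nameOfConfigMap accordingly):
kubectl create configmap my-service-config --from-file=path/to/my.properties
The file then appears inside the container as /config-volume/my.properties, which replaces the COPY step from the old Dockerfile.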
Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          ports:
            - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding this under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
        - name: secrets-volume
          secret:
            secretName: mailhog-login
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          resources:
            limits:
              cpu: 70m
              memory: 30Mi
            requests:
              cpu: 50m
              memory: 20Mi
          volumeMounts:
            - name: secrets-volume
              mountPath: /data/mailhog
              readOnly: true
          ports:
            - containerPort: 8025
            - containerPort: 1025
          args:
            - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command: is the equivalent of Docker's ENTRYPOINT, and args: is the equivalent of CMD. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
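Put differently, as a sketch based on your own deployment above: leave the image's ENTRYPOINT alone and only override its CMD through args:.
containers:
  - name: mailhog
    image: us.gcr.io/com/mailhog:1.0.0
    # command: would replace the image's ENTRYPOINT, so omit it
    args:                # args: replaces the image's CMD
      - "-auth-file=/data/mailhog/auth.file"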
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed something similar (my aim was passing the application profile to the app), and what I did is the following:
Set an environment variable in the Deployment section of the Kubernetes yml file:
env:
  - name: PROFILE
    value: "dev"
Use this environment variable in the Dockerfile as a command line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
I want to start a Docker container with Kubernetes with the parameter --oom-score-adj.
My kubernetes deployment script looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: xxx
spec:
  template:
    metadata:
      labels:
        app: xxx
    spec:
      volumes:
        - name: some-name
          hostPath:
            path: /some-path
      containers:
        - name: xxx-container
          image: xxx-image
          imagePullPolicy: "IfNotPresent"
          securityContext:
            privileged: true
          command:
            - /bin/sh
            - -c
          args:
            - ./rsome-command.sh
          volumeMounts:
            - name: some-name
              mountPath: /some-path
When I inspect the created container, I find --oom-score-adj is set to 1000. I want to set it to 0. Can anyone shed any light on how I can do it? Is there a definitive guide to passing such arguments?
You can't set this directly yet; it's one of the frustrating things still unresolved with Kubernetes.
There's a similar open issue around logging drivers. Unfortunately, you'll have to set the value on the Docker daemon itself.
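One indirect lever, offered as an aside (this reflects my understanding of kubelet behaviour, not something stated above): Kubernetes derives a container's oom_score_adj from the pod's QoS class, roughly 1000 for BestEffort pods (which would match what you're seeing, since your spec sets no resources), a value between 2 and 999 for Burstable pods, and about -997 for Guaranteed pods. So giving the container requests equal to limits pushes the score about as low as Kubernetes currently allows:
containers:
  - name: xxx-container
    image: xxx-image
    resources:          # requests == limits => Guaranteed QoS class
      requests:
        cpu: 500m       # example values, adjust for your workload
        memory: 512Mi
      limits:
        cpu: 500m
        memory: 512Mi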