I am trying to create a pod with both phpMyAdmin and Adminer in it. I have the Dockerfile created, but I am not sure of the entrypoint needed.
Has anyone accomplished this before? I have everything figured out but the entrypoint...
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
------UPDATE 1 ----------
After reading some comments, I split my Dockerfiles and will create a YAML file for the kube pod.
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
Container 2:
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
I am still not sure what the entrypoint script should be
Since you are not modifying anything in the images, you don't need to create a custom Docker image for this; you can simply run two Deployments in Kubernetes, passing the environment variables using a Kubernetes Secret.
See this example of how to deploy both applications on Kubernetes:
Create a Kubernetes secret with your connection details:
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: database-conn
literals:
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_PORT=${MYSQL_PORT}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EOF
Apply the generated file:
kubectl apply -k .
secret/database-conn-mm8ck2296m created
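The generated name carries a hash of the Secret's contents, so it changes whenever the values change. If you need to look it up later, listing the Secrets is enough; a quick check, assuming the default namespace:
# List the Secrets in the current namespace; the generated one starts with database-conn-
kubectl get secrets
# Or render the kustomization locally without applying it
kubectl kustomize .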
Deploy phpMyAdmin and Adminer:
You need to create two Deployments, the first for phpMyAdmin and the other for Adminer, consuming the Secret created above in the containers. For example:
Create a file called phpmyadmin-deploy.yml:
Note: Change the secret name from database-conn-mm8ck2296m to the name generated by the command above.
apiVersion: apps/v1
kind: Deployment
metadata:
name: phpmyadmin
spec:
selector:
matchLabels:
app: phpmyadmin
template:
metadata:
labels:
app: phpmyadmin
spec:
containers:
- name: phpmyadmin
image: phpmyadmin/phpmyadmin
env:
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_DATABASE
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_USER
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_PORT
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_PORT
- name: PMA_HOST
value: mysql.host
- name: PMA_USER
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_USER
- name: PMA_PASSWORD
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_ROOT_PASSWORD
- name: PMA_PORT
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_PORT
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: phpmyadmin-svc
spec:
selector:
app: phpmyadmin
ports:
- protocol: TCP
port: 80
targetPort: 80
Adminer:
Create another file named adminer-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: adminer
spec:
selector:
matchLabels:
app: adminer
template:
metadata:
labels:
app: adminer
spec:
containers:
- name: adminer
image: adminer:4
env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: POSTGRES_PASSWORD
ports:
- name: http
containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: adminer-svc
spec:
selector:
app: adminer
ports:
- protocol: TCP
port: 8080
targetPort: 8080
Deploy the YAML files with kubectl apply -f phpmyadmin-deploy.yml -f adminer-deploy.yaml. After a few seconds, run kubectl get pods && kubectl get svc to verify that everything is OK.
Note: Both Services will be created as ClusterIP, which means they will only be accessible internally. If you are using a cloud provider, you can use a Service of type LoadBalancer to get an external IP, or you can use the kubectl port-forward command (see here) to access your services from your computer.
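For example, to expose phpMyAdmin externally on a cloud provider, a minimal sketch would be to change the Service type (everything else stays as above, and the provider then assigns the external IP):
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-svc
spec:
  type: LoadBalancer   # instead of the default ClusterIP
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80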
Access the applications using port-forward:
phpMyAdmin:
# This command will map port 8080 on your localhost to the phpMyAdmin application:
kubectl port-forward svc/phpmyadmin-svc 8080:80
Adminer:
# This command will map port 8181 on your localhost to the Adminer application:
kubectl port-forward svc/adminer-svc 8181:8080
And try to access:
http://localhost:8080 <= phpMyAdmin
http://localhost:8181 <= Adminer
References:
Kubernetes Secrets
Kubernetes Environment variables
Kubernetes port forward
You can't combine two Docker images like that. What you've created is a multi-stage build, and only the last stage is what ends up in the final image. And even if you used multi-stage copies to carefully fold both into one image, you would need to think through how you will run both things simultaneously. The upstream adminer image uses php -S under the hood.
You'd almost always run this in two separate Deployments. Since the only thing you're doing in this custom Dockerfile is setting environment variables, you don't even need a custom image; you can use the env: part of the pod spec to define environment variables at deploy time.
image: adminer:4 # without PHPMyAdmin
env:
- name: POSTGRES_DB
value: [...] # fixed value in pod spec
# valueFrom: ... # or get it from a ConfigMap or Secret
Run two Deployments, with one container in each, and a matching Service for each. (Don't run bare pods, and don't be tempted to put both containers in a single deployment.) If the databases are inside Kubernetes too, use their Services' names and ports; I'd usually expect these to be the "normal" 3306/5432 ports.
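For instance, if the MySQL instance were exposed through a Service named mysql (a hypothetical name for illustration), the phpMyAdmin container could point straight at it via environment variables:
env:
- name: PMA_HOST
  value: mysql       # DNS name of the (assumed) MySQL Service
- name: PMA_PORT
  value: "3306"      # the Service's normal port rather than a custom one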
Related
I'm learning Kubernetes with Minikube.
My demo consists of a Flask API and a MySQL database.
I made all the .yaml files, but something strange happens with the Services of the Deployments...
I cannot communicate with the API externally (not with Postman, curl, a browser...).
By "externally" I mean "from outside the cluster" (on the same machine, e.g. from the browser, Postman...).
This is the Deployment + Service for the API:
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-dip-api-deployment
labels:
app: api-dip-api
spec:
replicas: 1
selector:
matchLabels:
app: api-dip-api
template:
metadata:
labels:
app: api-dip-api
spec:
containers:
- name: api-dip-api
image: myregistry.com
ports:
- containerPort: 5000
env:
- name: DATABASE_USER
valueFrom:
secretKeyRef:
name: api-secret
key: api-db-user
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: api-secret
key: api-db-password
- name: DATABASE_HOST
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-url
- name: DATABASE_NAME
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-name
- name: DATABASE_PORT
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-port
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
name: api-service
spec:
selector:
app: api-dip-api
ports:
- port: 5000
protocol: TCP
targetPort: 5000
nodePort: 30000
type: LoadBalancer
Dockerfile API:
FROM python:latest
# create a dir for app
WORKDIR /app
# install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# source code
COPY /app .
EXPOSE 5000
# run the application
CMD ["python", "main.py"]
Since I'm using Minikube, the correct IP for the service is displayed with:
minikube service <service_name>
I already tried checking the minikube context, as suggested in another post, but it shows:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube default
so it should be OK.
I don't know what to try now... the ports are mapped correctly, I think.
I did not find a solution to my problem.
I run Kubernetes with Minikube on VMware Fusion on my Mac with Big Sur.
I found out that the exact same deployment works on a machine with Ubuntu installed, or on a virtual machine made with VirtualBox.
It actually seems that this is a known issue:
https://github.com/kubernetes/minikube/issues/11577
https://github.com/kubernetes/minikube/issues/11193
https://github.com/kubernetes/minikube/issues/4027
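While those issues remain open, tunnelling or port-forwarding from the host usually still works as a workaround; a sketch, assuming the Service is named api-service as above:
# Ask minikube for a reachable URL for the service
minikube service api-service --url
# Or forward a local port straight to the Service
kubectl port-forward svc/api-service 5000:5000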
First off, I'm pretty sure I know why this isn't working: I'm pulling the Docker postgres:11-alpine image, modifying it, but then trying to change the env: in the k8s deployment.yaml on a custom image. I think that is the issue.
Basically, I'm trying to accomplish this per the Docker postgres docs:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' postgres:11-alpine
This is what I have:
Dockerfile.dev
FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
postgres.yaml (secrets will be moved after I'm done playing with this)
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
spec:
containers:
- name: postgres
image: testproject/postgres
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: "test_dev"
- name: POSTGRES_USER
value: "bar"
- name: POSTGRES_PASSWORD
value: "foo"
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
subPath: postgres
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
When I use Skaffold to spin the cluster up locally, however, the env: values "don't take", as I can still access the DB using the defaults POSTGRES_USER=postgres and POSTGRES_PASSWORD=''.
I bet if I used image: postgres then the env: would work, but then I'm not sure how to do the equivalent of this line from the Dockerfile:
COPY ./db/*.sql /docker-entrypoint-initdb.d/
Any suggestions?
Here is the skaffold.yaml if that is helpful too:
apiVersion: skaffold/v1beta15
kind: Config
build:
local:
push: false
artifacts:
- image: testproject/postgres
docker:
dockerfile: ./db/Dockerfile.dev
sync:
manual:
- src: "***/*.sql"
dest: .
- image: testproject/server
docker:
dockerfile: ./server/Dockerfile.dev
sync:
manual:
- src: "***/*.py"
dest: .
deploy:
kubectl:
manifests:
- k8s/ingress.yaml
- k8s/postgres.yaml
- k8s/server.yaml
The Docker postgres docs mention the following:
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
Are you sure that you're starting your deployment with an empty data directory? Could it be that PostgreSQL starts and allows you to log in using the credentials that were specified in the environment variables the first time you started it with that persistent volume?
If that's not it, have a look at the environment variables of the running pod: kubectl describe pod <pod-name> should tell you which environment variables are actually passed through to the pod. Maybe something in your Skaffold setup overwrites the environment variables? You could also have a look inside the pod by running env after exec'ing into it. And don't forget the logs; the PostgreSQL container logs which user account it creates during startup.
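A few concrete checks along those lines, assuming the names from the manifests above (Deployment postgres-deployment, PVC postgres-storage); note the last two commands throw the database data away and only make sense for disposable dev data:
# Which environment variables actually reached the container?
kubectl exec deploy/postgres-deployment -- env | grep POSTGRES
# The entrypoint logs whether it initialised a fresh data directory
kubectl logs deploy/postgres-deployment
# Dev only: remove the deployment and the persistent volume claim so the
# env vars take effect on the next (empty) start
kubectl delete -f k8s/postgres.yaml
kubectl delete pvc postgres-storage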
I want to pass some values from the Kubernetes YAML file to the containers. These values will be read in my Java app using System.getenv("x_slave_host").
I have this dockerfile:
FROM jetty:9.4
...
ARG slave_host
ENV x_slave_host $slave_host
...
$JETTY_HOME/start.jar -Djetty.port=9090
The Kubernetes YAML file contains this part, where I added the env section:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: master
spec:
template:
metadata:
labels:
app: master
spec:
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: master
image: xregistry.azurecr.io/Y:latest
ports:
- containerPort: 9090
volumeMounts:
- name: shared-data
mountPath: ~/.X/experiment
- env:
- name: slave_host
value: slavevalue
- name: jupyter
image: xregistry.azurecr.io/X:latest
ports:
- containerPort: 8000
- containerPort: 8888
volumeMounts:
- name: shared-data
mountPath: /var/folder/experiment
imagePullSecrets:
- name: acr-auth
Locally, when I did the same thing using Docker Compose, it worked using args. This is a snippet:
master:
image: master
build:
context: ./master
args:
- slave_host=slavevalue
ports:
- "9090:9090"
So now I am trying to do the same thing but in Kubernetes. However, I am getting the following error (deploying it on Azure):
error: error validating "D:\\a\\r1\\a\\_X\\deployment\\kub-deploy.yaml": error validating data: field spec.template.spec.containers[1].name for v1.Container is required; if you choose to ignore these errors, turn validation off with --validate=false
In other words, how do I rewrite my Docker Compose file for Kubernetes and pass this argument?
Thanks!
The env section should be added under containers, like this:
containers:
- name: master
env:
- name: slave_host
value: slavevalue
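For completeness, the validation error in the question came from the stray - env: item, which started a new, nameless entry in the containers list. A corrected sketch of that part of the spec, keeping the variable on the master container as in the answer above (volume mounts omitted for brevity):
containers:
- name: master
  image: xregistry.azurecr.io/Y:latest
  env:
  - name: slave_host
    value: slavevalue
  ports:
  - containerPort: 9090
- name: jupyter
  image: xregistry.azurecr.io/X:latest
  ports:
  - containerPort: 8000
  - containerPort: 8888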
To elaborate on @Kun Li's answer: besides adding environment variables directly in the Deployment manifest, you can create a ConfigMap (or a Secret, depending on the data being stored) and reference it in your manifests. This is a good way of sharing the same environment variables across applications, compared to manually adding environment variables to several different applications.
Note that a ConfigMap can consist of one or more key: value pairs and is not limited to storing environment variables; that is just one of the use cases. And as I mentioned before, consider using a Secret if the data is classified as sensitive.
Example of a ConfigMap manifest, in this case used for storing an environment variable:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-env-var
data:
slave_host: slavevalue
To create a ConfigMap holding one key=value pair using kubectl create:
kubectl create configmap my-env --from-literal=slave_host=slavevalue
To get hold of all environment variables configured in a ConfigMap, use the following in your manifest:
containers:
envFrom:
- configMapRef:
name: my-env-var
Or if you want to pick one specific environment variable from your ConfigMap containing several variables:
containers:
env:
- name: slave_host
valueFrom:
configMapKeyRef:
name: my-env-var
key: slave_host
See this page for more examples of using ConfigMaps in different situations.
Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mailhog
spec:
replicas: 1
revisionHistoryLimit: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: mailhog
spec:
containers:
- name: mailhog
image: us.gcr.io/com/mailhog:1.0.0
ports:
- containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding this under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mailhog
spec:
replicas: 1
revisionHistoryLimit: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: mailhog
spec:
volumes:
- name: secrets-volume
secret:
secretName: mailhog-login
containers:
- name: mailhog
image: us.gcr.io/com/mailhog:1.0.0
resources:
limits:
cpu: 70m
memory: 30Mi
requests:
cpu: 50m
memory: 20Mi
volumeMounts:
- name: secrets-volume
mountPath: /data/mailhog
readOnly: true
ports:
- containerPort: 8025
- containerPort: 1025
args:
- "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of ENTRYPOINT. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
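To make the mapping concrete, a minimal sketch using the image from the deployment above:
containers:
- name: mailhog
  image: us.gcr.io/com/mailhog:1.0.0
  # command: [...]  would override the image's ENTRYPOINT; leaving it out keeps the image's own binary
  args:             # args only overrides the image's CMD, so these are passed to the entrypoint
  - "-auth-file=/data/mailhog/auth.file"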
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed something similar (my aim was passing the application profile to the app), and what I did is the following:
Set an environment variable in the Deployment section of the Kubernetes YAML file:
env:
- name: PROFILE
value: "dev"
Use this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
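To double-check that the variable actually reaches the container at runtime, something like this should do (the deployment name is a placeholder; adjust it to yours):
kubectl exec deploy/<deployment-name> -- sh -c 'echo $PROFILE'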
I am fiddling around with this example: https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
My modifications:
- I'm using my own WordPress image [x] Works
- The service starts (it needed more CPU, 0.8 instead of 0.5, but now it works)
- I want to use MariaDB instead of MySQL [ ] Fails!
- I can't figure out how the two pods link together! ~5h in and still failing
Here are my .yaml files:
apiVersion: v1
kind: Pod
metadata:
name: wpsite
labels:
name: wpsite
spec:
containers:
- image: <my image on gcr.io>
name: wpsite
env:
- name: WORDPRESS_DB_PASSWORD
# Change this - must match mysql.yaml password.
value: example
ports:
- containerPort: 80
name: wpsite
volumeMounts:
# Name must match the volume name below.
- name: wpsite-disk
# Mount path within the container.
mountPath: /var/www/html
volumes:
- name: wpsite-disk
gcePersistentDisk:
# This GCE persistent disk must already exist.
pdName: wpsite-disk
fsType: ext4
service:
apiVersion: v1
kind: Service
metadata:
labels:
name: wpsite
name: wpsite
spec:
type: LoadBalancer
ports:
# The port that this service should serve on.
- port: 80
targetPort: 80
protocol: TCP
# Label keys and values that must match in order to receive traffic for this service.
selector:
name: wpsite
mariadb:
apiVersion: v1
kind: Pod
metadata:
name: mariadb
labels:
name: mariadb
spec:
containers:
- resources:
limits:
# 0.5 did not work
# Error message in: kubectl describe pod mariadb
cpu: 0.8
image: mariadb:10.1
name: mariadb
env:
- name: MYSQL_ROOT_PASSWORD
# Change this password!
value: example
ports:
- containerPort: 3306
name: mariadb
volumeMounts:
# This name must match the volumes.name below.
- name: mariadb-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mariadb-persistent-storage
gcePersistentDisk:
# This disk must already exist.
pdName: mariadb-disk
fsType: ext4
maria-db-service:
apiVersion: v1
kind: Service
metadata:
labels:
name: mariadb
name: mariadb
spec:
ports:
# The port that this service should serve on.
- port: 3306
# Label keys and values that must match in
# order to receive traffic for this service.
selector:
name: mysql
kubectl logs wpsite shows error messages like this: Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 10
OK, found it out!
It's the name in mariadb-service.yaml:
metadata.name must be mysql and not mariadb, and the selector in mariadb-service must point to mariadb (the pod).
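For background, the official wordpress image defaults to a database host called mysql unless WORDPRESS_DB_HOST is set, which is why the Service name matters here. Assuming your custom image keeps that behaviour, an alternative sketch would be to leave the Service named mariadb and point WordPress at it explicitly:
env:
- name: WORDPRESS_DB_HOST
  value: mariadb:3306   # Service name (and port) of the MariaDB Service
- name: WORDPRESS_DB_PASSWORD
  # Change this - must match MYSQL_ROOT_PASSWORD in mariadb.yaml
  value: example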
Here are the working files:
mariadb.yaml
apiVersion: v1
kind: Pod
metadata:
name: mariadb
labels:
name: mariadb
spec:
containers:
- resources:
limits:
# 0.5 did not work
# Error message in: kubectl describe pod mariadb
cpu: 0.8
image: mariadb:10.1
name: mariadb
env:
- name: MYSQL_ROOT_PASSWORD
# Change this password!
value: example
ports:
- containerPort: 3306
name: mariadb
volumeMounts:
# This name must match the volumes.name below.
- name: mariadb-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mariadb-persistent-storage
gcePersistentDisk:
# This disk must already exist.
pdName: mariadb-disk
fsType: ext4
mariadb-service.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
name: mysql
spec:
ports:
# The port that this service should serve on.
- port: 3306
# Label keys and values that must match in
# order to receive traffic for this service.
selector:
name: mariadb
wpsite.yaml
apiVersion: v1
kind: Pod
metadata:
name: wpsite
labels:
name: wpsite
spec:
containers:
- image: <change this to your imagename on gcr.io>
name: wpsite
env:
- name: WORDPRESS_DB_PASSWORD
# Change this - must match mysql.yaml password.
value: example
ports:
- containerPort: 80
name: wpsite
volumeMounts:
# Name must match the volume name below.
- name: wpsite-disk
# Mount path within the container.
mountPath: /var/www/html
volumes:
- name: wpsite-disk
gcePersistentDisk:
# This GCE persistent disk must already exist.
pdName: wpsite-disk
fsType: ext4
wpsite-service.yaml
apiVersion: v1
kind: Service
metadata:
name: wpsite
labels:
name: wpsite
spec:
type: LoadBalancer
ports:
# The port that this service should serve on.
- port: 80
targetPort: 80
protocol: TCP
# Label keys and values that must match in order to receive traffic for this service.
selector:
name: wpsite
With these settings I run (my YAML files are under the gke directory):
$ kubectl create -f gke/mariadb.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/mariadb-service.yaml
# Check
$ kubectl get service mysql   # (the MariaDB service is named mysql)
$ kubectl create -f gke/wpsite.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/wpsite-service.yaml
# Check
$ kubectl describe service wpsite
Hope this helps someone...