I'm learning Kubernetes with Minikube.
My demo consists of a Flask API and a MySQL database.
I made all the .yaml files, but something strange happens with the deployments' services...
I cannot reach the API externally (not with Postman, curl, or a browser).
By "externally" I mean "from outside the cluster" (on the same machine, e.g. from the browser or Postman).
This is the Deployment + Service for the API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-dip-api-deployment
  labels:
    app: api-dip-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-dip-api
  template:
    metadata:
      labels:
        app: api-dip-api
    spec:
      containers:
      - name: api-dip-api
        image: myregistry.com
        ports:
        - containerPort: 5000
        env:
        - name: DATABASE_USER
          valueFrom:
            secretKeyRef:
              name: api-secret
              key: api-db-user
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: api-secret
              key: api-db-password
        - name: DATABASE_HOST
          valueFrom:
            configMapKeyRef:
              name: api-configmap
              key: api-database-url
        - name: DATABASE_NAME
          valueFrom:
            configMapKeyRef:
              name: api-configmap
              key: api-database-name
        - name: DATABASE_PORT
          valueFrom:
            configMapKeyRef:
              name: api-configmap
              key: api-database-port
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api-dip-api
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
      nodePort: 30000
  type: LoadBalancer
Dockerfile API:
FROM python:latest
# create a dir for app
WORKDIR /app
# install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# source code
COPY /app .
EXPOSE 5000
# run the application
CMD ["python", "main.py"]
Since I'm using Minikube, the correct URL for the service is displayed with:
minikube service <service_name>
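For example, with the Service above, something like this should print a reachable URL (the exact IP and port will differ on your machine):
minikube service api-service --url
# then, from the host:
curl http://<printed-ip>:<printed-port>/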
I already tried checking the minikube context, as suggested in another post, but it shows:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube default
so it should be OK.
I don't know what to try now... the ports seem to be mapped correctly.
I have not found any solution to my problem.
I run Kubernetes with Minikube on VMware Fusion on my Mac with Big Sur.
I found out that the SAME EXACT deployment works on a machine with Ubuntu installed, OR in a virtual machine made with VirtualBox.
It actually seems that this is a known issue:
https://github.com/kubernetes/minikube/issues/11577
https://github.com/kubernetes/minikube/issues/11193
https://github.com/kubernetes/minikube/issues/4027
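In the meantime, kubectl port-forward can be used to reach the API from the host; a minimal sketch using the api-service above:
kubectl port-forward svc/api-service 5000:5000
# in another terminal:
curl http://localhost:5000/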
I am trying to create a pod with both phpMyAdmin and Adminer in it. I have the Dockerfile created but I am not sure of the entrypoint needed.
Has anyone accomplished this before? I have everything figured out but the entrypoint...
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
------UPDATE 1 ----------
After reading some comments I split my Dockerfiles and will create a yml file for the kube pod.
FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
Container 2:
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
I am still not sure what the entrypoint script should be.
Since you are not modifying anything in the images, you don't need to create a custom Docker image for this; you can simply run two Deployments in Kubernetes, passing the environment variables using a Kubernetes Secret.
See this example of how to deploy both applications on Kubernetes:
Create a Kubernetes secret with your connection details:
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: database-conn
  literals:
  - MYSQL_DATABASE=${MYSQL_DATABASE}
  - MYSQL_USER=${MYSQL_USER}
  - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
  - MYSQL_PORT=${MYSQL_PORT}
  - POSTGRES_DB=${POSTGRES_DB}
  - POSTGRES_USER=${POSTGRES_USER}
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EOF
Apply the generated file:
kubectl apply -k .
secret/database-conn-mm8ck2296m created
Deploy phpMyAdmin and Adminer:
You need to create two Deployments, the first for phpMyAdmin and the other for Adminer, using the secret created above in the containers, for example:
Create a file called phpmyadmin-deploy.yml:
Note: Change the secret name from database-conn-mm8ck2296m to the generated name in the command above.
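If you are not sure which name was generated, you can list it with:
kubectl get secrets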
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
spec:
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
      - name: phpmyadmin
        image: phpmyadmin/phpmyadmin
        env:
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: MYSQL_DATABASE
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: MYSQL_USER
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_PORT
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: MYSQL_PORT
        - name: PMA_HOST
          value: mysql.host
        - name: PMA_USER
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: MYSQL_USER
        - name: PMA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: MYSQL_ROOT_PASSWORD
        - name: PMA_PORT
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: MYSQL_PORT
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-svc
spec:
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Adminer:
Create another file named adminer-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
      - name: adminer
        image: adminer:4
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-conn-mm8ck2296m
              key: POSTGRES_PASSWORD
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: adminer-svc
spec:
  selector:
    app: adminer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Deploy the yaml files with kubectl apply -f *-deploy.yaml; after a few seconds, run kubectl get pods && kubectl get svc to verify that everything is OK.
Note: Both Services will be created as ClusterIP, which means they will only be accessible internally. If you are using a cloud provider, you can use the service type LoadBalancer to get an external IP. Or you can use the kubectl port-forward command (see here) to access your services from your computer.
Access application using port-forward:
phpMyAdmin:
# This command will map port 8080 on your localhost to the phpMyAdmin application:
kubectl port-forward svc/phpmyadmin-svc 8080:80
Adminer:
# This command will map port 8181 on your localhost to the Adminer application:
kubectl port-forward svc/adminer-svc 8181:8080
And try to access:
http://localhost:8080 <= phpMyAdmin
http://localhost:8181 <= Adminer
References:
Kubernetes Secrets
Kubernetes Environment variables
Kubernetes port forward
You can't combine two docker images like that. What you've created is a multi-stage build and only the last stage is what ends up in the final image. And even if you used multi-stage copies to carefully fold both into one image, you would need to think through how you will run both things simultaneously. The upstream adminer image uses php -S under the hood.
You'd almost always run this in two separate Deployments. Since the only thing you're doing in this custom Dockerfile is setting environment variables, you don't even need a custom image; you can use the env: part of the pod spec to define environment variables at deploy time.
image: adminer:4 # without phpMyAdmin
env:
  - name: POSTGRES_DB
    value: [...] # fixed value in pod spec
    # valueFrom: ... # or get it from a ConfigMap or Secret
Run two Deployments, with one container in each, and a matching Service for each. (Don't run bare pods, and don't be tempted to put both containers in a single deployment.) If the databases are inside Kubernetes too, use their Services' names and ports; I'd usually expect these to be the "normal" 3306/5432 ports.
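For example, if the MySQL database sits behind a Service named mysql (a hypothetical name; use whatever your Service is actually called), a sketch of the phpMyAdmin container's environment could be:
env:
  - name: PMA_HOST
    value: mysql      # name of the MySQL Service (assumed here)
  - name: PMA_PORT
    value: "3306"     # the Service port, usually the MySQL default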
First off, I'm pretty sure I know why this isn't working: I'm pulling the Docker postgres:11-alpine image, modifying it, but then trying to change the env: in the k8s deployment.yaml on a custom image. I think that is the issue.
Basically, I'm trying to accomplish this per the Docker postgres docs:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' postgres:11-alpine
This is what I have:
Dockerfile.dev
FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
postgres.yaml (secrets will be moved after I'm done playing with this)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
        - name: postgres
          image: testproject/postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "test_dev"
            - name: POSTGRES_USER
              value: "bar"
            - name: POSTGRES_PASSWORD
              value: "foo"
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
When I use Skaffold to spin up the cluster locally, however, the env: values "don't take", as I can still access the DB using the defaults POSTGRES_USER=postgres and POSTGRES_PASSWORD=''.
I bet if I did image: postgres then the env: would work, but then I'm not sure how to do the equivalent of this that is in the Dockerfile:
COPY ./db/*.sql /docker-entrypoint-initdb.d/
Any suggestions?
Here is the skaffold.yaml if that is helpful too:
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: testproject/postgres
      docker:
        dockerfile: ./db/Dockerfile.dev
      sync:
        manual:
          - src: "***/*.sql"
            dest: .
    - image: testproject/server
      docker:
        dockerfile: ./server/Dockerfile.dev
      sync:
        manual:
          - src: "***/*.py"
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/ingress.yaml
      - k8s/postgres.yaml
      - k8s/server.yaml
The Docker postgres docs mention the following:
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
Are you sure that you're starting your deployment with an empty data directory? Could it be that PostgreSQL starts and allows you to log in using the credentials that were specified in the environment variables the first time you started it with that persistent volume?
If that's not it, have a look at the environment variables of the running pod. kubectl describe pod should tell you which environment variables are actually passed through to the pod. Maybe something in your Skaffold setup overwrites the environment variables? You can also have a look inside the pod by running env after exec-ing into it. And don't forget the logs: the PostgreSQL container should log which user account it creates during startup.
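For example (replace <pod-name> with the actual name from kubectl get pods):
kubectl describe pod <pod-name>                       # shows the env entries defined on the container
kubectl exec -it <pod-name> -- env | grep POSTGRES    # what the running process actually sees
kubectl logs <pod-name>                               # initdb output from the first start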
I've been hitting errors when trying to set up a dev platform in Kubernetes & minikube. The config is creating a service, persistentVolume, persistentVolumeClaim & deployment.
The deployment is creating a single pod with a single container based on bitnami/mariadb:latest
I am mounting a local volume into the minikube vm via:
minikube mount <source-path>:/data
This local volume is mounting correctly and can be inspected when I ssh into the minikube vm via: minikube ssh
I now run:
kubectl create -f mariadb-deployment.yaml
to fire up the platform. The yaml config:
kind: Service
apiVersion: v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  ports:
    - port: 3306
  selector:
    app: supertubes
    tier: mariadb
  type: LoadBalancer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-db-pv
  labels:
    type: local
    tier: mariadb
    app: supertubes
spec:
  storageClassName: slow
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/staging/sumatra/mariadb-data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-db-pv-claim
  labels:
    app: supertubes
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: local
      tier: mariadb
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  selector:
    matchLabels:
      app: supertubes
      tier: mariadb
  template:
    metadata:
      labels:
        app: supertubes
        tier: mariadb
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - image: bitnami/mariadb:latest
          name: mariadb
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: <db-password>
            - name: MARIADB_DATABASE
              value: <db-name>
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - name: mariadb-persistent-storage
              mountPath: /bitnami
      volumes:
        - name: mariadb-persistent-storage
          persistentVolumeClaim:
            claimName: local-db-pv-claim
The above config then fails to boot the pod, and inspecting the pod's logs within the minikube dashboard shows the following:
nami INFO Initializing mariadb
mariadb INFO ==> Cleaning data dir...
mariadb INFO ==> Configuring permissions...
mariadb INFO ==> Validating inputs...
mariadb INFO ==> Initializing database...
mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb INFO ==> Creating database pw_tbs...
mariadb INFO ==> Enabling remote connections...
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mariadb'
Looking at the above I believed the issue was to do with Bitnami using user: 1001 to launch their mariadb image:
https://github.com/bitnami/bitnami-docker-mariadb/issues/134
Since reading the above issue I've been playing with securityContext within the containers spec. At present you'll see I have it set to:
deployment.template.spec
securityContext:
  fsGroup: 1001
but this isn't working. I've also tried:
securityContext:
  privileged: true
but didn't get anywhere with that either.
One other check I made was to remove the volumeMount from deployment.template.spec.containers and see if things worked correctly without it, which they do :)
I then opened a shell into the pod to see what the permissions on /bitnami are:
Reading a bit more on the Bitnami issue posted above, it says user 1001 is a member of the root group, so I'd expect them to have the necessary permissions... At this stage I'm a little lost as to what is wrong.
If anyone could help me understand how to correctly set up this minikube vm volume within a container that would be amazing!
Edit 15/03/18
Following @Anton Kostenko's suggestions I added a busybox container as an initContainer which ran a chmod on the bitnami directory:
...
spec:
  initContainers:
    - name: install
      image: busybox
      imagePullPolicy: Always
      command: ["chmod", "-R", "777", "/bitnami"]
      volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /bitnami
  containers:
    - image: bitnami/mariadb:latest
      name: mariadb
...
however, even with global rwx (777) permissions set, the mount still fails, as user 1001 in the MariaDB container is not permitted to modify it:
nami INFO Initializing mariadb
Error executing 'postInstallation': EPERM: operation not permitted, utime '/bitnami/mariadb/.restored'
Another Edit 15/03/18
I have now tried setting the user:group on my local machine (MacBook) so that when passed to the minikube VM they should already be correct:
Now mariadb-data has rwx permissions for everyone, with user: 1001 and group: 1001.
I then removed the initContainer as I wasn't really sure what that would be adding.
SSHing onto the minikube vm I can see the permissions and user:group have been carried across:
The user & group are now shown as docker.
Firing up this container results in the same sort of error:
nami INFO Initializing mariadb
Error executing 'postInstallation': EIO: i/o error, utime '/bitnami/mariadb/.restored'
I've tried removing the securityContext, and also adding it as runAsUser: 1001, fsGroup: 1001, however neither made any difference.
Looks like that is an issue in Minikube.
You can try to use an init container which will fix the permissions before the main container is started, like this:
...........
spec:
  initContainers:
    - name: "fix-non-root-permissions"
      image: "busybox"
      imagePullPolicy: "Always"
      command: [ "chmod", "-R", "g+rwX", "/bitnami" ]
      volumeMounts:
        - name: datadir
          mountPath: /bitnami
  containers:
.........
Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I ran it directly in Docker, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          ports:
            - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding this under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get:
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
        - name: secrets-volume
          secret:
            secretName: mailhog-login
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          resources:
            limits:
              cpu: 70m
              memory: 30Mi
            requests:
              cpu: 50m
              memory: 20Mi
          volumeMounts:
            - name: secrets-volume
              mountPath: /data/mailhog
              readOnly: true
          ports:
            - containerPort: 8025
            - containerPort: 1025
          args:
            - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of Docker's ENTRYPOINT. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
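Put differently, the mapping is roughly ENTRYPOINT -> command and CMD -> args, so a minimal sketch of the container section would be (assuming the image's own ENTRYPOINT already points at the MailHog binary):
containers:
  - name: mailhog
    image: us.gcr.io/com/mailhog:1.0.0
    args:                                      # appended to the image's ENTRYPOINT
      - "-auth-file=/data/mailhog/auth.file"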
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed a similar thing (my aim was passing the application profile to the app) and what I did was the following:
Set an environment variable in the Deployment section of the Kubernetes yml file:
env:
  - name: PROFILE
    value: "dev"
Then use this environment variable in the Dockerfile as a command line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
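To check that the variable actually reaches the container, something like this should work (replace <pod-name> with the pod created by your Deployment):
kubectl exec <pod-name> -- env | grep PROFILE
# should print PROFILE=dev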
I am trying to run a shell script at the start of a Docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run the prod_start.sh script at the start of the container (I don't want to put it as part of the Dockerfile, though). The current setup fails to start the container with "Command not found file ./prod_start.sh does not exist". Any idea how to fix this?
app/
...
Dockerfile
prod_start.sh
web-controller.yaml
Gemfile
...
Dockerfile
FROM ruby
RUN mkdir /backend
WORKDIR /backend
ADD Gemfile /backend/Gemfile
ADD Gemfile.lock /backend/Gemfile.lock
RUN bundle install
web-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
After a lot of experimentation, I found that adding the script to the Dockerfile:
ADD prod_start.sh /backend/prod_start.sh
And then calling the command like this in the yaml controller file:
command: ['/bin/sh', './prod_start.sh']
Fixed it.
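An alternative, if you want to keep command: ['./prod_start.sh'] without the /bin/sh wrapper, is to make the script executable in the image (a sketch; it assumes prod_start.sh starts with a #!/bin/sh shebang):
ADD prod_start.sh /backend/prod_start.sh
RUN chmod +x /backend/prod_start.sh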
You can add a ConfigMap to your yaml instead of adding the script to your Dockerfile.
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
        - name: prod-start-config
          configMap:
            name: prod-start-config-script
            defaultMode: 0744
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
            - name: prod-start-config
              mountPath: /backend/
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
Then create another yaml file for your script:
script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-start-config-script
data:
  prod_start.sh: |
    apt-get update
When deployed, the script will be available in the mounted directory (/backend/ in this example).