I want to enable password authentication for my Redis container in minikube, so I enabled requirepass in redis.conf. I then built a Docker image containing this configuration file using the following Dockerfile.
FROM redis
COPY --chown=redis:redis redis.conf /usr/local/etc/redis/redis.conf
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
Then, I launch a pod with this image using the following Deployment YAML.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: cache
  labels:
    run: cache
spec:
  replicas: 1
  selector:
    matchLabels:
      run: cache
  template:
    metadata:
      labels:
        run: cache
    spec:
      containers:
      - name: cache
        image: redis
        envFrom:
        - configMapRef:
            name: redis-cfgmap
        resources:
          limits:
            memory: "256Mi"
            cpu: "200m"
        imagePullPolicy: Never
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
Note: I am running docker build -t redis:latest from a shell that has run eval $(minikube docker-env). Also, imagePullPolicy is set to Never so that the image is taken from minikube's local Docker daemon instead of being pulled from a registry.
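For completeness, the build sequence is roughly this (a sketch of what I run; the last command just confirms the image landed in minikube's Docker daemon):

eval $(minikube docker-env)
docker build -t redis:latest .
docker images redis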
While the pod does come up, the logs show that the specified configuration file is not being used.
6:C 27 Feb 2020 04:06:08.568 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
6:C 27 Feb 2020 04:06:08.568 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=6, just started
6:C 27 Feb 2020 04:06:08.568 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
6:M 27 Feb 2020 04:06:08.570 * Running mode=standalone, port=6379.
6:M 27 Feb 2020 04:06:08.570 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
6:M 27 Feb 2020 04:06:08.570 # Server initialized
6:M 27 Feb 2020 04:06:08.571 * Ready to accept connections
What is missing?
Just a little more explanation for anyone who might want to read it.
It looks like, for some reason, the image you were building was not overwriting the existing image as it should have, so you were stuck with the official redis:latest image instead of the one you had just built.
When I approached this problem and tried to build the image, I hit the same issue and managed to solve it by running docker system prune. After that I couldn't reproduce it again, so it's hard for me to say what the real cause was.
Anyway, I'm glad it worked for you.
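If it ever comes back, one quick way to check which image a pod actually ended up running is to compare what Kubernetes reports against what minikube's Docker daemon holds, something like this (the pod name below is just an example):

kubectl get pod cache-xxxxxxxxx-xxxxx -o jsonpath='{.status.containerStatuses[0].image}{"\n"}{.status.containerStatuses[0].imageID}{"\n"}'
eval $(minikube docker-env)
docker images --digests redis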
Related
I built a Dockerfile from the official Node-RED image and added some custom nodes. After the build I ran a Docker container, which runs fine and shows the custom nodes I added. I then pushed the custom image to a private registry, and when I use it in a Kubernetes Deployment the pod runs fine, but the custom nodes are not shown in the Node-RED UI.
I exec'd into both the Docker container and the Kubernetes pod to see what was going on and found that the 'files' folder I copied into /data in the Dockerfile exists only in the Docker container; it isn't in the Kubernetes pod.
Here is the 'files' folder in the Docker container:
docker exec -ti ca55c5823200 bash
bash-5.0# cd /data
bash-5.0# ls
files flows.json lib node_modules package-lock.json package.json settings.js
Now when I exec into the K8s pod:
bash-5.0# cd /data
bash-5.0# ls
config lib lost+found node_modules package.json settings.js
bash-5.0#
I also checked the pod logs but found nothing useful:
k logs nodered-696bc98c7f-c48h7
> node-red-docker#1.3.6 start /usr/src/node-red
> node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"
20 Sep 18:58:32 - [info]
Welcome to Node-RED
===================
20 Sep 18:58:32 - [info] Node-RED version: v1.3.6
20 Sep 18:58:32 - [info] Node.js version: v10.24.1
20 Sep 18:58:32 - [info] Linux 5.4.141-67.229.amzn2.x86_64 x64 LE
20 Sep 18:58:37 - [info] Loading palette nodes
20 Sep 18:58:48 - [info] Settings file : /data/settings.js
20 Sep 18:58:48 - [info] Context store : 'default' [module=memory]
20 Sep 18:58:48 - [info] User directory : /data
20 Sep 18:58:48 - [warn] Projects disabled : editorTheme.projects.enabled=false
20 Sep 18:58:48 - [info] Flows file : /data/flows.json
20 Sep 18:58:48 - [info] Creating new flow file
20 Sep 18:58:49 - [warn]
---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.
If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.
You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------
20 Sep 18:58:49 - [info] Starting flows
20 Sep 18:58:49 - [info] Started flows
20 Sep 18:58:50 - [info] Server now running at http://127.0.0.1:1880/
This is the Dockerfile:
FROM nodered/node-red:1.3.6-12
USER root
COPY files/settings.js /data/
COPY files/ /data/files
RUN chown -R node-red:root /data
RUN bash /data/files/script.sh
And this is the Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered
  namespace: nodered
spec:
  selector:
    matchLabels:
      app: nodered
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nodered
    spec:
      containers:
      - name: nodered
        image: 698364145168.dkr.ecr.us-west-2.amazonaws.com/nodered:1.0
        readinessProbe:
          httpGet:
            path: /
            port: 1880
        resources:
          requests:
            memory: 300Mi
            cpu: 50m
          limits:
            memory: 300Mi
            cpu: 50m
        volumeMounts:
        - name: config-volume
          mountPath: /data/
      volumes:
      - name: config-volume
        persistentVolumeClaim:
          claimName: nodered-pvc
      restartPolicy: Always
Any idea what is going on? Thank you in advance!
After seeing David's comment below, I removed the PV and PVC from the Deployment manifest, and now I see the following syntax error in the logs. What I don't get is why it runs fine in Docker:
k logs nodered-6cdd6954f6-pmgmf
> node-red-docker#1.3.6 start /usr/src/node-red
> node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"
Error loading settings file: /data/settings.js
/data/settings.js:347
}
^
SyntaxError: Unexpected end of input
at Module._compile (internal/modules/cjs/loader.js:723:23)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
at Module.load (internal/modules/cjs/loader.js:653:32)
at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
at Function.Module._load (internal/modules/cjs/loader.js:585:3)
at Module.require (internal/modules/cjs/loader.js:692:17)
at require (internal/modules/cjs/helpers.js:25:18)
at Object.<anonymous> (/usr/src/node-red/node_modules/node-red/red.js:136:20)
at Module._compile (internal/modules/cjs/loader.js:778:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
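In case it helps anyone debugging something similar, the same comparison can also be done non-interactively, e.g. (image and pod names taken from above; this only shows whether the /data the pod sees matches what the image ships):

docker run --rm --entrypoint ls 698364145168.dkr.ecr.us-west-2.amazonaws.com/nodered:1.0 /data
kubectl -n nodered exec nodered-696bc98c7f-c48h7 -- ls /data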
I need to keep my Linux container running (without any app or service in it) so I can enter /bin/bash and modify some local Linux files before I manually run my app from the container shell (this is purely for debugging purposes, so I do not want any modifications in my image itself; please do not suggest that as an option).
I have defined my Kubernetes YAML file hoping that I would be able to execute a simple command: ["/bin/bash"], but this does not work because it executes the command and then the container exits. How can I make it not exit, so that I am able to exec into the container?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment2
  labels:
    app: frontarena-ads-deployment2
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test2
      labels:
        app: frontarena-ads-aks-test2
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      restartPolicy: Always
      containers:
      - name: frontarena-ads-aks-test2
        image: test.dev/ads:test2
        imagePullPolicy: Always
        env:
        - name: DB_TYPE
          value: "odbc"
        - name: LANG
          value: "en_US.utf8"
        command: ["/bin/bash"]
      imagePullSecrets:
      - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test2
When I check what is going on after the deployment, I see:
NAME READY STATUS RESTARTS AGE
frontarena-ads-deployment2-546fc4b75-zmmrs 0/1 CrashLoopBackOff 19 77m
kubectl logs $POD doesn't return anything
and kubectl describe pod $POD output is:
    Command:
      /bin/bash
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 07 Apr 2021 11:40:31 +0000
      Finished:     Wed, 07 Apr 2021 11:40:31 +0000
You just need to run some long/endless process for the container to stay up. For example, you can read a stream; that will last forever unless you kill the pod/container:
command: ["/bin/bash", "-c"]
args: ["cat /dev/stdout"]
I am using the following snippet to create the deployment
oc create -f nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        openshift.io/scc: privileged
    spec:
      securityContext:
        priviledged: false
        runAsUser: 0
      volumes:
      - name: static-web-volume
        hostPath:
          path: /home/testFolder
          type: Directory
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: static-web-volume
I am getting a permission denied error when I try to go inside the html folder:
$ cd /usr/share/nginx/html
$ ls
ls: cannot open directory .: Permission denied
This is the simplest sample I could make, as I have a similar requirement where I have to read files from the mounted drive, and that one is failing as well.
I am using Kubernetes 1.5, as that is the only version available to me. I am not sure whether the volume has been mounted or not.
All my directory permissions are set to root as well.
Content of /home/testfolder:
0 drwxrwxrwx. 3 root root 52 Apr 15 23:06 .
4 dr-xr-x---. 11 root root 4096 Apr 15 22:58 ..
0 drwxrwxrwx. 2 root root 6 Apr 15 19:56 ind
4 -rwxrwxrwx. 1 root root 14 Apr 15 19:22 index.html
4 -rwxrwxrwx. 1 root root 694 Apr 15 23:06 ordr.yam
I remember hitting this one in OpenShift some time back. It has something to do with the SELinux configuration on the host.
Try this on the host server for the directory you mount into your container at /usr/share/nginx/html:
sudo chcon -Rt svirt_sandbox_file_t /
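To confirm the label actually changed, you can inspect the directory's SELinux context before and after, for example:

ls -dZ /home/testFolder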
I'm trying to set up a dev environment with Kubernetes via Minikube. I successfully mounted the same volume to the same data dir on the same image with Docker for Mac, but I'm having trouble with Minikube.
Relevant files and logs:
db-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    name: msyql
  name: db
  namespace: default
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: volumesnew
  volumes:
  - name: volumesnew
    hostPath:
      path: "/Users/eric/Volumes/mysql"
kubectl get pods:
NAME READY STATUS RESTARTS AGE
db 0/1 Error 1 3s
kubectl logs db:
2016-08-29 20:05:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-08-29 20:05:55 0 [Note] mysqld (mysqld 5.6.32) starting as process 1 ...
2016-08-29 20:05:55 1 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
kubectl describe pods db:
Name: db
Namespace: default
Node: minikubevm/10.0.2.15
Start Time: Wed, 31 Aug 2016 07:48:39 -0700
Labels: name=msyql
Status: Running
IP: 172.17.0.3
Controllers: <none>
Containers:
mysqldev:
Container ID: docker://af0937edcd9aa00ebc278bc8be00bc37d60cbaa403c69f71bc1b378182569d3d
Image: mysql/mysql-server:5.6.32
Image ID: docker://sha256:0fb418d5a10c9632b7ace0f6e7f00ec2b8eb58a451ee77377954fedf6344abc5
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 31 Aug 2016 07:48:42 -0700
Finished: Wed, 31 Aug 2016 07:48:43 -0700
Ready: False
Restart Count: 1
Environment Variables:
MYSQL_ROOT_PASSWORD: test
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volumesnew:
Type: HostPath (bare host directory volume)
Path: /Users/eric/Volumes/newmysql
default-token-il74e:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-il74e
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {default-scheduler } Normal Scheduled Successfully assigned db to minikubevm
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id 568f9112dce0
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id 568f9112dce0
6s 4s 2 {kubelet minikubevm} spec.containers{mysqldev} Normal Pulled Container image "mysql/mysql-server:5.6.32" already present on machine
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id af0937edcd9a
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id af0937edcd9a
3s 2s 2 {kubelet minikubevm} spec.containers{mysqldev} Warning BackOff Back-off restarting failed docker container
3s 2s 2 {kubelet minikubevm} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysqldev" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqldev pod=db_default(012d5178-6f8a-11e6-97e8-c2daf2e2520c)"
I was able to mount the data directory from the host to the container in a test directory, but I'm having trouble mounting it to the MySQL data directory. I also tried to mount an empty directory to the container's data dir with the appropriate MySQL environment variables set, which with Docker for Mac allowed me to perform a SQL dump into the new dir, but I'm seeing the same errors in Minikube.
Any thoughts on what might be the cause? Or, if I'm not setting up my dev environment the preferred Kubernetes/Minikube way, please share your thoughts.
I was able to resolve this with the following:
echo "/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=root:wheel" | sudo tee -a /etc/exports
sudo nfsd restart
minikube start
minikube ssh -- sudo umount /Users
minikube ssh -- sudo /usr/local/etc/init.d/nfs-client start
minikube ssh -- sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
I am running Minikube in VirtualBox. I don't know if this will work with other VM drivers - xhyve, etc.
Reference: https://github.com/kubernetes/minikube/issues/2
EDIT: I should mention that this works for minikube v0.14.0.
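To double-check that the share really is mounted inside the VM, something like this should show an NFS entry for /Users:

minikube ssh -- mount | grep /Users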
1. Mount the folder you want to share on your host, in minikube:
minikube mount ./path/to/mySharedData:/mnt1/shared1
Don't close the terminal. That process needs to be running all the time for the folder to be accessible.
2. Use that folder with hostPath:
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: my-volume
  volumes:
  - name: my-volume
    hostPath:
      path: "/mnt1/shared1"
3. Write access issues?
In case you have a write access issue, you might want to mount the volume with:
minikube mount ./path/to/mySharedData:/mnt1/shared1 --uid 10001 --gid 10001
Here, the volume mounted in minikube will have group id and user id 10001. That is the user id of the Azure SQL Edge server inside the container, which was my case.
I don't know which user id mysql uses in your case. If you want to find out, log into your container and type id; it will tell you the user id.
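For example, to find that uid without a running pod, you can ask the image directly and then reuse the numbers in the mount command (a sketch; replace the placeholders with whatever id prints for your image):

docker run --rm --entrypoint id mysql/mysql-server:5.6.32 mysql
minikube mount ./path/to/mySharedData:/mnt1/shared1 --uid <mysql-uid> --gid <mysql-gid>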
I have a Kubernetes cluster with a master node and 3 minions, and I already have a GlusterFS cluster. Every node of the Kubernetes cluster has glusterfs-client installed and working.
I'm trying to run a pod (a simple MySQL) mounting /var/lib/mysql on GlusterFS, but I see:
Image: mysql:5.6 is ready, container is creating
I run:
kubectl get event
I see:
Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql-9ym10 Pod scheduled {scheduler } Successfully assigned mysql-9ym10 to nodeXX
Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql ReplicationController successfulCreate {replication-controller } Created pod: mysql-9ym10
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedMount {kubelet nodeXX} Unable to mount volumes for pod "mysql-9ym10_default": exit status 1
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedSync {kubelet nodeXX} Error syncing pod, skipping: exit status 1
If I run:
kubectl describe pod mysql-9ym10
I see:
Name: mysql-9ym10
Namespace: default
Image(s): mysql:5.6
Node: nodeXX/nodeXX
Labels: app=mysql
Status: Pending
Reason:
Message:
IP:
Replication Controllers: mysql (1/1 replicas created)
Containers:
mysql:
Image: mysql:5.6
State: Waiting
Reason: Image: mysql:5.6 is ready, container is creating
Ready: False
Restart Count: 0
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 {scheduler } scheduled Successfully assigned mysql-9ym10 to nodeXX
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedMount Unable to mount volumes for pod "mysql-9ym10_default": exit status 1
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedSync Error syncing pod, skipping: exit status 1
This is the YAML file for the ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: glusterfsvol
      volumes:
      - glusterfs:
          endpoints: glusterfs-cluster
          path: glustervolume
          readOnly: false
        name: glusterfsvol
I have an Endpoints object configured with the GlusterFS IP addresses.
I know the link that was posted; I have followed it, but the result is what I showed in my first post!
First of all: to use GlusterFS you don't need to install glusterfs-client on the Kubernetes nodes; Kubernetes has a volume mount option for GlusterFS by default.
To use GlusterFS with Kubernetes you need two things:
a working GlusterFS server, and a running volume on that GlusterFS server. I assume you have those. If not, create a GlusterFS server and start your volume with the following commands:
$ gluster volume create <volume-name> replica 2 transport tcp \
peer1:/directory \
peer2:/directory \
force
$ gluster volume start <volume-name>
$ sudo gluster volume info
If this is OK, you need a Kubernetes Endpoints object to use with the pod. As an example, an Endpoints definition looks like this:
kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs
subsets:
- addresses:
  - ip: peer1
  ports:
  - port: 1
- addresses:
  - ip: peer2
  ports:
  - port: 1
And third, mount the GlusterFS volume into the pod using the Endpoints object:
containers:
- name: mysql
  image: mysql:5.6
  ports:
  - containerPort: 3306
  env:
  - name: MYSQL_ROOT_PASSWORD
    value: password
  volumeMounts:
  - mountPath: /var/lib/mysql
    name: glusterfsvol
volumes:
- glusterfs:
    endpoints: glusterfs-cluster
    path: <volume-name>
  name: glusterfsvol
Note: the path must match the volume name in GlusterFS.
This should all work fine.
You need to configure an Endpoints object (see https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/glusterfs/README.md); otherwise Kubernetes doesn't know how to access your Gluster cluster.
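As a quick sanity check after creating it, make sure the Endpoints object's name matches the endpoints: field in the pod's volume (glusterfs-cluster in the manifest above), then look at the mount events again; for example (pod name taken from the question):

kubectl get endpoints glusterfs-cluster
kubectl describe pod mysql-9ym10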