Unable to run a container with a volume on GlusterFS - Docker

I have a Kubernetes cluster with a master node and 3 minions, and I already have a GlusterFS cluster. Every node of the Kubernetes cluster has glusterfs-client installed and working.
I'm trying to run a pod (a simple MySQL) mounting /var/lib/mysql on GlusterFS, but I see:
Image: mysql:5.6 is ready, container is creating
I run:
kubectl get event
and I see:
Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql-9ym10 Pod scheduled {scheduler } Successfully assigned mysql-9ym10 to nodeXX
Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql ReplicationController successfulCreate {replication-controller } Created pod: mysql-9ym10
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedMount {kubelet nodeXX} Unable to mount volumes for pod "mysql-9ym10_default": exit status 1
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedSync {kubelet nodeXX} Error syncing pod, skipping: exit status 1
If I run
kubectl describe pod mysql-9ym10
I see:
Name: mysql-9ym10
Namespace: default
Image(s): mysql:5.6
Node: nodeXX/nodeXX
Labels: app=mysql
Status: Pending
Reason:
Message:
IP:
Replication Controllers: mysql (1/1 replicas created)
Containers:
mysql:
Image: mysql:5.6
State: Waiting
Reason: Image: mysql:5.6 is ready, container is creating
Ready: False
Restart Count: 0
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 {scheduler } scheduled Successfully assigned mysql-9ym10 to nodeXX
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedMount Unable to mount volumes for pod "mysql-9ym10_default": exit status 1
Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedSync Error syncing pod, skipping: exit status 1
This is the YAML file for the container:
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: glusterfsvol
      volumes:
      - glusterfs:
          endpoints: glusterfs-cluster
          path: glustervolume
          readOnly: false
        name: glusterfsvol

I've got an endpoint that is configured with the GlusterFS IP addresses.
I know the posted link; I've followed it, but the result is what I described in my first post!

First: to use GlusterFS you don't need to install glusterfs-client on the Kubernetes nodes. Kubernetes has a volume mount option for GlusterFS by default.
To use GlusterFS with Kubernetes you need two things: a working GlusterFS server, and a running volume on that server. I assume you have those. If not, create a GlusterFS server and start your volume with the following commands:
$ gluster volume create <volume-name> replica 2 transport tcp \
peer1:/directory \
peer2:/directory \
force
$ gluster volume start <volume-name>
$ sudo gluster volume info
If that is OK, you need a Kubernetes Endpoints object to use with the pod. As an example, an Endpoints object looks like this:
kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs
subsets:
- addresses:
  - ip: peer1
  ports:
  - port: 1
- addresses:
  - ip: peer2
  ports:
  - port: 1
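The upstream Kubernetes GlusterFS example also pairs the Endpoints object with a Service of the same name so that the endpoints persist; a minimal sketch along those lines (the name and port are assumed to mirror the Endpoints above):
kind: Service
apiVersion: v1
metadata:
  name: glusterfs      # must match the Endpoints name
spec:
  ports:
  - port: 1            # placeholder port, mirroring the Endpoints entries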
And third, mount the GlusterFS volume into a pod using the endpoint:
containers:
- name: mysql
  image: mysql:5.6
  ports:
  - containerPort: 3306
  env:
  - name: MYSQL_ROOT_PASSWORD
    value: password
  volumeMounts:
  - mountPath: /var/lib/mysql
    name: glusterfsvol
volumes:
- glusterfs:
    endpoints: glusterfs-cluster
    path: <volume-name>
  name: glusterfsvol
Note: the path must match the volume name in GlusterFS.
This should all work fine.

You need to configure Endpoints (https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/glusterfs/README.md), otherwise Kubernetes doesn't know how to access your Gluster cluster.
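For reference, a minimal Endpoints object whose name matches the endpoints: glusterfs-cluster field used in your pod spec might look like the sketch below (the IP addresses are placeholders for your actual Gluster peers):
kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs-cluster   # must match the pod's "endpoints:" field
subsets:
- addresses:
  - ip: 10.0.0.1            # placeholder Gluster peer IP
  ports:
  - port: 1
- addresses:
  - ip: 10.0.0.2            # placeholder Gluster peer IP
  ports:
  - port: 1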

Related

Redis does not use configuration file in minikube

I want to enable a password for my Redis container in minikube, so I enabled requirepass in redis.conf. Then I generated the Docker image with this configuration file using the following Dockerfile:
FROM redis
COPY --chown=redis:redis redis.conf /usr/local/etc/redis/redis.conf
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
Then, I launch a pod with this image using the following Deployment YAML.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: cache
  labels:
    run: cache
spec:
  replicas: 1
  selector:
    matchLabels:
      run: cache
  template:
    metadata:
      labels:
        run: cache
    spec:
      containers:
      - name: cache
        image: redis
        envFrom:
        - configMapRef:
            name: redis-cfgmap
        resources:
          limits:
            memory: "256Mi"
            cpu: "200m"
        imagePullPolicy: Never
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
Note: I am doing a docker build -t redis:latest from a shell that has run eval $(minikube docker-env). Also, imagePullPolicy is set to Never so that the image is pulled from the local Docker registry.
While the pod does come up, the logs show that the specified configuration file is not used.
6:C 27 Feb 2020 04:06:08.568 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
6:C 27 Feb 2020 04:06:08.568 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=6, just started
6:C 27 Feb 2020 04:06:08.568 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
6:M 27 Feb 2020 04:06:08.570 * Running mode=standalone, port=6379.
6:M 27 Feb 2020 04:06:08.570 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
6:M 27 Feb 2020 04:06:08.570 # Server initialized
6:M 27 Feb 2020 04:06:08.571 * Ready to accept connections
What is missing?
Just a little more explanation for whoever might want to read it.
It looks like, for some reason, the image you were building wasn't overwriting the existing image as it should, and you were stuck with the official redis:latest image, not the one you just built.
When approaching this problem and trying to build the image, I had the same issue as you and managed to solve it by running docker system prune, but after that I couldn't replicate it again, so it's hard to say what the real cause was.
Anyway, I am glad that it worked for you.
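One way to rule out that ambiguity entirely (my own suggestion, not something the answer requires) is to build the local image under a tag that cannot collide with the official redis image and point the Deployment at that tag. A sketch, with an assumed tag name:
# Hypothetical tag: built with `docker build -t redis-custom:dev .` inside a
# shell that has run `eval $(minikube docker-env)`.
spec:
  containers:
  - name: cache
    image: redis-custom:dev   # assumed local tag, cannot be confused with the official redis image
    imagePullPolicy: Never    # use the image already present in minikube's Docker daemon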

.NET Core Docker Container won't work in Kubernetes

PLEASE READ UPDATE 2
I have a very simple EventHubClient app. It just listens for EventHub messages.
I got it running with the Docker support provided in Visual Studio 2017 (Linux container).
But when I try to deploy it in Kubernetes, I get "Back-off restarting failed container".
C# Code:
public static void Main(string[] args)
{
    // Init Mapper
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.AddProfile<AiElementProfile>();
    });
    Console.WriteLine("Registering EventProcessor...");
    var eventProcessorHost = new EventProcessorHost(
        EventHubPath,
        ConsumerGroupName,
        EventHubConnectionString,
        AzureStorageConnectionString,
        ContainerName
    );
    // Registers the Event Processor Host and starts receiving messages
    eventProcessorHost.RegisterEventProcessorAsync<EventProcessor>();
    Console.WriteLine("Receiving. Press ENTER to stop worker.");
    Console.ReadLine();
}
Kubernetes Manifest file (.yaml):
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: historysvc-deployment
spec:
  selector:
    matchLabels:
      app: historysvc
  replicas: 2
  template:
    metadata:
      labels:
        app: historysvc
    spec:
      containers:
      - name: historysvc
        image: vncont.azurecr.io/historysvc:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-auth
kubectl get pods:
NAME READY STATUS RESTARTS AGE
historysvc-deployment-558fc5649f-bln8f 0/1 CrashLoopBackOff 17 1h
historysvc-deployment-558fc5649f-jgjvq 0/1 CrashLoopBackOff 17 1h
kubectl describe pod historysvc-deployment-558fc5649f-bln8f
Name: historysvc-deployment-558fc5649f-bln8f
Namespace: default
Node: aks-nodepool1-81522366-0/10.240.0.4
Start Time: Tue, 24 Jul 2018 10:15:37 +0200
Labels: app=historysvc
pod-template-hash=1149712059
Annotations: <none>
Status: Running
IP: 10.244.0.11
Controlled By: ReplicaSet/historysvc-deployment-558fc5649f
Containers:
historysvc:
Container ID: docker://59e66f1e6420146f6eca4f19e2801a4ee0435a34c7ac555a8d04f699a1497f35
Image: vncont.azurecr.io/historysvc:v1
Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444
Port: 80/TCP
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 24 Jul 2018 10:17:10 +0200
Finished: Tue, 24 Jul 2018 10:17:10 +0200
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 24 Jul 2018 10:16:29 +0200
Finished: Tue, 24 Jul 2018 10:16:29 +0200
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-mt8mm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mt8mm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned historysvc-deployment-558fc5649f-bln8f to aks-nodepool1-81522366-0
Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-81522366-0 MountVolume.SetUp succeeded for volume "default-token-mt8mm"
Normal Pulled 8s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Container image "vncont.azurecr.io/historysvc:v1" already present on machine
Normal Created 7s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Created container
Normal Started 6s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Started container
Warning BackOff 6s (x8 over 1m) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container
What am I missing?
UPDATE 1
kubectl describe pod historysvc-deployment-558fc5649f-jgjvq
Name: historysvc-deployment-558fc5649f-jgjvq
Namespace: default
Node: aks-nodepool1-81522366-0/10.240.0.4
Start Time: Tue, 24 Jul 2018 10:15:37 +0200
Labels: app=historysvc
pod-template-hash=1149712059
Annotations: <none>
Status: Running
IP: 10.244.0.12
Controlled By: ReplicaSet/historysvc-deployment-558fc5649f
Containers:
historysvc:
Container ID: docker://ccf83bce216276450ed79d67fb4f8a66daa54cd424461762478ec62f7e592e30
Image: vncont.azurecr.io/historysvc:v1
Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444
Port: 80/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 25 Jul 2018 09:32:34 +0200
Finished: Wed, 25 Jul 2018 09:32:35 +0200
Ready: False
Restart Count: 277
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-mt8mm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mt8mm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m (x6238 over 23h) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container
UPDATE 2
When I run it locally with:
docker run <image>
it ends instantly (it ignores the ReadLine and completes), which seems to be the problem.
I have to run
docker run -it <image>
with -it at the end for it to wait on the ReadLine.
How does Kubernetes run the Docker image? Where can I set that?
This can be done by attaching an argument to run with your Deployment.
In your case, the Kubernetes manifest file (.yaml) should look like this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: historysvc-deployment
spec:
  selector:
    matchLabels:
      app: historysvc
  replicas: 2
  template:
    metadata:
      labels:
        app: historysvc
    spec:
      containers:
      - name: historysvc
        image: vncont.azurecr.io/historysvc:v1
        ports:
        - containerPort: 80
        args: ["-it"]
      imagePullSecrets:
      - name: acr-auth
You can find this explained in the k8s docs under inject-data-application/define-command-argument-container:
When you create a Pod, you can define a command and arguments for the containers that run in the Pod. To define a command, include the command field in the configuration file. To define arguments for the command, include the args field in the configuration file. The command and arguments that you define cannot be changed after the Pod is created.
The command and arguments that you define in the configuration file override the default command and arguments provided by the container image. If you define args, but do not define a command, the default command is used with your new arguments.
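As a generic illustration of that distinction (a sketch along the lines of the docs' example, not specific to this Deployment), a container spec like the one below overrides both the image's entrypoint and its default arguments:
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]                    # replaces the image's ENTRYPOINT
    args: ["HOSTNAME", "KUBERNETES_PORT"]    # replaces the image's CMD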

kubernetes cannot create pod for a simple RC

I set up a local all-in-one Kubernetes environment, following the steps below to install it. When I try to create my first RC, the RC is created successfully, but the pod doesn't get created:
Env: CentOS7
#systemctl disable firewalld
#systemctl stop firewalld
#yum install -y etcd kubernetes
#systemctl start etcd
#systemctl start docker
#systemctl start kube-apiserver
#systemctl start kube-controller-manager
#systemctl start kube-scheduler
#systemctl start kubelet
#systemctl start kube-proxy
All services started successfully.
mysql-rc.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        command:
[root@terryhu82 yaml]# kubectl create -f mysql-rc.yaml
replicationcontroller "mysql" created
[root@terryhu82 yaml]# kubectl get rc
NAME DESIRED CURRENT READY AGE
mysql 1 0 0 48s
[root@terryhu82 yaml]# kubectl get pods
No resources found.
[root@terryhu82 yaml]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@terryhu82 yaml]# kubectl get nodes
NAME STATUS AGE
127.0.0.1 Ready 23h
[root@terryhu82 yaml]# kubectl describe node 127.0.0.1
Name: 127.0.0.1
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=127.0.0.1
Taints: <none>
CreationTimestamp: Mon, 06 Nov 2017 00:22:58 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Mon, 06 Nov 2017 23:38:05 +0800 Mon, 06 Nov 2017 00:22:58 +0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 06 Nov 2017 23:38:05 +0800 Mon, 06 Nov 2017 00:22:58 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 06 Nov 2017 23:38:05 +0800 Mon, 06 Nov 2017 00:22:58 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Mon, 06 Nov 2017 23:38:05 +0800 Mon, 06 Nov 2017 00:23:08 +0800 KubeletReady kubelet is posting ready status
Addresses: 127.0.0.1,127.0.0.1,127.0.0.1
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 4
memory: 16416476Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 4
memory: 16416476Ki
pods: 110
System Info:
Machine ID: 52ac3151ed7d485d98fa44e0da0e817b
System UUID: 564D434D-F7CF-9923-4B1D-A494E3391AE1
Boot ID: 148e293c-9631-4421-b55b-115ba72bc1d3
Kernel Version: 3.10.0-693.5.2.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.12.6
Kubelet Version: v1.5.2
Kube-Proxy Version: v1.5.2
ExternalID: 127.0.0.1
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
0 (0%) 0 (0%) 0 (0%) 0 (0%)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13m 13m 1 {kubelet 127.0.0.1} Normal Starting Starting kubelet.
13m 13m 1 {kubelet 127.0.0.1} Warning ImageGCFailed unable to find data for container /
13m 13m 6 {kubelet 127.0.0.1} Normal NodeHasSufficientDisk Node 127.0.0.1 status is now: NodeHasSufficientDisk
13m 13m 6 {kubelet 127.0.0.1} Normal NodeHasSufficientMemory Node 127.0.0.1 status is now: NodeHasSufficientMemory
13m 13m 6 {kubelet 127.0.0.1} Normal NodeHasNoDiskPressure Node 127.0.0.1 status is now: NodeHasNoDiskPressure
13m 13m 1 {kubelet 127.0.0.1} Warning Rebooted Node 127.0.0.1 has been rebooted, boot id: 148e293c-9631-4421-b55b-115ba72bc1d3
I didn't perform any configuration for the components. Can anyone help guide me on why the pods and containers didn't get created? Where can I see the logs?
To run this MySQL you need about 173MB of memory. You defined 128Mi as the upper limit; that's why it's not starting. You can use the one below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "200Mi"
            cpu: "500m"
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
htop output (screenshot omitted)

Unable to mount MySQL data volume to Kubernetes Minikube pod

I'm trying to set up a dev environment with Kubernetes via Minikube. I successfully mounted the same volume to the same data dir on the same image with Docker for Mac, but I'm having trouble with Minikube.
Relevant files and logs:
db-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    name: msyql
  name: db
  namespace: default
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: volumesnew
  volumes:
  - name: volumesnew
    hostPath:
      path: "/Users/eric/Volumes/mysql"
kubectl get pods:
NAME READY STATUS RESTARTS AGE
db 0/1 Error 1 3s
kubectl logs db:
2016-08-29 20:05:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-08-29 20:05:55 0 [Note] mysqld (mysqld 5.6.32) starting as process 1 ...
2016-08-29 20:05:55 1 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
kubectl describe pods db:
Name: db
Namespace: default
Node: minikubevm/10.0.2.15
Start Time: Wed, 31 Aug 2016 07:48:39 -0700
Labels: name=msyql
Status: Running
IP: 172.17.0.3
Controllers: <none>
Containers:
mysqldev:
Container ID: docker://af0937edcd9aa00ebc278bc8be00bc37d60cbaa403c69f71bc1b378182569d3d
Image: mysql/mysql-server:5.6.32
Image ID: docker://sha256:0fb418d5a10c9632b7ace0f6e7f00ec2b8eb58a451ee77377954fedf6344abc5
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 31 Aug 2016 07:48:42 -0700
Finished: Wed, 31 Aug 2016 07:48:43 -0700
Ready: False
Restart Count: 1
Environment Variables:
MYSQL_ROOT_PASSWORD: test
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volumesnew:
Type: HostPath (bare host directory volume)
Path: /Users/eric/Volumes/newmysql
default-token-il74e:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-il74e
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {default-scheduler } Normal Scheduled Successfully assigned db to minikubevm
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id 568f9112dce0
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id 568f9112dce0
6s 4s 2 {kubelet minikubevm} spec.containers{mysqldev} Normal Pulled Container image "mysql/mysql-server:5.6.32" already present on machine
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id af0937edcd9a
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id af0937edcd9a
3s 2s 2 {kubelet minikubevm} spec.containers{mysqldev} Warning BackOff Back-off restarting failed docker container
3s 2s 2 {kubelet minikubevm} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysqldev" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqldev pod=db_default(012d5178-6f8a-11e6-97e8-c2daf2e2520c)"
I was able to mount the data directory from the host to the container in a test directory, but I'm having trouble mounting to the MySQL data directory. Also, I tried to mount an empty directory to the container's data dir with the appropriate MySQL environment variables set, which in Docker for Mac allowed me to perform a SQL dump in the new dir, but I'm seeing the same errors in Minikube.
Any thoughts on what might be the cause? Or, if I'm not setting up my dev environment the preferred Kubernetes/Minikube way, please share your thoughts.
I was able to resolve this with the following:
echo "/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=root:wheel" | sudo tee -a /etc/exports
sudo nfsd restart
minikube start
minikube ssh -- sudo umount /Users
minikube ssh -- sudo /usr/local/etc/init.d/nfs-client start
minikube ssh -- sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
I am running Minikube in VirtualBox. I don't know if this will work with other VM drivers - xhyve, etc.
Reference: https://github.com/kubernetes/minikube/issues/2
EDIT: I should mention that this works for minikube v0.14.0.
1. Mount the folder you want to share on your host, in minikube:
minikube mount ./path/to/mySharedData:/mnt1/shared1
Don't close the terminal. That process needs to be running all the time for the folder to be accessible.
2. Use that folder with hostPath:
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: my-volume
  volumes:
  - name: my-volume
    hostPath:
      path: "/mnt1/shared1"
3. Write access issue?
In case you have a write access issue, you might want to mount the volume with:
minikube mount ./path/to/mySharedData:/mnt1/shared1 --uid 10001 --gid 10001
Here, the volume mounted in minikube will have group id and user id 10001. That is the user id of the Azure SQL Edge server inside the container.
I don't know what the user id of mysql is in your case. If you want to know, log into your container and type id; it will tell you the user id.

kubernetes replication controller

I have a simple Kubernetes cluster with a master and 3 minions. In this scenario, if I run a simple pod of an nginx or a mysql it works properly, but if I change the KIND in the YAML file and try to run a replicated service, the pods start but I can't access the service.
This is my YAML file for nginx with 3 replicas:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
This is the Service YAML config file:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    name: nginx
I run it with:
# kubectl create -f nginx-rc.yaml
# kubectl create -f nginx-rc-service.yaml
If I run:
# kubectl get pod,svc,rc -o wide
I see:
NAME READY STATUS RESTARTS AGE NODE
nginx-kgq1s 1/1 Running 0 1m node01
nginx-pomx3 1/1 Running 0 1m node02
nginx-xi54i 1/1 Running 0 1m node03
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.254.0.1 443/TCP
nginx name=nginx name=nginx 10.254.47.150 80/TCP
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
nginx nginx nginx app=nginx 3
I can see the description for the pod:
Name: nginx-kgq1s
Namespace: default
Image(s): nginx
Node: node01/node01
Labels: app=nginx
Status: Running
Reason:
Message:
IP: 172.17.52.3
Replication Controllers: nginx (3/3 replicas created)
Containers:
nginx:
Image: nginx
State: Running
Started: Thu, 11 Feb 2016 16:28:08 +0100
Ready: True
Restart Count: 0
Conditions:
Type Status
Ready True
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {scheduler } scheduled Successfully assigned nginx-kgq1s to node01
Thu, 11 Feb 2016 16:27:57 +0100 Thu, 11 Feb 2016 16:27:57 +0100 1 {kubelet node01} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Thu, 11 Feb 2016 16:28:02 +0100 Thu, 11 Feb 2016 16:28:02 +0100 1 {kubelet node01} implicitly required container POD created Created with docker id bed30a90c6eb
Thu, 11 Feb 2016 16:28:02 +0100 Thu, 11 Feb 2016 16:28:02 +0100 1 {kubelet node01} implicitly required container POD started Started with docker id bed30a90c6eb
Thu, 11 Feb 2016 16:28:07 +0100 Thu, 11 Feb 2016 16:28:07 +0100 1 {kubelet node01} spec.containers{nginx} created Created with docker id 0a5c69cd0481
Thu, 11 Feb 2016 16:28:08 +0100 Thu, 11 Feb 2016 16:28:08 +0100 1 {kubelet node01} spec.containers{nginx} started Started with docker id 0a5c69cd0481
This is what I see if I get the description for the RC:
Name: nginx
Namespace: default
Image(s): nginx
Selector: app=nginx
Labels: app=nginx
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-kgq1s
Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-pomx3
Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-xi54i
And this is what I see if I get the description of the Service:
Name: nginx
Namespace: default
Labels: name=nginx
Selector: name=nginx
Type: ClusterIP
IP: 10.254.47.150
Port: <unnamed> 80/TCP
Endpoints: <none>
Session Affinity: None
No events.
As I can see, the problem may be that I don't have an ENDPOINT, but I have no idea how to solve it.
It looks to me like the selector for your service is wrong. It's looking for a label of name: nginx, but your pods actually have app: nginx.
Try changing your service file to:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
... or change your replication controller template to use name: nginx instead of app: nginx as the label. Basically, the labels have to match so that the service knows how to present a unified facade over your pods.
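For completeness, a sketch of that second option: since an RC's own selector must stay in sync with its template labels, both would change together so they match the existing Service selector name: nginx (only the label key differs from the original file):
spec:
  replicas: 3
  selector:
    name: nginx
  template:
    metadata:
      name: nginx
      labels:
        name: nginx   # now matches the Service selector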
To build on @jonskeet's answer, the reason the labels have to match is that the Pods can run on any node in your k8s cluster and Services need a way to locate them.
Therefore, the Service you're putting in front of the Pods needs to be able to filter through the cluster (particularly the set of Pods in the namespace it's in), and it leverages these matching key/value pairs in both selectors to do so.
