Kubernetes: how to use gitRepo volume?

Can someone give an example of how to use the gitRepo type of volume in Kubernetes?
The docs say it's a plugin, but I'm not sure what that means. I could not find an example anywhere and I don't know the proper syntax.
In particular, are there parameters to pull a specific branch, use credentials (username, password, or SSH key), etc.?
EDIT:
Going through the Kubernetes code, this is what I have figured out so far:
- name: data
  gitRepo:
    repository: "git repo url"
    revision: "hash of the commit to use"
But I can't seem to make it work, and I'm not sure how to troubleshoot the issue.

This is a sample application I used:
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "tess.io",
    "labels": {
      "name": "tess.io"
    }
  },
  "spec": {
    "replicas": 3,
    "selector": {
      "name": "tess.io"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "tess.io"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "tess/tessio:0.0.3",
            "name": "tessio",
            "ports": [
              {
                "containerPort": 80,
                "protocol": "TCP"
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/tess",
                "name": "tess"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "tess",
            "gitRepo": {
              "repository": "https://<TOKEN>:x-oauth-basic@github.com/tess/tess.io"
            }
          }
        ]
      }
    }
  }
}
And you can use the revision too.
PS: The repo above does not exist anymore.

UPDATE:
gitRepo is now deprecated
https://github.com/kubernetes/kubernetes/issues/60999
ORIGINAL ANSWER:
Going through the code, this is what I figured out:
- name: data
  gitRepo:
    repository: "git repo url"
    revision: "hash of the commit to use"
After fixing typos in my mountPath, it works fine.
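Since gitRepo is deprecated, the replacement suggested in the issue linked above is to do the clone yourself in an init container and share the result with the app container through an emptyDir volume. A minimal sketch, assuming a public repository (image names, repo URL, and paths are placeholders, not from the original post):
apiVersion: v1
kind: Pod
metadata:
  name: git-clone-example
spec:
  volumes:
    - name: repo
      emptyDir: {}
  initContainers:
    - name: git-clone
      image: alpine/git          # any image with git installed works
      args: ["clone", "--single-branch", "--branch", "master",
             "https://github.com/example/repo.git", "/repo"]
      volumeMounts:
        - name: repo
          mountPath: /repo
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /repo && sleep 3600"]
      volumeMounts:
        - name: repo
          mountPath: /repo
For the branch and credentials part of the question, the same pattern lets you pass --branch and mount an SSH key or token from a Secret into the init container instead of baking it into the URL.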

Related

Configuring an EC2 (not Fargate) instance via task_definition.json

Currently we have a working task_definition file for an AWS Fargate instance.
We want to migrate from Fargate to a specific AWS EC2 instance type, e.g. z1d.
From the AWS documentation I found that the ecs.instance-type parameter needs to be added.
Unfortunately it does not state where it should be added in the task_definition.json file.
Currently we have something along the lines of:
{
  "family": "generic-family",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "4096",
  "memory": "8192",
  ...
  "containerDefinitions": [
    {
      "name": "generic-docker-name",
      "image": "...",
    },
  ]
}
We think it should be something like:
{
  "family": "generic-family",
  "requiresCompatibilities": ["EC2"],
  "ecs.instance-type": "Z1d",
  ...
  "containerDefinitions": [
    {
      "name": "generic-docker-name",
      "image": "...",
    },
  ]
}
Or looking at some other documentation:
{
  "family": "generic-family",
  "requiresCompatibilities": ["EC2"],
  ...
  "containerDefinitions": [
    {
      "name": "generic-docker-name",
      "image": "...",
      "Parameters": {
        "InstanceTypeParameter": {
          "Type": "String",
          "Default": "z1d.large",
          "AllowedValues": ["z1d.large"],
          "Description": "..."
        }
      }
    },
  ]
}
But that doesn't seem to work.
Does anyone know how this should be done? Or how I should read the AWS documentation for this specific topic?
Add this to your task_definition.json file at the top level.
"placementConstraints": [
  {
    "type": "memberOf",
    "expression": "attribute:ecs.instance-type == z1d.large"
  }
],
You can read more about it here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html
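Putting that together with the snippet from the question, the task definition might look roughly like this (a sketch only; the names and cpu/memory values are carried over from the question, everything else is illustrative and untested against your account):
{
  "family": "generic-family",
  "requiresCompatibilities": ["EC2"],
  "cpu": "4096",
  "memory": "8192",
  "placementConstraints": [
    {
      "type": "memberOf",
      "expression": "attribute:ecs.instance-type == z1d.large"
    }
  ],
  "containerDefinitions": [
    {
      "name": "generic-docker-name",
      "image": "..."
    }
  ]
}
Note that the instance type itself is not declared in the task definition; you choose it when you launch the cluster's container instances (for example in the Auto Scaling group or launch template), and the constraint only ensures the task is placed on instances of that type.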

How to directly mount an external NFS share/volume in Kubernetes (1.10.3)

I am using Kubernetes v1.10.3. I have one external NFS server which I am able to mount anywhere (on any physical machine). I want to mount this NFS share directly into a pod/container, but every time I try I get an error. I don't want to use privileged mode; kindly help me fix this.
ERROR: MountVolume.SetUp failed for volume "nfs" : mount failed: exit
status 32 Mounting command: systemd-run Mounting arguments:
--description=Kubernetes transient mount for /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
--scope -- mount -t nfs 10.225.241.137:/stagingfs/alt/ /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43393.scope. mount: wrong fs type,
bad option, bad superblock on 10.225.241.137:/stagingfs/alt/, missing
codepage or helper program, or other error (for several filesystems
(e.g. nfs, cifs) you might need a /sbin/mount. helper program)
In some cases useful info is found in syslog - try dmesg | tail or so.
NFS server (this manual mount works): mount -t nfs 10.X.X.137:/stagingfs/alt /alt
I added two things for the volume, but I get the error every time.
First:
"volumeMounts": [
  {
    "name": "nfs",
    "mountPath": "/alt"
  }
],
Second:
"volumes": [
  {
    "name": "nfs",
    "nfs": {
      "server": "10.X.X.137",
      "path": "/stagingfs/alt/"
    }
  }
],
-------------------- complete manifest (JSON) --------------------
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "jboss",
    "namespace": "staging",
    "selfLink": "/apis/extensions/v1beta1/namespaces/staging/deployments/jboss",
    "uid": "6a85e235-68b4-11e8-8181-00163eeb9788",
    "resourceVersion": "609891",
    "generation": 2,
    "creationTimestamp": "2018-06-05T11:34:32Z",
    "labels": {
      "k8s-app": "jboss"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "k8s-app": "jboss"
      }
    },
    "template": {
      "metadata": {
        "name": "jboss",
        "creationTimestamp": null,
        "labels": {
          "k8s-app": "jboss"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "nfs",
            "nfs": {
              "server": "10.X.X.137",
              "path": "/stagingfs/alt/"
            }
          }
        ],
        "containers": [
          {
            "name": "jboss",
            "image": "my.abc.com/alt:7.1_1.1",
            "resources": {},
            "volumeMounts": [
              {
                "name": "nfs",
                "mountPath": "/alt"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
              "privileged": true
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 2,
    "replicas": 1,
    "updatedReplicas": 1,
    "readyReplicas": 1,
    "availableReplicas": 1,
    "conditions": [
      {
        "type": "Available",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:45Z",
        "lastTransitionTime": "2018-06-05T11:35:45Z",
        "reason": "MinimumReplicasAvailable",
        "message": "Deployment has minimum availability."
      },
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:46Z",
        "lastTransitionTime": "2018-06-05T11:34:32Z",
        "reason": "NewReplicaSetAvailable",
        "message": "ReplicaSet \"jboss-8674444985\" has successfully progressed."
      }
    ]
  }
}
Regards
Anupam Narayan
As stated in the error log:
for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount. helper program
According to this question, you might be missing the nfs-common package, which you can install using sudo apt install nfs-common.
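For reference, a rough sketch of what to run on the worker nodes (package names differ per distribution, and the package must be installed on every node where the pod can be scheduled):
# Debian/Ubuntu nodes
sudo apt-get update && sudo apt-get install -y nfs-common

# RHEL/CentOS nodes
sudo yum install -y nfs-utils

# sanity check from a node: the same mount the kubelet will attempt
sudo mount -t nfs 10.X.X.137:/stagingfs/alt /mnt && sudo umount /mnt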

Kubernetes: creating a Service and Deployment with an RBD volume succeeds in the default namespace but fails in a non-default namespace

I create a Service and a Deployment that mounts an RBD volume. The exact same configuration succeeds in the default namespace but fails under a non-default namespace. Why?
The config (JSON):
{
  "kind": "Deployment",
  "spec": {
    "replicas": "1",
    "template": {
      "spec": {
        "volumes": [
          {
            "rbd": {
              "secretRef": {
                "name": "ceph-secret"
              },
              "image": "zhaosiyi.24",
              "fsType": "ext4",
              "readOnly": false,
              "user": "admin",
              "monitors": [
                "xxx.xxx.xxx.6:6789",
                "xxx.xxx.xxx.7:6789",
                "xxx.xxx.xxx.8:6789"
              ],
              "pool": "rrkd.rbd"
            },
            "name": "aa"
          }
        ],
        "imagePullSecrets": [
          {
            "name": "registrykey-m3-1"
          }
        ],
        "containers": [
          {
            "image": "ccr.ccs.tencentyun.com/rrkd/rrkd-nginx:1.0",
            "volumeMounts": [
              {
                "readOnly": false,
                "mountPath": "/mnt",
                "name": "aa"
              }
            ],
            "name": "aa",
            "ports": [
              {
                "protocol": "TCP",
                "containerPort": 80
              }
            ]
          }
        ]
      },
      "metadata": {
        "labels": {
          "name": "aa"
        }
      }
    },
    "selector": {
      "matchLabels": {
        "name": "aa"
      }
    }
  },
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "labels": {
      "name": "aa"
    },
    "name": "aa"
  }
}
{
  "kind": "Service",
  "spec": {
    "type": "NodePort",
    "ports": [
      {
        "targetPort": 80,
        "protocol": "TCP",
        "port": 80
      }
    ],
    "selector": {
      "name": "aa"
    }
  },
  "apiVersion": "v1",
  "metadata": {
    "labels": {
      "name": "aa"
    },
    "name": "aa"
  }
}
The strangest thing is that kubectl describe pod shows success, without any error information, but kubectl get pod shows the pod is not actually running, as below:
[screenshots of the kubectl output omitted]
The problem has been solved. The non-default namespace fails because the new namespace does not have the Secrets, so authentication fails when pulling the image. You need to manually create the image pull Secret in that namespace. If you use RBD or PVC mode, you also need to manually create the Secret for mounting the storage in that namespace.
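For reference, a sketch of creating both Secrets in the non-default namespace with kubectl; the Secret names come from the manifest above, but the registry credentials, Ceph key, and namespace are placeholders:
# image pull secret referenced by imagePullSecrets (name: registrykey-m3-1)
kubectl create secret docker-registry registrykey-m3-1 \
  --docker-server=ccr.ccs.tencentyun.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --namespace=<your-namespace>

# RBD secret referenced by the volume's secretRef (name: ceph-secret)
kubectl create secret generic ceph-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key='<ceph auth key for client.admin>' \
  --namespace=<your-namespace>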
Can you show more detail from the describe of the pod?
I think it will show the failure log.
Usually it means you don't have the secret, or the secret is not set up correctly,
and the Ceph configuration may be wrong too.

How to download Google Compute Engine disk content?

I have linked a Persistent Volume to my Kubernetes Neo4j Replication Controller to store the DB data. Now I would like to download that data locally to run the production DB on my system. I can't find a way to download the disk content. Can someone point me in the right direction?
Updates (Persistent Volume Creation with Kubernetes):
persistent-volume-db.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv-db"
  },
  "spec": {
    "capacity": {
      "storage": "500Gi"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "gcePersistentDisk": {
      "pdName": "tuwa-db-data-disk",
      "fsType": "ext4"
    }
  }
}
persistent-volume-claim-db.json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "pvc"
  },
  "spec": {
    "accessModes": [
      "ReadWriteMany"
    ],
    "resources": {
      "requests": {
        "storage": "500Gi"
      }
    }
  }
}
And then the usage:
neo4j-controller.json
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "neo4j-controller",
    "labels": {
      "name": "neo4j"
    }
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "name": "neo4j"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "neo4j",
            "image": "neo4j/neo4j",
            "ports": [
              {
                "name": "neo4j-server",
                "containerPort": 7474
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/data/databases",
                "name": "pv-db"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "pv-db",
            "persistentVolumeClaim": {
              "claimName": "pvc-db"
            }
          }
        ]
      }
    }
  }
}
GCE's admin panel doesn't have a "download" button for persistent disks, but gcloud makes it easy to copy files from an instance to your local machine:
gcloud compute copy-files example-instance:~/REMOTE-DIR ~/LOCAL-DIR --zone us-central1-a
This will copy ~/REMOTE-DIR from a remote instance into ~/LOCAL-DIR on your machine. Just replace the directory names and example-instance with your instance's name, and adjust your zone if necessary. More info here in the docs.
gcloud compute copy-files has been deprecated.
Please use gcloud compute scp instead. Note that gcloud compute scp does not have recursive copy on by default. To turn on recursion, use the --recurse flag.
gcloud compute scp --recurse example-instance:~/instance-1_path ~/local_path --zone=us-central1-a
This will copy ~/instance-1_path from a remote compute instance into ~/local_path on your personal machine. Just replace the directory names and example-instance with your instance's name, and adjust your zone if necessary.
Complete documentation of this SDK can be found here

How to use volumes-from in marathon

I've been working with Mesos + Marathon + Docker for quite a while, but now I'm stuck. At the moment I'm trying to deal with persistent containers, and I've tried to play around with the "volumes-from" parameter, but I can't make it work because I have no clue how to figure out the name of the data container to put as the value in the JSON. I tried it with the example from here:
{
  "id": "privileged-job",
  "container": {
    "docker": {
      "image": "mesosphere/inky",
      "privileged": true,
      "parameters": [
        { "key": "hostname", "value": "a.corp.org" },
        { "key": "volumes-from", "value": "another-container" },
        { "key": "lxc-conf", "value": "..." }
      ]
    },
    "type": "DOCKER",
    "volumes": []
  },
  "args": ["hello"],
  "cpus": 0.2,
  "mem": 32.0,
  "instances": 1
}
I would really appreciate any kind of help :-)
From what I know:
docker --volumes-from takes the ID or the name of a container.
Since your data container is launched with Marathon too, it gets an ID (not sure how to get this ID from Marathon) and a name of the form mesos-0fb2e432-7330-4bfe-bbce-4f77cf382bb4, which is related neither to the task ID in Mesos nor to the Docker ID.
The solution would be to write something like this for your web-ubuntu application:
"parameters": [
  { "key": "volumes-from", "value": "mesos-0fb2e432-7330-4bfe-bbce-4f77cf382bb4" }
]
Since this Docker ID is unknown to Marathon, it is not practical to use data containers that are started with Marathon.
You could instead start a data container directly with Docker (without using Marathon), as in the rough sketch below, and use it as before, but since you don't know in advance where web-ubuntu will be scheduled (unless you add a constraint to force it), this is not really practical either.
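If you do try that route, this is roughly what creating the data container by hand on the agent that will run the job could look like (the container name and image here are just examples, not from the original posts):
# data-only container that owns the /data volume; it never needs to be started
docker create --name databox -v /data mesosphere/inky data-only

# and in the Marathon app definition:
#   { "key": "volumes-from", "value": "databox" }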
{
  "id": "data-container",
  "container": {
    "docker": {
      "image": "mesosphere/inky"
    },
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/data",
        "hostPath": "/var/data/a",
        "mode": "RW"
      }
    ]
  },
  "args": ["data-only"],
  "cpus": 0.2,
  "mem": 32.0,
  "instances": 1
}
{
  "id": "privileged-job",
  "container": {
    "docker": {
      "image": "mesosphere/inky",
      "privileged": true,
      "parameters": [
        { "key": "hostname", "value": "a.corp.org" },
        { "key": "volumes-from", "value": "data-container" },
        { "key": "lxc-conf", "value": "..." }
      ]
    },
    "type": "DOCKER",
    "volumes": []
  },
  "args": ["hello"],
  "cpus": 0.2,
  "mem": 32.0,
  "instances": 1
}
Something like that maybe?
Mesos supports passing the parameters of a volume plugin using "key" & "value". But the issue is how to pass the volume name: Mesos expects it to be either an absolute path, or, if an absolute path is not passed, it will merge the name provided with the slave's container sandbox folder. They do that primarily to support checkpointing, in case the slave goes down accidentally.
The only option, until the above gets enhanced, is to use another key/value pair parameter. For example, in the above case:
{ "key": "volumes-from", "value": "databox" },
{ "key": "volume", "value": "databox_volume" }
I have tested the above with a plugin and it works.
Another approach is to write a custom Mesos framework capable of running the Docker command you want. In order to know which offers to accept and where to place each task, you can use Marathon's information from /v2/apps (under the tasks key).
A good starting point for writing a new Mesos framework is: https://github.com/mesosphere/RENDLER
