Docker Compose - Mount 2 volumes with the same path to a container

Here is my docker-compose config file:
{
  "version": "2",
  "services": {
    "data1": {
      "image": "myimage:v1",
      "volumes": ["/agents"]
    },
    "data2": {
      "image": "myimage:v2",
      "volumes": ["/agents"]
    },
    "app": {
      "image": "app:latest",
      "volumes_from": ["data1", "data2"]
    }
  }
}
Since the volume from both the "data1" and "data2" services is at the path "/agents", my "app" container will end up with only one of them. Is there a way to specify a different path for each volume that I mount?
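One commonly suggested workaround (not from the original thread, so treat this as a hedged sketch) is to drop volumes_from and use named volumes, which let each service mount a volume at an explicit, distinct target path. The volume names agents_v1/agents_v2 and the /agents/v1 and /agents/v2 mount paths below are illustrative assumptions:

version: "2"
volumes:
  agents_v1:   # illustrative name; populated from myimage:v1's /agents on first use
  agents_v2:   # illustrative name; populated from myimage:v2's /agents on first use
services:
  data1:
    image: myimage:v1
    volumes:
      - agents_v1:/agents
  data2:
    image: myimage:v2
    volumes:
      - agents_v2:/agents
  app:
    image: app:latest
    volumes:
      - agents_v1:/agents/v1   # assumed target path
      - agents_v2:/agents/v2   # assumed target path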

Related

Does ECS task definition support volume mapping syntax?

The docker-compose spec supports volume mapping syntax under services, for example:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_VERSION: ${DOCKER_VERSION}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
Following "AWSTemplateFormatVersion": "2010-09-09", the corresponding ECS task definition has volume syntax un-readable(with MountPoints and Volumes), as shown below:
"EcsTaskDefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"ContainerDefinitions": [
{
"Name": "jenkins",
"Image": "xyzaccount/jenkins:ecs",
"Memory": 995,
"PortMappings": [ { "ContainerPort": 8080, "HostPort": 8080 } ],
"MountPoints": [
{
"SourceVolume": "docker",
"ContainerPath": "/var/run/docker.sock"
},
{
"SourceVolume": "jenkins_home",
"ContainerPath": "/var/jenkins_home"
}
]
}
],
"Volumes": [
{
"Name": "jenkins_home",
"Host": { "SourcePath": "/ecs/jenkins_home" }
},
{
"Name": "docker",
"Host": { "SourcePath": "/var/run/docker.sock" }
}
]
}
}
Does the CloudFormation ECS task definition syntax (now) support volume mapping similar to docker-compose?
Yes, ECS supports mounting the Docker socket, but the syntax is a bit different: add a DOCKER_HOST environment variable to the task definition, and the source path should start with //.
"volumes": [
{
"name": "docker",
"host": {
"sourcePath": "//var/run/docker.sock"
}
}
]
The // prefix worked in the case of AWS ECS. Also, you need to add the DOCKER_HOST environment variable to your task definition:
"environment": [
{
"name": "DOCKER_HOST",
"value": "unix:///var/run/docker.sock"
}
]
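For completeness, here is a hedged sketch of how the volume, mount point, and environment variable fit together in a single container definition. The name and image are taken from the question's template; any surrounding properties (memory, port mappings, etc.) are omitted and would need to be filled in:

"containerDefinitions": [
  {
    "name": "jenkins",
    "image": "xyzaccount/jenkins:ecs",
    "mountPoints": [
      { "sourceVolume": "docker", "containerPath": "/var/run/docker.sock" }
    ],
    "environment": [
      { "name": "DOCKER_HOST", "value": "unix:///var/run/docker.sock" }
    ]
  }
]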

Orphaned Tasks in Docker Swarm after removal of failed node

Last week I had to remove a failed node from my Docker Swarm cluster, leaving some tasks that ran on that node in the desired state "Remove".
Even after deleting the stack and recreating it with the same name, docker stack ps stackname still shows them.
Interestingly enough, after recreating the stack, the tasks are still there, but with no node assigned.
Here's what I tried so far to clean up the stack:
- Recreating the stack with the same name
- docker container prune
- docker volume prune
- docker system prune
Is there a way to remove a specific task?
Here's the output for docker inspect fkgz0oihexzs, the first task in the list:
[
  {
    "ID": "fkgz0oihexzsjqwv4ju0szorh",
    "Version": {
      "Index": 14422171
    },
    "CreatedAt": "2018-11-05T16:15:31.528933998Z",
    "UpdatedAt": "2018-11-05T16:27:07.422368364Z",
    "Labels": {},
    "Spec": {
      "ContainerSpec": {
        "Image": "redacted",
        "Labels": {
          "com.docker.stack.namespace": "redacted"
        },
        "Env": [
          "redacted"
        ],
        "Privileges": {
          "CredentialSpec": null,
          "SELinuxContext": null
        },
        "Isolation": "default"
      },
      "Resources": {},
      "Placement": {
        "Platforms": [
          {
            "Architecture": "amd64",
            "OS": "linux"
          }
        ]
      },
      "Networks": [
        {
          "Target": "3i998stqemnevzgiqw3ndik4f",
          "Aliases": [
            "redacted"
          ]
        }
      ],
      "ForceUpdate": 0
    },
    "ServiceID": "g3vk9tgfibmcigmf67ik7uhj6",
    "Slot": 1,
    "Status": {
      "Timestamp": "2018-11-05T16:15:31.528892467Z",
      "State": "new",
      "Message": "created",
      "PortStatus": {}
    },
    "DesiredState": "remove"
  }
]
I had the same problem. I resolved it by following these instructions:
docker run --rm -v /var/run/docker/swarm/control.sock:/var/run/swarmd.sock dperny/tasknuke <taskid>
Be sure to use the full long task id or it will not work (fkgz0oihexzsjqwv4ju0szorh in your case).
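If you only have the truncated IDs, one way to recover the full task IDs (my addition, not part of the original answer) is to list the stack's tasks without truncation:

docker stack ps --no-trunc stackname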

How to directly mount an external NFS share/volume in Kubernetes (1.10.3)

I am using Kubernetes v1.10.3. I have one external NFS server which I am able to mount anywhere (on any physical machine). I want to mount this NFS share directly into a pod/container, but every time I try I get an error. I don't want to use privileged mode; kindly help me fix this.
ERROR: MountVolume.SetUp failed for volume "nfs" : mount failed: exit
status 32 Mounting command: systemd-run Mounting arguments:
--description=Kubernetes transient mount for /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
--scope -- mount -t nfs 10.225.241.137:/stagingfs/alt/ /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43393.scope. mount: wrong fs type,
bad option, bad superblock on 10.225.241.137:/stagingfs/alt/, missing
codepage or helper program, or other error (for several filesystems
(e.g. nfs, cifs) you might need a /sbin/mount.&lt;type&gt; helper program)
In some cases useful info is found in syslog - try dmesg | tail or so.
Mounting the NFS server manually works: mount -t nfs 10.X.X.137:/stagingfs/alt /alt
I added the following two volume sections, but I am still getting the error every time.
First:
"volumeMounts": [
{
"name": "nfs",
"mountPath": "/alt"
}
],
Second:
"volumes": [
  {
    "name": "nfs",
    "nfs": {
      "server": "10.X.X.137",
      "path": "/stagingfs/alt/"
    }
  }
],
Complete YAML:
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "jboss",
    "namespace": "staging",
    "selfLink": "/apis/extensions/v1beta1/namespaces/staging/deployments/jboss",
    "uid": "6a85e235-68b4-11e8-8181-00163eeb9788",
    "resourceVersion": "609891",
    "generation": 2,
    "creationTimestamp": "2018-06-05T11:34:32Z",
    "labels": {
      "k8s-app": "jboss"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "k8s-app": "jboss"
      }
    },
    "template": {
      "metadata": {
        "name": "jboss",
        "creationTimestamp": null,
        "labels": {
          "k8s-app": "jboss"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "nfs",
            "nfs": {
              "server": "10.X.X.137",
              "path": "/stagingfs/alt/"
            }
          }
        ],
        "containers": [
          {
            "name": "jboss",
            "image": "my.abc.com/alt:7.1_1.1",
            "resources": {},
            "volumeMounts": [
              {
                "name": "nfs",
                "mountPath": "/alt"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
              "privileged": true
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 2,
    "replicas": 1,
    "updatedReplicas": 1,
    "readyReplicas": 1,
    "availableReplicas": 1,
    "conditions": [
      {
        "type": "Available",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:45Z",
        "lastTransitionTime": "2018-06-05T11:35:45Z",
        "reason": "MinimumReplicasAvailable",
        "message": "Deployment has minimum availability."
      },
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:46Z",
        "lastTransitionTime": "2018-06-05T11:34:32Z",
        "reason": "NewReplicaSetAvailable",
        "message": "ReplicaSet \"jboss-8674444985\" has successfully progressed."
      }
    ]
  }
}
As stated in the error log:
for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.&lt;type&gt; helper program
According to this question, you might be missing the nfs-common package, which you can install with sudo apt install nfs-common.
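In other words, the kubelet on each node shells out to the mount helper, so the fix has to be applied on the nodes themselves, not in the pod spec. A hedged sketch of the check, assuming Debian/Ubuntu nodes:

# on every node that can schedule the pod (assumes Debian/Ubuntu)
sudo apt install nfs-common
# verify the node itself can mount the share before retrying the pod
sudo mount -t nfs 10.X.X.137:/stagingfs/alt /mnt
sudo umount /mnt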

How to download Google Compute Engine disk content?

I have linked a Persistent Volume to my Kubernetes Neo4j Replication Controller to store the DB data. Now I would like to download that data locally to run the production DB on my system. I can't find a way to download the disk content. Can someone point me in the right direction?
Updates (Persistent Volume Creation with Kubernetes):
persistent-volume-db.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv-db"
  },
  "spec": {
    "capacity": {
      "storage": "500Gi"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "gcePersistentDisk": {
      "pdName": "tuwa-db-data-disk",
      "fsType": "ext4"
    }
  }
}
persistent-volume-claim-db.json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "pvc"
  },
  "spec": {
    "accessModes": [
      "ReadWriteMany"
    ],
    "resources": {
      "requests": {
        "storage": "500Gi"
      }
    }
  }
}
And then the usage:
neo4j-controller.json
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "neo4j-controller",
    "labels": {
      "name": "neo4j"
    }
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "name": "neo4j"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "neo4j",
            "image": "neo4j/neo4j",
            "ports": [
              {
                "name": "neo4j-server",
                "containerPort": 7474
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/data/databases",
                "name": "pv-db"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "pv-db",
            "persistentVolumeClaim": {
              "claimName": "pvc-db"
            }
          }
        ]
      }
    }
  }
}
GCE's admin panel doesn't have a "download" button for persistent disks, but gcloud makes it easy to copy files from an instance to your local machine:
gcloud compute copy-files example-instance:~/REMOTE-DIR ~/LOCAL-DIR --zone us-central1-a
This will copy ~/REMOTE-DIR from a remote instance into ~/LOCAL-DIR on your machine. Just replace the directory names and example-instance with your instance's name, and adjust the zone if necessary. More info in the docs.
Note: gcloud compute copy-files has since been deprecated. Please use gcloud compute scp instead. Note that gcloud compute scp does not copy recursively by default; to turn on recursion, use the --recurse flag.
gcloud compute scp --recurse example-instance:~/instance-1_path ~/local_path --zone=us-central1-a
This will copy ~/instance-1_path from a remote compute instance into ~/local_path on your personal machine. Just replace the directory names and example-instance with your instance's name, and adjust the zone if necessary.
Complete documentation of this SDK can be found here
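Applied to this question's setup, the copy might look like the sketch below. This assumes the tuwa-db-data-disk persistent disk is attached to an instance and mounted there at /mnt/db-data; both the mount point and the instance name are illustrative placeholders, not values from the thread:

# pull the Neo4j data directory from the instance that has the disk mounted
gcloud compute scp --recurse example-instance:/mnt/db-data ~/neo4j-backup --zone=us-central1-a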

Kubernetes pod not binding volumes to container

I've got the following ReplicationController JSON defined:
{
  "id": "PHPController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "php"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "PHPController",
          "volumes": [{"name": "wordpress", "path": "/mnt/nfs/wordpress_a", "hostDir": "/mnt/nfs/wordpress_a"}],
          "containers": [{
            "name": "php",
            "image": "internaluser/php53",
            "ports": [{"containerPort": 80, "hostPort": 9021}],
            "volumeMounts": [{"name": "wordpress", "mountPath": "/mnt/nfs/wordpress_a"}]
          }]
        }
      },
      "labels": {"name": "php"}
    }
  },
  "labels": {"name": "php"}
}
The container starts correctly when run with "docker run -t -i -p 0.0.0.0:9021:80 -v /mnt/nfs/wordpress_a:/mnt/nfs/wordpress_a:rw internaluser/php53".
/mnt/nfs/wordpress_a is an NFS share, mounted on all of the minions. Each minion has full RW access and I have verified that the share is present.
After creating the pod containers with the Replication Controller, I can see that the volume was never actually bound, and/or incorrectly mounted:
"Volumes": {
"/mnt/nfs/wordpress_a": "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7"
},
"VolumesRW": {
"/mnt/nfs/wordpress_a": true
}
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LxcConf": null,
"Privileged": false,
"PortBindings": {
"80/tcp": [
{
"HostIp": "",
"HostPort": "9021"
}
]
},
I find it strange that the container believes /mnt/nfs/wordpress_a is mapped to "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7".
From the kubelet log:
Desired [10.101.4.15]: [{Namespace:etcd Name:c823da9e-4437-11e4-a3b1-0050568421eb Manifest:{Version:v1beta1 ID:c823da9e-4437-11e4-a3b1-0050568421eb UUID:c823da9e-4437-11e4-a3b1-0050568421eb Volumes:[{Name:wordpress Source:}] Containers:[{Name:php Image:internaluser/php53 Command:[] WorkingDir: Ports:[{Name: HostPort:9021 ContainerPort:80 Protocol:TCP HostIP:}] Env:[{Name:SERVICE_HOST Value:10.1.1.1}] Memory:0 CPU:0 VolumeMounts:[{Name:wordpress ReadOnly:false MountPath:/mnt/nfs/wordpress_a}] LivenessProbe: Lifecycle: Privileged:false}] RestartPolicy:{Always:0xa99a20 OnFailure: Never:}}}]
Does anyone have experience with this sort of thing? I've been driving myself crazy troubleshooting this. Thanks!
Solved. The volumes syntax was incorrect.
https://github.com/GoogleCloudPlatform/kubernetes/issues/1446
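For reference, a hedged reconstruction of what the corrected v1beta1 volume stanza would look like, based on the linked issue: the kubelet log above shows "Source:" as empty, because the host directory must be nested under a "source" field rather than given as top-level "path"/"hostDir" keys. The exact field names below follow the old v1beta1 API and should be double-checked against the issue:

"volumes": [{
  "name": "wordpress",
  "source": {
    "hostDir": {
      "path": "/mnt/nfs/wordpress_a"
    }
  }
}]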
