Cassandra fails to start on Docker Toolbox external volume - docker

The issue is related to running "library/Cassandra" version 3.11.2 on Docker Toolbox (docker-machine version 0.11.0) on Windows 7.
These are versions that I have to use due to constraints at my place of work, and I can confirm that I have had no problems doing exactly the same thing with Docker on Mac OSX at home.
Cassandra works fine if I don't map the data directory to an external volume in the Compose file, but this has the downside of causing the VirtualBox VM to fill up on a regular basis. That's fine for development, but a pain for our other systems such as Test, and it won't be of any use in Production.
I am aware of the constraints imposed by Docker Toolbox for Windows, especially around the use of C:\Users as mount points.
My docker-compose file looks as follows:
version: '3'
services:
  cassandra:
    privileged: true
    image: cassandra
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
    volumes:
      - C:\Users\davis_s\docker-volumes\cassandra:/var/lib/cassandra
    container_name: cassandra
    environment:
      - CASSANDRA_CLUSTER_NAME=mycluster
      - CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch
      - CASSANDRA_DC=datacenter1
    networks:
      - smartdeploy_evo
When I map the data volume for Cassandra to a folder outside of the VM, it stops when it gets to the following line:
cassandra | INFO [main] 2018-07-27 10:49:00,044 ColumnFamilyStore.java:411 - Initializing system.IndexInfo
It does manage to create the following folders under this location (which was totally empty prior to startup):
commitlog
data
hints
saved_caches
with other subdirectories under these ones, such as data/system/Index-info.. etc.
I have left it over the weekend and it never gets past this line.
The output for "docker inspect" related to the mount point looks as follows:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/c/Users/davis_s/docker-volumes/cassandra",
        "Destination": "/var/lib/cassandra",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
I have tried the following to try and overcome this:
Given full (recursive) Windows permissions to "Everyone" on C:\Users\davis_s
Mapped a different Shared Folder in VirtualBox, again with full permissions for "Everyone"
Tried running the Cassandra container with "Privileged: true"
I've seen a few other posts on SO related to Cassandra stopping at a similar point, but these look to have been solved by clearing the folder and restarting, which isn't relevant in this case (as it started empty anyway!).
Thanks in advance...
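One workaround worth noting (not from the original post) is to sidestep the C:\Users bind mount entirely and use a Docker named volume, which stores the data inside the VM's /var/lib/docker rather than on the VirtualBox shared folder. A minimal sketch, assuming the rest of the Compose file stays as above:

```yaml
# Sketch only: replaces the C:\Users bind mount with a named volume.
# The data then lives inside the boot2docker VM (/var/lib/docker/volumes),
# avoiding the VirtualBox shared-folder layer.
version: '3'
services:
  cassandra:
    image: cassandra
    volumes:
      - cassandra_data:/var/lib/cassandra

volumes:
  cassandra_data:
```

This trades away the original goal of keeping data on the Windows host: the named volume still consumes VM disk, so it addresses the startup hang rather than the disk-usage concern.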

Related

Docker-compose, remote context, path relative to local machine instead of remote

I have a very simple docker-compose.yml:
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ./config:/root/config
I'm using a remote staging server accessed via ssh:
docker context create staging --docker "host=ssh://ubuntu@staging.example.com"
docker context use staging
However, I see unexpected results with my volume after I docker-compose up:
docker-compose --context staging up -d
docker inspect containername
...
"Mounts": [
    {
        "Type": "bind",
        "Source": "/Users/crummy/code/.../config",
        "Destination": "/root/config",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
...
It seems the expansion of ./config to a full path happens on the machine docker-compose is running on, not the machine Docker is running on.
I can fix this problem by hardcoding the entire path: /home/ubuntu/config:/root/config. But this makes my docker-compose file a little less flexible than it could be. Is there a way to get the dot expansion to occur on the remote machine?
No, the docs say that:
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
I believe that happens for two reasons:
There's no easy and objective way for docker-compose to find out how to expand . in this context, as there's no way to know what . would mean for the ssh client (home? the same folder?).
Even though the docker CLI is using a different context, the expansion is done by the docker-compose tool, which is unaware of the context switch.
Even using environment variables might pose a problem, since the variable expansion would also happen on the machine where you run the docker-compose command.
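Given that both . and ${VAR} expand on the machine where docker-compose runs, the only reliable form today is the one the asker already found: an absolute path that is valid on the remote host. A sketch, assuming the config directory lives at /home/ubuntu/config on the staging server:

```yaml
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      # Absolute path as seen by the remote Docker host,
      # not by the machine running docker-compose:
      - /home/ubuntu/config:/root/config
```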

Force creation of bind mount in docker-compose

I am trying to make a neo4j container use host system directories for data and logs using docker-compose. My compose file looks like this:
neo4j:
  image: neo4j:3.5.6
  ports:
    - "127.0.0.1:7474:7474"
    - "127.0.0.1:7473:7473"
    - "127.0.0.1:7687:7687"
  environment:
    NEO4J_AUTH: "none"
  volumes:
    - "~/neo4j/data:/data"
    - "~/neo4j/logs:/logs"
However, it only works for the logs directory; for the data directory, the container keeps its own volume. The Binds section of docker inspect looks like this:
"Binds": [
    "/home/rbusche/neo4j/logs:/logs:rw",
    "6f989b981c12a252776404343044b6678e0fac48f927e80964bcef409ab53eef:/data:rw"
],
Peculiarly enough, it works when I use docker run and specify the volume there. The neo4j Dockerfile declares both data and logs as container volumes. Is there any way to force docker-compose to override those?
After removing the volume 6f989b981c12a252776404343044b6678e0fac48f927e80964bcef409ab53eef and the container associated with it, it works as expected. It seems the container was clinging to a volume it had created on a previous start.

Hyperledger Fabric BYFN - Unable to find directory listed in docker-compose-base.yaml

I am looking at docker-compose-base.yaml line 27:
volumes:
  - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
  - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
  - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
  - orderer.example.com:/var/hyperledger/production/orderer
I can find the 3 directories below on my filesystem:
../channel-artifacts/genesis.block
../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp
../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/
but I cannot find a directory named orderer.example.com. I think this is not meant to be a directory, but is related to the
container_name: orderer.example.com
in some way. Could anyone explain the meaning of the last mapping:
- orderer.example.com:/var/hyperledger/production/orderer
it does not look like a local <-> docker directory mapping. Then what is it?
Before starting the container, it is normal that the orderer.example.com folder does not exist; it will be created when the container starts.
Mounting the directory /var/hyperledger/production/orderer between the host and the container can be useful if your orderer runs in solo mode.
Indeed, if your container crashes (for example, if your server restarts), you keep track of the blocks that the orderer has built. This can be used to restart the blockchain network more easily (or to move the orderer to another server).
TL;DR: The first three
- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
are examples of bind mounts. The last one
- orderer.example.com:/var/hyperledger/production/orderer
is a volume. It's confusing because bind mounts can be created using the volume syntax and hence also get referred to as volumes in the documentation.
This is not related to
container_name: orderer.example.com
docker-compose-base.yaml is a base file that is used e.g., by docker-compose-e2e-template.yaml. In that file one can see volumes being defined:
volumes:
  orderer.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer0.org2.example.com:
  peer1.org2.example.com:
These volumes behave the same way as if one had created them using the docker volume create command. See https://docs.docker.com/engine/reference/commandline/volume_create/ to understand what that command does. It is a way to create persistent storage that 1. does not get deleted when the Docker containers stop and exit and 2. can be used to share data amongst containers.
To see a list of all volumes created by Docker on the machine, run:
docker volume ls
To inspect a volume, run (to give an example):
$ docker volume inspect net_orderer.example.com
[
{
"CreatedAt": "2018-11-06T22:10:42Z",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/net_orderer.example.com/_data",
"Name": "net_orderer.example.com",
"Options": null,
"Scope": "local"
}
]
re: the original question:
Unable to find directory listed in docker-compose-base.yaml
You will not be able to find this directory. E.g., if you try listing the mountpoint above:
$ ls /var/lib/docker/volumes/net_orderer.example.com/_data
ls: /var/lib/docker/volumes/net_orderer.example.com/_data: No such file or directory
Docker volumes are not created as regular directories on the host. The steps to "get" to a volume are in fact quite complex. See https://forums.docker.com/t/host-path-of-volume/12277/9 for details.

Not able to run service using docker stack in docker swarm

Here is the docker-compose file that I once used to successfully run a Docker container using docker-compose.
Now I am trying to deploy it using docker stack. My manager and worker nodes are ready.
docker-compose.yml
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
Command used:
docker stack deploy -c docker-compose.yml monitoring
I am trying to run it on the manager node. Why is the service not running? Checking whether the service is running with docker service ls shows 0/1, as you can see in the figure below.
When trying to check the logs with docker service logs <servicename>, nothing gets loaded.
Where exactly am I missing things? My ultimate goal is to run a full Docker monitoring stack with cAdvisor, node-exporter, and everything else required. I tried this: https://github.com/stefanprodan/swarmprom/blob/master/docker-compose.yml
I have figured out that the problem is with the replicated mode; it runs well in global mode, though.
What is wrong with the replicated mode here? I don't think I have a syntax error. How can I make it run in replicated mode, and on the manager node only?
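For reference, the global-mode variant that does run would look something like this (a sketch of just the deploy block; global mode ignores replicas and runs one task per node matching the constraints):

```yaml
deploy:
  mode: global
  placement:
    constraints:
      - node.role == manager
```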
If everything works well with docker-compose up but you find issues with docker stack deploy, this is how I fixed it.
I am using Docker on a Windows machine, and Docker uses a VirtualBox VM. You will need to forward ports to see the containers on the host machine.
The problem is related to the bind mounts for the volumes.
See the Source path in the two scenarios below when I run docker inspect:
Docker-compose up:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/c/Users/Bardav/Documents/Large-Scale-Software Testing/PromptDocker/PromptDocker/app_users",
        "Destination": "/app",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
docker stack deploy -c docker-compose.yml store:
"Mounts": [
    {
        "Type": "bind",
        "Source": "C:\\Users\\Bardav\\Documents\\Large-Scale-Software Testing\\PromptDocker\\PromptDocker\\app_viewer",
        "Target": "/app/"
    }
],
SOLUTION:
Update the path in the docker-compose.yml and it will work. For instance;
RELATIVE PATH NOT WORKING
reg_users:
  image: prompt_user_app
  build: ./app_users/
  volumes:
    - ./app_users/:/app/
ABSOLUTE PATH WORKS FINE
reg_users:
  image: prompt_user_app
  build: ./app_users/
  volumes:
    - "/c/Users/Bardav/Documents/Large-Scale-Software Testing/PromptDocker/PromptDocker/app_users:/app/"
This might be related to the placement constraint on the Swarm master/manager. I faced a similar problem while trying to schedule the visualizer or Portainer container on the master node using Azure Container Service. You can check the status of the master: if it is in a Paused state, you can make it active.
I had shared my experience at https://www.handsonarchitect.com/2017/11/visualize-docker-swarm-in-azure.html

Empty directory when mounting volume using windows for docker

I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on Docker for Windows (Win 10 Pro). I am using the latest Docker (1.13.1), and the same version in the Hyper-V machine. I have tried switching to a local account, shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume run command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host; however, I can't find anywhere that explicitly says that volume mounting will not work with Hyper-V.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring the services up with the volume specified in the compose file, the directory is emptied; however, if I omit this volume mount, I can see the files I copied into the container from the Dockerfile.
Running verbose when starting my services, I can see a volumes path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm, so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in "Docker for Windows" -> Settings -> Shared Drives, apply, and then check it again and apply.
You should use /c/Users, with a capital "C".
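Applied to the compose file in the question, that answer would change the volume line to (assuming the same project path):

```yaml
volumes:
  - /c/Users/deep/projects/chat/app:/usr/src/app:rw
```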
