delete volumes from images - docker

When I create a container from docker-compose with some volumes and then commit that container, the volumes declared in the docker-compose file get committed too. Is there a way to not commit the volumes into the image?
With the command below I can only add a volume, not remove one:
docker commit -c 'VOLUME /foo' container_name image_name
Thank you.

Update (April 2018): In "How can I edit an existing docker image metadata?", Guido U. Draheim proposes gdraheim/docker-copyedit, a Python script which can edit docker image metadata.
It can remove or override image metadata, including volumes.
The command would be:
./docker-copyedit.py FROM image1 INTO image2 REMOVE ALL VOLUMES
Since 2018, the same issue also includes this suggestion from Aalex Gabi:
For building a CI image with an embedded MySQL database snapshot I ended up using this solution: "Persist & share dev data in a Docker image with commit" from Steven Landow.
FROM mysql:5.7
ADD snapshots/default.sql /tmp/default.sql
# Using separate data folder outside of mysql image declared volume
# https://github.com/moby/moby/issues/3465
# https://medium.com/@stevenlandow/persist-share-dev-mysql-data-in-a-docker-image-with-commit-f9aa9910be0a
RUN mkdir /var/lib/mysql-no-volume
RUN set -exu ;\
    MYSQL_ROOT_PASSWORD=root docker-entrypoint.sh --datadir /var/lib/mysql-no-volume &\
    MYSQL_PID=$! &&\
    timeout 22 bash -c 'until printf "" 2>>/dev/null >>/dev/tcp/$0/$1; do sleep 1; done' localhost 3306 &&\
    mysql -proot -e 'create database `mydb` collate "utf8mb4_general_ci"' &&\
    mysql -proot mydb < /tmp/default.sql &&\
    kill $MYSQL_PID &&\
    tail --pid=$MYSQL_PID -f /dev/null # Use tail to wait for the PID to exit: https://unix.stackexchange.com/questions/427115/listen-for-exit-of-process-given-pid
CMD ["--datadir", "/var/lib/mysql-no-volume"]

It seems that this is currently not possible, though there are many people requesting the feature and someone might be working on it. This Github issue discusses the topic:
https://github.com/moby/moby/issues/3465
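A commonly suggested workaround is to flatten the container with export/import, which drops all image metadata, including volume declarations, but also the history, ENTRYPOINT, CMD and environment variables, so those have to be re-declared afterwards. A sketch (the image name is just an example):
docker export container_name | docker import - image_without_volumes
Note that docker import accepts -c/--change to re-apply Dockerfile instructions such as CMD or ENV during the import.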

Related

Docker volume: rename or copy operation

As per the documentation, Docker volumes are advertised this way:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker.
But if they are so good, why are there no operations to manage them, like copy or rename?
The command:
docker volume --help
gives only these options:
Usage:  docker volume COMMAND

Manage volumes

Commands:
  create      Create a volume
  inspect     Display detailed information on one or more volumes
  ls          List volumes
  prune       Remove all unused local volumes
  rm          Remove one or more volumes
The documentation states no other commands, nor any workaround for getting copy or rename functionality.
I would like to rename a currently existing volume and create another (blank) one in place of the originally named volume, then populate it with new data for a test.
After doing my test I may want (or not) to remove the newly created volume and rename the other one to its previous (original) name to restore the volume setup as it was before.
I would like to avoid creating a backup of the original volume that I want to rename. Renaming is good enough for me and much faster than creating a backup and restoring from it.
Editing the docker-compose file and changing the name of the volume there is something I would like to avoid as well.
Is there any workaround that can work for renaming of a volume?
Can low-level manual management from the shell, targeting the Docker Root Dir (/var/lib/docker) and its volumes sub-directory, be a solution, or may that approach lead to some Docker daemon data inconsistency?
Not really the answer, but I'll post this copy example because I couldn't find one before and searching for it took me to this question.
Docker suggests --volumes-from for backup purposes here.
For offline migration (stopped container) I don't see the point in using --volumes-from. So I just used a middle container with both volumes mounted and a copy command.
To finish off the migration, a new container can use the new volume.
Here's a quick test
Prepare a volume prova
docker run --name myname -d -v prova:/usr/share/nginx/html nginx:latest
docker exec myname touch /usr/share/nginx/html/added_file
docker stop myname
Verify the volume has nginx data + our file added_file
sudo ls /var/lib/docker/volumes/prova/_data
Output:
50x.html added_file index.html
Migrate the data to volume prova2
docker run --rm \
  -v prova:/original \
  -v prova2:/migration \
  ubuntu:latest \
  bash -c "cp -R /original/* /migration/"
Verify the new volume has the same data
sudo ls /var/lib/docker/volumes/prova2/_data
Output:
50x.html added_file index.html
Run a new container with the migrated volume:
docker run --name copyname -d -v prova2:/usr/share/nginx/html nginx:latest
Verify the new container sees the migrated data at the original volume mount point:
docker exec copyname ls -al /usr/share/nginx/html
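If ownership, permissions, or hidden files need to be preserved, a tar pipe between the two mounts is a common variant of the copy step above (a sketch using the same prova/prova2 volumes):
docker run --rm \
  -v prova:/original:ro \
  -v prova2:/migration \
  ubuntu:latest \
  bash -c "cd /original && tar cf - . | tar xf - -C /migration"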
For future searchers, I made a script that copies a volume, based on @Lennonry's example. Here it is: https://github.com/KOYU-Tech/docker-volume-copy
Script itself for history:
#!/bin/bash

if (( $# < 2 )); then
    echo ""
    echo "No arguments provided"
    echo "Use command example:"
    echo "./dcv.sh OLD_VOLUME_NAME NEW_VOLUME_NAME"
    echo ""
    exit 1
fi

OLD_VOLUME_NAME="$1"
NEW_VOLUME_NAME="$2"

echo "== From '$OLD_VOLUME_NAME' to '$NEW_VOLUME_NAME' =="

# returns 1 if the named volume exists, 0 otherwise
function isVolumeExists {
    local isOldExists=$(docker volume inspect "$1" 2>/dev/null | grep '"Name":')
    local isOldExists=${isOldExists#*'"Name": "'}
    local isOldExists=${isOldExists%'",'}
    local isOldExists=${isOldExists##*( )}
    if [[ "$isOldExists" == "$1" ]]; then
        return 1
    else
        return 0
    fi
}

# check if the old volume exists
isVolumeExists ${OLD_VOLUME_NAME}
if [[ "$?" -eq 0 ]]; then
    echo "Volume $OLD_VOLUME_NAME doesn't exist"
    exit 2
fi

# check if the new volume exists, create it if not
isVolumeExists ${NEW_VOLUME_NAME}
if [[ "$?" -eq 0 ]]; then
    echo "creating '$NEW_VOLUME_NAME' ..."
    docker volume create ${NEW_VOLUME_NAME} 2>/dev/null 1>/dev/null
    isVolumeExists ${NEW_VOLUME_NAME}
    if [[ "$?" -eq 0 ]]; then
        echo "Cannot create new volume"
        exit 3
    else
        echo "OK"
    fi
fi

# most important part, data migration
docker run --rm --volume ${OLD_VOLUME_NAME}:/source --volume ${NEW_VOLUME_NAME}:/destination ubuntu:latest bash -c "echo 'copying volume ...'; cp -R /source/* /destination/"

if [[ "$?" -eq 0 ]]; then
    echo "Done successfully 🎉"
else
    echo "Some error occurred 😭"
fi
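Since Docker has no native volume rename, a rename effectively becomes copy plus remove, for example with the script above (volume names are examples):
./dcv.sh my_old_volume my_new_volume
docker volume rm my_old_volume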

Allow Docker Container & Host User To Write on Bind Mounted Host Directory

Any help from any source is appreciated.
The server has a Docker container with Alpine, nginx, and PHP. This container is able to write to the bind-mounted host directory only when I set "chown -R nobody directory" on the host directory (nobody is a user in the container).
I am using VSCode's "Remote - SSH" extension to connect to the server as user ubuntu. VSCode is able to edit files in that same host directory (the one used for the bind mount) only when I set "chown -R ubuntu directory".
Problem: if I set "ubuntu" as the owner, the container can't write (PHP does the writing); if I set "nobody" as the owner, VSCode over SSH can't write. I am looking for a way to allow both to write without changing the directory's owner back and forth, or something similarly convenient.
Image used: https://hub.docker.com/r/trafex/php-nginx
What I tried:
In the container, I added user "nobody" to group "ubuntu". On the host, the directory (used as the mount) was set with "sudo chown -R ubuntu:ubuntu directory"; user "ubuntu" was already a member of group "ubuntu".
VSCode did edit, the container was unable to edit. (Edit: IT WORKED; I changed the directory permission for the group to allow write.)
Edit: the container was created without a Dockerfile, has already been running, and may hold important changes, so maybe I can't use the Dockerfile or entrypoint.sh approach to solve the problem. Can it be achieved by running commands inside the container, or without creating the container again? This container can be stopped.
Edit: I am wondering, regarding Triet Doan's answer, where an option is to modify the UID and GID of an already created user in the container: can doing this for the user and group "nobody" cause any problems inside the container? I ask because many setup commands have probably already been executed inside the container, files have already been edited by PHP on the mounted directory, and the container has been running for days.
Edit: I found that Alpine has no usermod or groupmod.
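For reference, the group-permission fix mentioned in the edit above boils down to something like the following on the host; this is a sketch assuming the bind-mounted directory is called directory, the shared group is ubuntu, and (as described above) the container's nobody user has been added to a matching ubuntu group inside the container:
sudo chown -R ubuntu:ubuntu directory
sudo chmod -R g+rwX directory
# make newly created files inherit the "ubuntu" group
sudo find directory -type d -exec chmod g+s {} +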
This article covers this problem very nicely. I will just summarize the main ideas here.
The easiest way to tackle this permission problem is to modify the UID and GID in the container to the same UID and GID that are used on the host machine.
In your case, we take the UID and GID of user ubuntu and use them in the container.
The author suggests 3 ways:
1. Create a new user with the same UID and GID of the host machine in entrypoint.sh.
Here’s the Dockerfile version for Ubuntu base image.
FROM ubuntu:latest
RUN apt-get update && apt-get -y install gosu
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
The entrypoint.sh was created as follows:
#!/bin/bash
USER_ID=${LOCAL_UID:-9001}
GROUP_ID=${LOCAL_GID:-9001}
echo "Starting with UID: $USER_ID, GID: $GROUP_ID"
useradd -u $USER_ID -o -m user
groupmod -g $GROUP_ID user
export HOME=/home/user
exec /usr/sbin/gosu user "$@"
Simply build the container with the docker build command.
docker build -t ubuntu-test1 .
The LOCAL_UID and LOCAL_GID can be passed to the container in the docker run command.
$ docker run -it --name ubuntu-test -e LOCAL_UID=$(id -u $USER) -e LOCAL_GID=$(id -g $USER) ubuntu-test1 /bin/bash
Starting with UID: 1001, GID: 1001
user@1291224a8029:/$ id
uid=1001(user) gid=1001(user) groups=1001(user)
We can see that the UID and GID in the container are the same as those in the host.
2. Mount the host machine’s /etc/passwd and /etc/group to a container
This is also a fine approach and simpler at first glance. One drawback of this approach is that a new user created in the container can't access the bind-mounted files and directories, because its UID and GID differ from the host machine's.
One must be careful to mount /etc/passwd and /etc/group read-only, otherwise the container might access and overwrite the host machine's /etc/passwd and /etc/group. Therefore, the author doesn't recommend this way.
$ docker run -it --name ubuntu-test \
    --mount type=bind,source=/etc/passwd,target=/etc/passwd,readonly \
    --mount type=bind,source=/etc/group,target=/etc/group,readonly \
    -u $(id -u $USER):$(id -g $USER) ubuntu /bin/bash
ether@903ad03490f3:/$ id
uid=1001(user) gid=1001(user) groups=1001(user)
3. Modify UID and GID with the same UID and GID of the host machine
This is mostly the same approach as No. 1, but it just modifies the UID and GID in case a user has already been created in the container. Assuming you have a user created in the Dockerfile, just call these commands in either the Dockerfile or entrypoint.sh.
If your username and group name were "test", you can use the usermod and groupmod commands to modify the UID and GID in the container. The UID and GID taken from the host machine as environment variables will be used for this "test" user.
usermod -u $USER_ID -o -m -d <path-to-new-home> test
groupmod -g $GROUP_ID test
Problem: if I set "ubuntu" as the owner, the container can't write (PHP does the writing); if I set "nobody" as the owner, VSCode over SSH can't write. I am looking for a way to allow both to write without changing the directory's owner back and forth, or something similarly convenient.
First, I'd recommend that the container image create a new username for the files inside the container, rather than reusing nobody, since that user may also be used for other OS tasks that shouldn't have any special access.
Next, as Triet suggests, an entrypoint that adjusts the container's user/group to match the volume is preferred. My own version of these scripts can be found in this base image, which includes a fix-perms script that makes the user id and group id of the container user match the ids of a mounted volume. In particular, see the following lines of that script, where $opt_u is the container username, $opt_g is the container group name, and $1 is the volume mount location:
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi

# update the gid
if [ -n "$opt_g" ]; then
  OLD_GID=$(getent group "${opt_g}" | cut -f3 -d:)
  NEW_GID=$(stat -c "%g" "$1")
  if [ "$OLD_GID" != "$NEW_GID" ]; then
    echo "Changing GID of $opt_g from $OLD_GID to $NEW_GID"
    groupmod -g "$NEW_GID" -o "$opt_g"
    if [ -n "$opt_r" ]; then
      find / -xdev -group "$OLD_GID" -exec chgrp -h "$opt_g" {} \;
    fi
  fi
fi
Then I start the container as root, and the container runs the fix-perms script from the entrypoint, followed by a command similar to:
exec gosu ${container_user} ${orig_command}
This replaces the entrypoint that's running as root with the application running as the specified user. I've got more examples of this in:
DockerCon presentation
Similar SO questions
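Put together, a sketch of such an entrypoint could look like this; the fix-perms flags and the appuser name are illustrative, not the exact interface of the linked script:
#!/bin/sh
# runs as root: align the container user's uid/gid with the mounted volume,
# then drop privileges and hand off to the real command
fix-perms -r -u appuser -g appuser /var/www/html
exec gosu appuser "$@"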
What I tried: In the container, I added user "nobody" to group "ubuntu". On the host, the directory (used as the mount) was set with "sudo chown -R ubuntu:ubuntu directory"; user "ubuntu" was already added to group "ubuntu". VSCode did edit, the container was unable to edit.
I'd avoid this and create a new user. The nobody user is designed to be as unprivileged as possible, so there could be unintended consequences from giving it more access.
Edit: the container was created without a Dockerfile, has already been running, and may hold important changes, so maybe I can't use the Dockerfile or entrypoint.sh approach to solve the problem. Can it be achieved by running commands inside the container, or without creating the container again? This container can be stopped.
This is a pretty big code smell in containers. They should be designed to be ephemeral. If you can't easily replace them, you're missing the ability to upgrade to a newer image, and you're creating a lot of state drift that you'll eventually need to clean up. Changes that should be preserved need to be in a volume. If there are other changes that would be lost when the container is deleted, they will be visible in docker diff, and I'd recommend fixing this now rather than increasing the size of the technical debt.
Edit: I am wondering, regarding Triet Doan's answer, where an option is to modify the UID and GID of an already created user in the container: can doing this for the user and group "nobody" cause any problems inside the container? I ask because many setup commands have probably already been executed inside the container, files have already been edited by PHP on the mounted directory, and the container has been running for days.
I would build a newer image that doesn't depend on this username. Within the container, if there's data you need to preserve, it should be in a volume.
Edit: I found that alpine has no usermod & groupmod.
I use the following in the entrypoint script to install them on the fly, but the shadow package should be included in the image you build rather than installed on the fly for every new container:
if ! type usermod >/dev/null 2>&1 || \
   ! type groupmod >/dev/null 2>&1; then
  if type apk >/dev/null 2>&1; then
    echo "Warning: installing shadow, this should be included in your image"
    apk add --no-cache shadow
  else
    echo "Commands usermod and groupmod are required."
    exit 1
  fi
fi

Is it possible to add an installer, run it and delete it during one build step in Docker?

I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
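For reference, newer Docker releases with BuildKit can bind-mount a file from the build context only for the duration of a RUN step, so the installer never lands in a layer at all. A sketch, assuming Dockerfile syntax 1.2+ and that the installer is executable and can run from a read-only mount:
# syntax=docker/dockerfile:1
FROM centos:7
RUN --mount=type=bind,source=huge-installer.bin,target=/tmp/huge-installer.bin \
    /tmp/huge-installer.bin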
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or download a binary, run it, and delete it in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
    chmod +x binary-${INSTALLER_VERSION}.bin && \
    ./binary-${INSTALLER_VERSION}.bin && \
    rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580

.ONESHELL:
.PHONY: build
build:
	# Kills the HTTP server when the build is done
	function cleanup {
	    pkill -f "python3 -m http.server.*${SERVER_PORT}"
	}
	trap cleanup EXIT
	# Starts an HTTP server that makes the contents of the project directory
	# available to the image
	python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
	sleep 1
	EXTRA_ARGS=""
	# Allows skipping the build cache by setting NO_CACHE=1
	if [[ -n $$NO_CACHE ]]; then
	    EXTRA_ARGS="--no-cache"
	fi
	docker build $$EXTRA_ARGS \
	    --network host \
	    --build-arg SERVER_PORT=${SERVER_PORT} \
	    -t ${IMAGE_NAME}:latest \
	    .
	docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
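With that in place, the image is built through the Makefile target; NO_CACHE=1 is the optional cache-busting switch defined above:
make build
NO_CACHE=1 make build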
I think the best way is to download the binary from a website and then run it:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && chmod +x /tmp/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way your image layer will not contain the binary you downloaded.
I didn't test it thoroughly, but wouldn't an approach like the following be viable? (Besides LinPy's answer, which is way easier if you have the option to do it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --rm --name foo-1 -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
Cf. docker commit

How do you run an Openshift Docker container as something besides root?

I'm currently running OpenShift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in OpenShift and I try to deploy it, I get the error message below. I believe the problem is that I am trying to run commands inside of the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
My Docker file that I am deploying looks like this -
FROM centos:7
MAINTAINER me<me@me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into a similar problem multiple times, where it will say things like Permission Denied on file /supervisord.log or something similar.
How can I set it up so that my container doesn't run all of the commands as root? That seems to be causing all of the problems I am having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at this: OpenShift Application Platform.
In particular, see point 4 in the FAQ section, quoted here.
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
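Applied in a Dockerfile, the pattern looks roughly like this; a sketch where the writable paths are examples and need to match whatever your image actually writes to:
FROM centos:7
# ... install httpd and your app here ...
RUN chgrp -R 0 /run /var/log/httpd /var/www && \
    chmod -R g=u /run /var/log/httpd /var/www
# run as an arbitrary non-root user; membership in the root group (gid 0)
# provides the write access granted above
USER 1001
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]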
You can run the container as any user, including root (and not OpenShift's default built-in account UID such as 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project under which your container is deployed.

How to create a Dockerfile for cassandra (or any database) that includes a schema?

I would like to create a dockerfile that builds a Cassandra image with a keyspace and schema already there when the image starts.
In general, how do you create a Dockerfile that will build an image that includes some step(s) that can't really be done until the container is running, at least the first time?
Right now, I have two steps: build the Cassandra image from an existing Cassandra Dockerfile, mapping a volume with the CQL schema files into a temporary directory, and then run docker exec with cqlsh to import the schema after the image has been started as a container.
But that doesn't create an image with the schema - just a container. That container could be saved as an image, but that's cumbersome.
docker run --name $CASSANDRA_NAME -d \
-h $CASSANDRA_NAME \
-v $CASSANDRA_DATA_DIR:/data \
-v $CASSANDRA_DIR/target:/tmp/schema \
tobert/cassandra:2.1.7
then
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/create_keyspace.cql
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/schema01.cql
# etc
This works, but it makes it impossible to use with tools like Docker Compose, since linked containers/services will start up too and expect the schema to already be in place.
I saw one attempt where the Cassandra process was started in the background in the Dockerfile during the build and cqlsh was then run, but I don't think that worked too well.
OK, I had this issue and someone advised me the following strategy to deal with it:
Start from an existing Cassandra Dockerfile, the official one for example
Remove the ENTRYPOINT stuff
Copy the schema (.cql) file and data (.csv) into the image and put them somewhere, /opt/data for example
Create a shell script that will be used as the last command to start Cassandra (see the sketch after this list):
a. start Cassandra with $CASSANDRA_HOME/bin/cassandra
b. if there is a $CASSANDRA_HOME/data/data/your_keyspace-xxxx folder and it's not empty, do nothing more
c. else
1. sleep some time to allow the server to listen on port 9042
2. when port 9042 is listening, execute the .cql script to load the csv files
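A sketch of such a startup script, where my_keyspace and /opt/data/schema.cql are placeholders:
#!/bin/bash
set -e

# start Cassandra (the launcher daemonizes by default)
"$CASSANDRA_HOME/bin/cassandra"

# only load the schema if the keyspace data folder is missing or empty
if [ -z "$(ls -A "$CASSANDRA_HOME"/data/data/my_keyspace-* 2>/dev/null)" ]; then
    # wait until the CQL port (9042) answers before running the script
    until cqlsh -e 'DESCRIBE KEYSPACES' >/dev/null 2>&1; do sleep 2; done
    cqlsh -f /opt/data/schema.cql
fi

# keep the container in the foreground by following the log
tail -F "$CASSANDRA_HOME/logs/system.log"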
I found this procedure rather cumbersome, but there seems to be no way around it. For a Cassandra hands-on lab, I found it easier to create a VM image using Vagrant and Ansible.
Make a docker file Dockerfile_CAS:
FROM cassandra:latest
COPY ddl.cql docker-entrypoint-initdb.d/
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN ls -la *.sh; chmod +x *.sh; ls -la *.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["cassandra", "-f"]
edit docker-entrypoint.sh, add
for f in docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)  echo "$0: running $f"; . "$f" ;;
        *.cql) echo "$0: running $f" && until cqlsh -f "$f"; do >&2 echo "Cassandra is unavailable - sleeping"; sleep 2; done & ;;
        *)     echo "$0: ignoring $f" ;;
    esac
    echo
done
above the exec "$@" line.
docker build -t suraj1287/cassandra -f Dockerfile_CAS .
and rebuild the image...
Another approach used by our team is to create the schema on server init.
Our Java code tests whether the schema exists and, if not (new environment, new deployment), creates it.
The same goes for every new table: automatic CREATE TABLE statements create the required new tables for new data entities when they run in any new cluster (another developer's local machine, preproduction, production).
All this code is isolated inside our DataDriver classes for portability, in case we swap Cassandra for another DB in some client or project.
This prevents a lot of hassle both for admins and for developers.
This approach is even valid for initial data loading, which we use in tests.
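A minimal sketch of that idempotent bootstrap in CQL (keyspace, table, and replication settings are placeholders):
CREATE KEYSPACE IF NOT EXISTS mydb
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS mydb.events (
  id uuid PRIMARY KEY,
  payload text
);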

Resources