Minecraft Docker Image: game storage persistence - docker

I'm trying to use this Vanilla Minecraft docker image to run a Minecraft server for friends, but can't seem to get its data to live beyond the lifetime of the container itself.
I am running on an Ubuntu host, and on that host I have created /opt/minecraft where I'd like any outside-of-container persisted data to live.
To test each approach I would:
1. start the container (commands provided below with each approach I tried)
2. join the server
3. dig a circle around myself as a marker that I've been there
4. disconnect from the server
5. stop the container: docker stop generated_name
6. start the container again using the same command as before
7. join the server again
8. check whether the circle I dug is still there, or whether the world has been reset
I have yet to find an approach that doesn't lose my data. Here's what I've tried:
Approach 1: all in one
I tried just mounting /opt/minecraft as /minecraft in the container:
docker run -d -v /opt/minecraft:/minecraft -p 25565:25565 webhippie/minecraft-vanilla
Approach 2: individual volumes
I noted in the Dockerfile that there are 3 volumes listed:
VOLUME ["/minecraft/merge", "/minecraft/world", "/minecraft/logs"]
So after destroying and recreating an empty /opt/minecraft and creating the three folders to mount, I tried this instead:
docker run -d -v /opt/minecraft/world:/minecraft/world -v /opt/minecraft/merge:/minecraft/merge -v /opt/minecraft/logs:/minecraft/logs -p 25565:25565 webhippie/minecraft-vanilla
In both approaches 1 and 2 I can see that some files have been created in the folder(s) mounted as volumes (so I don't think it's a permissions issue), but it doesn't seem to be enough to keep my world from being created from scratch again after a container restart. What am I missing?
I'm also new to Minecraft. Maybe there's some nuance about the game that I'm missing, instead?

Your second approach should be totally fine for keeping the world; at least, that's how I'm doing it.
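One thing worth trying: `docker run` creates a brand-new container every time, so giving the container a fixed name lets you stop and start the *same* container instead. A sketch of approach 2 along those lines (the container name "mc" is my own choice; the image and mount paths are from the question):

```shell
# Wrapper around the approach-2 command; --name pins the container so a
# later restart reuses it rather than creating a fresh one.
start_mc() {
  docker run -d --name mc \
    -v /opt/minecraft/world:/minecraft/world \
    -v /opt/minecraft/merge:/minecraft/merge \
    -v /opt/minecraft/logs:/minecraft/logs \
    -p 25565:25565 \
    webhippie/minecraft-vanilla
}
# Later: docker stop mc && docker start mc  -- same container, same world.
```

With the bind mounts in place the world data should survive even a full `docker rm`, but reusing one named container also rules out any state the image keeps outside the declared volumes.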


Access files on host server from the Meteor App deployed with Meteor Up

I have a Meteor App deployed with Meteor UP to Ubuntu.
From this App I need to read a file which is located outside App container on the host server.
How can I do that?
I've tried to set up volumes in the mup.js but no luck. It seems that I'm missing how to correctly provide /host/path and /container/path
volumes: {
// passed as '-v /host/path:/container/path' to the docker run command
'/host/path': '/container/path',
'/second/host/path': '/second/container/path'
},
I've read the Docker docs on mounting volumes but evidently haven't understood them.
Let's say the file is at /home/dirname/filename.csv.
How to correctly mount it into App to be able to access it from the Application?
Or maybe there are other possibilities to access it?
Welcome to Stack Overflow. Let me suggest another way of thinking about this...
In a scalable cluster, docker instances can be spun up and down as the load on the app changes. These may or may not be on the same host computer, so building a dependency on the file system of the host isn't a great idea.
You might be better to think of using a file storage mechanism such as S3, which will scale on its own, and disk storage limits won't apply.
Another option is to determine if the files could be stored in the database.
I hope that helps
Let's try to narrow the problem down.
Meteor UP is passing the configuration parameter volumes directly on to docker, as they also mention in the comment you included. It therefore might be easier to test it against docker directly - narrowing the components involved down as much as possible:
sudo docker run \
-it \
--rm \
-v "/host/path:/container/path" \
-v "/second/host/path:/second/container/path" \
busybox \
/bin/sh
Let me explain this:
sudo because Meteor UP uses sudo to start the container. See: https://github.com/zodern/meteor-up/blob/3c7120a75c12ea12fdd5688e33574c12e158fd07/src/plugins/meteor/assets/templates/start.sh#L63
docker run we want to start a container.
-it to access the container (think of it like SSH'ing into the container).
--rm to automatically clean up - remove the container - after we're done.
-v - here we give the volumes as you define it (I here took the two directories example you provided).
busybox - an image with some useful tools.
/bin/sh - the application to start the container with
I'd expect that you cannot access the files here either; in that case, dig deeper into why you can't make the folder accessible in Docker.
If you can (which would seem weird to me), start the container normally and then try to get inside it by running the following command:
docker exec -it my-mup-container /bin/sh
You can think of this command like SSH'ing into a running container. Now you can check around if it really isn't there, if the credentials inside the container are correct, etc.
Lastly, I have to agree with @mikkel that mounting a local directory isn't a good option, but you can now start looking into how to use Docker volumes to mount remote storage. He mentioned S3 on AWS; I've worked with Azure Files on Azure; there are plenty of possibilities.

What is the '--rm' flag doing?

I am trying Docker for the first time and do not yet have a "mental model". Total beginner.
All the examples that I am looking at have included the --rm flag to run, such as
docker run -it --rm ...
docker container run -it --rm ...
Question:
Why do these commands include the --rm flag? I would think that if I were to go through the trouble of setting up or downloading a container with the good stuff in it, why remove it? I want to keep it to use again.
So, I know I have the wrong idea of Docker.
Containers are merely an instance of the image you use to run them.
The mindset when creating a containerized app is not to take a fresh, clean Ubuntu container, download the apps and configuration you want into it, and then leave it running.
You should treat the container as an instance of your application, but your application is embedded into an image.
The proper usage is to create a custom image in which you embed all your files, configuration, environment variables, etc. Read more about Dockerfiles and how this is done here.
Once you've done that, you have an image that contains everything, and to use your application you just run the image with the proper port settings or other dynamic variables, using docker run <your-image>
Running containers with the --rm flag is good for containers you use only briefly to accomplish something, e.g., compiling your application inside a container or just testing that something works. Since you know it's a short-lived container, you tell your Docker daemon that once it's done running, it should erase everything related to it and reclaim the disk space.
The flag --rm is used when you need the container to be deleted after the task for it is complete.
This is suitable for small testing or POC purposes and saves the headache for house keeping.
From https://docs.docker.com/engine/reference/run/#clean-up---rm
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag
In short, it's useful to keep the host clean from stopped and unused containers.
When you run a container from an image using a simple command like docker run -it ubuntu, it spins up a container. You attach to it using docker attach container-name (or use docker exec for a separate session).
When you're inside the container working on it and you type exit or press Ctrl+D (any way of leaving other than Ctrl+P, Ctrl+Q, which only detaches), the container exits. That means the container has stopped, but it is still available on your disk and you can start it again with docker start container-name/ID.
But when you run the container with the --rm flag, the container is deleted permanently on exit.
I use --rm when connecting to running containers to perform some actions such as database backup or file copy. Here is an example:
docker run -v $(pwd):/mnt --link app_postgres_1:pg --rm postgres:9.5 pg_dump -U postgres -h pg -f /mnt/docker_pg.dump1 app_db
The above connects to a running container named 'app_postgres_1' and creates a backup. Once the backup command completes, the container is fully deleted.
Running docker run --rm starts a new container and deletes it once its work is complete, saving disk space.
The important thing to note is that a container is like a class instance, not a data store. It's better to delete containers once their work is done; when you start again, you start fresh.
The question then becomes: if the container is deleted, what about its data? Data meant to persist is actually saved on the local system and linked in when the container is started. This concept is called a volume (or shared volume).
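To make that last point concrete, here is a sketch showing data outliving a pair of `--rm` containers because it lives in a named volume. The volume name "appdata" and the busybox image are my own example choices:

```shell
# Each container is deleted on exit (--rm), yet the second one still
# sees the file the first one wrote, because /data is a named volume
# stored on the host rather than in the container's filesystem.
demo_volume_persistence() {
  docker run --rm -v appdata:/data busybox sh -c 'echo hello > /data/f'
  docker run --rm -v appdata:/data busybox cat /data/f   # should print: hello
}
```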

Strategies for deciding when to use 'docker run' vs 'docker start' and using the latest version of a given image

I'm dockerizing some of our services. For our dev environment, I'd like to make things as easy as possible for our developers and so I'm writing some scripts to manage the dockerized components. I want developers to be able to start and stop these services just as if they were non-dockerized. I don't want them to have to worry about creating and running the container vs stopping and starting and already-created container. I was thinking that this could be handled using Fig. To create the container (if it doesn't already exist) and start the service, I'd use fig up --no-recreate. To stop the service, I'd use fig stop.
I'd also like to ensure that developers are running containers built using the latest images. In other words, something would check to see if there was a later version of the image in our Docker registry. If so, this image would be downloaded and run to create a new container from that image. At the moment it seems like I'd have to use docker commands to list the contents of the registry (docker search) and compare that to existing local containers (docker ps -a) with the addition of some grepping and awking, or use the Docker API to achieve the same thing.
Any persistent data will be written to mounted volumes so the data should survive the creation of a new container.
This seems like it might be a common pattern so I'm wondering whether anyone else has given these sorts of scenarios any thought.
This is what I've decided to do for now for our Neo4j Docker image:
I've written a shell script around docker run that accepts command-line arguments for the port, database persistence directory on the host, log file persistence directory on the host. It executes a docker run command that looks like:
docker run --rm -it -p ${port}:7474 -v ${graphdir}:/var/lib/neo4j/data/graph.db -v ${logdir}:/var/log/neo4j my/neo4j
By default port is 7474, graphdir is $PWD/graph.db and logdir is $PWD/log.
--rm removes the container on exit, however the database and logs are maintained on the host's file system. So no containers are left around.
-it allows the container and the Neo4j service running within it to receive signals so that the service can be gracefully shut down (the Neo4j server gracefully shuts down on SIGINT) and the container exited by hitting ^C or sending it a SIGINT if the developer puts this in the background. No need for separate start/stop commands.
Although I certainly wouldn't do this in production, I think this fine for a dev environment.
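A sketch of the wrapper script described above; the function and variable names are my own. It prints the docker command instead of running it, so you can inspect the result (or pipe it to sh):

```shell
# Build the docker run command with the answer's defaults:
# port 7474, graphdir $PWD/graph.db, logdir $PWD/log.
neo4j_cmd() {
  port="${1:-7474}"
  graphdir="${2:-$PWD/graph.db}"
  logdir="${3:-$PWD/log}"
  echo docker run --rm -it -p "${port}:7474" \
    -v "${graphdir}:/var/lib/neo4j/data/graph.db" \
    -v "${logdir}:/var/log/neo4j" \
    my/neo4j
}
```

For example, `neo4j_cmd 8000 /data/graph.db /data/log | sh` would start the server on port 8000 with persistence under /data.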
I am not familiar with fig but your scenario seems good.
Usually, I prefer to kill/delete + run my container instead of playing with start/stop, though. That way, if there is a new image available, Docker will use it. This works only for stateless services; since you are using volumes for persistent data, you could do something like this.
Regarding the image update, what about running docker pull <image> every N minutes and checking the "Status" that the command returns? If it is up to date, then do nothing, otherwise, kill/rerun the container.
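A sketch of that pull-and-check idea. The "Image is up to date" phrasing matches the status line `docker pull` prints when nothing new was downloaded, but treat the exact wording as an assumption to verify against your Docker version; the function name is my own:

```shell
# Decide whether to recreate the container based on `docker pull` output.
needs_restart() {
  pull_output="$1"   # output captured from `docker pull <image>`
  case "$pull_output" in
    *"Image is up to date"*) return 1 ;;  # nothing new was downloaded
    *)                       return 0 ;;  # newer image pulled: kill/rerun
  esac
}
# Cron-style usage (hypothetical container name "app"):
#   out=$(docker pull my/neo4j 2>&1)
#   if needs_restart "$out"; then docker kill app; docker rm app; ...; fi
```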

Volume and data persistence

What is the best way to persist containers data with docker? I would like to be able to retain some data and be able to get them back when restarting my container. I have read this interesting post but it does not exactly answer my question.
As far as I understand, I only have one option:
docker run -v /home/host/app:/home/container/app
This mounts the host folder /home/host/app into the container at /home/container/app.
Is there any other option? FYI, I don't use container linking (--link).
Using volumes is the best way of handling data which you want to keep from a container. Using the -v flag works well and you shouldn't run into issues with this.
You can also use the VOLUME instruction in the Dockerfile, which means you won't have to add any options at run time. However, such volumes are quite tightly coupled to the specific container: you'd need to use docker start rather than docker run to get the data back (or, of course, -v pointing at the volume that was created in the past, likely somewhere under /var/lib/docker).
A common way of handling volumes is to create a data volume container with volumes defined by -v. Then, when you create your app container, use the --volumes-from flag. This makes your new container use the same volumes as the data volume container. Of course, this may seem like you're just shifting the issue somewhere else.
This makes it quite simple to share volumes over multiple containers. Perhaps you have a container for your application, and another for logstash.
Create a volume container: this form of -v (with no host path) creates a volume as a directory, e.g. /var/lib/docker/volumes/d3b0d5b781b7f92771b7342824c9f136c883af321a6e9fbe9740e18b93f29b69,
which is still bind-mounted at /container/path/vol:
docker run -v /foo/bar/vol --name volbox ubuntu
I can now use this container, as my volume.
docker run --volumes-from volbox --name foobox ubuntu /bin/bash
root@foobox:/# ls /container/path/vol
Now, if I distribute these two containers, they will just work. The volume will always be available to foobox, regardless which host it is deployed to.
The snag of course comes if you don't want your storage to be in /var/lib/docker/volumes...
I suggest you take a look at some of the excellent posts by Michael Crosby, and the Docker docs:
https://docs.docker.com/userguide/dockervolumes/
