How to run a PrestaShop Docker container with persistent data?

There is something I'm missing in many Docker examples, and that is persistent data. Am I right to conclude that every container that is stopped will lose its data?
I got this PrestaShop image running with its internal database:
https://hub.docker.com/r/prestashop/prestashop/
You just run docker run -ti --name some-prestashop -p 8080:80 -d prestashop/prestashop
Well, then you've got your demo, but it's not very practical.
First of all I need to hook up an external MySQL container, but that one will also lose all its data if, for example, my server reboots.
And what about all the modules and themes that are going to be added to the PrestaShop container?
It has to do with volumes, but it is not clear to me how the host volumes need to be mapped correctly and what path on the host is normally chosen. /opt/prestashop or something?

First of all, I don't have any experience with PrestaShop. This is an example that you can use for every Docker container whose data you want to persist.
With the new version of Docker (1.11) it's pretty easy to 'persist' your data.
First create your named volume:
docker volume create --name prestashop-volume
You will see this volume in /var/lib/docker/volumes:
prestashop-volume
After you've created your named volume, you can connect your container to it:
docker run -ti --name some-prestashop -p 8080:80 -d -v prestashop-volume:/path/to/what/you/want/to/persist prestashop/prestashop
(when you really want to persist everything, I think you can use the path :/ )
Now you can do what you want on your database.
When your container goes down or you delete your container, the named volume will still be there and you'll be able to reconnect a new container to it.
To make it even easier, you can create a cron job which creates a .tar of the contents of /var/lib/docker/volumes/prestashop-volume/
When really everything is gone, you can restore your volume by recreating the named volume and untarring your .tar file into it.
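As a rough sketch (paths are just examples; on recent Docker versions the volume's files live in a _data subdirectory), the backup and restore could look like this:
# back up the named volume's content to a tarball (run with sufficient privileges)
tar -czf /backup/prestashop-volume.tar.gz -C /var/lib/docker/volumes/prestashop-volume/_data .
# restore: recreate the named volume, then unpack the archive into it
docker volume create --name prestashop-volume
tar -xzf /backup/prestashop-volume.tar.gz -C /var/lib/docker/volumes/prestashop-volume/_data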

Related

How to decide which file or folder to map with host?

I'm new to Docker and pulled the Plesk panel Docker image from this link. I successfully built and ran this container using the following command and created some data in it.
docker run -d -p 80:80 -p 443:443 -p 8880:8880 -p 8443:8443 -p 8447:8447 plesk/plesk
Whenever I restart the container from the image, I lose my data on the Plesk panel. Fair enough, because I'm creating a fresh container whenever I start one from scratch. I know that I need to create volumes, save data in them, and mount the container from these volumes every time; that's OK. But here is my question: which file or folder inside the container should I map to the host? How do I decide that? Is it enough to create a volume for a /data directory in every container? How do I find out in which of its directories a container saves its data?
Thanks in advance.
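One way to get a hint (a sketch, not specific to Plesk; the plesk/plesk image name is taken from the command above) is to check which directories the image declares as volumes and which mounts a running container actually has:
# list the VOLUME directories declared by the image, if any
docker inspect --format '{{ .Config.Volumes }}' plesk/plesk
# on a running container, list its mounts
docker inspect --format '{{ json .Mounts }}' <container_name>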

Making Docker "undeletable" volume

I have a Docker named volume for database data. Now the thing is that when the database container is down and I (or anyone) run docker system prune, it deletes all the unused containers, images and volumes, including the one with the database data. Is there a way to make the volume undeletable unless it is explicitly told to?
I suppose I can just mount a host directory to the container without making it a docker volume (and therefore without the risk of deleting it), but using docker volume seems like a cleaner way to do it.
When you run docker system prune, it is going to wipe out everything that is unused. But if you do something like this: docker run -d -p 8080:8080 -p 1521:1521 -v /Users/noname_dev/programming/oracle-database:/u01/app/oracle -e DBCA_TOTAL_MEMORY=1024 oracle-database
then /Users/noname_dev/programming/oracle-database will still be there on your local machine, but the container will naturally be gone until you create it again.
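If you still prefer a named volume, one option (a sketch, assuming your Docker version supports label filters on prune) is to label the volume and exclude that label when pruning:
# create the volume with a label
docker volume create --label keep=true dbdata
# prune, skipping anything that carries the 'keep' label
docker system prune --volumes --filter "label!=keep"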

Docker: change port binding on an already created container with no data loss

Assuming that I have a MongoDB or SQL Server container with a lot of data, and all of a sudden (which is very probable) I need to change the port! Maybe due to a sudden security issue! I need to stop the container and start it up again running on a different port. Why doesn't Docker allow me to do that? If I run the image again, a new container will be created with no data inside, and that causes a lot of mess.
Is there a proper built-in solution? By proper I mean a solution that does not require me to back up the databases, move them out of the container volume and restore them again. Something logical such as a command that allows me to change the forwarded port, for example from -p 1433:1234 to -p 27017:1234.
BLUF: Start your MongoDB container with a volume mapped in to keep the data persistent, using this format: docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo
While I agree it would be great if Docker had the ability to switch port numbers on a running container, as others have said, each container is a process, and I do not know of a way to change a port on a running process.
You do not need to import your data if you have set up your volumes properly. I do this all the time for MySQL databases. The MySQL image is just the database engine, separate from the database files, if you map in your volumes correctly. That's how Docker is designed.
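For MySQL, for example, the pattern documented on the image's Docker Hub page maps the data directory to the host (paths here are examples):
# /var/lib/mysql is where the MySQL image keeps its databases
docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql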
Looking at the "Where to store data" section of the MongoDB image documentation, it gives an example of mounting a volume to a folder on the host to keep your data. This should allow you to start a new container using the same data without having to re-import. But I'm not as familiar with MongoDB, which is a NoSQL database.
https://hub.docker.com/_/mongo/
You may need to back up your database using this dump command:
docker exec some-mongo sh -c 'exec mongodump -d <database_name> --archive' > /some/path/on/your/host/all-collections.archive
Start a new container with the volume mapped and restore the data.
docker run --name some-mongo -v /my/own/datadir:/data/db -v /some/path/on/your/host/all-collections.archive:/data/db/collections.archive -d mongo
You'll need to restore that backup.
docker exec some-mongo sh -c 'exec mongorestore --db <database_name> --archive=/data/db/collections.archive'
From that point on you should be able to simply stop and start a new container with the volumes mapped in. Your data should remain persistent. You should not need to dump and restore any more (well, obviously for normal backup purposes).
A container is an instantiation of an image.
The port mapping is part of a container's instantiation state, so it can only be set when the container is created.
You can change the port mapping by directly editing the hostconfig.json file at /var/lib/docker/containers/[hash_of_the_container]/hostconfig.json
You can determine the [hash_of_the_container] via the docker inspect command and the value of the "Id" field is the hash.
1) stop the container
2) change the file
3) restart your docker engine (to flush/clear config caches)
4) start the container
Reference: How do I assign a port mapping to an existing Docker container?
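A sketch of that procedure on a systemd host (the container name and ports are examples; editing files under /var/lib/docker is unsupported, so keep a copy of the original file):
# grab the container's full ID while the engine is still running
docker inspect --format '{{ .Id }}' some-mongo
docker stop some-mongo
sudo systemctl stop docker     # stop the engine so the edit is not overwritten
# edit "PortBindings" in /var/lib/docker/containers/<full_id>/hostconfig.json,
# e.g. change the "HostPort" value from "1433" to "27017"
sudo systemctl start docker
docker start some-mongo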

Docker: How a container persists data without volumes in the container?

I'm running the official solr 6.6 container used in a docker-compose environment without any relevant volumes.
If I modify a running Solr container, the data survives a restart.
I don't see any volumes mounted, and it works for a plain Solr container:
docker run --name solr_test -d -p 8983:8983 -t library/solr:6.6
docker exec -it solr_test /bin/bash -c 'echo woot > /opt/solr/server/solr/testfile'
docker stop solr_test
docker start solr_test
docker exec -it solr_test cat /opt/solr/server/solr/testfile
The above example prints 'woot'. I thought that a container doesn't persist any data? Also, the documentation mentions that the Solr cores are persisted in the container.
All I found regarding container persistence is that I need to add volumes on my own, as mentioned here.
So I'm confused: do containers store the data changed within the container or not? And how does the Solr container achieve this behaviour? The only option I see is that I misunderstood persistence in the case of Docker, or that the build of the container can set some kind of option to achieve this which I don't know about and didn't see in the Solr Dockerfile.
This is expected behaviour.
The data you create inside a container persists as long as you don't delete the container.
But think of containers with a throw-away mentality. Normally you would want to be able to remove the container with docker rm and spawn a new instance, including your modified config files. That's why you would need e.g. a named volume here, which survives the container life cycle on your host.
The Dockerfile, since you mention it in your question, actually only defines the image. When you call docker run you create a container from it, exactly as defined in the image: a fresh instance without any modifications.
When you call docker commit on your container you take a snapshot of it (including the changes you made to the files) and create a new image out of it. That is another way to achieve data persistence.
The documentation you are referring to explains this in detail.
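To make the distinction concrete (a sketch; the volume name and the Solr data path are assumptions, the path simply reuses the one from the question):
# stop/start keeps the container's writable layer, so the test file survives
docker stop solr_test && docker start solr_test
# rm + run creates a brand-new container; the file is gone unless a volume holds it
docker rm -f solr_test
docker run --name solr_test -d -p 8983:8983 -v solr_data:/opt/solr/server/solr library/solr:6.6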

Docker data container, boot2docker, and the local file system

I'm slowly working my way through understanding current Docker practices. I'm on a Mac, and I'm using boot2docker.
I've been able to use the docker run -v local/directory:container/directory method to link a container directory to my local file system. Great, now I can easily edit things like site code in my local Mac file system and have the changes immediately available to my container (e.g. /var/www/html).
I'm now trying to separate my containers into discrete concerns. For example, a web, database, and file (e.g. busybox) container would be useful for a WordPress site. The thing is, I don't know how to make my file container define volumes that I can then link to my local OS (similar to the -v local/directory:container/directory I use with boot2docker).
This is probably not the most eloquent question, as I'm still fumbling through learning Docker, but if you can understand what I'm trying to achieve, I'd really appreciate any guidance provided.
Thanks!
Docker Volumes User Guide
I will use two Docker containers for my simple example:
marginalized_liskov and plagiarized_engelbart
Mount a Host Directory as a Data Volume (at runtime)
docker run -d -P --name marginalized_liskov -v /host/directory/context:/container/directory/context poop python server.py
marginalized_liskov is the name of the container.
poop is not only my favorite palindrome, but it also stands in here for the image name.
"/host/directory/context" is the location on the host that you want to mount
"/container/directory/context" is the location you want your new volume to be created in your container
python is of course the application to run
server.py is the argument provided to "python" for this sample.
Create a Named Volume in a container and mount another container to that volume
docker create -v /poop --name marginalized_liskov training/postgres
docker run -d --volumes-from marginalized_liskov --name plagiarized_engelbart ubuntu
This creates two containers.
marginalized_liskov gets a volume created at /poop. I built it from the postgres training image because that's what was used in the User Guide. Since we're just setting up a container to contain a data volume and not to host applications, using the training/postgres image provides our functionality while remaining lean.
plagiarized_engelbart mounts the volumes from marginalized_liskov with the --volumes-from flag.
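A sketch of combining both ideas for the boot2docker/Mac case (names and paths are examples; with boot2docker, only paths under /Users are shared into the VM by default):
# data container that bind-mounts a folder from the Mac into the web root
docker create -v /Users/me/wordpress:/var/www/html --name wp_files busybox
# the web container reuses that volume definition
docker run -d --name wp_web -p 8080:80 --volumes-from wp_files wordpress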
