Docker: change port binding on an already created container with no data loss

Assume I have a MongoDB or SQL Server container with a lot of data, and all of a sudden (which is very probable) I need to change the port! Maybe due to a sudden security issue. I need to stop the container and start it up again running on a different port. Why doesn't Docker allow me to do that? If I run the image again, a new container is created with no data inside, and that causes a lot of mess.
Is there a proper built-in solution? By proper I mean a solution that does not require me to back up the databases, move them out of the container volume, and restore them again. Something logical, such as a command that lets me change the forwarded port, for example from -p 1433:1234 to -p 27017:1234.

BLUF: Start your MongoDB container with a volume mapped in to keep the data persistent, using this format: docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo
While I agree it would be great if Docker had the ability to switch port numbers on a running container, as others have said, each container is a process, and I do not know of a way to change a port on a running process.
You do not need to re-import your data if you have set up your volumes properly. I do this all the time for MySQL databases. The MySQL image is just the database engine, separate from the data, if you map in your volumes correctly. That's how Docker is designed.
Looking at the "Where to store data" section on the image's Docker Hub page, it gives an example of mounting a volume to a folder on the host to keep your data. This should allow you to start a new container using the same data without having to re-import. But I'm less familiar with MongoDB, which is a NoSQL database.
https://hub.docker.com/_/mongo/
You may need to back up your database first, using this dump command:
docker exec some-mongo sh -c 'exec mongodump -d <database_name> --archive' > /some/path/on/your/host/all-collections.archive
Start a new container with the volume mapped and restore the data.
docker run --name some-mongo -v /my/own/datadir:/data/db -v /some/path/on/your/host/all-collections.archive:/data/db/collections.archive -d mongo
You'll need to restore that backup.
docker exec some-mongo sh -c 'exec mongorestore --db <database_name> --archive=/data/db/collections.archive'
From that point on you should be able to simply stop the container and start a new one with the same volumes mapped in. Your data should remain persistent, and you should not need to dump and restore any more (except, obviously, for normal backup purposes).
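For example, to change the published port afterwards (the original question): a minimal sketch, assuming the volume-backed container from above and that you want to publish mongo's port 27017 on a different host port (28017 here is illustrative):
docker stop some-mongo
docker rm some-mongo
# the data in /my/own/datadir survives removal of the container
docker run --name some-mongo -v /my/own/datadir:/data/db -p 28017:27017 -d mongo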

A container is an instantiation of an image.
The port mapping is part of a container's instantiation state, so it can only be set when the container is created.

You can change the port mapping by directly editing the hostconfig.json file at /var/lib/docker/containers/[hash_of_the_container]/hostconfig.json
You can determine the [hash_of_the_container] via the docker inspect command; the value of the "Id" field is the hash.
1) stop the container
2) change the file
3) restart your docker engine (to flush/clear config caches)
4) start the container
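For step 2, a sketch of the relevant PortBindings entry in hostconfig.json, using the ports from the question (the exact layout can vary between Docker versions); changing the HostPort value changes the published port:
"PortBindings": {
  "1234/tcp": [
    {
      "HostIp": "",
      "HostPort": "27017"
    }
  ]
}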
Reference: How do I assign a port mapping to an existing Docker container?

Related

How I can update prometheus config file without losing data on docker

I have a Docker container running Prometheus, and sometimes I have to update a config file inside the container. The problem is that I don't know how I can update this file without deleting and recreating the container.
docker run --network="host" -d --name=prometheus -p 9090:9090 -v ~/prometheus.yaml:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml
I want to know how I can update prometheus.yaml without deleting and recreating the Docker container.
You should mount the data path of Prometheus as a volume outside of your container.
That way, if the container is recreated, you still have your previous data.
The default data path of Prometheus is ./data, but in Docker it depends on your base image.
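A minimal sketch, assuming the official prom/prometheus image, which stores its data under /prometheus (verify the path if you use another image):
docker volume create prometheus-data
docker run -d --name=prometheus -p 9090:9090 \
  -v prometheus-data:/prometheus \
  -v ~/prometheus.yaml:/etc/prometheus/prometheus.yml \
  prom/prometheus --config.file=/etc/prometheus/prometheus.yml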
In theory you can't, since by principle containers are ephemeral, meaning they're supposed to be disposable upon exiting. However, there are a few ways out of your predicament:
#1. Create a new image from your running container (https://www.scalyr.com/blog/create-docker-image/) to persist the state.
#2. Copy your data from within the container to the "outside world" as a backup, if option 1 is not the right option for you (here's an explanation of how to do so: https://linuxhandbook.com/docker-cp-example/). You could also log in to the container (docker exec -it <container-name> bash) and then use yum or apt install (depending on your base image) to install the necessary tools for your backup (rsync, ...), if the sometimes very barebones base image does not provide them.
#3. As @Amir already mentioned, you should always create a volume inside your container and map it to the outside world to have persistent data storage. You create a volume via the VOLUME keyword in the Dockerfile: https://docs.docker.com/storage/volumes/ By doing so you can recreate the container whenever the config changes without worrying about data loss; see the sketch after this list.
HTH
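A minimal Dockerfile sketch of option #3, assuming the prom/prometheus base image and its default data directory (verify the path for your image):
FROM prom/prometheus
# declare the data directory as a volume so its contents survive
# container recreation when mapped to a named volume on the host
VOLUME ["/prometheus"]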
Use the reload URL.
Prometheus can reload its configuration at runtime. If the new configuration is not well-formed, the changes will not be applied. A configuration reload is triggered by sending a SIGHUP to the Prometheus process or sending a HTTP POST request to the /-/reload endpoint (when the --web.enable-lifecycle flag is enabled). This will also reload any configured rule files.
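For example, assuming the container is named prometheus and port 9090 is published as in the question, either of these triggers a reload (the second requires --web.enable-lifecycle):
docker kill --signal=SIGHUP prometheus
curl -X POST http://localhost:9090/-/reload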
Change the config inside the container using:
docker exec -it <container_name> sh
Map the config to outside the Docker container for persistence, using
-v <host-path>:<container_path>
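Putting it together, a sketch of the run command from the question with the config mapped in from the host and the reload endpoint enabled:
docker run -d --name=prometheus -p 9090:9090 \
  -v ~/prometheus.yaml:/etc/prometheus/prometheus.yml \
  prom/prometheus --config.file=/etc/prometheus/prometheus.yml --web.enable-lifecycle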

Is there a dockerfile RUN command that executes the argument on the host?

We're trying to build a Docker stack that includes our complete application: a Postgres database and at least one web application.
When the stack is started, we expect the application to be immediately working - there should not be any delay due to database setup or data import. So the database schema (DDL) and the initial data have to be imported when the image is created.
This could be achieved by a RUN command in the dockerfile, for example
RUN psql.exe -f initalize.sql -h myhost -d mydatabase -U myuser
RUN data-import.exe myhost mydatabase myuser
However, AFAIU this would execute data-import.exe inside the Postgres container, which can only work if the Postgres container is a Windows container. Our production uses a Linux Postgres distribution, so this is not a good idea. We need the image to be a Linux Postgres container.
So the natural solution is to execute data-import.exe on the host, like this:
1) When we run docker build, a Linux Postgres container is started.
2) RUN psql.exe ... runs some SQL commands inside the Postgres container.
3) Now, our data-import.exe is executed on the host. Its Postgres client connects to the database in the container and imports the data.
4) When the data import is done, the data is committed to the image, and docker builds an image which contains the Postgres database together with the imported data.
Is there such a command? If not, how can we implement this scenario in docker?
Use the correct tool; a Dockerfile is not a hammer for everything.
Obviously you come from a state where you had Postgres up before using some import tool. You can mimic that strategy by firing up a Postgres container (without a Dockerfile, just docker/kubernetes), running the import tool from the host, stopping the Postgres container, and making a snapshot of the result using docker commit. The committed image can then be used for the next stages of your deployment.
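A sketch of that workflow, assuming the official postgres image; PGDATA is pointed away from the image's declared VOLUME path (/var/lib/postgresql/data), since content in declared volumes is not captured by docker commit (names and ports are illustrative):
docker run --name pg-seed -p 5432:5432 -e POSTGRES_PASSWORD=secret \
  -e PGDATA=/var/lib/postgresql/pgdata -d postgres
# on the host: run the schema script and the import tool against the published port
psql -f initalize.sql -h localhost -d mydatabase -U myuser
data-import.exe localhost mydatabase myuser
docker stop pg-seed
docker commit pg-seed postgres-seeded:latest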
In Docker generally the application data is stored separately from images and containers; for instance you'd frequently use a docker run -v option to store data in a host directory or Docker volume so that it would outlive its container. You wouldn't generally try to bake data into an image, both for scale reasons and because any changes will be lost when a container exits.
As you've described the higher-level problem, I might distribute something like a "test kit" that included a docker-compose.yml and a base data directory. Your Docker Compose file would use a stock PostgreSQL container with data:
postgres:
  image: postgres:10.5
  volumes:
    - './postgres:/var/lib/postgresql/data'
To answer the specific question you asked, docker build steps only run individual commands within Docker container space; they can't run arbitrary host commands, read filesystem content outside of the tree containing the Dockerfile, or write any sort of host filesystem content outside the container.

How to start an existing mysql container in docker (toolbox)?

I have a container (I'm using this image: https://hub.docker.com/_/mysql/) which was started before, with ID 5f96e9570d1b1475a888d7a615acdd9a7715c1ed6f0c40900f2e9c1ab485c7cf, but now how can I restart it? I tried this command, but it did not work:
$ docker run --name mysql -e MYSQL_ROOT_PASSWORD=*Abcd1234 -d mysql:5.7
D:\CWindow10\Docker Toolbox\docker.exe: Error response from daemon: Conflict. The container name "/mysql" is already in use by container "5f96e9570d1b1475a888d7a615acdd9a7715c1ed6f0c40900f2e9c1ab485c7cf". You have to remove (or rename) that container to be able to reuse that name.
See 'D:\CWindow10\Docker Toolbox\docker.exe run --help'.
If I delete the container and retype the command, will the old data still exist in the new container?
To restart an existing container, simply run docker start <container_name_or_id>.
Regarding the data: Docker uses the concept of volumes to store data. For the mysql image, there's a section "Where to Store Data" on the Docker Hub page. If you don't manually declare where the data should go, Docker will create an anonymous volume for you. If you want your data to be kept, the easiest way is to create a folder on the host and tell the docker run command to map that volume. That way, you can still use it if you throw away your container.
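A sketch of both options, assuming the names from the question and the mysql image's data directory /var/lib/mysql:
# restart the existing container by name or ID
docker start mysql
# or run a fresh container whose data survives removal, mapped to a host folder
docker run --name mysql2 -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=*Abcd1234 -d mysql:5.7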
Use this command to restart the container: docker restart <CONTAINER>
Starting a new container will not preserve your data unless you have mounted an external volume and stored the data on it. Have a look at this blog: http://blog.arungupta.me/docker-mysql-persistence/

Docker: How a container persists data without volumes in the container?

I'm running the official solr 6.6 container used in a docker-compose environment without any relevant volumes.
If I modify a running solr container, the data survives a restart.
I don't see any volumes mounted, and it works for a plain solr container:
docker run --name solr_test -d -p 8983:8983 -t library/solr:6.6
docker exec -it solr_test /bin/bash -c 'echo woot > /opt/solr/server/solr/testfile'
docker stop solr_test
docker start solr_test
docker exec -it solr_test cat /opt/solr/server/solr/testfile
The above example prints 'woot'. I thought that a container doesn't persist any data? Also, the documentation mentions that the solr cores are persisted in the container.
All I found regarding container persistence is that I need to add volumes on my own, like mentioned here.
So I'm confused: do containers store the data changed within the container or not? And how does the solr container achieve this behaviour? The only option I see is that I misunderstood persistence in the case of Docker, or the build of the container can set some kind of option to achieve this which I don't know about and didn't see in the solr Dockerfile.
This is expected behaviour.
The data you create inside a container persists as long as you don't delete the container.
But think of containers with a kind of throw-away mentality: normally you would want to be able to remove the container with docker rm and spawn a new instance, including your modified config files. That's why you would need e.g. a named volume here, which survives a container's life cycle on your host.
The Dockerfile, since you mention it in your question, actually only defines the image. When you call docker run you create a container from it, exactly as defined in the image: a fresh instance without any modifications.
When you call docker commit on your container, you snapshot it (including the changes you made to the files) and create a new image from it. That is one way to achieve data persistence.
The documentation you are referring to explains this in detail.
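For instance, a minimal sketch of snapshotting the container from the question into a new image (the tag is illustrative):
docker commit solr_test my-solr:with-testfile
docker run --name solr_test2 -d -p 8984:8983 my-solr:with-testfile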

How to run a Prestashop docker container with persistent data?

There is something I'm missing in many docker examples, and that is persistent data. Am I right if I conclude that every container that is stopped will lose its data?
I got this Prestashop image running with its internal database:
https://hub.docker.com/r/prestashop/prestashop/
You just run docker run -ti --name some-prestashop -p 8080:80 -d prestashop/prestashop
Well, then you've got your demo, but it's not very practical.
First of all I need to hook up an external MySQL container, but that one will also lose all its data if, for example, my server reboots.
And what about all the modules and themes that are going to be added to the prestashop container?
It has to do with volumes, but it is not clear to me how all the host volumes need to be mapped correctly and what path on the host is normally chosen. /opt/prestashop or something?
First of all, I don't have any experience with PrestaShop. This is an example which you can use for every docker container (from which you want to persist the data).
With the new version of docker (1.11) it's pretty easy to 'persist' your data.
First create your named volume:
docker volume create --name prestashop-volume
You will see this volume in /var/lib/docker/volumes:
prestashop-volume
After you've created your named volume, you can connect your container to it:
docker run -ti --name some-prestashop -p 8080:80 -d -v prestashop-volume:/path/to/what/you/want/to/persist prestashop/prestashop
(if you really want to persist everything, I think you can use the path /)
Now you can do what you want on your database.
When your container goes down or you delete your container, the named volume will still be there, and you're able to reconnect your container with the named volume.
To make it even easier, you can create a cron job which creates a .tar of the content of /var/lib/docker/volumes/prestashop-volume/
When really everything is gone, you can restore your volume by recreating the named volume and untarring your .tar file into it.
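A sketch of such a backup and restore, assuming the volume's content lives under /var/lib/docker/volumes/prestashop-volume/_data (the usual location; verify on your host):
# backup: archive the volume's content
tar -czf prestashop-backup.tar.gz -C /var/lib/docker/volumes/prestashop-volume/_data .
# restore: recreate the volume and unpack the archive into it
docker volume create --name prestashop-volume
tar -xzf prestashop-backup.tar.gz -C /var/lib/docker/volumes/prestashop-volume/_data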
