How to set volume for dokku-persistent-storage - ruby-on-rails

I am trying to use dokku-persistent-storage so the uploads for my Rails app stay on the server, but I don't quite understand how to build the path since I am new to Dokku and Docker.
(I am running this on an Ubuntu droplet on Digital Ocean)
I'm not sure if it should be something like this:
[SERVER IP ADDRESS]/home/dokku/myapp/public_folder
or
/home/dokku/myapp/public_folder
or if I'm way off and it should be something completely different.
This is what the GitHub README says about it:
In your applications folder (/home/dokku/app_name) create a file called PERSISTENT_STORAGE.
Inside this file list one volume-map/volume per line to mount. For example:
/host/path:/container/path
/another/container/path
The above example will result in the following arguments being passed to docker during deploy and docker run:
-v /host/path:/container/path -v /another/container/path
More information on Docker volumes can be found here: http://docs.docker.io/en/latest/use/working_with_volumes/

I am not into Ruby or Dokku, but if I understood correctly, you want your Docker container to have persistent storage on the host machine.
The PERSISTENT_STORAGE file, according to the documentation you've quoted, contains mappings from host file-system directories to container file-system directories (translated into -v arguments for the Docker CLI).
Therefore, you should map the uploads directory inside the container to the desired directory on the host.
For example, if your app's uploads are saved to this dir (inside the docker container):
/home/dokku/myapp/public_folder
and you'd like them to be kept on your host at:
/home/some/dir
then, as I understand, the content of PERSISTENT_STORAGE file should be:
/home/some/dir:/home/dokku/myapp/public_folder
I hope I got you right.
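For example, a rough sketch of setting this up on the Dokku host (the host directory here is just an assumption, use whichever directory you want the uploads kept in):
# run these on the Dokku host
mkdir -p /home/some/dir
echo "/home/some/dir:/home/dokku/myapp/public_folder" > /home/dokku/myapp/PERSISTENT_STORAGE
# redeploy the app (e.g. git push again) so the new -v arguments are applied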

Use Dokku's storage:mount option.
You'll need to SSH into your dokku host:
ssh dokku@host
Run the following command to link the storage directory for that app to the app's public/uploads folder, for example:
dokku storage:mount <app> /var/lib/dokku/data/storage:/app/public/uploads
The Dokku docs cover this well at http://dokku.viewdocs.io/dokku/advanced-usage/persistent-storage/
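For completeness, a rough sequence might look like this, assuming a reasonably recent Dokku, an app called myapp, and an uploads folder at /app/public/uploads inside the container (adjust names and paths to your setup):
# on the Dokku host
sudo mkdir -p /var/lib/dokku/data/storage/myapp
dokku storage:mount myapp /var/lib/dokku/data/storage/myapp:/app/public/uploads
dokku storage:list myapp     # confirm the mount was recorded
dokku ps:restart myapp       # restart so the running container picks up the mount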

Related

How to get files in AWS ECS' container?

If I want to get at files in an ECS container's /tmp path, is it necessary to set a volume item to map the path?
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition#volume
Or is there a way to run something like docker exec ... to see inside the container?
I am not sure I understand the question fully. You can "ecs exec" into a task (if that's what you want/need to do). Here is the doc page on how to do that and this is a longer blog post that dives into it.
If you instead need to pre-populate files in /tmp, you have a couple of options: either pull them at container startup as part of a startup script, or mount the /tmp directory to an external share that hosts the data. Here is how.
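For reference, a minimal sketch of the "ecs exec" route (cluster, task and container names are placeholders, and ECS Exec has to be enabled on the task):
aws ecs execute-command \
  --cluster my-cluster \
  --task <task-id> \
  --container my-container \
  --interactive \
  --command "/bin/sh"
# once inside the shell you can look at /tmp directly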

`docker service update` not working to update config

First of all, every step seems to complete successfully (no errors are reported). But from what I understand about how to check whether a new config has been applied, the config update appears to have failed.
Suppose I have a config file with a simple content like this:
Well done
I created a config (the first version) like this:
echo 'Well done' | docker config create my-config -
Now I have a local file named my-config.txt (on the host machine) with the content described above; it's used as the source to clone over the target in the Docker container. In the container there is already a config file with the same (original) content. Now I change the content of my-config.txt (on the host machine) to something like this:
Well done !!!
Next I update the existing Docker service (created earlier) using docker service update to apply the new config, like this:
# first, create another version of the config
docker config create my-config-2 /home/my_user/my-config.txt
docker service update \
--config-add source=my-config-2,target=my-config.txt \
--config-rm my-config \
my-service
As I said, this appears to execute successfully. But when I open the my-config.txt file in the Docker container, its content is unchanged:
docker exec [container_id] cat my-config.txt
It still shows Well done whereas the expected content is Well done !!!. Shouldn't it have been updated? Am I doing something wrong here? Or could you suggest a way to diagnose this, or a different approach from what I've done?
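A couple of checks that may help narrow this down, using the service and config names from the question:
docker service inspect my-service --format '{{json .Spec.TaskTemplate.ContainerSpec.Configs}}'
# shows which configs the service is actually wired to after the update
docker config inspect --pretty my-config-2
# shows the content stored in the new config object
It may also be worth double-checking the exact path that target=my-config.txt resolves to inside the container, since that is where the updated content would appear.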

Automatically Configure Config inside Docker Container

While setting up and configuring some Docker containers, I asked myself how I could automatically edit config files inside the container after the containerized service has finished installing (since the config files are only created during installation).
I have tried doing this with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, so the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found so far is to manually run a script that edits the config after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your application pick up its settings from environment variables. That would be consistent with the 12-factor app methodology (see https://12factor.net).
As I understand your case, you need your container to generate its config from some template at startup.
I see a number of options for doing this:
Use a script that generates the config from a template plus command-line arguments or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case your entrypoint renders the template and then starts the application. To change the config, you will have to restart the service (container).
Run a service that stores the configuration and renders your config at run time. Personally, I like consul-template; we actively use it in our environment and have had no problems so far. In this case the config is more dynamic and can be changed "on the fly". Your container will have two processes: the application and the consul-template daemon. Obviously, you will also need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create or patch the config (a rough sketch of that follows below). :)
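For the last option, a rough sketch of what an entrypoint wrapper could look like (the config path, the patched key, and the OVERWRITE_URL variable are assumptions based on the Nextcloud example, not part of any official image):
#!/bin/sh
# hypothetical wrapper: start the original entrypoint in the background,
# wait until the installer has created config.php, then patch it
/original-entrypoint.sh "$@" &
CONFIG=/var/www/html/config/config.php
until [ -f "$CONFIG" ]; do sleep 2; done
sed -i "s|'overwrite.cli.url' => .*|'overwrite.cli.url' => '${OVERWRITE_URL}',|" "$CONFIG"
wait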

Run Ignite docker with custom config file

I've successfully run the Ignite Docker image with the parameter CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-cache.xml.
But I want to enable persistence and create a custom config file, which I want to pass instead of that CONFIG_URI.
Is there a way to pass a config file from the host with the docker run command?
In your docker run command, you can use the -v parameter (or the equivalent in the Dockerfile) to map a local directory into the container.
Then you'd move your configuration file in there and set your CONFIG_URI to point to that, something like CONFIG_URI=file:///opt/etc/ignite.xml.
Of course you'll need to create a volume of some kind for the persistent files; you don't want to be storing them inside the container.
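Something along these lines, for example (the host paths are assumptions, and the second mount is just a placeholder for wherever your XML config points its persistence/work directory):
docker run -d \
  -v /host/ignite/config:/opt/etc \
  -v /host/ignite/work:/ignite/work \
  -e CONFIG_URI=file:///opt/etc/ignite.xml \
  apacheignite/ignite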
As antkr notes, if you're using Kubernetes, you can use a config map and StatefulSets, but you'd still need to set CONFIG_URI in the same way.
Since you are going to use persistence, configure a persistent volume according to the following documentation:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
Mount it to your pod and read the configuration file from the volume using the CONFIG_URI parameter.
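A minimal sketch of the config side (all names are placeholders):
kubectl create configmap ignite-config --from-file=ignite-config.xml
# mount that ConfigMap into the pod as a volume in your StatefulSet spec,
# then point Ignite at it, e.g. CONFIG_URI=file:///ignite/config/ignite-config.xml
# the persistence directory itself should live on a PersistentVolumeClaim as described in the linked docs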

docker-compose caches run results

I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, if that affects things in some way. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using mounts your current directory as a volume. Basically, the file conf.opvn is copied to the Docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and of the folder in the Docker container where the file is mounted. Try changing the file's permissions to 777 before beginning the process and check again.
You can find a discussion about this in the official Docker forum.
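If you want to confirm how the file actually reaches the container, inspecting the mounts of the most recently created container may help (this assumes the image really does bind-mount your project directory):
docker inspect --format '{{json .Mounts}}' $(docker ps -lq)
# look for a bind mount of your project directory and note which host path it points to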