When I create a container group with 2 desired instances, using a command containing a volume specification as follows:
> ... -v log_vol:/opt/ibm/logs --env
> LOG_LOCATIONS=/opt/ibm/logs/messages.log,/opt/ibm/logs/debug.log,/opt/ibm/logs/trace.log
> -e TRACE_LEVEL=*~info -e MAX_LOG_FILES=5 -e MAX_LOG_FILE_SIZE=20 ...
In this case each individual running container instance of the group will have the same directory, /opt/ibm/logs/, in which to store logs.
When the application within an individual container instance generates logs, the log data is lost because the directory is mounted to a shared volume called log_vol; the logs get replaced on every new entry.
Can someone suggest how to handle this?
Is there any way to attach a volume specification after a container instance has been created?
In this case, it's best to think of the volume as something similar to a shared network drive, with the separate containers running on different hosts. If the processes assume they're the only ones writing to the file, caching and overwriting on each write, this will be the result.
Perhaps instead have the containers/programs write to something like /opt/ibm/logs/messages.$HOSTNAME.log, so that the assumption that they own their own logfile holds? Or similarly, have each container create /opt/ibm/logs/$HOSTNAME/ for itself on boot, and then write messages.log, debug.log, and trace.log under there?
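For example, a minimal entrypoint sketch along those lines, assuming the application picks up the LOG_LOCATIONS variable at startup (as the -e flags above suggest); the script itself and its layout are illustrative, not part of the original setup:

#!/bin/sh
# Give each instance its own directory on the shared volume, keyed by hostname.
LOG_DIR="/opt/ibm/logs/$HOSTNAME"
mkdir -p "$LOG_DIR"
# Point the app at per-instance log files instead of the shared ones.
export LOG_LOCATIONS="$LOG_DIR/messages.log,$LOG_DIR/debug.log,$LOG_DIR/trace.log"
# Hand off to the container's original command.
exec "$@"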
Related
I saw this post with different solutions for a standard docker installation:
How to change the default location for "docker create volume" command?
At first glance I'm struggling to repeat those steps to change the default mount point for a rootless installation.
Should it be the same? What would be the procedure?
I just got it working. I had some issues because the service was still running while I was trying to change configurations. Key takeaways:
The config file is indeed stored in ~/.config/docker/. One must create a daemon.json file there in order to change preferences. We would like to change the data-root option (and possibly storage-driver, in case the target drive lacks the capabilities the default driver needs); see the daemon.json sketch after this list.
To start and stop the rootless service, one runs systemctl --user [start | stop] docker.
a. Running the systemwide service starts a parallel and separate instance of docker, which is not rootless.
b. When stopping, make sure to stop docker.socket first.
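For reference, a minimal ~/.config/docker/daemon.json sketch; the data-root path here is just a placeholder for wherever the data should live:

{
    "data-root": "/mnt/bigdrive/docker-data"
}

And the stop/start sequence, stopping docker.socket before the service as noted in 2b:

systemctl --user stop docker.socket
systemctl --user stop docker
# edit ~/.config/docker/daemon.json, then:
systemctl --user start docker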
Sources are (see the Usage section for rootless) and (config file information).
We ended up with an indirect solution. We identified the directory where the volumes are stored by default and created a symbolic link pointing to the place where we actually want to store the data. In our case that was enough. Something like this:
sudo ln -s /data /home/ubuntu/.local/share/docker/volumes
I'm trying to find a generic best practice for how to:
Take an arbitrary (parent) Dockerfile, e.g. one of the official Docker images that run their containerized service as root,
Derive a custom (child) Dockerfile from it (via FROM ...),
Adjust the child so that it runs the same service as the parent, but as a non-root user.
I've been searching and trying for days now but haven't been able to come up with a satisfying solution.
I'd like to come up with an approach similar to the following, simply adjusting the user the original service runs as:
FROM mariadb:10.3
RUN chgrp -R 0 /var/lib/mysql && \
    chmod -R g=u /var/lib/mysql
USER 1234
However, the issue I'm running into again and again is that whenever the parent Dockerfile declares some path as a VOLUME (in the example above, VOLUME /var/lib/mysql), it effectively becomes impossible for the child Dockerfile to adjust file permissions for that specific path. The chgrp & chmod have no effect in that case, so the resulting docker container won't be able to start successfully, due to file access permission issues.
I understand that the VOLUME directive works this way by design, and also why, but to me it seems to completely rule out a simple solution to the given problem: taking a Dockerfile and adjusting it in a simple, clean and minimal way to run as non-root instead of root.
The background is: I'm trying to run arbitrary Docker images on an OpenShift cluster. OpenShift by default prevents running containers as root, which I'd like to keep, as it seems quite sane and a step in the right direction, security-wise.
This implies that a solution like gosu, which expects the container to be started as root in order to drop privileges at runtime, isn't good enough here. I'd like an approach that doesn't require the container to be started as root at all, but only as the specified USER, or even with a random UID.
The unsatisfying approaches I've found so far are:
Copy the parent Dockerfile and adjust it as necessary (effectively duplicating code)
sed/awk through all the service's config files at build time to replace the original VOLUME path with an alternate path, so the chgrp and chmod can take effect (leaving the original VOLUME path orphaned).
I really don't like these approaches, as they require really digging into the logic and infrastructure of the parent Dockerfile and how the service itself operates.
So there must be better ways to do this, right? What is it that I'm missing? Help is greatly appreciated.
Permissions on volume mount points don't matter at all; the mount covers up whatever underlying permissions were there to start with. Additionally, you can set this kind of thing at the Kubernetes level rather than worrying about the Dockerfile at all. This is usually done through a PodSecurityPolicy, but you can also set it in the securityContext on the pod itself.
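For illustration, a minimal pod sketch of that second approach; the names and IDs are placeholders, and runAsUser simply mirrors the USER from the Dockerfile in the question:

apiVersion: v1
kind: Pod
metadata:
  name: mariadb-nonroot
spec:
  securityContext:
    runAsUser: 1234   # run the container process as this non-root UID
    fsGroup: 1234     # group ownership applied to mounted volumes
  containers:
  - name: mariadb
    image: mariadb:10.3
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}      # stand-in for a real persistent volume

Because fsGroup changes the group ownership of the mounted volume, the permission fix the question attempts at build time happens at mount time instead.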
I've successfully run the Ignite docker image with the parameter CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-cache.xml.
But I want to enable persistence, so I'd like to create a custom config file and pass that instead of the CONFIG_URI above.
Is there a way to pass a config file from the host with the docker run command?
In your docker run command, you can use the -v parameter (or the equivalent in a Dockerfile) to map a local directory to one inside the container.
Then you'd move your configuration file there and set CONFIG_URI to point to it, something like CONFIG_URI=file:///opt/etc/ignite.xml.
Of course, you'll also need a volume of some kind for the persistent files; you don't want to store them inside the container.
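Putting both together, a rough sketch, assuming the apacheignite/ignite image; the host paths are placeholders, and the container-side mount points must match whatever your XML config expects:

docker run \
    -v /host/ignite/config:/opt/etc \
    -v /host/ignite/storage:/persistence \
    -e CONFIG_URI=file:///opt/etc/ignite.xml \
    apacheignite/ignite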
As antkr notes, if you're using Kubernetes, you can use a ConfigMap and StatefulSets, but you'd still need to set CONFIG_URI in the same way.
Since you are going to use persistence, configure a persistent volume according to the following documentation:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
Mount it into your pod and read the configuration file from the volume using the CONFIG_URI parameter.
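As a condensed sketch, the relevant fragment of such a pod spec; the ConfigMap name, claim name, and mount paths are all assumptions:

containers:
- name: ignite
  image: apacheignite/ignite
  env:
  - name: CONFIG_URI
    value: file:///config/ignite.xml   # read the config from the mounted volume
  volumeMounts:
  - name: config
    mountPath: /config
  - name: persistence
    mountPath: /persistence
volumes:
- name: config
  configMap:
    name: ignite-config                # holds the custom ignite.xml
- name: persistence
  persistentVolumeClaim:
    claimName: ignite-persistence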
I have a web application inside a docker image. The web application is a bit complex, so every time I create a new component inside my app, I have to mount another directory. The problem is that I will end up with a command having too many mounts:
docker run -v ... -v ... -v ... ... myimage
Is there a better solution for this?
The main idea of dockerization is that you have immutable containers which you can run anywhere with the same result (stateless). If your container has state, maybe your application's architecture is at fault. Maybe you should split your application in two: the first one stays stateless, and the other manages the first one's storage. Alternatively, you can create all your new directories inside a single volume:
-v ./app_state:/app_state
with the following app_state directory structure:
app_state
|__ subvolume_1
|__ subvolume_2
|__ ...
|__ subvolume_n
If the problem is that the command becomes too long to type in a terminal, you can use docker compose or a custom script. Then you'll be able to mount as many volumes as you want without rewriting the whole thing every time you launch a container.
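For instance, a minimal docker-compose.yml sketch; the service name, image, and paths are placeholders:

services:
  web:
    image: myimage
    volumes:
      - ./component1:/opt/component1
      - ./component2:/opt/component2
      - ./component3:/opt/component3

Then docker compose up starts the container with every mount in place, with nothing to retype.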
Ok, so I suppose your web application stores, somewhere in a database, a list of projects and the path where each is stored in the filesystem. If you can modify the source of the web application, maybe you can add a procedure that creates a file mapping each project to its path. Then create a script that starts your container and mounts each project listed in that file (by parsing it with awk). If you can't modify the web app, I'm sure you can at least read the project list from your database and do the parsing directly in your container's startup script.
So your web app creates a file like this:
Project1 /opt/project1
Project2 /opt/project2
and your container's startup script looks like this:
#!/bin/bash
# Build one -v flag per project listed in projects.txt, then start the container.
VOLUMES=$(awk '{print "-v " $2 ":/home/" $1}' projects.txt)
docker run $VOLUMES myimage
I am trying to use dokku-persistent-storage so the uploads for my Rails app stay on the server, but I don't quite understand how to build the path, since I am new to Dokku and Docker.
(I am running this on an Ubuntu droplet on Digital Ocean)
I'm not sure if it should be something like this:
[SERVER IP ADDRESS]/home/dokku/myapp/public_folder
or
/home/dokku/myapp/public_folder
or if I'm way off and it should be something completely different.
This is what the GitHub page says about it:
In your application's folder (/home/dokku/app_name) create a file called PERSISTENT_STORAGE.
Inside this file list one volume-map/volume per line to mount. For example:
/host/path:/container/path
/another/container/path
The above example will result in the following arguments being passed to docker during deploy and docker run:
-v /host/path:/container/path -v /another/container/path
More information on docker volumes can be found here: http://docs.docker.io/en/latest/use/working_with_volumes/
I am not into Ruby or dokku, but if I understood correctly, you want your docker container to have persistent storage on the host machine.
The PERSISTENT_STORAGE file, according to the documentation you've quoted, contains mappings from host filesystem directories to your container's filesystem directories (translated to -v arguments on the CLI).
Therefore, you should map the desired directory on the host to your uploads directory in the container.
For example, if your app's uploads are saved to this dir (inside the docker container):
/home/dokku/myapp/public_folder
and you'd like them to be kept in your host at:
/home/some/dir
then, as I understand it, the content of the PERSISTENT_STORAGE file should be:
/home/some/dir:/home/dokku/myapp/public_folder
I hope I got you right.
Use Dokku's storage:mount option.
You'll need to SSH into your dokku host:
ssh dokku@host
Run the following command to link the storage directory for that app to the app/public/uploads folder, for example:
dokku storage:mount <app> /var/lib/dokku/data/storage:/app/public/uploads
The Dokku docs cover this well at http://dokku.viewdocs.io/dokku/advanced-usage/persistent-storage/