DEVILBOX : Using symlinks for data/www/{symbolic_link} - docker

Is there any way to add a custom folder using a symbolic link inside the data folder in Devilbox?
When I put the symbolic link in place, the Auto Virtual Host ignores the folder.
Thank you.

OK, I made it work. For future reference: create all the root folders by hand; inside the root folders, symbolic links are accepted.
data/www/proj-slug/{htdocs} with {htdocs} -> ~/git/proj-slug

I had the exact same issue, but I solved it.
TL;DR; Solution:
Create a new file in the root of devilbox project called docker-compose.override.yml
Copy and paste this into the file
# IMPORTANT: The version must match the version of docker-compose.yml
---
version: '2.3'
services:
  php:
    volumes:
      - $HOME:$HOME
  httpd:
    volumes:
      - $HOME:$HOME
Now you are able to create symlinks such as
data/www/proj-slug/htdocs -> ~/your-custom-path
Explanation
Following #Pirex360's reply alone is not enough to use symlinks: when you create the symlink, the vhost page says "There is no htdocs folder".
The problem is that everything inside Devilbox is "Dockerized", so the Docker container cannot access files and folders that are not mounted inside it.
Searching online, I found an issue on the project repository that talks about this. Many users considered modifying the docker-compose.yml file to mount the folders they need.
Docker, however, already provides a way to override a Docker Compose file: create a docker-compose.override.yml file.
The configuration pasted above mounts the host user's home folder inside the container. As a result, you can find your home folder mounted under /home in Docker.
You can check the effect by entering the Docker environment via the shell script:
$ cd devilbox
$ ./shell.sh # or .bat
$ ls /home
devilbox/ yourusername/
Then each symlink pointing to /home/yourusername/custompath is working as expected!
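Putting it together, the symlink setup itself looks like this (proj-slug and the target path are examples, run from the devilbox root):

```shell
# Create the project folder by hand (a real directory, not a symlink),
# then symlink only htdocs inside it to the real code location.
mkdir -p data/www/proj-slug
ln -sfn "$HOME/your-custom-path" data/www/proj-slug/htdocs
# Show where the link points
readlink data/www/proj-slug/htdocs
```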
Laravel public folder
If you are only trying to create a symlink that maps the public folder to htdocs, you can use the guide provided by Devilbox: Setup Laravel.
#marcogmonteiro talked about this in a comment. However, this method only works if you are symlinking files or folders that are not outside the Docker-visible tree.
Important disclaimer
This way of mounting your home folder into Docker breaks the isolation between the container and your files and folders. The Docker container CAN read/write/execute your files.
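If that concerns you, you can narrow the mount to just the folder holding your projects instead of the whole home directory. A sketch, assuming your projects live under $HOME/git (the path is an example; the version must still match your docker-compose.yml):

```yaml
---
version: '2.3'
services:
  php:
    volumes:
      # Expose only the projects folder, not the entire home directory
      - $HOME/git:$HOME/git
  httpd:
    volumes:
      - $HOME/git:$HOME/git
```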

Related

How to access the file generated inside docker container by defining the volume in docker-compose.yml file

I am a rookie when it comes to Docker containers; can somebody help me here?
There is a project which generates output files when running inside a Docker container. I need to add a volume in the docker-compose.yml file so that the output files are accessible outside the container.
Could you explain in detail how I can do this?
Define volumes in the docker-compose.yml file, mapping source on the server to target in the container:
volumes:
  - "./relative/path/on/host:/absolute/path/in/container:rw"
Reference:
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
There should hopefully be a readme.md file in the project which lists the options for how to get the output files.
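A minimal docker-compose.yml sketch of that mapping (the service name, image, and paths are placeholders for illustration):

```yaml
version: "3"
services:
  app:
    image: your-project-image
    volumes:
      # Whatever the app writes to /app/output inside the container
      # appears in ./output on the host (created if missing).
      - "./output:/app/output:rw"
```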

Docker: copy file and directory with docker-compose

I created a Docker image for a custom application which needs some license files (some files and one directory) to run, so I'm using the COPY command in the Dockerfile to copy the license files into the image:
# Base image: Application installed on Debian 10 but unlicensed
FROM me/application-unlicensed
# Copy license files
COPY *.license /opt/application/
COPY application-license-dir /var/lib/application-license-dir
I use this Dockerfile to build a new image with the license for a single container. As I have 5 different licenses, I created 5 different images, each with one specific license file and directory.
The licenses are also tied to a MAC address, so when I run one of the five containers I specify its own MAC address with the --mac-address parameter:
docker run --rm --mac-address AB:CD:EF:12:34:56 me/application-license1
This works, but I wish to have a better and smarter way to manage this:
Since docker-compose makes it possible to specify the container MAC address, could I just use the unlicensed base image and copy the license files and the license directory when I build the 5 containers with docker-compose?
Edit: let me better explain the structure of license files
The application is deployed into /opt/application directory into the Docker image.
License files (*.license) are in /opt/application at the same level as the application itself, so they cannot be saved into a Docker volume unless I create some symlinks (but I have to check whether the application will work this way).
The directory application-license-dir needs to be at /var/lib/application-license-dir, so it can be mounted as a Docker volume (I have to check whether the application will work, but I think so).
Both the *.license files and the files in application-license-dir are binary, so I cannot script or create them at runtime.
So:
1. Can docker-compose create a local directory on the Docker host server before binding and mounting it to a Docker volume?
2. Can docker-compose copy my license files and my license directory from the Git repository (locally cloned) to the local directory created in step 1?
3. Can docker-compose create some symlinks in the container's /opt/application directory for the *.license files stored in the volume?
For things that are different every time you run a container or when you run a container on a different system, you generally don't want to specify these in a Dockerfile. This includes the license files you show above; things like user IDs also match this pattern; depending on how fixed your configuration files are they can also count. (For things that are the same every time you run the container, you do want these in your image; especially this is the application source code.)
You can use a Docker bind mount to inject files into a container at run time. There is Compose syntax for bind mounts using the volumes: directive.
This would give you a Compose file roughly like:
version: '3'
services:
  app1:
    image: me/application-unlicensed
    volumes:
      - './app1.license:/opt/application/app.license'
      - './application-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:56'
Bind mounts like this are a good match for pushing configuration files into containers. They can provide an empty host directory into which log files can be written, but aren't otherwise a mechanism for copying data out of an image. They're also useful as a place to store data that needs to outlive a container, if your application can't store all of its state in an external database.
According to this commit docker-compose has mac_address support.
Mounting license files with -v could be an option.
You can set mac_address for the different containers as
mac_address: AB:CD:EF:12:34:12. For documentation reference, see this.
For creating multiple instances from the same image, you will have to copy-paste the app block 5 times in your docker-compose file, and in each one you can set a different mac_address.
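A sketch with two of the five instances (the image name, license paths, and MAC addresses are illustrative):

```yaml
version: '3'
services:
  app1:
    image: me/application-unlicensed
    mac_address: 'AB:CD:EF:12:34:56'
    volumes:
      - './app1.license:/opt/application/app.license'
      - './license-dir-1:/var/lib/application-license-dir'
  app2:
    image: me/application-unlicensed
    mac_address: 'AB:CD:EF:12:34:57'
    volumes:
      - './app2.license:/opt/application/app.license'
      - './license-dir-2:/var/lib/application-license-dir'
```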

ddev mount additional folders for shared composer packages

Having a monorepo with multiple ddev projects and a shared composer packages folder, I would like to mount additional folders into the web containers.
I am trying to develop a set of TYPO3 extensions and test them with v9 and v10 simultaneously. For this I created two ddev projects inside one git repository, plus one packages folder containing my extensions. A path-relative ddev composer will not see my packages folder though, as it is outside the mount root. Symlinking doesn't work across Docker boundaries. So I would like to use Docker facilities to mount an additional directory into the ddev web container.
Is there some good way to do so?
You can add additional mounts in ddev by creating a new docker-compose YAML configuration file in .ddev, for example .ddev/docker-compose.mounts.yaml, with the following content:
version: '3.6'
services:
  web:
    volumes:
      - "$HOME/mydirectory:/home/mydirectory"
This will mount the local mydirectory directory from my home directory to /home/mydirectory inside ddev's Docker container.
When applying the above answer with mutagen enabled, there will be an error about a scan having crossed a filesystem boundary. To avoid this, edit .ddev/mutagen/mutagen.yaml: remove the leading #ddev-generated line and add an additional path entry like - "/home/mydirectory".
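A sketch of the edited .ddev/mutagen/mutagen.yaml; the exact structure may vary between ddev versions, so compare with the generated file before editing (and note that removing #ddev-generated stops ddev from overwriting your changes):

```yaml
# .ddev/mutagen/mutagen.yaml (with the #ddev-generated line removed)
sync:
  defaults:
    ignore:
      paths:
        # Keep mutagen from scanning across the extra bind mount
        - "/home/mydirectory"
```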

Common.py at Kiwi. How to mount to docker

I followed this Kiwi TCMS step, but what I really need is to understand how you mount common.py (the main configuration file) into a working Kiwi instance.
I don't see where common.py lives in Kiwi, so I don't know where to mount it. Or do I have to recreate the images every time to get the new settings?
EDIT:
I've tried Kiwi TCMS configuration settings guide and I changed some settings in tcms/settings/common.py
How to implement that setting in the working Kiwi environment?
The config file approach
The common.py file seems to be located at tcms/settings/common.py, as per your second link:
All sensible settings are defined in tcms/settings/common.py. You will have to update some of them for your particular production environment.
If you really want to map only this file, then from the root of your project (note that docker run -v needs an absolute host path, hence $(pwd)):
docker run -v $(pwd)/tcms/settings/common.py:/absolute/container/path/to/tcms/settings/common.py [other-options-here] image-name
Running the docker command with the above volume mapping will replace the file inside the Docker container at /absolute/container/path/to/tcms/settings/common.py with the one on the host at tcms/settings/common.py, so the application will run with the settings defined on the host.
If you don't know the full path to tcms/settings/common.py inside the docker container, then you need to add the Dockerfile to your question so that we can help further.
The ENV file approach
If a .env file does not already exist in the root of your project, create one and add to it all the env variables used in common.py.
.env example:
KIWI_DB_NAME=my_db_name
KIWI_DB_USER=my_db_user
KIWI_DB_PASSWORD=my_db_password
KIWI_DB_HOST=my_db_host
KIWI_DB_PORT=my_db_port
Add as many environment variables to the .env file as the ones you find in the python code that you want to customize.
Start the Docker container from the directory containing the .env file, using the --env-file .env flag, something like:
docker run --env-file .env [other-options-here] image-name
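--env-file simply injects each KEY=VALUE line into the container's environment; you can simulate the effect in a local shell to sanity-check the file (the values are the placeholders from above):

```shell
# Load .env into the current shell the same way
# `docker run --env-file .env` does for the container.
set -a        # auto-export every variable assigned from here on
. ./.env      # source the KEY=VALUE lines
set +a
echo "$KIWI_DB_NAME"
```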

Wanted persistent data with Docker volumes but have empty directories instead

I'm writing a Dockerfile for a Java application, but I'm struggling with volumes: the mounted volumes are empty.
I've read the Dockerfile reference guide and the best practices for writing Dockerfiles but, for a start, my example is quite complicated.
What I want to do is to be able to have the following items on the host (in a mounted volume):
configuration folder,
log folder,
data folder,
properties files
Let me summarize:
When the application is installed (extracted from the tar.gz with the RUN command), it writes a bunch of files and directories (including log and conf).
When the application is started (with CMD or ENTRYPOINT), it creates a data folder if it doesn't exist and puts data files in it.
I'm only interested in:
/rootapplicationfolder/conf_folder
/rootapplicationfolder/log_folder
/rootapplicationfolder/data_folder
/rootapplicationfolder/properties_files
I'm not interested in /rootapplicationfolder/binary_files
There is something that I don't see. I've read and applied the information found in the two following links without success.
Questions:
Should I 'mkdir' only the top-level dir on the host to be mapped with /rootapplicationfolder? What about the files?
Is the order of 'VOLUME' in my Dockerfile important?
Does it need to be placed before or after the deflating (RUN tar zxvf compressed_application)?
https://groups.google.com/forum/#!topic/docker-user/T84nlzw_vpI
Docker on Linux - Empty mounted volumes
Try using Docker Compose; use the volumes property to set which path you want to mount between your machine and your container.
version 2 example:
web:
  image: my-web-app
  build: .
  command: bash -c "npm start"
  ports:
    - "8888:8888"
  volumes:
    - .:/home/app/code              # mounts your current path at /home/app/code
    - /home/app/code/node_modules/  # anonymous volume shadowing the sub directory
  environment:
    NODE_ENV: development
You can look at this repository too.
https://github.com/williamcabrera4/docker-flask/blob/master/docker-compose.yml
Well, I've managed to get what I want.
First, I don't have any VOLUME directive in my Dockerfile.
All the shared directories are created with the -v option of the docker run command.
After that, I had issues extracting the archive whenever the extraction would overwrite an existing directory mounted with -v, because that's simply not possible.
So I deflate the archive somewhere the -v mounted volumes don't exist and, AFTER this step, I mv the contents of deflated/somedirectory to the -v mounted directory.
I still had issues with Docker on CentOS: the mv would copy the files to the destination but was unable to delete them at the source after the move. I got tired and simply used a Debian distribution instead.
