ddev mount additional folders for shared composer packages - docker

Having a monorepo with multiple ddev projects and a shared composer packages folder, I would like to mount additional folders into the web containers.
I am trying to develop a set of TYPO3 extensions and test them with v9 and v10 simultaneously. For this I created two ddev projects inside one git repository, plus one packages folder containing my extensions. A path-relative composer run inside ddev will not see my packages folder, though, as it is outside the mount root. Symlinking doesn't work across Docker boundaries. So I would like to use Docker facilities to mount an additional directory into the ddev web container.
Is there some good way to do so?

You can add additional mounts in ddev by creating a new docker-compose YAML configuration file in .ddev, for example .ddev/docker-compose.mounts.yaml, with the following content:
version: '3.6'
services:
  web:
    volumes:
      - "$HOME/mydirectory:/home/mydirectory"
This will mount the local mydirectory directory in my home directory to /home/mydirectory inside ddev's docker container.

When applying the above answer with Mutagen enabled, you will get an error about a scan crossing a filesystem boundary. To avoid this, edit .ddev/mutagen/mutagen.yaml: remove the leading #ddev-generated line and add an additional path entry such as - "/home/mydirectory".
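A minimal sketch of the resulting .ddev/mutagen/mutagen.yaml (the surrounding keys are assumptions based on ddev's generated file; keep everything else ddev generated and only drop the #ddev-generated marker and extend the paths list):

```yaml
# .ddev/mutagen/mutagen.yaml -- #ddev-generated marker removed so ddev
# stops overwriting this file
sync:
  defaults:
    ignore:
      paths:
        # ...existing generated entries stay here...
        - "/home/mydirectory"  # exclude the extra mount from the Mutagen scan
```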

Related

DEVILBOX : Using symlinks for data/www/{symbolic_link}

Is there any way to add a custom folder to the data folder in Devilbox using a symbolic link?
When I add the symbolic link, the Auto Virtual Host ignores the folder.
Thank you.
OK, I made it work. For future reference: create all the root folders by hand; inside the root folders, symbolic links are accepted.
data/www/proj-slug/{htdocs} with {htdocs} -> ~/git/proj-slug
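As shell commands, that layout looks roughly like this (run from the devilbox root; "proj-slug" and ~/git/proj-slug are the example names from above):

```shell
# The vhost root must be a real directory, created by hand
mkdir -p data/www/proj-slug

# Inside it, a symlink to the real code location is accepted
ln -s ~/git/proj-slug data/www/proj-slug/htdocs
```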
I had the exact same issue, but I solved it.
TL;DR; Solution:
Create a new file in the root of devilbox project called docker-compose.override.yml
Copy and paste this into the file
# IMPORTANT: The version must match the version of docker-compose.yml
---
version: '2.3'
services:
  php:
    volumes:
      - $HOME:$HOME
  httpd:
    volumes:
      - $HOME:$HOME
Now you are able to create symlinks such as
data/www/proj-slug/htdocs -> ~/your-custom-path
Explanation
Following the reply of @Pirex360 alone is not enough to make symlinks work. Indeed, when you create the symlink, the vhost page says "There is no htdocs folder".
The problem is that everything inside Devilbox is "Dockerized", so the Docker containers cannot access files and folders that are not mounted inside them.
Searching a bit online, I found an issue on the project repository that talks about this. Many users suggested modifying the docker-compose.yml file in order to mount the folders they need.
Docker, however, already provides a way to override a Compose file: the correct way is to create a docker-compose.override.yml file.
The configuration pasted above mounts the host user's home folder inside the containers. As a matter of fact, you will find your home folder mounted under /home in the containers.
You can check the effect by entering the Docker environment via the shell script:
$ cd devilbox
$ ./shell.sh # or .bat
$ ls /home
devilbox/ yourusername/
Then each symlink pointing to /home/yourusername/custompath is working as expected!
Laravel public folder
If you are only trying to create a symlink that maps the public folder to htdocs, you can use the guide provided by Devilbox, Setup Laravel.
@marcogmonteiro talked about this in a comment. However, this method only works if you are symlinking files or folders that are not outside of the Docker-visible tree.
Important disclaimer
This way of mounting your home folder into Docker breaks the isolation between the container and your files and folders. The Docker container CAN read/write/execute your files.

Docker: copy file and directory with docker-compose

I created one Docker Image for a custom application which needs some license files (some files and one directory) to run, so I'm using the COPY command in the Dockerfile to copy license file to the image:
# Base image: Application installed on Debian 10 but unlicensed
FROM me/application-unlicensed
# Copy license files
COPY *.license /opt/application/
COPY application-license-dir /var/lib/application-license-dir
I use this Dockerfile to build a new image with the license for a single container. As I have 5 different licenses I created 5 different images each with one specific license file and directory.
The licenses are also fixed to a MAC address, so when I run one of the five containers I specify its own MAC address with the --mac-address parameter:
docker run --rm --mac-address AB:CD:EF:12:34:56 me/application-license1
This works, but I wish I had a better and smarter way to manage it:
Since docker-compose can specify the container MAC address, could I just use the unlicensed base image and copy the license files and the license directory when I build the 5 containers with docker-compose?
Edit: let me better explain the structure of license files
The application is deployed into the /opt/application directory inside the Docker image.
License files (*.license) are in /opt/application at the same level as the application itself, so they cannot be kept in a Docker volume unless I create some symlinks (but I have to check whether the application will work this way).
The directory application-license-dir needs to be at /var/lib/application-license-dir, so it can be mounted as a Docker volume (I have to check whether the application will work, but I think so).
Both the *.license files and the files in application-license-dir are binary, so I cannot script or create them at runtime.
So:
1. Can docker-compose create a local directory on the Docker host before bind-mounting it into the container?
2. Can docker-compose copy my license files and my license directory from the (locally cloned) Git repository to the local directory created in step 1?
3. Can docker-compose create some symlinks in the container's /opt/application directory for the *.license files stored in the volume?
For things that are different every time you run a container or when you run a container on a different system, you generally don't want to specify these in a Dockerfile. This includes the license files you show above; things like user IDs also match this pattern; depending on how fixed your configuration files are they can also count. (For things that are the same every time you run the container, you do want these in your image; especially this is the application source code.)
You can use a Docker bind mount to inject files into a container at run time. There is Compose syntax for bind mounts using the volumes: directive.
This would give you a Compose file roughly like:
version: '3'
services:
  app1:
    image: me/application-unlicensed
    volumes:
      - './app1.license:/opt/application/app.license'
      - './application-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:56'
Bind mounts like this are a good match for pushing configuration files into containers. They can provide an empty host directory into which log files can be written, but aren't otherwise a mechanism for copying data out of an image. They're also useful as a place to store data that needs to outlive a container, if your application can't store all of its state in an external database.
According to this commit, docker-compose has mac_address support.
Mounting the license files with -v could be an option.
You can set mac_address for the different containers as mac_address: AB:CD:EF:12:34:12. For a documentation reference, see this.
For creating multiple instances from the same image, you will have to copy-paste the app block 5 times in your docker-compose file, and in each one you can set a different mac_address.
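A sketch of that repetition, showing two of the five licensed instances (file paths and MAC addresses are illustrative):

```yaml
version: '3'
services:
  app1:
    image: me/application-unlicensed
    volumes:
      - './licenses/app1.license:/opt/application/app.license'
      - './licenses/app1-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:56'
  app2:
    image: me/application-unlicensed
    volumes:
      - './licenses/app2.license:/opt/application/app.license'
      - './licenses/app2-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:57'
```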

Recompiling VueJS app before docker-compose up

I want to deploy a Vue.js app inside a Docker nginx container, but before that container runs, the Vue.js source has to be compiled via npm run build. I want the compilation to run in a container and then exit, leaving only the compiled result for the nginx container.
Every time docker-compose up is run, the Vue.js app has to be recompiled, as there is a .env file on the host OS that has to be volume-mounted, and the variables in it could have been updated.
The ideal way, I think, would be some way of creating stages for Docker Compose, like in GitLab CI, so there would be a build stage and, when that finishes, the nginx container would start. But when I looked this up I couldn't find a way to do it.
What would be the best way to compile my Vue.js app every time docker-compose up is run?
If you're already building your Vue.js app into a container (with a Dockerfile), you can make use of the build directive in your docker-compose.yml file. That way, you can use docker-compose build to build containers manually, or docker-compose up --build to build containers before they launch.
For example, this Compose file defines a service using a container build file, instead of a prebuilt image:
version: '3'
services:
  vueapp:
    build: ./my_app  # There should be a Dockerfile in this directory
That means I can both build containers and run services separately:
docker-compose build
docker-compose up
Or, I can use the build-before-run option:
# Build containers, and recreate if necessary (build cache will be used)
docker-compose up --build
If your .env file changes (and containers don't pick up changes on restart), you might consider defining the values in the container build file. Otherwise, consider putting the .env file into a directory and mounting the directory, not the file: some editors write changes via a swap file, which changes the file's inode and breaks a single-file mount. If you mount a directory and change files within it, the changes are reflected in the container, because the parent directory's inode didn't change.
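A sketch of the directory-mount variant (the ./env host directory and /app/env container path are assumptions):

```yaml
services:
  vueapp:
    build: ./my_app
    volumes:
      # Mount the directory that contains .env, not the file itself;
      # editors that rewrite via a swap file change the file's inode,
      # which breaks a single-file bind mount but not a directory mount
      - './env:/app/env'
```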
I ended up having an nginx container that reads the files from a volume mount and a container that builds the app and places the files in the same volume mount. While the app is compiling, nginx reads the old version and when the compilation is finished the files get replaced with the new ones.
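That setup can be sketched roughly as follows (service names, the /app/dist output path, and the nginx image tag are assumptions; the builder's Dockerfile is expected to run npm run build and then exit):

```yaml
version: '3'
services:
  builder:
    build: ./my_app        # compiles the app into /app/dist, then exits
    volumes:
      - built:/app/dist
  nginx:
    image: nginx:alpine
    ports:
      - '80:80'
    volumes:
      # nginx serves whatever is currently in the shared volume;
      # old files remain available until the builder replaces them
      - built:/usr/share/nginx/html:ro
volumes:
  built:
```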

Wanted persistent data with Docker volumes but have empty directories instead

I'm writing a Dockerfile for a Java application, but I'm struggling with volumes: the mounted volumes are empty.
I've read the Dockerfile reference guide and the best practices for writing Dockerfiles, but, for a start, my example is quite complicated.
What I want to do is to be able to have the following items on the host (in a mounted volume):
configuration folder,
log folder,
data folder,
properties files
Let me summarize :
When the application is installed (extracted from the tar.gz with the RUN command), it writes a bunch of files and directories (including log and conf).
When the application is started (with CMD or ENTRYPOINT), it creates a data folder if it doesn't exist and puts data files in it.
I'm only interested in:
/rootapplicationfolder/conf_folder
/rootapplicationfolder/log_folder
/rootapplicationfolder/data_folder
/rootapplicationfolder/properties_files
I'm not interested in /rootapplicationfolder/binary_files
There is something that I don't see. I've read and applied the information in the two following links without success.
Questions:
Should I 'mkdir' only the top-level dir on the host to be mapped to /rootapplicationfolder? What about the files?
Is the order of 'VOLUME' in my Dockerfile important?
Does it need to be placed before or after the extraction (RUN tar zxvf compressed_application)?
https://groups.google.com/forum/#!topic/docker-user/T84nlzw_vpI
Docker on Linux - Empty mounted volumes
Try using Docker Compose; use the volumes property to set what path you want to mount between your machine and your container.
Version 2 example:
web:
  image: my-web-app
  build: .
  command: bash -c "npm start"
  ports:
    - "8888:8888"
  volumes:
    - .:/home/app/code             # mounts your current path at /home/app/code
    - /home/app/code/node_modules/ # anonymous volume that masks the subdirectory
  environment:
    NODE_ENV: development
You can look at this repository too.
https://github.com/williamcabrera4/docker-flask/blob/master/docker-compose.yml
Well, I've managed to get what I want.
First, I don't have any VOLUME directive in my Dockerfile.
All the shared directories are created with the -v option of the docker run command.
After that, I had issues extracting the archive: the extraction cannot overwrite an existing directory mounted with -v, because that's simply not possible.
So I extract the archive somewhere the -v mounted volumes don't exist, and AFTER this step I mv the contents of deflated/somedirectory into the -v mounted directory.
I still had issues with Docker on CentOS: the mv would copy the files to the destination but was unable to delete them at the source after the move. I got tired of it and simply used a Debian distribution instead.
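The extract-then-move workaround can be sketched like this (all paths are throwaway names under a temp dir; in a real container the "mounted" directory would come from -v, and the archive would ship with the image):

```shell
#!/bin/sh
set -e

WORK=$(mktemp -d)
STAGING="$WORK/staging"     # somewhere no volume is mounted
MOUNTED="$WORK/mounted"     # stands in for the -v mounted directory
mkdir -p "$STAGING" "$MOUNTED"

# Build a stand-in application archive for the demo
mkdir -p "$WORK/src/conf_folder"
echo "key=value" > "$WORK/src/conf_folder/app.properties"
tar czf "$WORK/app.tar.gz" -C "$WORK/src" .

# 1. Extract outside the mounted path (extracting over a mount fails)
tar xzf "$WORK/app.tar.gz" -C "$STAGING"

# 2. Move the contents into the mounted directory afterwards
mv "$STAGING"/* "$MOUNTED"/
```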

Docker-compose specify config file

I'm new to Docker and now I want to use docker-compose. I would like to provide the docker-compose.yml script with a config file containing host/port, maybe credentials, and other Cassandra configuration.
DockerHub link.
How to specify it in the compose script:
cassandra:
  image: bitnami/cassandra:latest
You can do it using Docker Compose environment variables (https://docs.docker.com/compose/environment-variables/#substituting-environment-variables-in-compose-files). You can also specify a separate environment file with environment variables.
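A sketch of both options (the variable names are illustrative; check the bitnami/cassandra documentation for the exact variables the image supports):

```yaml
cassandra:
  image: bitnami/cassandra:latest
  environment:
    # Inline variables read by the image's startup scripts
    - CASSANDRA_PASSWORD=secret
  # Or keep them in a separate file next to docker-compose.yml
  env_file:
    - ./cassandra.env
```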
Apparently, including a file outside of the scope of the image build, such as a config file, has been a major point of discussion (and perhaps some tension) between the community and the folks at Docker. A lot of the developers in that linked issue recommended adding config files in a folder as a volume (unfortunately, you cannot add an individual file).
It would look something like:
volumes:
  - ./config_folder:/config_folder
Where config_folder would be at the root of your project, at the same level as the Dockerfile (note that the container-side path must be absolute).
If Docker Compose environment variables cannot solve your problem, you should create a new image from the original image, using COPY to override the file.
Dockerfile:
FROM original_image:xxx
COPY local_conffile /in_image_dir/conf_dir/
