Cannot find docker-compose.yml - docker

I wanted to edit the docker compose file of a project. After using docker inspect I found this:
com.docker.compose.project.config_files: /data/compose/15/docker-compose.yml
But the directory /data does not exist on the host, so I cannot find the compose file to edit. Where can I find the docker-compose.yml file?

I solved the problem by restoring the docker-compose.yml file from a previous Portainer backup and starting the stack via SSH instead of in Portainer.
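For anyone hitting the same wall: the /data/compose/... path reported by docker inspect lives inside the Portainer container (its /data volume), not on the host filesystem. Assuming the Portainer container is named portainer (an assumption; adjust to yours), a sketch of copying the file out would be:

```shell
# Assumes the Portainer container is named "portainer" -- adjust to yours.
# Copy the stack's compose file out of the Portainer container:
docker cp portainer:/data/compose/15/docker-compose.yml ./docker-compose.yml
```

After editing, the file can be copied back the same way, or the stack redeployed from Portainer's UI.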

Docker Compose: omitting directory /config/certs.yml, ERROR: no configuration file found

I am trying to use Docker Desktop to run this tutorial to install Wazuh in a Docker container (single-node deployment). I create a new container in Docker Desktop and then try to run the docker compose command in VS Code, but I get the error mentioned in the title. I have tried to change the project directory, but it always points to the root directory via /config/certs.yml. My command is:
docker-compose --project-directory /com.docker.devenvironments.code/single-node --file /com.docker.devenvironments.code/single-node/generate-indexer-certs.yml run --rm generator
My directory structure is as follows (shown as a screenshot in the original question): certs.yml is in the config folder, but upon running this command the error always points to the root folder, which is not my project folder. The only folder I want to run this from is the com.docker.devenvironments.code folder, or I'd like to somehow change where the command finds the certs.yml file. I have also tried cd-ing into different folders and running the command, but I get the same error.
Thank you very much in advance for your help!
Looking quickly at the documentation link provided in the question, you can try the following: move the docker-compose definition from wazuh-docker/single-node/docker-compose.yml up to the outer directory (the main definition wazuh-docker; in your case com.docker.devenvironments.code) into a separate <your_compose.yaml> with the same definition, but change the volume mounts as:
# Wazuh App Copyright (C) 2021 Wazuh Inc. (License GPLv2)
version: '3'
services:
  generator:
    image: wazuh/wazuh-certs-generator:0.0.1
    hostname: wazuh-certs-generator
    volumes:
      - ./single-node/config/wazuh_indexer_ssl_certs/:/certificates/
      - ./single-node/config/certs.yml:/config/certs.yml
...
Then, docker-compose -f <your_compose.yaml> up should do. Note: <your_compose.yaml> is the YAML with the volume-mount changes and is created at the root of the project. Also, looking at the image provided in the question, you might want to revisit the documentation and run the certificate-generation command docker-compose -f generate-indexer-certs.yml run --rm generator from the single-node folder.
You can change the directory of the certs.yml file in the following section of the generate-indexer-certs.yml file:
volumes:
  - ./config/certs.yml:/config/certs.yml
You can replace ./config/certs.yml with the path where the certs.yml file is located.
In your case:
volumes:
  - /com.docker.devenvironments.code/single-node/config/certs.yml:/config/certs.yml

How to access the file generated inside docker container by defining the volume in docker-compose.yml file

I am a rookie when it comes to Docker containers; can somebody help me here?
There is a project which generates output files when running inside a Docker container. I need to add a volume in the docker-compose.yml file so that the output files are accessible outside the container.
Could you explain in detail how I can do this?
Define volumes in the docker-compose.yml file, mapping source on the server to target in the container:
volumes:
  - "./relative/path/on/host:/absolute/path/in/container:rw"
Reference:
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
There should hopefully be a readme.md file in the project which lists the options for how to get the output files.
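Putting it together, a minimal sketch of such a compose file (the service name and both paths are placeholders, assuming the app writes its results to /app/output inside the container):

```yaml
# docker-compose.yml
version: '3'
services:
  app:
    build: .
    volumes:
      # the host folder ./output (next to this file) receives whatever
      # the app writes to /app/output inside the container
      - "./output:/app/output:rw"
```

After docker-compose up, the generated files appear in ./output on the host.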

DEVILBOX : Using symlinks for data/www/{symbolic_link}

Is there any way to add a custom folder to the data folder in Devilbox using a symbolic link?
When I put the symbolic link in place, the auto virtual host ignores the folder.
Thank you.
OK, I made it work. For future reference: create all the root folders by hand, but inside the root folders symbolic links are accepted.
data/www/proj-slug/{htdocs} with {htdocs} -> ~/git/proj-slug
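The layout described above can be sketched with a couple of shell commands (the paths here are examples; adjust them to your own checkout locations):

```shell
# Create the project root folder by hand (and, for this sketch,
# the target project folder too, so the commands are self-contained)
DEVILBOX="${DEVILBOX:-$HOME/devilbox}"
PROJECT="${PROJECT:-$HOME/git/proj-slug}"
mkdir -p "$DEVILBOX/data/www/proj-slug" "$PROJECT"
# Inside the root folder, the htdocs symlink is accepted
# (-sfn replaces the link if it already exists)
ln -sfn "$PROJECT" "$DEVILBOX/data/www/proj-slug/htdocs"
```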
I had the same exact issue, but I solved it.
TL;DR; Solution:
Create a new file in the root of devilbox project called docker-compose.override.yml
Copy and paste this into the file
# IMPORTANT: The version must match the version of docker-compose.yml
---
version: '2.3'
services:
  php:
    volumes:
      - $HOME:$HOME
  httpd:
    volumes:
      - $HOME:$HOME
Now you are able to create symlinks such as
data/www/proj-slug/htdocs -> ~/your-custom-path
Explanation
Following the reply of @Pirex360 is not enough to use symlinks. Indeed, when you create the symlink, the vhost page says "There is no htdocs folder".
The problem is that everything inside Devilbox is "Dockerized", so the Docker containers cannot access files and folders that are not mounted inside them.
Searching a bit online, I found an issue on the project repository that discusses this. Many users thought about modifying the docker-compose.yml file in order to mount the folders they need.
Docker, however, already provides a way to override a docker compose file: the correct way is to create a docker-compose.override.yml file.
The configuration pasted above mounts the host user's home folder inside the containers. As a matter of fact, you will find your home folder mounted under /home in the containers.
You can check the effect by entering the Docker environment via the shell script:
$ cd devilbox
$ ./shell.sh  # or shell.bat on Windows
$ ls /home
devilbox/  yourusername/
Then each symlink pointing to /home/yourusername/custompath is working as expected!
Laravel public folder
If you are only trying to create a symlink that map public folder to htdocs you can use the guide provided by Devilbox, Setup Laravel.
@marcogmonteiro talked about this in a comment. However, this method works only if you are symlinking files or folders that are not outside of the Docker-visible tree.
Important disclaimer
This way of mounting your home folder into Docker breaks the isolation between the container and your files and folders, so the Docker containers CAN read/write/execute your files.
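If mounting your entire home folder feels too permissive, the same docker-compose.override.yml mechanism works with a narrower path; a sketch mounting only a projects folder ($HOME/git here is an example path, not something Devilbox requires):

```yaml
# IMPORTANT: the version must still match docker-compose.yml
---
version: '2.3'
services:
  php:
    volumes:
      # expose only the projects folder, not the whole home directory
      - $HOME/git:$HOME/git
  httpd:
    volumes:
      - $HOME/git:$HOME/git
```

With this override, symlinks must point somewhere under $HOME/git to resolve inside the containers.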

Mount files in read-only volume (where source is in .dockerignore)

My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc...) I don't want these files built into the docker image, however they need to be visible to the docker container.
My solution is:
Add .credentials to my .dockerignore
Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the docker container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? e.g. I could just pass the .env file with docker run IMAGE_NAME --env-file .env
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, this is not the case.
Instead, the above code works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is in looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process itself creates an image where you do not specify the source of volumes. Only the Dockerfile and your build context is used as an input to the build. The rest of the compose file are all run time settings that apply to containers. Many projects do not even use the compose file for building the image, so all settings in the compose file for those projects are a way to define the default settings for containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
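For completeness, if you do want to start the image with plain docker run, the run-time settings from the compose file have to be repeated on the command line; a sketch (the image name is a placeholder):

```shell
# Mirror the compose file's volume entry by hand; docker run needs
# an absolute host path, hence the $(pwd)
docker run -v "$(pwd)/.credentials:/app/.credentials:ro" IMAGE_NAME
```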

Common.py at Kiwi. How to mount to docker

I followed this Kiwi TCMS step, but what is really unclear to me is how you mount common.py (the main configuration file) into the working Kiwi instance.
I don't see where common.py lives in the Kiwi container, so I don't know where to mount it. Or do I have to recreate the images every time to get the new settings?
EDIT:
I've tried Kiwi TCMS configuration settings guide and I changed some settings in tcms/settings/common.py
How do I apply that setting to the running Kiwi environment?
The config file approach
The common.py file seems to be located at tcms/settings/common.py, as per your second link:
All sensible settings are defined in tcms/settings/common.py. You will have to update some of them for your particular production environment.
If you really want to map only this file then, from the root of your project (note that docker run -v needs an absolute host path, hence the $(pwd)):
docker run -v "$(pwd)/tcms/settings/common.py:/absolute/container/path/to/tcms/settings/common.py" [other-options-here] image-name
Running the docker command with the above volume mapping will replace the file inside the docker container at /absolute/container/path/to/tcms/settings/common.py with the one on the host at tcms/settings/common.py, so the application will run with the settings defined on the host.
If you don't know the full path to tcms/settings/common.py inside the docker container, then you need to add the Dockerfile to your question so that we can help further.
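Alternatively, if you run Kiwi via docker-compose, the same mapping can go into a docker-compose.override.yml; a sketch (the service name web and the in-container path are assumptions here — check Kiwi's own docker-compose.yml for the real values):

```yaml
# docker-compose.override.yml (sketch; verify service name and path)
services:
  web:
    volumes:
      # mount the host's edited settings file over the one in the image
      - ./tcms/settings/common.py:/absolute/container/path/to/tcms/settings/common.py:ro
```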
The ENV file approach
If a .env file does not already exist in the root of your project, create one and add to it all the env variables used in common.py:
.env example:
KIWI_DB_NAME=my_db_name
KIWI_DB_USER=my_db_user
KIWI_DB_PASSWORD=my_db_password
KIWI_DB_HOST=my_db_host
KIWI_DB_PORT=my_db_port
Add as many environment variables to the .env file as the ones you find in the python code that you want to customize.
Start the docker container from the place where the .env file is with the flag --env-file .env, something like:
docker run --env-file .env [other-options-here] image-name
