Container-based Nginx configuration - docker

Seeking help from developers familiar with Wodby container management. The main objective is to change the MIME types that get gzipped. I'm confused by the documentation for customizing my Nginx container. The documentation:
https://wodby.com/docs/1.0/stacks/drupal/containers/
suggests I copy "/etc/nginx/conf.d/vhost.conf", modify it, deploy it to the repo, and use an environment variable to include it. My problem is that even if I could find this file (it is not mounted on the server when created via Wodby), it does not appear that I can actually change the MIME types or the default_type there, as they are already defined in the nginx.conf file.
I have also attempted to modify the Wodby stack to mount the /etc/ directory so that I could manually edit the nginx.conf file if I had to, but that only freezes the deployment.
Any help would be tremendously appreciated.

Two options:
1. Clone the repo https://github.com/wodby/nginx/, change the template file /templates/nginx.conf.tmpl as much as you need, and build your own image. See the Makefile (/Makefile) for the commands they use to build the image themselves. Then use this image as the image for your nginx container in docker-compose.
2. Run a container with the default settings, shell into it with docker-compose exec nginx sh, and copy the nginx config out of the container (use cat /etc/nginx/nginx.conf and save it somewhere). Create a new file locally and mount it via docker-compose.yml for the nginx container, like:
volumes:
  - ./nginx-custom.conf:/etc/nginx/nginx.conf
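For option 2, a fuller sketch of the compose service could look like the following (the service name and image are assumptions based on a typical Wodby stack, and the gzip_types list is only an illustration of the directive you would adjust in your copied config):

services:
  nginx:
    image: wodby/nginx
    volumes:
      # replace the default config with your local, customized copy
      - ./nginx-custom.conf:/etc/nginx/nginx.conf

Inside nginx-custom.conf you would keep the original content and only change the relevant directives, e.g.:

gzip on;
gzip_types text/plain text/css application/json application/javascript image/svg+xml;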

Related

Common.py at Kiwi. How to mount to docker

I followed this Kiwi TCMS step, but what I really need is to understand how to mount common.py (the main configuration file) into the working Kiwi instance.
I don't see where common.py lives in the Kiwi container, so I don't know where to mount it. Or do I have to recreate the images every time to get the new settings?
EDIT:
I've followed the Kiwi TCMS configuration settings guide and changed some settings in tcms/settings/common.py.
How do I apply those settings to the working Kiwi environment?
The config file approach
The common.py file seems to be located at tcms/settings/common.py, as per your second link:
All sensible settings are defined in tcms/settings/common.py. You will have to update some of them for your particular production environment.
If you really want to map only this file, then from the root of your project:
docker run -v $(pwd)/tcms/settings/common.py:/absolute/container/path/to/tcms/settings/common.py [other-options-here] image-name
Running the docker command with the above volume mapping will replace the file inside the container (/absolute/container/path/to/tcms/settings/common.py) with the one on the host (tcms/settings/common.py), so the application will run with the settings defined on the host.
If you don't know the full path to tcms/settings/common.py inside the docker container, then you need to add the Dockerfile to your question so that we can help further.
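If you run Kiwi TCMS through docker-compose rather than plain docker run, the same mapping would go under the service's volumes. A minimal sketch; the service name, image and container path are assumptions, so use the real path from the project's Dockerfile:

services:
  web:
    image: kiwitcms/kiwi
    volumes:
      # same idea as the docker run example above
      - ./tcms/settings/common.py:/absolute/container/path/to/tcms/settings/common.py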
The ENV file approach
If a .env file does not already exist in the root of your project, create one and add to it all the environment variables used in common.py:
.env example:
KIWI_DB_NAME=my_db_name
KIWI_DB_USER=my_db_user
KIWI_DB_PASSWORD=my_db_password
KIWI_DB_HOST=my_db_host
KIWI_DB_PORT=my_db_port
Add to the .env file as many of the environment variables found in the Python code as you want to customize.
Start the docker container from the directory where the .env file is, with the flag --env-file .env, something like:
docker run --env-file .env [other-options-here] image-name
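If you use docker-compose instead, the env_file key does the same thing as --env-file (a minimal sketch, assuming a service named web):

services:
  web:
    image: image-name
    env_file:
      - .env   # loads KIWI_DB_NAME, KIWI_DB_USER, etc. into the container environment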

Make a directory from one container available to another one while keeping files from the original one

Let's say you have an image with a Rails application that contains assets, and you want to serve them from another container running Nginx.
From what I gather, mounting a volume makes the contents of a directory disappear. So, if you mount one volume into two containers, like,
volumes:
  assets:
services:
  app:
    volumes:
      - assets:/app/public/assets
  nginx:
    volumes:
      - assets:/assets
they both will see an empty folder. You can very well fill it up by hand. But if you were to deploy a newer version of the Rails app image, those two won't see the changes.
Am I missing something? Is there a way to handle files without proxying them to Rails app or copying them from container to container?
UPD: The first container with a non-empty directory that gets the volume mounted determines its initial content.
You can add the following lines to your Rails image, to run before starting Rails (in its CMD or ENTRYPOINT):
rm -r /assets/*
cp -r /app/public/assets/* /assets
And mount the volume into /assets for both services.
This way, every time your container restarts (e.g. on docker stack deploy when it has changed), the volume is refilled with fresh assets that are visible to the nginx container.
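A minimal sketch of that idea as a hypothetical docker-entrypoint.sh, assuming the assets are baked into the image under /app/public/assets and the shared volume is mounted at /assets (the script name and the Rails command are illustrative):

#!/bin/sh
set -e
# refresh the shared volume with the assets baked into this image
rm -rf /assets/*
cp -r /app/public/assets/* /assets/
# hand over to the real command (the Rails server)
exec "$@"

wired up in the Dockerfile with something like:

COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]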

How to change the prometheus.yml file in the container

How can I change /prometheus/prometheus.yml on the container itself?
I want it to track:
1) my app server - a Node application in a docker container
2) my Postgres DB
3) my Apache and nginx web servers
I do know that one has to change the prometheus.yml file and add targets.
Generic mechanisms to change docker images are:
1. Mount your configuration file at the desired path (see the example after this list).
2. Create a new image by copying the config file in a new Dockerfile. Not recommended if you have to use different configs for different environments/apps.
3. Change the file on the running container, if the application (Prometheus in this case) supports it. I know that some apps like Kibana do this. Good for debugging, not recommended for production environments.
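For the mount option, the official prom/prometheus image reads its configuration from /etc/prometheus/prometheus.yml, so mounting your edited file there is usually enough (a sketch; adjust the host path to your setup):

docker run -d -p 9090:9090 \
  -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus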
It's hard to be precise with an answer given the lack of details, but in general you place your modified prometheus.yml file within the Docker build context and modify your Dockerfile to add the instruction:
COPY prometheus.yml /path/to/prometheus.yml
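Whichever mechanism you choose, the targets themselves go under scrape_configs in prometheus.yml. A sketch for the three components mentioned in the question; the job names, hostnames and ports are assumptions, and Postgres/Apache/nginx are normally scraped through exporters (e.g. postgres_exporter, an nginx exporter) rather than directly:

scrape_configs:
  - job_name: 'node-app'
    static_configs:
      - targets: ['appserver:3000']          # your Node app's /metrics endpoint
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres-exporter:9187']  # postgres_exporter default port
  - job_name: 'nginx'
    static_configs:
      - targets: ['nginx-exporter:9113']     # nginx prometheus exporter default port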

Wanted persistent data with Docker volumes but have empty directories instead

I'm writing a Dockerfile for a Java application, but I'm struggling with volumes: the mounted volumes are empty.
I've read the Dockerfile reference guide and the best practices for writing Dockerfiles, but, for a start, my example is quite complicated.
What I want to do is to be able to have the following items on the host (in a mounted volume):
configuration folder,
log folder,
data folder,
properties files
Let me summarize:
When the application is installed (extracted from the tar.gz with the RUN command), it writes a bunch of files and directories (including log and conf).
When the application is started (with CMD or ENTRYPOINT), it creates a data folder if it doesn't exist and puts data files in it.
I'm only interested in:
/rootapplicationfolder/conf_folder
/rootapplicationfolder/log_folder
/rootapplicationfolder/data_folder
/rootapplicationfolder/properties_files
I'm not interested in /rootapplicationfolder/binary_files
There is something that I don't see. I've read and applied the information found in the two following links, without success.
Questions:
Should I 'mkdir' only the top-level dir on the host to be mapped to /rootapplicationfolder? What about the files?
Is the order of 'VOLUME' in my Dockerfile important?
Does it need to be placed before or after the deflating (RUN tar zxvf compressed_application)?
https://groups.google.com/forum/#!topic/docker-user/T84nlzw_vpI
Docker on Linux - Empty mounted volumes
Try using docker-compose; use the volumes property to set which path you want to mount between your machine and your container.
Version 2 example:
web:
  image: my-web-app
  build: .
  command: bash -c "npm start"
  ports:
    - "8888:8888"
  volumes:
    - .:/home/app/code               # mounts your current path to /home/app/code
    - /home/app/code/node_modules/   # keeps this subdirectory out of the host mount
  environment:
    NODE_ENV: development
You can look at this repository too.
https://github.com/williamcabrera4/docker-flask/blob/master/docker-compose.yml
Well, I've managed to get what I want.
First, I don't have any VOLUME directive in my Dockerfile.
All the shared directories are created with the -v option of the docker run command.
After that, I had issues extracting the archive whenever the extraction would overwrite an existing directory mounted with -v, because that's simply not possible.
So, I deflate the archive somewhere where the -v mounted volumes don't exist, and AFTER this step I mv the contents of deflated/somedirectory to the corresponding -v mounted directory.
I still had issues with Docker on CentOS: the mv would copy the files to the destination but would be unable to delete them at the source after the move. I got tired and simply used a Debian-based distribution instead.
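A sketch of that workaround as an entrypoint, assuming the archive was deflated at build time to a path that no -v mount covers (e.g. /opt/extracted, an illustrative name) and that the directories listed above are the ones mounted with -v; the moves happen at run time, after the mounts exist:

#!/bin/sh
set -e
# move the extracted content into the -v mounted directories
# (cp -r followed by rm -rf is an alternative if mv cannot remove the source, as on CentOS above)
mv /opt/extracted/conf_folder/* /rootapplicationfolder/conf_folder/
mv /opt/extracted/log_folder/*  /rootapplicationfolder/log_folder/
# ...and so on for the other directories/files you want on the host
exec "$@"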

Overwrite files with `docker run`

Maybe I'm missing this when reading the docs, but is there a way to overwrite files on the container's file system when issuing a docker run command?
Something akin to the Dockerfile COPY command? The key desire here is to be able to take a particular Docker image, and spin several of the same image up, but with different configuration files. (I'd prefer to do this with environment variables, but the application that I'm Dockerizing is not partial to that.)
You have a few options. Using something like docker-compose, you could automatically build a unique image for each container using your base image as a template. For example, if you had a docker-compose.yml that looked like:
container0:
  build: container0
container1:
  build: container1
And then inside container0/Dockerfile you had:
FROM larsks/thttpd
COPY index.html /index.html
And inside container0/index.html you had whatever content you wanted, then running docker-compose build would generate unique images for each entry (and running docker-compose up would start everything up).
I've put together an example of the above here.
Using just the Docker command line, you can use host volume mounts, which allow you to mount files into a container as well as directories. Using my thttpd as an example again, you could use the following -v argument to override /index.html in the container with the content of your choice:
docker run -v $(pwd)/index.html:/index.html larsks/thttpd
And you could accomplish the same thing with docker-compose via the volumes entry:
container0:
  image: larsks/thttpd
  volumes:
    - ./container0/index.html:/index.html
container1:
  image: larsks/thttpd
  volumes:
    - ./container1/index.html:/index.html
I would suggest that using the build mechanism makes more sense if you are trying to override many files, while using volumes is fine for one or two files.
A key difference between the two mechanisms is that when building images, each container gets its own copy of the files, while with volume mounts the container and the host share the same file, so changes made on either side are visible to the other.

Resources