My project has a docker-compose.yml at the root that builds a Dockerfile inside a subdirectory. Inside the Dockerfile I execute RUN python3 Preprocessing.py; the script transforms the raw data into preprocessed data and saves it to a new file with df.to_csv('./data/preprocessed_dataset.csv'). I've been trying to link a volume between the host and the container so I can see the preprocessed file, but I haven't been able to: the output file never appears on my computer.
I tried this in docker-compose:
volumes:
  - ./data:/data
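One thing worth noting: RUN executes at build time, before any volume exists, so a file written by RUN python3 Preprocessing.py gets baked into the image, and a volume mounted over the data directory at run time will hide it. A sketch of a layout that should work, assuming the Dockerfile lives in a subdirectory called preprocessing and sets WORKDIR /app (both of these are assumptions, not confirmed above), with the script run at container start instead:
services:
  preprocessing:
    build: ./preprocessing   # assumed subdirectory holding the Dockerfile
    # run the script at container start so it writes into the mounted volume
    command: python3 Preprocessing.py
    volumes:
      # host ./data <-> container /app/data, assuming WORKDIR /app
      - ./data:/app/data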
I use docker-compose for a simple Keycloak container, and I've been trying to install a new theme for Keycloak.
However, I've been unable to copy even a single file into the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these commands works, and neither produces any errors or warnings in the logs.
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick recap on Docker:
docker build: creates an image from a Dockerfile.
docker run: creates a container from an image.
(You can build the image yourself or use an existing image from Docker Hub.)
Based on what you said, you have 2 options.
Create a new docker image based on the existing one and add the theme.
Something like:
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image
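For example (the tag name is just an illustration):
# build from the directory containing the Dockerfile
docker build -t my-keycloak .
# then run the customized image instead of the stock one
docker run my-keycloak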
Mount the theme into the correct directory
using a docker-compose volume:
version: '3'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
With COPY, source files have to live inside the build context (the directory you pass to docker build, usually the one containing your Dockerfile) or a subdirectory of it, and they have to be present at build time. Absolute host paths won't work.
/tmp as a destination is also a bit tricky, because the container's startup process might clean out /tmp, which means you might never see the file in a running container.
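You can check with docker exec whether the file survived startup, for example:
# list /tmp inside the running container
docker exec docker_keycloak_1 ls -l /tmp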
I have a local GitLab Docker image running and have added a bunch of projects. This project/repository data seems to end up inside the 3 volumes that were created by the image.
I want to create a single .tar of the Gitlab instance which includes the complete image + all data found in the volumes. It's okay that the .tar becomes huge.
I tried to accomplish this by using docker commit and docker save but I have been unable to save the volumes along with the image.
How can I create such a single .tar export?
If I was going to set this up, I'd have a single Docker Compose file that contained all of the relevant pieces in a single directory tree.
version: '3'
services:
  db:
    image: 'postgres:11'
    volumes:
      - './postgres:/var/lib/postgresql/data'
  gitlab:
    image: gitlab-community-edition
    # Details below here made up
    ports:
      - '8080:80'
    environment:
      PGHOST: db
    volumes:
      - './gitlab:/data'
The important thing here is that every piece of persisted data lives in the same directory tree on the host. The actual contents of the container filesystems aren't important (everything that needs to persist is in these directories), and the images aren't important either (they can be re-pulled from Docker Hub).
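With that layout, a full backup is just an archive of the directory tree. A sketch (stopping the stack first so the databases aren't mid-write):
docker-compose stop
# everything that needs to persist lives in ./postgres and ./gitlab
tar czf gitlab-backup.tar.gz postgres/ gitlab/
docker-compose start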
You may use docker cp to achieve this.
docker cp my_container:/path/to/gitlab /destination/folder
and then tar the contents of the destination folder.
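For example:
# archive what docker cp pulled out of the container
tar cf gitlab-data.tar -C /destination/folder .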
You can use docker export to create a tar of a container's filesystem, and docker import to load it back in. (Note that docker export works on a container, not an image; docker save is what archives an image.) Essentially it sounds like you're looking for an installer. You can write a bash script which copies whatever files you need and exports the images.
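A sketch of that flow (the container and image names are illustrative):
# export the container's filesystem to a tar
# (data in mounted volumes is not included)
docker export gitlab_container > gitlab-fs.tar
# later, load it back in as a new image
docker import gitlab-fs.tar my-gitlab:restored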
So I just need a simple thing: copying a script whose relative path (from the working directory) is influxdb/init.iql (an InfluxDB initialization script) to the path /docker-entrypoint-initdb.d/, which is a way to initialize an InfluxDB database according to the InfluxDB Docker image docs:
Initialization Files
If the Docker image finds any files with the extensions .sh or .iql inside of the /docker-entrypoint-initdb.d folder, it will execute them
Right now, my docker-compose.yml is:
version: "3.3"
services:
influxdb:
image: influxdb:latest
container_name: influxdb
ports:
- "8087:8086"
volumes:
- influxdb-data:/var/lib/influxdb
volumes:
influxdb-data:
I need the script to be in place before the image gets built because, as I understand it, if the image finds the script at the specified path, it will execute it when the image is built.
How can I do this? I thought about adding a Makefile, but I'd rather accomplish this with Docker alone, if possible, rather than add an unnecessary extra piece to the project.
Thanks in advance.
The docker-compose file tells Docker how to RUN an image, not how to BUILD it. These are two completely separate concepts. Also, I'm not sure what you are trying to do: if you need to initialize your container with data, just mount a script (or an .iql file) into /docker-entrypoint-initdb.d via the volumes section of the docker-compose file, e.g.:
volumes:
  - influxdb-data:/var/lib/influxdb
  - ./project_import.sh:/docker-entrypoint-initdb.d/import.sh:ro
(Note the leading ./ — without it, Compose treats the name as a named volume rather than a host path.)
The script(s) (or .iql file(s)) will be executed when the container starts, not when the image is built. If you don't believe me, check out the image's entrypoint script to see how this process works.
Just remember that those scripts will get executed each time the container starts.
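Put together with the compose file from the question, it could look like this (using the influxdb/init.iql path mentioned above; the name it gets inside the container is arbitrary):
version: "3.3"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8087:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb
      # bind-mount the init script; it runs each time the container starts
      - ./influxdb/init.iql:/docker-entrypoint-initdb.d/init.iql:ro
volumes:
  influxdb-data: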
I'm writing a Dockerfile for a Java application, but I'm struggling with volumes: the mounted volumes are empty.
I've read the Dockerfile reference guide and the best practices for writing Dockerfiles but, for a start, my example is quite complicated.
What I want to do is to be able to have the following items on the host (in a mounted volume):
configuration folder,
log folder,
data folder,
properties files
Let me summarize:
When the application is installed (extracted from the tar.gz with the RUN command), it writes a bunch of files and directories (including log and conf).
When the application is started (with CMD or ENTRYPOINT), it creates a data folder if it doesn't exist and put data files in it.
I'm only interested in:
/rootapplicationfolder/conf_folder
/rootapplicationfolder/log_folder
/rootapplicationfolder/data_folder
/rootapplicationfolder/properties_files
I'm not interested in /rootapplicationfolder/binary_files
There is something that I don't see. I've read and applied the information found in the two links below, without success.
Questions:
Should I mkdir only the top-level directory on the host to be mapped to /rootapplicationfolder? What about the files?
Is the order of VOLUME in my Dockerfile important?
Does it need to be placed before or after the extraction (RUN tar zxvf compressed_application)?
https://groups.google.com/forum/#!topic/docker-user/T84nlzw_vpI
Docker on Linux - Empty mounted volumes
Try using docker-compose, and use the volumes property to set which paths you want to mount between your machine and your container.
A version 2 example:
version: '2'
services:
  web:
    image: my-web-app
    build: .
    command: bash -c "npm start"
    ports:
      - "8888:8888"
    volumes:
      # mounts the current directory at /home/app/code
      - .:/home/app/code
      # anonymous volume so the bind mount above doesn't hide
      # the container's node_modules directory
      - /home/app/code/node_modules/
    environment:
      NODE_ENV: development
You can look at this repository too.
https://github.com/williamcabrera4/docker-flask/blob/master/docker-compose.yml
Well, I've managed to get what I want.
First, I don't have any VOLUME directive in my Dockerfile; all the shared directories are created with the -v option of the docker run command.
After that, I had issues extracting the archive whenever the extraction would overwrite an existing directory mounted with -v, because that's simply not possible.
So I extract the archive somewhere the -v mounted volumes don't exist, and AFTER this step I mv the contents of deflated/somedirectory into the corresponding -v mounted directory.
I still had issues with Docker on CentOS: the mv would copy the files to the destination but was then unable to delete them at the source. I got tired of it and simply switched to a Debian-based image instead.
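A sketch of the pattern (the staging path and names are placeholders from my setup):
# In the Dockerfile, extract at build time into a staging directory
# that no -v mount will shadow:
#   RUN mkdir -p /opt/staging && tar zxvf compressed_application.tar.gz -C /opt/staging
# Then in the entrypoint script, at run time, after the -v mounts exist:
mv /opt/staging/somedirectory/* /rootapplicationfolder/
exec "$@"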
Maybe I'm missing this when reading the docs, but is there a way to overwrite files on the container's filesystem when issuing a docker run command?
Something akin to the Dockerfile COPY command? The key desire here is to be able to take a particular Docker image and spin up several containers from the same image, but with different configuration files. (I'd prefer to do this with environment variables, but the application that I'm Dockerizing isn't amenable to that.)
You have a few options. Using something like docker-compose, you could automatically build a unique image for each container, using your base image as a template. For example, if you had a docker-compose.yml that looked like:
container0:
  build: container0
container1:
  build: container1
And then inside container0/Dockerfile you had:
FROM larsks/thttpd
COPY index.html /index.html
And inside container0/index.html you had whatever content you
wanted, then running docker-compose build would generate unique
images for each entry (and running docker-compose up would start
everything up).
I've put together an example of the above
here.
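That is, something like:
# build a separate image for each service defined in docker-compose.yml
docker-compose build
# start all the containers
docker-compose up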
Using just the Docker command line, you can use host volume mounts,
which allow you to mount files into a container as well as
directories. Using my thttpd as an example again, you could use the
following -v argument to override /index.html in the container
with the content of your choice:
docker run -v "$(pwd)/index.html:/index.html" larsks/thttpd
(Note that the host path must be absolute; a bare index.html would be interpreted as a named volume rather than a file on the host.)
And you could accomplish the same thing with docker-compose via the
volumes entry:
container0:
  image: larsks/thttpd
  volumes:
    - ./container0/index.html:/index.html
container1:
  image: larsks/thttpd
  volumes:
    - ./container1/index.html:/index.html
I would suggest that using the build mechanism makes more sense if you are trying to override many files, while using volumes is fine for one or two files.
A key difference between the two mechanisms is that when building images, each container gets its own copy of the files, while with volume mounts, changes made to the file on either side (host or container) are immediately visible to the other.
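For example, with the volume variant above, an edit on the host shows up in the running container immediately:
# edit the file on the host...
echo '<h1>updated</h1>' > container0/index.html
# ...and the running container serves the new content,
# with no rebuild or restart needed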