I know this is written sloppily, but I'm new to Docker and just trying to get the hang of it. I'm pulling an image FROM wordpress:4.9.2-php7.2-apache.
I'm then attempting to overwrite the deflate.conf file from inside my Dockerfile. The command I'm using is as follows:
ADD /deflate.conf ../../../etc/apache2/mods-available/
Using this command the image builds properly, but as soon as I run it the container immediately fails.
When I comment out the ADD line and build/run the image, the container runs fine. So I attempted the copy from the command line like so:
docker cp deflate.conf <name>:../../../etc/apache2/mods-available/deflate.conf
Using this command everything is fine, and I get the desired result.
I'm not sure why my Dockerfile won't work when the command line does.
Any help would be greatly appreciated.
Why not use COPY or ADD with the absolute path? Instead of ../../../etc/apache2/mods-available/deflate.conf, the format should be:
COPY /deflate.conf /etc/apache2/mods-available/deflate.conf
You can also use a Docker volume for this. Skip the COPY during docker build and instead include the -v option in your run command:
$ docker run -v /deflate.conf:/etc/apache2/mods-available/deflate.conf -d -p 80:80 test
This should also do the trick. Please note that modules in /etc/apache2/mods-available/ are not loaded by default when the Apache service starts; a module only takes effect once it is enabled, i.e. linked into /etc/apache2/mods-enabled/ (on Debian-based images, a2enmod does this for you).
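Putting both pieces together, a minimal Dockerfile sketch (assuming deflate.conf sits next to the Dockerfile and the Debian Apache layout used by the official wordpress image) could look like this:
FROM wordpress:4.9.2-php7.2-apache
# Overwrite the stock module configuration using an absolute destination path
COPY deflate.conf /etc/apache2/mods-available/deflate.conf
# Make sure the module is enabled (a no-op if it already is) so the config is loaded
RUN a2enmod deflate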
I think you can achieve the same result as the command line with:
COPY /deflate.conf ../../../etc/apache2/mods-available/deflate.conf
in your Dockerfile.
Related
I'm somewhat new to Docker. I would like to be able to use Docker to distribute a CLI program, but run the program normally once it has been installed. To be specific, after running docker build on the system, I need to be able to simply run my-program in the terminal, not docker run my-program. How can I do this?
I tried something with a Makefile which runs docker build -t my-program . and then writes a shell script to ~/.local/bin/ called my-program that runs docker run my-program, but this adds another container every time I run the script.
EDIT: I realize this is the expected behavior of docker run, but it does not work for my use case.
Any help is greatly appreciated!
If you want to keep your script, add the remove flag --rm to the docker run command. The remove flag removes the container automatically after the entry-point process has exited.
Additionally, I would personally prefer an alias for this. Simply add something like alias my-program="docker run --rm my-program" to your ~/.bashrc or ~/.zshrc file. This even has the advantage that all other parameters after the alias (my-program param1 param2) are automatically forwarded to the entry point of your image without any additional effort.
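If you would rather keep the wrapper-script approach from your Makefile, a minimal sketch (assuming the image is tagged my-program, as in your question) would be:
#!/bin/bash
# ~/.local/bin/my-program: thin wrapper around the image built with `docker build -t my-program .`
# --rm removes the container after it exits; "$@" forwards all CLI arguments
exec docker run --rm my-program "$@"
Add -it to the docker run call if your program reads from stdin or needs a TTY.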
I need to copy my customized Keycloak themes into the Keycloak container to use them, as mentioned here:
https://medium.com/@auscunningham/change-login-theme-in-keycloak-docker-image-55b5fa5ceec4
After identifying my container id with docker container ls and listing the files like this: docker exec 7e3a420017a8 ls ./keycloak/themes
It returns the list of themes correctly, but when I use either of these to copy my files from local into the container:
docker cp ./mycustomthem 7e3a420017a8:/keycloak/themes/
or
docker cp ./mycustomthem 7e3a420017a8:./keycloak/themes/
I get the following error:
Error: No such container:path: 7e3a420017a8:/keycloak
I cannot imagine where the error is, since I can list the files in that folder inside the container. Could you help me?
Thank you in advance.
Works on my computer.
docker cp mycustomthem e67f76e8740b:/opt/jboss/keycloak/themes/raincatcher-theme
You have used the wrong path in your command; use the full path /opt/jboss/keycloak/themes/raincatcher-theme.
This seems like a weird way to approach this problem. Why not just have a Dockerfile that uses the Keycloak container as the base image and then copies the theme into the container at build time? Then just run the image you build? This will also be a more stable pattern in the long term if you ever decide to add any plugins or customizations and it provides an easy upgrade path to new versions by just changing the base image in your Dockerfile.
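For example, a minimal sketch of that approach (assuming the jboss/keycloak base image and the theme directory from the question) could be:
FROM jboss/keycloak:latest
# Bake the custom theme into the image at build time
COPY ./mycustomthem /opt/jboss/keycloak/themes/mycustomthem
Build it with docker build -t keycloak-custom . and run that image in place of the stock one.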
Update, based on your edit to the question:
Try the following:
docker cp ./mycustomthem 7e3a420017a8:/opt/jboss/keycloak/themes/
The correct path in Keycloak is actually /opt/jboss/keycloak/themes/
I have a docker-container with a Python3 environment and various libraries installed.
I'm trying to develop a simple Python program against this environment.
So what I have is a volume with my source code outside the container which is ADDed and set as WORKDIR in the Dockerfile.
I'm then shelling into the container and trying to run the program on the command-line.
When I hit an error, I want to simply change the source in my editor which is outside the container, and run again.
However, when I do this, the executing code in the container doesn't seem to be taking any notice of the changes I made.
If I do
docker-compose up --build
and rebuild the container then it does.
Obviously this is very slow.
Surely it should be possible for the container to see changes to the code I'm working on without being rebuilt? If so, how do I make this happen?
Using ADD bakes files into a container image, so as you've noticed, updating files in a running application requires an entire container rebuild and restart. To get around this, you can mount a directory on your host machine over the path you've copied into your container using ADD.
To do this with Docker, you can use -v or --volume. Using Docker Compose, you can list the directory to be mounted under volumes:. For example, if you had the following in your Dockerfile:
# Copy app code into the container working directory
ADD /my/app/code /usr/app/src
You can then mount your live code over the baked-in files at container start time (note that directory paths must be absolute - you can use $PWD for this):
$ docker run -v /my/live/app/code:/usr/app/src python:latest
$ docker run -v "$PWD"/app/code:/usr/app/src python:latest
The docker-compose.yml equivalent is as follows:
my-service:
  image: python:latest
  volumes:
    - /my/live/app/code:/usr/app/src
    - ./relative/paths:/work/too
There's more about bind mounts in the documentation.
I'm using a docker image as my dev environment for a specific version of PHP. I am using PHP for a command line script so every time I make a change to the script I would like it to automatically make changes in the container.
I'm not sure if this is even possible; I assumed it was. I mostly use docker-compose and can add volumes easily to achieve this, but not in this instance with plain docker.
My Dockerfile:
FROM php:7.2-cli
COPY ./app /usr/src/app/
WORKDIR /usr/src/app
ENTRYPOINT [ "php" ]
CMD [ "./index.php" ]
I first run:
docker build -t app .
And then
docker run app
Everything works well. But if I change something in index.php I have to run the steps again. Is this expected behaviour or is there any way to have docker watch for changes?
docker run -v /home/user/location:/usr/src/app app
Use a volume so that the files within the container reflect local changes.
https://docs.docker.com/storage/volumes/#choose-the--v-or---mount-flag
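Since you mention you mostly use docker-compose, here is a sketch of the equivalent compose file (assuming a service named app and the paths from your Dockerfile):
version: "3"
services:
  app:
    build: .
    volumes:
      - ./app:/usr/src/app
docker-compose run --rm app then executes the PHP entry point against your live files, so edits to index.php are picked up without rebuilding.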
If you are editing a file using vim/Sublime outside Docker on your host, this is normal, because vim/Sublime does not simply "edit" that file in place, but creates a new file on save. See: https://github.com/moby/moby/issues/15793
Solution:
After some googling: Sublime Text has an atomic_save setting, so adding "atomic_save": false to the user preferences worked (after a restart).
If you are using docker-compose, rebuild and restart with:
$ docker-compose up --build
I am trying to pass a directory into the container, in a way that can eventually be automated. However, I don't see any alternative other than physically editing the Dockerfile and manually typing the specific directory to be added.
Note: I have tried mounted volumes; however, that doesn't solve my issue, as I eventually want to call the container on a directory that will have a script run on it inside the container, not simply copy the local directory into the container.
Method 1:
$ docker build --build-arg project_directory=/path/to/dir .
ARG project_directory
ADD $project_directory .
My unsuccessful solution assumes that I can use the argument's value as a basic string that the ADD command will interpret just as if I had manually entered the path.
not simply copying the local directory inside the container
That's exactly what you're doing now, by using ADD $project_directory. If you need to make changes from the container and have them reflected onto the host, use:
docker run -v $host_dir:$container_dir image:tag
The command above launches a new container, and it's quite possible for you to launch it with different directory names. You can do so in a loop, from a Jenkins pipeline, a shell script, or whatever suits your development environment.
#!/bin/bash
container_dir=/workspace
for directory in /src /realsrc /kickasssrc
do
    docker run -v "$directory":"$container_dir" image:tag
done