I'm trying to copy an entire directory from my docker image to my local machine.
The image is a keycloak image, and I'd like to copy the themes folder so I can work on a custom theme.
I am running the following command -
docker cp 143v73628670f:keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
However I am getting the following response -
Error response from daemon: Could not find the file keycloak/themes in container 143v73628670f
When I connect to my container using -
docker exec -t -i 143v73628670f /bin/bash
I can navigate to the themes by using -
cd keycloak/themes/
I can see it is located there and the files are as expected in the terminal.
I'm running the instance locally on a Mac.
How do I copy that entire themes folder to my local machine? What am I doing wrong please?
EDIT
As a result of running 'pwd', you should run the docker cp command as follows:
docker cp 143v73628670f:/opt/jboss/keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
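For reference, a quick way to discover the absolute container path before running docker cp (a sketch using the container ID and relative path from this question; adjust if your shell starts elsewhere):

```shell
# Print the absolute path of the themes directory inside the container.
# 'sh -c' lets us cd and run pwd in a single non-interactive exec.
docker exec 143v73628670f sh -c 'cd keycloak/themes && pwd'
```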
You are missing the leading '/': the container path in docker cp should be absolute. Therefore your command should look like this:
docker cp 143v73628670f:/keycloak/themes/ ~/Development/Code/Git/keycloak-recognition-login-branding
Also, you could make use of Docker volumes, which allows you to pass a local directory into the container when you run the container
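As a sketch of the volume approach (the image name, the in-container themes path, and the 'mytheme' folder name are assumptions; adjust them to your setup), you could mount your local theme directory straight into the container so edits show up without copying:

```shell
# Bind-mount a local theme folder into Keycloak's themes directory.
# Changes made locally are then visible inside the running container.
docker run -p 8080:8080 \
  -v ~/Development/Code/Git/keycloak-recognition-login-branding/mytheme:/opt/jboss/keycloak/themes/mytheme \
  jboss/keycloak
```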
Related
I'm struggling with testing my app with Cypress in Docker. I use the dedicated Docker image with this command: docker run -it -v $PWD:/e2e -w /e2e cypress/included:8.7.0
I ALWAYS get this error when I launch it: `Could not find a Cypress configuration file, exiting.
We looked but did not find a default config file in this folder: /e2e`
Meaning that Cypress can't find the cypress.json, but it is precisely in the dedicated folder. Here is my directory/file tree:
pace
front
cypress
cypress.json
So this is a standard file tree for e2e testing, and despite all of my tricks (not using $PWD but the full directory path, reinstalling Docker, the Colima engine, etc.) nothing works, and if I run npm run cypress locally everything works just fine!
Needless to say that I am in the /pace/front directory when I'm trying these commands
Can you help me please ?
The -v $PWD:/e2e is a docker instruction to mount a volume (a bind mount). It mounts the current directory to /e2e inside the docker container at runtime.
The docs mention a structure where the cypress.json file is expected to end up directly under /e2e. To get it to be like this you have to either:
-v $PWD/pace/front:/e2e
run the command from inside the pace/front directory
Since the CMD and ENTRYPOINT of a Docker image run from the WORKDIR, you could also try running it from where you were but changing the workdir with:
-w /e2e/pace/front
I have not seen their Dockerfile, but my assumption is that this would work.
My personal choice would be to just run it from pace/front
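Putting the two options together, the corrected invocations might look like this (an untested sketch; paths are taken from the question, and both commands assume you start from the directory that contains pace):

```shell
# Option 1: mount only pace/front, so cypress.json lands directly under /e2e
docker run -it -v $PWD/pace/front:/e2e -w /e2e cypress/included:8.7.0

# Option 2: mount the whole tree, but point the workdir at the subfolder
docker run -it -v $PWD:/e2e -w /e2e/pace/front cypress/included:8.7.0
```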
I am creating a Docker container that will run a Minecraft server. (Yes, I know these already exist.) And of course I want the world to be saved when the container is turned off.
This is my dockerfile:
FROM anapsix/alpine-java
COPY ./ /home
CMD ["java","-jar","/home/main.jar"]
EXPOSE 25565
Then I build the image:
docker build -t minecraftdev .
Run the container:
docker run -dp 25565:25565 -v C:/Users/user/server:/home minecraftdev
And then the files in the image (server.properties, the server jar file and EULA.txt) are wiped.
Is there another way I don't know of to get the container to store data? And this is without placing the files in the server folder.
Thank you for your answers. I was able to fix it with -v C:/Users/user/server/world:/home/world, as the world files are stored in that folder; this avoids swapping out all the files in /home, which I didn't know -v did.
Minecraft creates its files next to server.jar, and I don't know how to make it store them somewhere else.
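One way to keep the image's files and the mutable server data separate is to bake the jar into the image but run the server from a dedicated data directory; since the Minecraft server writes its files into the working directory, everything it creates then lands in one mountable place. A sketch (the /opt and /data layout is an assumption, not something Minecraft requires):

```dockerfile
FROM anapsix/alpine-java
COPY ./main.jar /opt/main.jar
# Run the server from /data so every file it creates
# (world/, server.properties, EULA.txt, ...) ends up under the mount
WORKDIR /data
CMD ["java","-jar","/opt/main.jar"]
EXPOSE 25565
```

You would then run it with the whole data directory mounted, e.g. docker run -dp 25565:25565 -v C:/Users/user/server:/data minecraftdev, and nothing baked into the image gets shadowed by the mount.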
I am trying to debug some issues in my GitLab CI pipeline. I have a step B which is using some artifacts from step A.
Step A is very long (and is working in the CI), so I don't want to run it locally: I just download the resulting artifacts from GitLab. So I have an artifacts.zip, which I extracted to obtain an output and a logs directory. So far so good.
I want to run step B locally, using gitlab-runner. Note that I am using version 9.5 (https://docs.gitlab.com/runner/install/old.html).
I am using this command:
gitlab-runner exec docker step-b
As I explained, step-b needs the artifacts from step-a. This is what I tried:
gitlab-runner exec docker --docker-volumes ~/Downloads/output step-b
One of the script executed in step B is doing something like mv ../output /some/where/else. However, this script fails with the following error:
mv: cannot stat '../output': No such file or directory
Following this error, I have two questions:
Where is this script executed? It's called like that from the .gitlab-ci.yml:
./scripts/my_script.sh.
What is the . in this context?
How can I make sure that using the --docker-volumes ~/Downloads/output will mount the directory in the right place so my script can find it?
EDIT
As requested, here is a description of step A.
script:
- mkdir -p /usr/local/compil_result
- ./scripts/compil.sh
- mv /usr/local/compil_result ./output
artifacts:
paths:
- output
- logs
Since you haven't mentioned what Docker image you use, I assume it's a custom image you or your colleague have made. I think you need to check the Dockerfile of your image to find out what the working directory of that script is.
Or you could also try to get a shell inside the container and look at its directory structure first:
docker run --rm -it --entrypoint /bin/bash your-image-name
To mount a Docker volume, you need the host directory and the container directory separated by a colon, and you should use full (absolute) paths for both of them.
Something like this,
gitlab-runner exec docker --docker-volumes '/home/username/Downloads/output:/output' step-b
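To answer both questions at once, you could add a couple of debug lines to step-b in .gitlab-ci.yml and reference the mounted absolute path instead of the relative one (a sketch; /some/where/else is the placeholder from the question):

```yaml
step-b:
  script:
    # Show where '.' points inside the job container
    - pwd
    # Confirm the mounted artifacts are visible
    - ls /output
    # Use the absolute mount path rather than ../output
    - mv /output /some/where/else
```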
I'm brand new to Docker, and I'm trying to run a Dockerfile on my Windows 10 machine, but it's hanging initially and not doing anything.
My Dockerfile:
FROM busybox:latest
CMD ["date"]
My command from docker
$ docker build -f /projects/docker_test .
Other things of note:
Docker Toolbox installed on Windows 10 Home edition
Environmental variable:
HOME = G:\projects\
Dockerfile location:
G:\projects\docker_test\Dockerfile
File created initially with Notepad.
EDIT: I am able to load other docker containers just fine. Docker simply hangs when I try to access a local Dockerfile.
What worked for me was adding a .dockerignore file listing the folders that are not part of the built image (in my case /node_modules).
The -f option is used to specify the path to the Dockerfile.
Try with:
docker build -t docker_test -f /projects/docker_test/Dockerfile /projects/docker_test
or:
cd G:\projects\docker_test\
docker build -t docker_test .
The reason for this is that any other folders, nested folders, and files present in the same directory get sent to the Docker daemon as build context, which can make the build hang. The resolution is either to add a .dockerignore file, or to move the Dockerfile into its own folder, change into that folder from the command prompt, and then execute the docker build command.
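A minimal .dockerignore for that situation might look like this (the entries are examples; list whatever large folders live next to your Dockerfile):

```
# .dockerignore — keep the build context small
node_modules
.git
*.log
```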
I have been using Julia v0.4.5 for some time now along with IJulia. I am now trying to set up a Docker container in which I can run the code in one of my .jl files. To set up a working Julia inside a container I have copied the code in this Dockerfile: https://hub.docker.com/r/julialang/julia/~/dockerfile/
Using the above I get julia to work from my terminal with the command
docker run -i -t larajordan/juliatest:0.3
and then when the container opens I use the command
julia
to open Julia from the container's terminal. When using the Julia REPL I usually just execute the command below to run a .jl file. When I try this from the Julia REPL inside the container, however, it does not work and gives the error message below.
julia> include("/home/lara/SourceCode/researchhpc/wind_spacer/julia_learning/variables.jl")
ERROR: could not open file /home/lara/SourceCode/researchhpc/wind_spacer/julia_learning/variables.jl
in include at ./boot.jl:261
in include_from_node1 at ./loading.jl:320
I'm pretty sure that this is because the container is looking within itself for the .jl file, and obviously this file doesn't exist within the container. I have tried to find out how to copy my .jl file into the container but it doesn't seem to work. The method I have tried is the following, from outside the container:
docker cp filename.jl /var/lib/docker/aufs/mnt/<full docker container id>/root/filename.jl
I get the error
cp: cannot create regular file ‘/var/lib/docker/devicemapper/mnt/a2c36e7f6f08c345a668550974a575384b5a3d465f411d3589bd5a6ac0fad13d/rootfs/root’: No such file or directory
Another thing I think will cause a problem once I get the .jl file inside the container is that the .jl file uses the command 'using '. These packages are not added to Julia or available inside the container either, so I'll have to add them to the container's version of Julia. This can be done from the Dockerfile, if the one I'm using is anything to go by; it seems the IJulia package is added and built within the Dockerfile with the commands below.
RUN /opt/julia/bin/julia -e 'Pkg.add("IJulia")'
RUN /opt/julia/bin/julia -e 'Pkg.build("IJulia")'
Any help on getting packages to be added from within the Dockerfile and also getting .jl files to run from the Julia REPL inside a container or just to run from the terminal inside the container would be appreciated.
You have to mount the host dir into the container: https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume
Try:
docker run -it -v /home/lara/SourceCode/researchhpc/wind_spacer/julia_learning:/opt/julia_learning larajordan/juliatest:0.3
then run in your REPL
julia> include("/opt/julia_learning/variables.jl")
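If you want to skip the interactive REPL entirely, you can also pass the script straight to the julia binary as the container command (same mount as above; an untested sketch using the paths from the question):

```shell
# Run the script non-interactively: the trailing arguments replace
# the image's default command, so julia executes the mounted file.
docker run -it \
  -v /home/lara/SourceCode/researchhpc/wind_spacer/julia_learning:/opt/julia_learning \
  larajordan/juliatest:0.3 \
  julia /opt/julia_learning/variables.jl
```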