Docker container copy files from local path into container - docker

I need to copy my customized Keycloak themes into the Keycloak container, to use them as described here:
https://medium.com/@auscunningham/change-login-theme-in-keycloak-docker-image-55b5fa5ceec4
After identifying my container ID with docker container ls, I list the theme files like this: docker exec 7e3a420017a8 ls ./keycloak/themes
It returns the list of themes correctly, but when I use either of the following to copy my files from the local path into the container:
docker cp ./mycustomthem 7e3a420017a8:/keycloak/themes/
or
docker cp ./mycustomthem 7e3a420017a8:./keycloak/themes/
I get the following error:
Error: No such container:path: 7e3a420017a8:/keycloak
I cannot see where the error is, since I can list the files in that folder inside the container. Could you help me?
Thank you in advance.

Works on my computer.
docker cp mycustomthem e67f76e8740b:/opt/jboss/keycloak/themes/raincatcher-theme
You have used the wrong path in your command; add the full path /opt/jboss/keycloak/themes/raincatcher-theme.

This seems like a weird way to approach this problem. Why not just have a Dockerfile that uses the Keycloak image as the base and copies the theme into the container at build time, then run the image you build? This is also a more stable pattern in the long term if you ever decide to add plugins or customizations, and it provides an easy upgrade path to new versions: just change the base image in your Dockerfile.
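A minimal sketch of that approach, assuming the jboss/keycloak base image and the ./mycustomthem directory and /opt/jboss/keycloak/themes path mentioned elsewhere in this thread:

```dockerfile
# Hypothetical Dockerfile: bake the custom theme into the image at build time
FROM jboss/keycloak:latest
COPY ./mycustomthem /opt/jboss/keycloak/themes/mycustomthem
```

Build and run with docker build -t my-keycloak . followed by docker run my-keycloak; upgrading Keycloak later is just a matter of bumping the tag in the FROM line.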

Update according to your new question update:
Try the following:
docker cp ./mycustomthem 7e3a420017a8:/opt/jboss/keycloak/themes/
The correct path in Keycloak is actually /opt/jboss/keycloak/themes/

Related

How to copy docker container directories to Google Compute Engine instance

I'm new to Google Cloud and Docker and I can't for the life of me figure out how to copy directories from the Docker container (pushed to the Container Registry) to the Google Compute Engine instance. I think I need to mount the volume but I don't really know how. In the docker container the main directory is /app which has my files. Basically I want to do this to see the docker container's files in Google Cloud.
I assumed that if I ran docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG] inside Cloud Shell, the files would show up somewhere, e.g. in /var/lib/docker, but when I cd to /var/lib/docker and run ls I get:
ls: cannot open directory '.': Permission denied
Just to add, I've tried following the "Connecting to Cloud Storage buckets" tutorial https://cloud.google.com/compute/docs/disks/gcs-buckets
But I realised that this is for single files. Is it possible to copy over the whole root directory of the Docker image using gsutil, or do I need to use something else instead, like persistent disks?
You need to have Docker installed in order to run your images and, of course, to be able to copy anything from inside an image to your host filesystem.
Use docker cp CONTAINER:SRC_PATH DEST_PATH to copy files.
Have a look at the official Docker documentation on how to use this command.
A similar topic was also discussed here on Stack Overflow and has a very good answer.

Copy file into Dockerfile from different directory

Is it possible for a Dockerfile to copy over some file from the host filesystem, and not from the context it is being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it with this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ directory?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
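For example, a small preparation script along these lines (file names and paths are illustrative, not from the question) stages the external file into the build context before building:

```shell
#!/bin/sh
# Illustrative prepare-and-build script: stage the external file into the
# build context so the Dockerfile's COPY instruction can reach it, build,
# then clean up the staged copy.
set -e
mkdir -p "$HOME/.super"
printf 'api_key: dummy\n' > "$HOME/.super/secrets.yaml"   # stand-in for the real secret file
cp "$HOME/.super/secrets.yaml" ./secrets.yaml             # stage into the build context
# docker build -t myimage .   # with docker available, COPY ./secrets.yaml now works
echo "staged: $(cat ./secrets.yaml)"
rm ./secrets.yaml                                         # remove the staged copy afterwards
```

Keeping this in a checked-in script makes the staging step reproducible for every build.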
No, it is not possible to go up the directory. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Here is part of the Docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you can see, the dot references the context path (here it means "this directory"). All files under the context path get sent to the Docker daemon, and you can reference only these files in your Dockerfile. Of course, you might think you are being clever and use / (the root path) as the context so you have access to all files on your machine. (I highly encourage you to try this and see what happens.) What you should see is that the Docker client appears to freeze. It is not really freezing: it is sending the entire / directory to the Docker daemon, which can take ages, or (more probably) you may run out of memory.
So now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file in that root directory will get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need for runtime, better to pass it in via environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
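As one such alternative, BuildKit's build secrets (Docker 18.09+) make a secret available during a single RUN step without ever committing it to an image layer. A sketch, assuming a file-based secret with the made-up id supersecret:

```dockerfile
# syntax=docker/dockerfile:1
FROM gradle:latest
# The secret is mounted only for this RUN step and is not stored in any layer
RUN --mount=type=secret,id=supersecret \
    cat /run/secrets/supersecret > /dev/null   # use the secret here, e.g. to authenticate a download
```

Build with something like: DOCKER_BUILDKIT=1 docker build --secret id=supersecret,src=$HOME/.super/secrets.yaml .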

Does data need to have a specific format to upload it to Docker?

I would like to know if there is a specific way to upload data to Docker. I've been stuck on this for a week and I am sure the answer will be something simple.
Does anyone know? I am working with a Windows 10 machine.
You can mount directories on the host system inside the container and access their contents that way, if that's what you mean by 'data'.
You should check out Manage data in containers for more info.
You can use the docker cp command to copy the file.
For example, if you want to copy abc.txt to the location /usr/local/folder inside some Docker container (you can get the container name from the NAMES column of docker ps), then you just execute:
docker cp abc.txt ContainerName:/usr/local/folder
(Here abc.txt is relative to the folder from which you are executing the command. You can also provide the full path of the file.)
After this just get into the container by,
docker exec -it ContainerName bash
then cd /usr/local/folder; you will see your file copied there.

Updating a container created from a custom dockerfile

Before anything, I have read this question and the related links in it, but I am still confused about how to resolve this on my setup.
I wrote my own Dockerfile to install Archiva, which is very similar to this file. I created an image from the Dockerfile using docker build -t archiva . and have a container which I run using docker run archiva. As seen in the Dockerfile, the user data that I want to preserve is in a volume.
Now I want to upgrade to Archiva 2.2.0. How can I update my container so that the user data in the volume is preserved? If I change the Dockerfile by just changing the version number and run docker build again, it will just create another image.
Best practice
The --volume option of docker run enables sharing files between the host and container(s) and, in particular, preserving consistent user data.
The problem is ..
.. it appears that you are not using --volume and that the user data are in the image. (And that's a bad practice, because it leads to the situation you are in: unable to upgrade a service easily.)
One solution (the best IMO) is
Back-up the user data
Use the command docker cp: "Copy files/folders between a container and the local filesystem."
docker cp [--help] CONTAINER:SRC_PATH DEST_PATH
Upgrade your Dockerfile
By editing your Dockerfile and changing the version.
Use the --volume option
Use docker run -v /host/path/user-data:/container/path/user-data archiva
And you're good!

Ansible docker module missing CP command?

The Docker client offers the cp sub-command, as explained here, which is very handy when one needs to copy a file into a container (note: this is somewhat analogous to the Dockerfile ADD instruction in image building). In Docker 1.8 the cp command was even expanded a bit.
However, reading the Ansible docker module documentation, it appears that this is missing? Here are my 2 questions:
Did I misunderstand the Ansible documentation?
If Ansible is missing the cp functionality, has anyone found a workaround? I can think of something like using Ansible's copy module to transfer the files to the remote machine first, and then running the native Docker client there with cp, but ideally Ansible's docker module would do this in a single shot.
Thanks in advance.
You can also use the synchronize module; examples are provided in this link:
http://opensolitude.com/2015/05/26/building-docker-images-with-ansible.html
Using ansible shell module helped:
- name: copy db dump to localhost
  ansible.builtin.shell: docker cp container:/tmp/dump.sql /tmp/dump.sql
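In the other direction (copying into a container, as the question asks), the same workaround can be combined with the copy module; the file names, paths, and container name below are hypothetical:

```yaml
- name: push file to the remote docker host
  ansible.builtin.copy:
    src: ./app.conf
    dest: /tmp/app.conf

- name: copy the file from the host into the container
  ansible.builtin.shell: docker cp /tmp/app.conf mycontainer:/etc/app.conf
```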
You didn't. Ansible's docker module does not support copying files or folders from a container.
There is no simple way to do so; you need a hack. Off the top of my head, you can play with the '-' argument of the docker cp command.
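For reference, the '-' argument makes docker cp read or write a tar archive on stdin/stdout, which can be piped; the container and path names in this sketch are hypothetical:

```shell
# Stream a tar archive of a container path to stdout and unpack it locally
docker cp mycontainer:/var/log - | tar -x

# Or feed a tar archive from the host into the container via stdin
tar -c app.conf | docker cp - mycontainer:/etc
```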
However from my point of view if you wish to copy something into a container you're probably doing something wrong. Containers should be ephemeral.
