I am currently using the FreeRADIUS Docker image, following the instructions at https://hub.docker.com/r/freeradius/freeradius-server/.
After starting the FreeRADIUS server, I wanted to change the certificate configuration in the container, so I tried to mount the /etc/raddb/certs directory, but found that I cannot: doing so makes the container unable to start. Is there any way to mount /etc/raddb/certs in the FreeRADIUS container? Thanks!
My docker-compose.yml is attached.
I am a newbie as far as both Airflow and Docker are concerned; to make things more complicated, I use Astronomer, and to make things worse, I run Airflow on Windows (not on a Unix subsystem; I could not install Docker on Ubuntu 20.04). "astro dev start" breaks with an error, but in Docker Desktop I can see, and can start, 3 Airflow-related containers. They see my DAGs just fine, but my DAGs don't see the local file system. Is this unavoidable with the Airflow + Docker combo? (It seems like a big handicap; one can only use files in the cloud.)
In general, you can mount a local folder on your host to a mount point in your container by declaring a volume at run time with the -v switch of docker run; the container can then access the files at that mount point.
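For example (the host path and image name here are illustrative), this makes the host folder C:\data visible inside the container at /data:

docker run -d -v C:\data:/data your_image_name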
If you go on to use docker-compose up to orchestrate your containers, you can specify volumes for each service in the docker-compose.yml file, which configures those volumes for the containers it runs.
In your case, the Astronomer docs here suggest it is possible to add a custom directive to an Astronomer docker-compose.override.yml file to mount volumes into the Airflow containers created by your astro commands; the mounted paths should then be visible to your DAGs.
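A minimal sketch of such an override file (the service name scheduler, the host path ./include, and the container path are assumptions; match them to the services and paths astro actually generates):

version: "3.1"
services:
  scheduler:
    volumes:
      - ./include:/usr/local/airflow/include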
I am running Windows 10 and the most recent version of Docker. I am trying to run a Docker image and transfer files to and from it.
I have tried using the docker cp command, but from what I've seen online, it does not appear to work for Docker images, only for containers.
When searching for info on this topic, I have only seen answers dealing with containers, not images.
A Docker image is basically a template used for containers: if you add something to the image, it will show up in every container created from it. So if you just want to share a single set of files that don't change, you can add a COPY instruction to your Dockerfile, build a new image, and run it; you'll find the files in the container.
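For example (the base image name and paths are illustrative):

FROM your_image_name
COPY shared-files/ /opt/shared-files/

Build it with docker build -t your_image_name:with-files . and every container started from the new image will contain /opt/shared-files.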
Another option is to use shared volumes. Shared volumes are basically folders that exist both on the host machine and in the running container. If you move a file into that folder on the host, it becomes available in the container, and if the container puts something into the folder, you can access it from the host side.
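For example (the host path, container name, and image name are illustrative):

docker run -d --name app -v C:\shared:/shared your_image_name
copy report.txt C:\shared\
docker exec app ls /shared

The file copied into C:\shared on the host shows up under /shared inside the container, and anything the container writes to /shared appears in C:\shared.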
I created two Docker containers with Compose on Docker for Windows, using WordPress and MariaDB. I've created a volume for WordPress that points to my PC's normal filesystem, but MariaDB's is still contained within the Hyper-V virtual hard disk.
The mount point is at /var/lib/docker/volumes/1995...ca3/_data
I've tried looking at previous answers, but the link that would explain how to back up, copy, or restore volumes redirects to a general volume explanation. Most plugins or scripts I've seen for Docker typically refer to a *nix environment.
Would anyone know of a modern method to export and import volumes mounted to Linux containers in Docker for Windows?
The way I normally do this is to start a container that mounts two volumes, the source volume and the destination volume, and run a command in that container that copies the contents of one volume to the other. I don't have a copy of Windows at hand to find out how to copy all files recursively there, but I'm sure it can do it quite easily.
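With a small Linux container the pattern looks like this (the volume names source_volume and backup_volume are illustrative; cp -a copies everything recursively and preserves ownership and permissions):

docker volume create backup_volume
docker run --rm -v source_volume:/from -v backup_volume:/to alpine cp -a /from/. /to/

The same two-volume trick works in reverse to restore a backup into a fresh volume.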
I have installed some software in a Docker image. When I run the software, it creates some settings files (dot files) under the root home folder. The problem is that the container loses those files when I quit it.
Is there a way to keep those dot files after I quit the container? I know I can manually save the container into an image, but that is not an elegant solution: it means every time I use the container, I need to save it to an image.
Any better solutions?
Thanks!
A simple solution would be to use a volume.
docker volume create configuration
And then you just run each container with it.
docker run -d -v configuration:container_configuration_dir your_image_name
The left side of the colon is the name of the volume created with the first command; the right side is the directory inside the container where your dot files are created.
Keep in mind how mounts work, and for more details check the Docker docs on volumes.
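Since your dot files land under the root home folder, the concrete command would be something like (the image name is illustrative):

docker run -d -v configuration:/root your_image_name

The named volume keeps its contents when the container exits or is removed, so the dot files are there again the next time you run the image with the same volume.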
So I am using GitLab CI to deploy my websites in Docker containers. Because the GitLab CI Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my container: ./:/var/www/html/ (the content of my Git repo, i.e. files I want to replace on build) and a mount that is "inside" of the first one, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the GitLab CI runner starts, it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
1. Stop and remove the container before gitlab-ci-multi-runner starts (seems not possible).
2. Add the Git data to my Docker container and only mount my permdata (it seems you can't add data to a container with Docker Compose the way you can with COPY in a Dockerfile).
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
"it seems you can't add data to a container with Docker Compose the way you can with COPY in a Dockerfile"
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to COPY in the Git files.
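A minimal sketch, assuming a PHP/Apache-style base image and that the site lives in the repo root (the base image and paths are assumptions):

FROM php:8.2-apache
COPY . /var/www/html/

With the Git files baked into the image, you can drop the ./:/var/www/html/ bind mount from docker-compose.yml and keep only /srv/data:/var/www/html/software/permdata, so the runner no longer has to delete files underneath a live mount.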