As stated in the documentation, if I want to run Testcontainers inside a Docker container I have to consider the following points:
The docker socket must be available via a volume mount
The 'local' source code directory must be volume mounted at the same path inside the container that Testcontainers runs in, so that Testcontainers is able to set up the correct volume mounts for the containers it spawns.
How can I comply with the 2nd point, namely the -v $PWD:$PWD condition, if I use Docker for Windows?
On Windows, instead of a socket, Docker uses named pipes:
docker run -v \\.\pipe\docker_engine:\\.\pipe\docker_engine
But you need Windows v1709 and a special version of Docker for Windows, since this feature is experimental.
More info:
https://blog.docker.com/2017/09/docker-windows-server-1709/
As for $PWD, on Windows cmd you can use the %CD% variable, which does the same job. PowerShell also has a $pwd, the same as on Linux. But unfortunately, they do not work with docker-compose, as they are not true environment variables.
I think the easiest approach would be to run a short script on Windows that creates a .env file in which PWD= is set to the current directory:
echo PWD=%cd% > .env
and then you can use $PWD in docker-compose the same as on Linux.
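As a rough sketch of how this might fit together (the service name and image below are placeholders, not taken from the question):

:: Windows cmd: generate the .env file before starting the stack
echo PWD=%cd% > .env

# docker-compose.yml (sketch)
version: "3"
services:
  testcontainers-runner:      # placeholder service name
    image: my-test-image      # placeholder image that runs the Testcontainers-based tests
    working_dir: ${PWD}       # same path inside the container as on the host
    volumes:
      - ${PWD}:${PWD}         # satisfies the "same path on both sides" requirement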
I have a docker-compose file with a volumes section for a given container:
video-streaming:
  image: video-streaming
  build:
    context: ./video-streaming
    dockerfile: Dockerfile-dev
  container_name: video-streaming
  volumes:
    - /tmp/history/npm-cache:/root/.npm:z
I'm running Docker on Windows and the image is Linux-based.
When I enter the container and add a file to /root/.npm, then close the container and run it again, the file is still there, so this volume works. But the question is: where can I find its location on the Windows host?
You should find the volumes in C:\ProgramData\docker\volumes. The filename will be a hash, which you can check with docker inspect.
If not, then note that you are simply mounting the host directory /tmp/history/npm-cache into your container. This directory is your volume.
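For example, you can list the mounts of that container by name (video-streaming is the container_name from the compose file above):
docker inspect --format "{{ json .Mounts }}" video-streaming
This prints the Source and Destination of each mount, which tells you where the data actually lives.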
When using Docker for Windows, the question is whether you are using the old Docker Toolbox or the newer setup that uses WSL/WSL2.
Docker Desktop configured for Linux containers and WSL/WSL2
The Docker engine is actually not running on Windows but inside the WSL instance; Docker Desktop just makes the docker commands available on Windows for ease of use.
So the volumes are probably inside that WSL instance (Linux).
You can find out which WSL instances you have by typing wsl -l in PowerShell.
Their file systems are available under the \\wsl$ path on Windows.
In your case, the volume is not named; it is in the exact location you specified for it,
/tmp/history/npm-cache, but inside the WSL instance that the Docker engine is installed on.
Through WSL
In PowerShell, run wsl ls /tmp/history; you should see npm-cache there.
The wsl command lets you run Linux commands on the actual Linux WSL instance (the default one), which is probably the one running the Docker engine.
Alternatively, you can connect to that Linux instance by just typing wsl and going to that path with cd /tmp/history.
Once inside the WSL instance you can run explorer.exe . to open Explorer at that location (on Windows).
Notice that the path will always start with \\wsl$, so you can go to that path on Windows and see all of your WSL instances and their file systems; try searching for "npm-cache" in Explorer and you might find it.
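Put together, the sequence might look like this in PowerShell (the instance names on your machine will differ):
wsl -l                      # list the installed WSL instances
wsl ls /tmp/history         # run ls inside the default instance; npm-cache should show up
wsl                         # open a shell in the default instance
cd /tmp/history/npm-cache
explorer.exe .              # opens this directory in Windows Explorer via \\wsl$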
Via Docker commands
docker volume ls will list all of the available volumes. Yours is not named, so it's probably one of the 'UUID' ones. You can inspect each one to find its location (probably still inside the WSL instance):
docker volume inspect {the-uuid-of-the-volume}
Once you inspect it, you will see that each volume has a Mountpoint field which points to the location of the volume (inside the WSL instance).
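For example, to print just that field for a given volume:
docker volume inspect --format "{{ .Mountpoint }}" {the-uuid-of-the-volume}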
Unnamed volumes are created with permissions different from your user's, so you might need sudo to interact with them from the WSL terminal.
If you go through Windows File Explorer on \\wsl$, you might not need extra permissions.
I am a newbie as far as both Airflow and Docker are concerned; to make things more complicated, I use Astronomer, and to make things worse, I run Airflow on Windows. (Not on a Unix subsystem - I could not install Docker on Ubuntu 20.4.) "astro dev start" breaks with an error, but in Docker Desktop I see, and can start, 3 Airflow-related containers. They see my DAGs just fine, but my DAGs don't see the local file system. Is this unavoidable with the Airflow + Docker combo? (It seems like a big handicap; one can only use files in the cloud.)
In general, you can declare a volume at container runtime in Docker using the -v switch with your docker run command to mount a local folder on your host to a mount point in your container, and you can then access that mount point from inside the container.
If you go on to use docker-compose up to orchestrate your containers, you can specify volumes in the docker-compose.yml file for your containers which configures the volumes for the containers that run.
In your case, the Astronomer docs here suggest it is possible to create a custom directive in the Astronomer docker-compose.override.yml file to mount the volumes in the Airflow containers created as part of your astro commands for your stack which should then be visible from your DAGs.
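As a sketch only (the service name and both paths below are assumptions; check the Astronomer docs for the exact service names in your stack), a docker-compose.override.yml could look like:
# docker-compose.override.yml in the Astronomer project directory
version: "3.1"
services:
  scheduler:
    volumes:
      - ./include/data:/usr/local/airflow/include/data   # host path : container path (both placeholders)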
I want to know if Docker can run apps installed on the host inside the container, so that I don't need to install the app in each image, which wastes disk space.
I know Linux is different since it requires dependencies and packages locally, but I wonder if it is possible to use it like in a Windows VM.
In Windows Hyper-V, I did this by sharing the network folder containing portable apps with the container and running the apps from inside the Windows VM.
Thank you.
You can mount a directory on your host containing the executables into your container; it will then be accessible in the container. To do so, you can use volumes (Mount a host directory as a data volume): mount a host directory (here: /tmp/foo) into your container (here: /foo) and execute a script called foo.sh at the container location /foo/foo.sh:
mkdir /tmp/foo
echo -e '#!/bin/sh\n\necho foo' > /tmp/foo/foo.sh
docker run --rm -v /tmp/foo:/foo alpine sh /foo/foo.sh
=> foo
In the same way, you can add binaries from your host to your container... But I do not think this is intended or should be done, because a container should work as a standalone, isolated "lightweight VM". Adding such a dependency on your host machine does not seem like an elegant solution.
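As an illustration only (mytool is a placeholder, and this will only work if the binary's dependencies exist inside the image, e.g. a statically linked binary):
docker run --rm -v /usr/local/bin/mytool:/usr/local/bin/mytool:ro alpine /usr/local/bin/mytool --version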
I am using Docker to deploy my ASP.NET Core Web API microservices, and am looking at the options for injecting configuration into each container. The standard way of using an appsettings.json file in the application root directory is not ideal, because as far as I can see, that means building the file into my docker images, which would then limit which environment the image could run in.
I want to build an image once which can then be provided configuration at runtime and rolled through dev, test, UAT and into production without creating an image for each environment.
Options seem to be:
Providing config via environment variables. Seems a bit tedious.
Somehow mapping a path in the container to a standard location on the host server where appsettings.json sits, and getting the service to pick this up (how?)
May be possible to provide values on the docker run command line?
Does anyone have experience with this? Could you provide code samples/directions, particularly on option 2) which seems the best at the moment?
It's possible to create data volumes in the Docker image/container, and also to mount a host directory into a container. The host directory will then be accessible inside the container.
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run commands.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at /webapp.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
Refer to the Docker Data Volumes documentation.
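As a concrete sketch of option 2 from the question (the image name and host path are placeholders, and /app is only assumed to be the application directory inside the image), you could keep one appsettings.json per environment on the host and bind-mount it over the file in the container:
docker run -d -p 8080:80 -v /etc/myapp/uat/appsettings.json:/app/appsettings.json my-api-image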
We are using another packaging system for now (not Docker itself), but we still have the same issue: the package can be deployed in any environment.
So, the way we are doing it now:
Use an external configuration management system to hold and manage configuration per environment
Inject into our package the basic environment variables that hold the configuration management system's connection details
This way we not only allow the package to run in almost any "known" environment, but also enable run-time configuration management.
When you are running docker, you can use environment variable options of the run command:
$ docker run -e "deep=purple" ...
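For ASP.NET Core specifically, environment variables with a double underscore map to nested configuration keys, so something like the following would override ConnectionStrings:Default (the image name and values are made up):
docker run -d -e "ASPNETCORE_ENVIRONMENT=Staging" -e "ConnectionStrings__Default=Server=db;Database=orders" my-api-image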
Is it possible to copy files to the local machine by running a command inside a Docker container? I am aware of docker cp <containerId>:container/file/path /host/file/path. However, my understanding is that this has to be run from outside the Docker container. Is there a way to do it, or something similar, from within?
For some context, I have a Python script that is run inside a Docker container with something like the following command: docker run -ti --rm --net=host buildServer:5000/myProgram /myProgram.py -h. I would like to retrieve the files that are generated by this program so they can be edited. I could run the Docker container in detached mode, docker cp the desired file and then shut down the container. However, I would like to be able to abstract this away from the user.
Docker containers by design don't have any access to the host filesystem unless you provide it explicitly via volume mounts. So, in your example, you could do something like:
docker run -ti -v /tmp/data:/data --rm --net=host buildServer:5000/myProgram /myProgram.py -h
And within the container, the /data directory would be mapped to /tmp/data on your host. You could then copy files into /data to get at them on your host.
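For example, inside the container your script (or a small wrapper around it) could simply copy its output into the mounted directory, and the file shows up on the host immediately (output.txt and its path are just placeholders):
cp /myProgram/output.txt /data/     # inside the container: write into the mounted directory
ls /tmp/data/output.txt             # on the host: the file is already there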
This assumes that you're running Docker on Linux. If you are using Windows or OS X there may be additional steps, since in those environments Docker is actually running on a Linux virtual machine and volume access may or may not behave as expected (I don't use those platforms so I can't comment authoritatively).
For more information:
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume