Docker Compose volume mounting from Windows to a Linux container makes everything executable - docker

I'm working on some Ansible stuff that we have set up in a Docker container. When run from a Linux system it works great. When run from a Windows system I get the following error:
ERROR! Problem running vault password script /etc/ansible-deployment/secrets/vault-dev.txt ([Errno 8] Exec format error). If this is not a script, remove the executable bit from the file.
Basically, what this is saying is that the file is marked as executable. What I've noticed (and it hasn't been a huge problem until now) is that all files mounted into a Linux container from Windows are ALWAYS tagged with the executable attribute.
Is there any way to control/prevent this?

Did you try adding :ro at the end of the mounted path?
Something like this:
HOST:CONTAINER:ro
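In a Compose file that would look something like this (the host path is an assumption based on the question):
volumes:
  - ./secrets/vault-dev.txt:/etc/ansible-deployment/secrets/vault-dev.txt:ro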

This is a limitation of the SMB-based approach that Docker for Windows uses for making host-mounted volumes work, see here
To work around the executable-bit error, I ended up passing Ansible a Python script as the --vault-password-file argument, see here.
#!/usr/bin/env python
# Print the vault password to stdout so Ansible can pick it up.
with open('PATH_TO_YOUR_VAULT_PASSWORD_FILE', 'r') as vault_password:
    print(vault_password.read())
Since the Python script is executed in the container, the vault password file path needs to be accessible from inside the container - I'm mounting it as a volume, but you can also build it into your image. The latter is a security risk and is not recommended.
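For reference, Ansible is then pointed at the script instead of the plain-text file; the playbook name and the in-container script path below are assumptions for illustration:
# Ansible executes the script (it is executable thanks to the Windows mount
# behaviour described above) and reads the password from its stdout.
ansible-playbook site.yml --vault-password-file /etc/ansible-deployment/secrets/vault-pass.py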

Related

Mounting single file in docker running on windows server 2019 (error: Invalid volume specification...)

I am trying to run multiple Linux containers in Docker EE running on Windows Server 2019. Everything is going well until I mount a single file into a container, like:
volumes:
  - c:\xxx\yyy.xml:/app/yyy.xml
When I spin up an instance, I receive this error:
ERROR: for xxx Cannot create container for service s1: invalid volume specification: 'C:\Users\xxx\yyy.xml:/app/yyy.xml' invalid mount config for type "bind": source path must be a directory
Mounting a single file like this is possible when running Docker CE (on Windows).
Is there a way get this working without too many custom workarounds?
Sadly no - Here is the link to the relevant issue on github.
https://github.com/moby/moby/issues/30555
The gist of it...
thaJeztah wrote
Correct, bind-mounting files is not possible on Windows. On Linux,
there's also quite some pitfalls though, so mounting a directory is
preferred in many situations.
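In other words, you mount the directory that contains the file rather than the file itself; a rough sketch using the paths from the question (the /app/config target is an assumption, and the application would then need to read the file from there):
volumes:
  - c:\xxx:/app/config:ro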
However, Drewster727 pointed out the following
For some context on my situation--
We're running apps in the legacy .NET framework world (/sad-face) --
we have .config files mixed in with our application binaries. We don't
want to build environment-specific containers, so of course, we try to
share config files that have been transformed per environment directly
inside the container to the location our application expects them.
In case it helps anyone, I have a simple entrypoint.ps1 script hack to
get around this issue for now. Share a directory to c:\conf with
config files in it, the script will copy them into the app context
folder on start:
if (Test-Path c:\conf) {
    Copy-Item -Path c:\conf\*.* -Recurse -Destination . -Force
}
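Since directory bind mounts do work on Windows, the host side of that hack is an ordinary directory mount; something along these lines (image name and host path are assumptions):
docker run -d -v c:\app-configs\dev:c:\conf my_app_image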
For Windows bind-mounted volumes, you have to format the host path like this:
volumes:
  - /c/xxx/yyy.xml:/app/yyy.xml
I created a handy AutoHotkey script to make creating Docker-formatted Windows paths a lot easier:
How to mount a host directory in a Docker container

How to execute a jmeter jmx file from a standard docker container?

I'd like to pull down a standard Docker image and then issue it a command that will read and execute a .jmx test file from the current folder (or a specified path) and drop the results into the same folder (or another specified path/filename). Bonus points if the stdout from JMeter's console app comes through from the docker run command.
I've been looking into this for quite some time and the solutions I've found are way more complex than I'd like. Some require that I create my own dockerfile and build my own image. Others require that I set up a Docker volume first on my machine and then use that as part of the command. Still others want to run fairly lengthy bash shell scripts. I'm running on Windows and would prefer something that just works with the standard docker CLI running in any Windows prompt (it should work from cmd or PowerShell or bash, not just one of these).
My end goal is to test some APIs using JMeter tests that already exist. The APIs are running in another locally running container that I can call with a path and port. I want to be able to run these tests from any machine without first having to install Java and JMeter.
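For what it's worth, the kind of one-liner being asked for usually looks roughly like the command below. The image name is an assumption, as is the assumption that its entrypoint is the jmeter binary; -n, -t and -l are standard JMeter CLI options (non-GUI mode, test plan, results file):
# ${PWD} works from PowerShell or bash; use %cd% from cmd
docker run --rm -v "${PWD}:/tests" justb4/jmeter -n -t /tests/my-test.jmx -l /tests/results.jtl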

Making a new container with the same configuration as the old one

Let's say I make a container with some flags. For instance,
docker run -v my_volume:/data my_cool_image
Now, let's say my_cool_image is updated to a new version. Is there a nice way to make a new container with the same -v flag as the old one? The container has been properly configured so that the data does not get stored in the container, so deleting the old container is not a concern.
The best solution I can find is to use docker-compose, but that seems a bit silly for single-container systems.
I'd use a shell script or a Docker Compose YAML file. (Compose isn't really overkill; if you add some error handling and write out one option per line for readability, the shell script and the YAML file wind up being about the same length.)
There's nothing built in to Docker that can extract the docker run options from an existing container.
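For example, a minimal Compose file equivalent to the docker run command from the question might look like this (the service name is arbitrary; external: true reuses the already-existing named volume rather than creating a new one):
version: "3"
services:
  app:
    image: my_cool_image
    volumes:
      - my_volume:/data
volumes:
  my_volume:
    external: true
After the image is updated, docker-compose pull && docker-compose up -d recreates the container with the same options.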

Accessing Files on a Windows Docker Container Easily

Summary
So I'm trying to figure out a way to use Docker to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install into a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy; I figured it out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser so testers can quickly/easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server in the Docker container so I could just SSH in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually from the command line with sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use SSH when you can just use the built-in Docker commands". I want to use SSH because it's simpler from an end user's perspective, and I'd rather tell a tester to SSH to a particular IP than make them interact with Docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately if I haven't generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done while the Docker image is built, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of documentation seems to say this should be possible, but I can't get it to work. I keep getting errors saying "the directory is not empty" when I try to start the container.
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
Running this on a ProxMox VM.
At this point, I'm running out of ideas, and something that I feel like should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built in docker cp command!" when that is honestly a pretty bad solution when you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the docker volume to initialize on an already populated folder, even though the documentation suggests it should be possible.
So I instead decided to start the container with the volume linked to an empty folder, and then start the installation script for the program after the container is running, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then try to restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that. Still, it works just like I need it to, and I can then use Windows sharing to access that volume folder from anywhere on the network.
Here's how I got it working, it's actually very simple.
So in my Dockerfile, I added a batch script that unzips the installation DVD that is copied to the container and runs the installer after extracting. I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster though, for what that's worth, as it isn't running the installer in the build), and the program isn't installed/running for another 5 minutes or so after the container does start, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm on a Mac and Windows containers are not a supported platform there (way to go, Windows). See if this works; if not, try the volume line like this instead: - ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# Backslashes need to be escaped
WORKDIR C:\\tmp\\
# Add the program from the host machine to the container
ADD ["<source>", "C:\\tmp"]
# Normally used with web servers
# EXPOSE 80
# Run the program
CMD ["C:\\tmp\\program.exe", "any-parameter"]
Docker Compose
The compose file should ideally be in the parent folder.
version: "3.2"
services:
  windows:
    build: ./folder-of-Dockerfile
    volumes:
      - type: bind
        source: ./my_volume
        target: C:/tmp/
    ports:
      - 9999:9092
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
    |
    |---Dockerfile
Just run docker-compose up to build and start the container. Add -d for detached mode, but only use it once you know everything is working properly.
Useful link Manage Windows Dockerfile

Docker-compose volumes not mounted correctly in VirtualBox under Windows

I am trying to run Hyperledger's BYFN Tutorial on a Win10 Home using Docker Toolbox, with VirtualBox 5.2.4. I am using the default image for the VirtualBox VM.
I have set up a shared folder (not in C:/Users, but on my other drive) and it seems to be functioning correctly - changes I make from either Windows or the docker-machine are reflected in both places as intended. I successfully generate the network artifacts using "./byfn -m generate", but I get an error when I try to bring the network up with "./byfn up".
What happens is that, as far as I can see from the logs, all the containers get brought up correctly, but for some reason the volumes of the cli container are not attached correctly (I think). When byfn.sh finishes I get the following error:
When I ssh into the cli container, I can see the channel-artifacts, crypto and scripts folders, but their contents don't seem to correlate with the volumes: part of the docker-compose file. First, the scripts folder is empty (whereas the docker-compose file specifies that it should mount a bunch of files), so I get the above error. Second, the channel-artifacts folder contains only one directory, named genesis.block, which should actually be a file. And in the crypto folder there are just a bunch of directories.
As you might have guessed, I'm pretty new at docker, so this might be intended behavior, but I'm still getting an error.
Please let me know if I can provide additional information. Thanks in advance.
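One thing worth checking with a Docker Toolbox setup like this is whether the paths referenced by the compose file are actually visible from inside the VirtualBox VM, since that is where the bind mounts are resolved. A couple of diagnostic commands along these lines can help (the shared-folder path and the cli container's working directory below are assumptions):
# Is the shared folder visible inside the Docker Toolbox VM?
docker-machine ssh default "ls /path/to/your/shared/folder"
# What did the cli container actually receive in its scripts mount?
docker exec cli ls /opt/gopath/src/github.com/hyperledger/fabric/peer/scripts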
