Nmake can't locate files within a docker mount - docker

I have a Docker container I use to compile a project, build a setup, and export it out of the container. For this I mount the checked-out sources (using $(Build.SourcesDirectory):C:/git/ in the volumes section of the TFS docker run task) and an output folder as two separate mounts. My project contains a submodule which is also correctly checked out; all the files are there. However, when my script executes nmake I get the following error:
Cannot find file: \ContainerMappedDirectories\347DEF6A-D43B-48C0-A5DF-CE228E5A10FD\src\Submodule\Submodule.pro
The mapped container path corresponds to C:/git/ inside the Windows Docker container (running on a Windows host). I was able to start the Docker container with an interactive PowerShell session, mount the folder, and find out the following:
- All the files are there in the container.
- When doing docker cp project/ container:C:/test/ and running my build script, it finds all the files and compiles successfully.
- When copying the mounted project within the container with PowerShell and starting the build script, it also works.
So it seems nmake has trouble traversing a mounted volume within Docker. Any idea how to fix this? I'd rather avoid copying the project into the container, because that takes quite some time compared to simply mounting the checked-out project.
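Concretely, the copy-based workaround I'd like to avoid looks roughly like this (the image name is a placeholder, and it assumes nmake is on the container's PATH):
docker run -v "$(Build.SourcesDirectory):C:/git" build-image powershell -Command "Copy-Item -Recurse C:/git C:/work; Set-Location C:/work; nmake"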

Related

Docker - executable file not found in $PATH

On my Ubuntu server I have the following directory structure: /home/docker/groovy. In this location I have a simple Groovy file. A container named groovy_repo_1 is running on Docker.
After entering the groovy directory I wanted to run this script on the container:
docker exec groovy_repo_1 docker.groovy
Output:
rpc error: code = 2 desc = oci runtime error: exec failed:
container_linux.go:247: starting container process caused "exec:
\"docker.groovy\": executable file not found in $PATH"
Why does this happen?
Docker works with long-lived immutable images and short-lived containers. If you have a script or any other sort of program you want to run, the best practice is generally to package it into an image and then run a container off of it. There is already a standard groovy image so your Dockerfile can be something as basic as:
FROM groovy:2.6
RUN mkdir /home/groovy/scripts
WORKDIR /home/groovy/scripts
COPY docker.groovy .
CMD ["groovy", "docker.groovy"]
You can develop and test your application locally, then use Docker to deploy it. Especially if you're looking at multi-host deployment solutions like Docker Swarm or Kubernetes, it's important that the image be self-contained and include the script.
Your server and your container have different filesystems, unless you specify otherwise by mounting a server folder onto a container folder with the --volume option.
Here you expect your container to know about the docker.groovy file just because you ran the command in the server folder containing the file.
One way to do this would be to start a container with your current server folder mounted at some path inside the container, and run the groovy script as the entrypoint. Something like this (untested):
docker run -v "$(pwd)":/random groovy_repo_1 /random/docker.groovy
Does the file exist... in the path and inside the container?
The path inside the container may be different from the path on your host. You can update the PATH environment variable during the build, or you can call the binary with a full path, e.g. /home/container_user/scripts/docker.groovy (we don't know where this file is inside your container from the question provided). That path needs to point to the script inside your container; docker doesn't have direct access to the host filesystem, but you can run your container with a volume mount to bind-mount a path on the host into the container.
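For example, the official groovy image documents running a script by bind-mounting the current directory and passing groovy the path inside the container:
docker run --rm -v "$PWD":/home/groovy/scripts -w /home/groovy/scripts groovy groovy docker.groovy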
If it is a shell script, check the first line, e.g. #!/bin/bash
You may have a groovy command at the top of your script. This binary must exist inside the container and be in the path that's defined inside the container.
Check for Windows linefeeds on Linux shell scripts
If the script was written on windows, you may have the wrong linefeeds in the script. It will look for groovy\r instead of groovy and the first command won't exist.
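One way to strip the carriage returns (assuming sed or dos2unix is available on your machine):
sed -i 's/\r$//' docker.groovy
# or: dos2unix docker.groovy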
If it is a binary, there is likely a missing library
You'll see this if the groovy binary was added with something like a COPY command, instead of compiling locally or installing from the package manager. You can use ldd /path/to/groovy inside the container to inspect the linked libraries.
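For example, you can override the entrypoint to run ldd directly (the image name and path here are illustrative):
docker run --rm --entrypoint ldd your-image /path/to/groovy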
This list is from my DC2018 presentation: https://sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#59

How to mount a container's directory in a docker run command

I am trying to run a "build container" which simply builds an output file from source code and then exits. So I have to bind mount the source code directory when running this build container so that it can build the output file.
The complicated part is that the source code directory is inside a separate container.
One solution is to manually copy the source code directory from the separate container to the docker host. Then I can give the path to the source code directory on the docker host, like docker run --mount type=bind,src=/path/to/sourcecode,dst=/local build_container.
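Concretely, that manual approach would be something like this (container name and paths are illustrative):
docker cp source_container:/path/to/sourcecode /tmp/sourcecode
docker run --mount type=bind,src=/tmp/sourcecode,dst=/local build_container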
The separate container has access to the docker socket so I can issue docker commands directly from it. I would like to be able to run the "build container" from this separate container so that the docker daemon magically knows that the source directory is on this separate container.
Is there a way to do that without having to manually copy the source code directory back to the docker host?

'docker cp' from docker container to jenkins workspace during build

Is there a way I can copy files from a Docker container to the Jenkins workspace while tests are running, i.e. not pre- or post-build?
Currently Docker is on a server within the organisation, and when I kick off a Jenkins job (a Maven project), it runs tests within the above container.
During the test, files are downloaded, and I would like to be able to access those files in the Jenkins workspace during execution. So I tried the following as part of my code:
docker cp [containerName]:/home/seluser/Downloads /var/jenkins_home/jobs/[jobName]/workspace
But the files don't get copied over to the workspace. I have also tried doing this locally, i.e. getting the files copied to a directory on my laptop:
docker cp [containerName]:/home/seluser/Downloads /Users/[myUsername]/testDownloads
and it worked. Is there something I'm missing regarding how to do this for the Jenkins workspace?
Try adding /. to the source path, like so:
docker cp [containerName]:/home/seluser/Downloads/. /var/jenkins_home/jobs/[jobName]/workspace/
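The trailing /. copies the contents of Downloads rather than the directory itself. In a Jenkins build step you could also avoid hard-coding the job path by using the WORKSPACE environment variable that Jenkins sets, e.g.:
docker cp [containerName]:/home/seluser/Downloads/. "$WORKSPACE/"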

How can I configure go sdk and GOPATH from docker container?

I'm trying to configure a Go project with JetBrains Gogland and Docker Compose. I want to use the GOPATH and the Go installation from the Docker container, i.e. use the container's Go installation for autocomplete etc. without installing Go on the local machine.
The project structure is:
project root
  docker-compose.yml
  back/
    Dockerfile
    main.go
    some other packages
  front/
    all the front files...
After that, I want to deploy my back folder to /go/src/app in the Docker container. The problem is that when I develop the project I can't use autocomplete, as the project is not in my local GOPATH and the Go versions in the Docker container and on my local machine differ.
I already read this question but I still can't solve my issue.
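For reference, a minimal docker-compose.yml consistent with the layout above might look like this (untested sketch; the service name and build context are assumptions):
version: '3'
services:
  back:
    build: ./back
    volumes:
      - ./back:/go/src/app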
At the moment this is not possible, nor do I see how it could become possible in the future. Mounting a volume in Docker means you "hide" the contents of that folder in the container and use the files on the host instead. As such, any time you mount a directory from your machine, the container's own files at that location won't be available. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking "I'll just mount things in another place, do some symlink magic / copy files around", that's just a bad idea that leads nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can install Go and keep the source code on your machine but have the application run in a container.

Docker Mounting Error - File Not Found

I am trying to run the below Docker command but am receiving a file not found error. I have verified that the local folder /D/VMs/... contains the appropriate file, and that the adam-submit command is properly functioning. I believe there is an issue with how I am mounting the local folder - I assumed that it would be at the location /data for the docker machine. For context, I am following the tutorial at http://ampcamp.berkeley.edu/5/exercises/genome-analysis-with-adam.html
using the docker image at https://hub.docker.com/r/heuermh/adam/
Docker Run:
docker run -v '/D/VMs/hs/adam/data:/data' heuermh/adam adam-submit transform '/data/NA12878.sam' '/data/NA12878.adam'
Docker Run #2:
docker run -v //d/vms/hs/adam/data:/data heuermh/adam adam-submit transform /data/NA12878.sam /data/NA12878.adam
Error:
Exception in thread "main" java.io.FileNotFoundException: Couldn't find any files matching /data/NA12878.sam. If you are trying to glob a directory of Parquet files, you need to glob inside the directory as well (e.g., "glob.me.*.adam/*", instead of "glob.me.*.adam"
From the directories you listed, it looks like you're running Docker for Windows. This runs inside a VM, and folders mapped into a container are mapped from that VM. To map a folder from the parent OS, it first needs to be shared to the VM, which is enabled by default only for C:/Users.
If you're using docker-machine, check the shared-folder settings of your VirtualBox VM; otherwise, check Docker's own settings for sharing folders and make sure /D/VMs is included.
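A quick sanity check is to list the mounted directory from a throwaway container (this assumes the image lets you run arbitrary commands, as the adam-submit invocation above suggests):
docker run --rm -v //d/vms/hs/adam/data:/data heuermh/adam ls /data
If the listing comes back empty, the folder isn't being shared to the VM yet.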
