Unable to assign environment file in Singularity on high-performance cluster: 'file not found in .'

I think I have a rather easy question, but I was not able to find a solution, so I hope to get some useful hints.
I'm trying to get a program running in Singularity on a high-performance cluster. The program is called FORCE (more specifically, I want to use its force-level1-csd function, which simply downloads a set of satellite images), and it needs to reference an environment file called .boto, which contains the gsutil credentials that enable the download of a large number of satellite images.
There is a Docker tutorial for this program, which works nicely. Here is the Docker command that I successfully ran on my own computer (everything after force-level1-csd is an argument specific to the function and likely not relevant to the problem described here):
docker run -it -v /scratch/csxxyy/force/:/opt/data --env FORCE_CREDENTIALS=/app/credentials/ -v $HOME:/app/credentials/ davidfrantz/force force-level1-csd -n -c 0,90 -d 20150701,20221017 -s S2A,S2B /opt/data/meta /opt/data/level1/sentinel2 /opt/data/level1/l1_pool.txt /opt/data/aoi_force_level1.shp
Since Docker is not available on the HPC, though, and I cannot download the 30 TB of satellite images on my local machine, I have to use Singularity.
But with the Singularity code I use, I get the following error:
Error: gsutil config file was not found in .
The Singularity command I use is an attempt to simply "translate" the Docker command to Singularity and looks like this:
singularity exec --bind /scratch/csxxyy/force/:/opt/data/ --env FORCE_CREDENTIALS=/app/credentials docker://davidfrantz/force:latest force-level1-csd -n -c 0,90 -d 20150701,20221017 -s S2A,S2B /opt/data/meta /opt/data/level1 /opt/data/level1/l1_pool.txt /opt/data/aoi_force_level1.shp
Unfortunately, I get the above error. Specifically, the program seems unable to find the credentials file .boto in /scratch/csxxyy/force/app/credentials/, even though I know for sure that the file exists at this location.
I have experimented with the --env and --env-file arguments, e.g. --env FORCE_CREDENTIALS=/opt/data/app/credentials and --env-file FORCE_CREDENTIALS=/app/credentials/.boto, but the error did not change. I also renamed .boto to boto because I thought that hidden files might not be visible, but that was not successful either.
So my questions are:
What am I missing here? What is the correct "translation" from Docker to Singularity in my case?
Thank you very much for your help.
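(For comparison: the Docker command above bind-mounts $HOME to /app/credentials/ with -v, while the Singularity command only sets the FORCE_CREDENTIALS variable and binds nothing at that path. A direct translation that also binds the credentials directory might look like the following sketch, assuming the .boto file sits in $HOME on the host; this is an unverified illustration, not a confirmed fix.)
singularity exec \
  --bind /scratch/csxxyy/force/:/opt/data/ \
  --bind $HOME:/app/credentials/ \
  --env FORCE_CREDENTIALS=/app/credentials \
  docker://davidfrantz/force:latest \
  force-level1-csd -n -c 0,90 -d 20150701,20221017 -s S2A,S2B /opt/data/meta /opt/data/level1 /opt/data/level1/l1_pool.txt /opt/data/aoi_force_level1.shp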

Related

I am trying to access a file and I am getting the `docker: invalid reference format: repository name must be lowercase` error. Any advice?

I am completely new to Linux and Docker, so please be patient; I would greatly appreciate an easy-to-understand answer. I am following this guide: https://degauss.org/using_degauss.html and I am at the section titled Using the DeGAUSS Geocoder. I have set my working directory and I am trying to run docker run --rm -v $PWD:/tmp degauss/geocoder:3.2.1 filtered_file.csv (I changed the file name for this example, as well as the version of the geocoder). However, when I type that into the Ubuntu 22.04.1 Linux subsystem, I get the following error: docker: invalid reference format: repository name must be lowercase. I am not sure what this means. I changed my working directory using cd /mnt/c/Users/Name/Desktop/"FOLDER ONE"/"Folder 0002"/"Here"/. What should I do to fix this issue?
(pwd shows me that the working directory is /mnt/c/Users/Name/Desktop/FOLDER ONE/Folder 0002/Here/.)
Thanks in advance for your help.
I am expecting the geocoder to run; I have Docker open in the background. All I have been able to do is type in docker run --rm -v $PWD:/tmp degauss/geocoder:3.2.1 filtered_file.csv, and it is not working, as noted with the error docker: invalid reference format: repository name must be lowercase. The latest version of the geocoder is 3.2.1.
You need to put the variable reference $PWD in double quotes. This is generally good practice when using the Unix Bourne shell and I'd recommend always doing it.
docker run --rm -v "$PWD:/tmp" ...
# ^ ^
What's happening here is that the shell first expands the variable reference, then splits the command into words. So you get
docker run --rm -v /mnt/.../FOLDER ONE/Folder 0002/Here/:/tmp ...
which Docker parses as
docker run --rm \
  -v /mnt/.../FOLDER \    # create anonymous volume on this container directory
  ONE/Folder \            # image name
  0002/Here/:/tmp ...     # main container command
The double quotes prevent the word splitting, which you very rarely actually want.
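Applied to the command from the question, a sketch of the corrected invocation:
docker run --rm -v "$PWD:/tmp" degauss/geocoder:3.2.1 filtered_file.csv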

boto3: config profile could not be found

I'm testing my Lambda function wrapped in a Docker image, and I provided the environment variable AWS_PROFILE=my-profile for the Lambda function. However, I got the error "The config profile (my-profile) could not be found", even though this profile is present in my ~/.aws/credentials and ~/.aws/config files. Below are my commands:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 <image>:latest lambda_func.handler
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"body":{"x":5, "y":6}}'
The thing is that if I just run the Lambda function as a separate Python script, it works.
Can someone show me what went wrong here?
Thanks
When AWS shows how to use their containers, such as for local AWS Glue, they share ~/.aws/ with the container in read-only mode using the volume option:
-v ~/.aws:/root/.aws:ro
Thus, if you wish to follow the AWS example, your docker command could be:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 -v ~/.aws:/root/.aws:ro <image>:latest lambda_func.handler
The other way is to pass the AWS credentials using Docker environment variables, which you are already trying.
You need to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Your home directory (~) is not copied into the Docker container, so AWS_PROFILE will not work.
See here for an example: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
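A minimal sketch of that approach, with placeholder values for the two keys:
docker run -e BUCKET_NAME=my-bucket \
  -e AWS_ACCESS_KEY_ID=<your-access-key-id> \
  -e AWS_SECRET_ACCESS_KEY=<your-secret-access-key> \
  -p 9000:8080 <image>:latest lambda_func.handler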

How can I solve 403:Forbidden whilst using a docker container?

I'm new to Docker and currently following this tutorial:
Learn Docker in 12 minutes
I created the necessary files and I made it up to display "Hello World!" on localhost:80.
Beyond that point, I tried to mount my folder directly into the container so that I can update the index.php file and mimic a development environment, and then I ran into this error:
All I did was change the way the image is run so that I can update the content of the index.php file and see the changes reflected in the webpage when I hit F5.
Currently using Docker for Windows on Windows 10 Pro
Docker for Windows is running
I followed every step scrupulously so as not to fool myself, and it still didn't work for me, it seems.
To answer Mornor's question, here is the result for docker ps
And here for docker logs [container-name]
And since I now better understand what happens under the hood, how do I go to solve my problem illustrated in the log?
Here is my Dockerfile:
And the command I executed to run my image:
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/ hello-world
And so you see that the file exists:
The error is coming from Apache, which tries to show you the directory contents because there is no index file available. Either your Docker mapping is not working correctly, or your Apache does not have PHP support installed. You are accessing http://localhost; try http://localhost/index.php instead.
If you get the same error, the problem is with the mapping. If you get raw PHP code, the problem is missing PHP support in Apache.
I think you're mounting your index.php incorrectly. To debug it, first check whether index.php is indeed mounted within the container.
You could issue the following command :
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/ hello-world bash -c 'ls -lsh /var/www/html/'
(use sh instead of bash if that does not work). If you can indeed see an index.php, then congratulations, your file is correctly mounted, and the error is coming not from Docker but from Apache.
If index.php is not there, then you have to check your Dockerfile. You mount src/, so check whether src/ is in the same directory as your Dockerfile.
Keep us updated :)
I know the answer is late, but the solution is very easy: this happens when using Docker on a host with SELinux enabled, because the host has no knowledge of the container's SELinux policy. You fix it by adding :z to the volume mount:
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/:z hello-world
This will automatically do the chcon relabeling that you would otherwise need to do yourself.
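For reference, the manual equivalent would be to relabel the host directory yourself; a sketch, assuming the svirt_sandbox_file_t type used by older Docker/SELinux setups (newer systems use container_file_t):
chcon -Rt svirt_sandbox_file_t /wmi/tutorials/docker/src/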
Check whether the html folder has the proper permissions.
Thank you

Any way to retrieve the command originally used to create a Docker container?

This question seems to have been often asked, but I cannot find any answer that correctly and clearly specifies how to accomplish this.
I often create test docker containers that I run for a while. Eventually I stop the container and restart it simply using docker start <name>. However, sometimes I am looking to upgrade to a newer image, which means deleting the existing container and creating a new one from the updated image.
I've been looking for a reliable way to retrieve the original 'docker run' command that was used to create the container in the first place. Most responses suggest simply using docker inspect and looking at the Config.Cmd element, but that is not correct.
For instance, creating a container as:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Qwerty123<(*' -e TZ=America/Toronto -p 1433:1433 -v c:/dev/docker/mssql:/var/opt/mssql --name mssql -d microsoft/mssql-server-linux
using docker inspect will show:
$ docker inspect mssql | jq -r '.[0]["Config"]["Cmd"]'
[
"/bin/sh",
"-c",
"/opt/mssql/bin/sqlservr"
]
There are many issues created on GitHub for this same request, but all have been closed since the info is already in the inspect output; one just has to know how to read it.
Has anyone created a utility to easily rebuild the command from the output of the inspect command? All the responses that I've seen refer to the wrong info, notably inspecting the Config.Cmd element while ignoring the Mounts, Config.Env, Config.ExposedPorts, Config.Volumes, etc. elements.
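For reference, those other elements can be read from the same inspect output with jq; a sketch using the mssql container above:
docker inspect mssql | jq -r '.[0] | .Config.Env, .HostConfig.PortBindings, .Mounts'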
There are a few utilities out there that can help you.
Give them a try:
https://github.com/bcicen/docker-replay
https://github.com/lavie/runlike
If you want to know more such cool tools around docker check this https://github.com/veggiemonk/awesome-docker
Of course docker inspect is the way to go, but if you just want to "reconstruct" the docker run command, there is
https://github.com/nexdrew/rekcod
which describes itself as:
Reverse engineer a docker run command from an existing container (via docker inspect).
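A minimal usage sketch, assuming Node.js is available (rekcod's README documents running it via npx), applied to the mssql container from the question:
npx rekcod mssql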
Another way is Christian G's answer at "How to show the run command of a docker container", which uses bash-preexec.
I've had the same issue and ended up looking at the .bash_history file to find the command I used.
This would give you all the docker create commands you've run:
grep 'docker create' ~/.bash_history
Note: if you ran docker create in that same session you'll need to logout/login for the .bash_history to flush to disk.
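Since the containers in the question were created with docker run rather than docker create, the same search applies there; a sketch:
grep 'docker run' ~/.bash_history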

How to Set Capabilities on Node Browser with selenium Docker

I am new to the Selenium Docker images. I want to create a Chrome/Firefox node with capabilities (Selenium Grid). How do I add capabilities when I add a Selenium node Docker container?
I found this command so far...
docker run -d --link selenium-hub:hub selenium/node-firefox:2.53.0
but I don't know how to add capabilities to it. I already tried this command, but it is not working:
docker run -d --link selenium-hub:hub selenium/node-firefox:2.53.0 -browser browserName=firefox,version=3.6,maxInstances=5,platform=LINUX
Solved: adding SE_OPTS will help you set capabilities.
docker run -d -e SE_OPTS="-browser browserName=chromeku,version=56.0,maxInstances=3,platform=WINDOWS" --link selenium-hub:hub selenium/node-chrome:2.53.0
There are multiple ways of doing this, and SE_OPTS is one of them; however, for me it complicated what I was trying to accomplish. Using SE_OPTS forced me to set capabilities I didn't want to change; otherwise they would be reset to blank/null.
I wanted to do:
SE_OPTS=-browser applicationName=Testing123
but I was forced to do:
SE_OPTS=-browser applicationName=Testing123,browserName=firefox,maxInstances=1,version=59.0.1
Another way to set capabilities is to supply your own config.json:
-nodeConfig /path/config.json
You can find a default config.json, or you can start the node container and copy the current one from it:
docker cp <containerId>:/opt/selenium/config.json /host/path/target
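You can then edit the copied file and bind-mount it back over the default when starting the node; a sketch, assuming the image reads /opt/selenium/config.json at startup (the same path the docker cp above copies from):
docker run -d --link selenium-hub:hub \
  -v /host/path/target/config.json:/opt/selenium/config.json \
  selenium/node-chrome:2.53.0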
You can also take a look at entry_point.sh, either on GitHub or in the running container:
/opt/bin/entry_point.sh
You can run bash on the node container via:
sudo docker exec -i -t <container> bash
This will let you see how SE_OPTS is used and how config.json is generated. Note that config.json is generated (by the script below) only if you don't supply one:
/opt/bin/generate_config
By examining generate_config you can see quite a few environment variables, such as FIREFOX_VERSION, NODE_MAX_INSTANCES, NODE_APPLICATION_NAME, etc.
This leads to the third way to set capabilities, which is to set the environment variables used by generate_config; in my case, NODE_APPLICATION_NAME:
docker run -d -e "NODE_APPLICATION_NAME=Testing123" --link selenium-hub:hub selenium/node-firefox:2.53.0
Finally, when using SE_OPTS, be careful not to accidentally change values, specifically the browser version. You can see by looking at entry_point.sh that the browser version is calculated:
FIREFOX_VERSION=$( firefox -version | cut -d " " -f 3 )
If you change it to something else you will not get the results you are looking for.
