Micromamba inside Docker container - docker

I have a base Docker image:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& cp /root/.bashrc /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
and derive from it another one:
ARG BASE
FROM $BASE
RUN source /opt/conda/bashrc && micromamba activate \
&& micromamba create --file environment.yaml -p /env
While building the second image I get the following error for the RUN instruction: micromamba: command not found.
If I run the first (base) image manually, I can launch micromamba and it runs correctly.
If I run the temporary image created while building the 2nd image, micromamba is available on the CLI and runs correctly.
If I inherit from debian:buster or alpine, for example, it builds perfectly.
What is the problem with Ubuntu? Why can't it see micromamba while building the 2nd Docker image?
PS I am using a scaffold for building, so it correctly resolves where $BASE is and what it points to.

The ubuntu:21.04 image comes with a /root/.bashrc file that begins with:
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When the second Dockerfile executes RUN source /opt/conda/bashrc, PS1 is not set and thus the remainder of the bashrc file does not execute. The remainder of the bashrc file is where micromamba initialization occurs, including the setup of the micromamba bash function that is used to activate a micromamba environment.
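You can see the guard in action with a quick sketch (/tmp/guarded.rc is just a throwaway file for illustration):
printf '[ -z "$PS1" ] && return\necho "reached the end"\n' > /tmp/guarded.rc
bash -c '. /tmp/guarded.rc'                 # non-interactive, PS1 unset (as during docker build): prints nothing
bash -c 'PS1="dummy"; . /tmp/guarded.rc'    # PS1 set: prints "reached the end"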
The debian:buster image has a smaller /root/.bashrc that does not have a line similar to [ -z "$PS1" ] && return and therefore the micromamba function gets loaded.
The alpine image does not come with a /root/.bashrc so it also does not contain the code to exit the file early.
If you want to use the ubuntu:21.04 image, you could modify your first Dockerfile like this:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& grep -v '[ -z "\$PS1" ] && return' /root/.bashrc > /opt/conda/bashrc # this line has been modified \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
This will strip out the one line that causes the early termination.
Alternatively, you could make use of the existing mambaorg/micromamba Docker image. mambaorg/micromamba:latest is based on debian:slim, but mambaorg/micromamba:jammy will get you an Ubuntu-based image (disclosure: I maintain this image).
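A derived Dockerfile along these lines is a short sketch of that image's documented pattern (check the mambaorg/micromamba README for the exact conventions of the tag you pick):
FROM mambaorg/micromamba:jammy
COPY --chown=$MAMBA_USER:$MAMBA_USER environment.yaml /tmp/environment.yaml
RUN micromamba install -y -n base -f /tmp/environment.yaml \
    && micromamba clean --all --yes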

Related

Why does the build in this Dockerfile fail, while exactly the same commands run manually succeed?

I'm trying to build the following Dockerfile:
FROM ubuntu:focal
RUN ln -snf /usr/share/zoneinfo/Europe/Berlin /etc/localtime && echo Europe/Berlin > /etc/timezone \
&& apt-get update \
&& apt-get install -y git default-jdk-headless ant libcommons-lang3-java libbcprov-java \
&& git clone https://gitlab.com/pdftk-java/pdftk.git \
&& cd pdftk \
&& mkdir lib \
&& ln -st lib /usr/share/java/{commons-lang3,bcprov}.jar \
&& ant jar
CMD ["java", "-jar", "/pdftk/build/jar/pdftk.jar"]
When building the image, it fails upon the ant step with several errors like this:
[javac] symbol: class ASN1Sequence
[javac] location: class PdfPKCS7
[javac] /pdftk/java/pdftk/com/lowagie/text/pdf/PdfPKCS7.java:282: error: cannot find symbol
[javac] BigInteger serialNumber = ((ASN1Integer)issuerAndSerialNumber.getObjectAt(1)).getValue();
However, when starting a container manually (docker run -it --rm ubuntu:focal) and executing exactly the same commands (definitely no typo; I copy/pasted the whole block several times), the build succeeds.
Any idea what might be different during the docker build and a manually started container?
Wow, this one is a tricky one. 🔎
When you build the image, the program that executes your RUN instructions is /bin/sh, whereas when you run docker run -it --rm ubuntu:focal you get /bin/bash.
Basically, you manually ran all your instructions in bash.
The easiest solution is to run your instructions with bash, since you have already tested that they work there.
You can tell Docker to run all your instructions with bash by putting this command at the top:
SHELL ["/bin/bash", "-c"]
The changed Dockerfile will be:
FROM ubuntu:focal
SHELL ["/bin/bash", "-c"]
RUN ps -p $$   # shows which shell is executing the RUN instructions
RUN ln -snf /usr/share/zoneinfo/Europe/Berlin /etc/localtime && echo Europe/Berlin > /etc/timezone \
&& apt-get update \
&& apt-get install -y git default-jdk-headless ant libcommons-lang3-java libbcprov-java \
&& git clone https://gitlab.com/pdftk-java/pdftk.git \
&& cd pdftk \
&& mkdir lib \
&& ln -st lib /usr/share/java/{commons-lang3,bcprov}.jar \
&& ant jar
CMD ["java", "-jar", "/pdftk/build/jar/pdftk.jar"]
Hope this helps you. Cheers 🍻 !!!

`bash: webots: command not found` in my docker container because of multiple FROMs

I have a docker container that has Webots and ROS2 installed. However, running webots while inside the container returns bash: webots: command not found. Why?
Container that does run webots (but no ROS2)
Here's a container run from the Webots installation instructions that DOES successfully run webots (but lacks ROS2 like I need):
$ xhost +local:root > /dev/null 2>&1 #so webots won't say unable to load Qt platform plugin "xcb"
$ docker run -it -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw cyberbotics/webots:R2021a-ubuntu20.04
Container that does NOT run webots
Here's my docker container which does NOT successfully run webots, but instead says bash: webots: command not found. However, it DOES successfully run the webots_ros2 demos. (I think the issue has to do with how I'm inheriting from two containers, because if I swap the order of my two ARG and FROM statements, webots is found but ros2 is not. I'm not sure of the solution, though.)
Dockerfile
# inherit both the ROS2 and Webots containers
ARG BASE_IMAGE_WEBOTS=cyberbotics/webots:R2021a-ubuntu20.04
ARG IMAGE_ROS2=niurover/ros2_foxy:latest
FROM $BASE_IMAGE_WEBOTS AS base
FROM $IMAGE_ROS2 AS image_ros2
# resolve a missing dependency for webots demo
RUN apt-get update && apt-get install -y \
libxtst6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Finally open a bash command to let the user interact
CMD ["/bin/bash"]
launch.sh (used to launch docker container)
#! /bin/bash
CONTAINER_USER=$USER
CONTAINER_NAME=webots_ros2_foxy
USER_ID=$UID
IMAGE=niurover/webots_ros2_foxy:latest
if [ $(uname -r | sed -n 's/.*\( *Microsoft *\).*/\1/ip') ];
then
xhost +local:$CONTAINER_USER
xhost +local:root
fi
sudo docker run -it --rm \
--name $CONTAINER_NAME \
--user=$USER_ID \
--env="DISPLAY" \
--env="CONTAINER_NAME=$CONTAINER_NAME" \
--workdir="/home/$CONTAINER_USER" \
--volume="/home/$CONTAINER_USER:/home/$CONTAINER_USER" \
--volume="/etc/group:/etc/group:ro" \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/shadow:/etc/shadow:ro" \
--volume="/etc/sudoers.d:/etc/sudoers.d:ro" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
$IMAGE bash
if [ $(uname -r | sed -n 's/.*\( *Microsoft *\).*/\1/ip') ];
then
xhost -local:$CONTAINER_USER
xhost -local:root
fi
Summary
As you can see, both containers use cyberbotics/webots:R2021a-ubuntu20.04, and the second container uses all of the options of the first container, but with some extras. Why does the first container run webots successfully, while the second container can't find the command?
I ended up using Leonardo Dagnino's suggestion, and it worked. I had to copy the contents of a couple of successive ROS2 Dockerfiles to make the hierarchy work off of the Webots base image, but it got me where I was going. For posterity, here is the new Dockerfile in full:
# Use Webots docker container as base
ARG BASE_IMAGE_WEBOTS=cyberbotics/webots:R2021a-ubuntu20.04
FROM $BASE_IMAGE_WEBOTS AS base
# ==================================================================================
# niurover/ros2_foxy uses osrf/ros:foxy-desktop as its base, so I need to add code from
# container hierarchy all the way back to where it can stem off of `base` from above
# ==================================================================================
# ----------------------------------------------------------------------------------
# taken from Dockerfile for ros:foxy-ros-core-focal found at:
# https://github.com/osrf/docker_images/blob/master/ros/foxy/ubuntu/focal/ros-core/Dockerfile
# ----------------------------------------------------------------------------------
## setup timezone # NOTE commented out since timezone should already be set up
#RUN echo 'Etc/UTC' > /etc/timezone && \
# ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime && \
# apt-get update && \
# apt-get install -q -y --no-install-recommends tzdata && \
# rm -rf /var/lib/apt/lists/*
# install packages
RUN apt-get update && apt-get install -q -y --no-install-recommends \
dirmngr \
gnupg2 \
&& rm -rf /var/lib/apt/lists/*
# setup keys
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
# setup sources.list
RUN echo "deb http://packages.ros.org/ros2/ubuntu focal main" > /etc/apt/sources.list.d/ros2-latest.list
# setup environment
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV ROS_DISTRO foxy
# install ros2 packages
RUN apt-get update && apt-get install -y --no-install-recommends \
ros-foxy-ros-core=0.9.2-1* \
&& rm -rf /var/lib/apt/lists/*
## setup entrypoint # NOTE ignore this part of their Dockerfile
#COPY ./ros_entrypoint.sh /
#
#ENTRYPOINT ["/ros_entrypoint.sh"]
#CMD ["bash"]
# ----------------------------------------------------------------------------------
# taken from Dockerfile for ros:foxy-ros-base-focal found at:
# https://github.com/osrf/docker_images/blob/master/ros/foxy/ubuntu/focal/ros-base/Dockerfile
# ----------------------------------------------------------------------------------
# install bootstrap tools
RUN apt-get update && apt-get install --no-install-recommends -y \
build-essential \
git \
python3-colcon-common-extensions \
python3-colcon-mixin \
python3-rosdep \
python3-vcstool \
&& rm -rf /var/lib/apt/lists/*
# bootstrap rosdep
RUN rosdep init && \
rosdep update --rosdistro $ROS_DISTRO
# setup colcon mixin and metadata
RUN colcon mixin add default \
https://raw.githubusercontent.com/colcon/colcon-mixin-repository/master/index.yaml && \
colcon mixin update && \
colcon metadata add default \
https://raw.githubusercontent.com/colcon/colcon-metadata-repository/master/index.yaml && \
colcon metadata update
# install ros2 packages
RUN apt-get update && apt-get install -y --no-install-recommends \
ros-foxy-ros-base=0.9.2-1* \
&& rm -rf /var/lib/apt/lists/*
# ----------------------------------------------------------------------------------
# taken from Dockerfile for osrf/ros:foxy-desktop-focal (or is it osrf/ros:foxy-desktop?) found at:
# https://github.com/osrf/docker_images/blob/master/ros/foxy/ubuntu/focal/desktop/Dockerfile
# ----------------------------------------------------------------------------------
# This is an auto generated Dockerfile for ros:desktop
# generated from docker_images_ros2/create_ros_image.Dockerfile.em
#FROM ros:foxy-ros-base-focal # NOTE commented out since satisfied by above
# install ros2 packages
RUN apt-get update && apt-get install -y --no-install-recommends \
ros-foxy-desktop=0.9.2-1* \
&& rm -rf /var/lib/apt/lists/*
# ----------------------------------------------------------------------------------
# taken from Dockerfile for niurover/ros2_foxy found at:
# https://github.com/NIURoverTeam/Dockerfiles/blob/master/ros2_foxy/Dockerfile
# ----------------------------------------------------------------------------------
#ARG BASE_IMAGE=osrf/ros:foxy-desktop # NOTE commented out since satisfied by above
# Install work packages
#FROM $BASE_IMAGE as base # NOTE commented out since satisfied by above
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
tmux \
curl \
wget \
vim \
sudo \
unzip \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install ROS Packages
RUN apt-get update && apt-get install -y \
ros-foxy-turtlesim \
~nros-foxy-rqt* \
ros-foxy-teleop-tools \
ros-foxy-joy-linux \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN pip3 install pyserial
#CMD ["bash"] # NOTE ignore this part of the Dockerfile
# ----------------------------------------------------------------------------------
# new stuff added on top of niurover/ros2_foxy to assist with Webots + ROS2
# ----------------------------------------------------------------------------------
# resolve a missing dependency for webots demo
RUN apt-get update && apt-get install -y \
libxtst6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Finally open a bash command to let the user interact
CMD ["/bin/bash"]
When you have multiple FROM commands, you're not "inheriting" both of their contents into the same image: you're doing a multi-stage build. This lets you COPY files from an earlier stage by specifying the --from option. By default, the last stage in your Dockerfile is the build target, so in your example you're only actually using the ros2 image; the webots image is not actually being used there.
You have two options here:
Copy just the files you need from the webots image using COPY --from=base
This will probably be hard and finicky. You'll need to copy all dependencies, and if they're acquired through your package manager (apt-get), you'll leave dpkg's local database inconsistent. A rough sketch of this option follows the second option below.
Copy one of the Dockerfiles and change their FROM
This will probably work fine as long as they both use the same base distribution. You can go into one of the projects' repositories, grab its Dockerfile, and rebuild it on top of the other image: for example, change cyberbotics/webots:R2021a-ubuntu20.04's Dockerfile to start FROM niurover/ros2_foxy:latest. It may require tinkering with the other commands there, though.
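Here is the minimal sketch of the first option. The /usr/local/webots path and the environment variables are assumptions about where that image installs Webots; verify them in the cyberbotics image before relying on this:
ARG BASE_IMAGE_WEBOTS=cyberbotics/webots:R2021a-ubuntu20.04
ARG IMAGE_ROS2=niurover/ros2_foxy:latest
FROM $BASE_IMAGE_WEBOTS AS webots
FROM $IMAGE_ROS2
# pull only the Webots installation out of the first stage
COPY --from=webots /usr/local/webots /usr/local/webots
ENV WEBOTS_HOME=/usr/local/webots
ENV PATH=$WEBOTS_HOME:$PATH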

How to set environment variables dynamically by script in Dockerfile?

I build my project with a Dockerfile. The project needs an installation of OpenVINO, and OpenVINO needs some environment variables set dynamically by a script that depends on the architecture. The script is: script to set environment variables
As far as I know, a Dockerfile can't set environment variables in the image by running a script.
How should I go about solving this problem?
I need to set the variables because later I install OpenCV, which looks at those environment variables.
My first idea is that if I put the script into ~/.bashrc so the variables are set whenever bash starts, and then find some trick to start bash briefly, that might solve my problem.
My second idea is to build an OpenVINO image, create a container from it, connect to it and initialize the variables by running the script manually, then convert the container back into an image, and continue the remaining build steps in a new Dockerfile based on that image.
OpenVINO Dockerfile example and the line that runs the script
My Dockerfile:
FROM ubuntu:18.04
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/16612/l_openvino_toolkit_p_2020.2.120.tgz
ENV INSTALLDIR /opt/intel/openvino
# openvino download
RUN curl -LOJ "${DOWNLOAD_LINK}"
# opencv download
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.3.0.zip && \
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.3.0.zip
RUN apt-get -y install sudo
# openvino installation
RUN tar -xvzf ./*.tgz && \
cd l_openvino_toolkit_p_2020.2.120 && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh -s silent.cfg && \
# rm -rf /tmp/* && \
sudo -E $INSTALLDIR/install_dependencies/install_openvino_dependencies.sh
WORKDIR /home/sa
RUN /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh" && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> /home/sa/.bashrc && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc && \
$INSTALLDIR/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
$INSTALLDIR/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN bash
# opencv installation
RUN unzip opencv.zip && \
unzip opencv_contrib.zip && \
# rm opencv.zip opencv_contrib.zip && \
mv opencv-4.3.0 opencv && \
mv opencv_contrib-4.3.0 opencv_contrib && \
cd ./opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_INF_ENGINE=ON -D ENABLE_CXX11=ON -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D ENABLE_PRECOMPILED_HEADERS=OFF -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/home/sa/opencv_contrib/modules -D PYTHON_EXECUTABLE=/usr/bin/python3 -D WIDTH_GTK=ON -D BUILD_TESTS=OFF -D BUILD_DOCS=OFF -D WITH_GSTREAMER=OFF -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=OFF .. && \
make && \
make install && \
ldconfig
You need to cause the shell to load that file in every RUN command where you use it, and also at container startup time.
For startup time, you can use an entrypoint wrapper script:
#!/bin/sh
# Load the script of environment variables
. /opt/intel/openvino/bin/setupvars.sh
# Run the main container command
exec "$#"
Then in the Dockerfile, you need to include the environment variable script in RUN commands, and make this script be the image's ENTRYPOINT.
RUN . /opt/intel/openvino/bin/setupvars.sh && \
/opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
/opt/intel/openvino/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN ... && \
. /opt/intel/openvino/bin/setupvars.sh && \
cmake ... && \
make && \
...
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD same as the command you set in the original image
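With the wrapper as the entrypoint, any command you pass to docker run sees the variables, for example (the image name here is just illustrative):
docker build -t openvino-app .
docker run --rm openvino-app sh -c 'echo "$LD_LIBRARY_PATH"'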
If you docker exec debugging shells in the container, they won't see these environment variables and you'll need to manually re-read the environment variable script. If you use docker inspect to look at low-level details of the container, it also won't show the environment variables.
It looks like that script just sets a couple of environment variables (especially $LD_LIBRARY_PATH and $PYTHONPATH), albeit to somewhat long-winded values, and you could just set these with ENV statements in the Dockerfile.
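For example (the values below are placeholders, not the real ones; take the actual values from what setupvars.sh exports on your machine):
# placeholder paths, copy the real values from setupvars.sh output
ENV INTEL_OPENVINO_DIR=/opt/intel/openvino
ENV LD_LIBRARY_PATH=/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64:$LD_LIBRARY_PATH
ENV PYTHONPATH=/opt/intel/openvino/python/python3:$PYTHONPATH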
If you look at the docker build output, there are lines like ---> 0123456789ab after each build step; those are valid image IDs that you can docker run. You could run
docker run --rm 0123456789ab \
env \
| sort > env-a
docker run --rm 0123456789ab \
sh -c '. /opt/intel/openvino/bin/setupvars.sh && env' \
| sort > env-b
This will give you two local files with the environment variables with and without running this setup script. Find the differences (say, with comm(1)), put ENV before each line, and add that to your Dockerfile.
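A rough way to turn that difference into Dockerfile lines, assuming the two files from above:
# lines present only in env-b are the ones the setup script added or changed
comm -13 env-a env-b | sed 's/^/ENV /'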
You can't really use .bashrc in Docker. Many common paths don't invoke its startup files: in the language of that documentation, neither a Dockerfile RUN command nor a docker run instruction is an "interactive shell" so those don't read dot files, and usually docker run ... command doesn't invoke a shell at all.
You also don't need sudo (you are already running as root, and an interactive password prompt will fail); RUN sh -c is redundant (Docker inserts it on its own); and source isn't a standard shell command (prefer the standard ., which will work even on Alpine-based images that don't have shell extensions).

My docker starts zookeeper, but it then automatically exits

I wrote a Dockerfile to start ZooKeeper:
FROM buildpack-deps:sid-scm
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
unzip \
xz-utils \
gettext-base \
&& rm -rf /var/lib/apt/lists/*
COPY zookeeper-3.4.12.tar.gz /opt
COPY config.template.properties /opt
RUN tar xfz /opt/zookeeper-3.4.12.tar.gz -C /opt
ENV ZK_HOME /opt/zookeeper-3.4.12
COPY startzookeeper.sh /opt
RUN chmod a+x /opt/startzookeeper.sh $ZK_HOME
CMD ["/opt/startzookeeper.sh"]
the startzookeeper.sh file is
#!/usr/bin/env bash
eval "cat <<EOF
$(</opt/config.template.properties)
EOF
" | tee /opt/zoo.cfg 2> /dev/null
#echo "$ZK_HOME" > 2.txt
cp /opt/zoo.cfg "$ZK_HOME"/conf
#
exec "$ZK_HOME/bin/zkServer.sh" start
But when I run docker ps, it is empty.
I tried adding tail -f /dev/null, but it does not work.
I don't know why; ZooKeeper should keep running, so why does the container exit?
Thanks for any suggestions.
You could adapt your script to imitate the one from the official zookeeper-docker image (from hub.docker.com).
Its docker-entrypoint.sh ends with exec "$@", which executes "zkServer.sh", "start-foreground".
The important part is the start-foreground option, which ensures the process does not exit immediately, as that would exit your container as well.
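Applied to your script, the last line would become something like:
# run in the foreground so the container's main process does not exit
exec "$ZK_HOME/bin/zkServer.sh" start-foreground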

Syntaxnet spec file and Docker?

I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either program, SyntaxNet or Docker. On the GitHub SyntaxNet page it says:
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these Instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting that directory into the container:
docker run -it --rm -v "$(pwd)":/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above -- I can't get your Dockerfile to build an image so I can't confirm it; you can always run find . -name context.pbtxt from root to find it), and exit the container (ctrl-d or exit).
You now have the file on your host's hard drive, ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, that's /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
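If you'd rather not hunt for the path interactively, a rough one-liner to locate and copy the file out (assuming the image was tagged syntaxnet as in the build commands above) could be:
# searching the whole filesystem may take a minute; errors are silenced
docker run --rm -v "$(pwd)":/out syntaxnet \
    sh -c 'find / -name context.pbtxt -exec cp {} /out/ \; 2>/dev/null'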
