I am facing a challenge while working with Docker in a corporate network environment. To overcome the network restrictions, I have configured the Docker daemon's DNS as mentioned in this Stack issue. Additionally, I have set the proxy environment variables in the Docker image as follows:
ENV http_proxy=http://login:pass@proxy-server.fr:1111
ENV https_proxy=http://login:pass@proxy-server.fr:1111
ENV ftp_proxy=http://login:pass@proxy-server.fr:1111
ENV no_proxy=127.0.0.1,z.z.z.z,y.y.y.y,x.x.x.x,localhost
By doing this I succeeded in getting apt-get to work. The problem is that when I instead try to write the proxy parameters inside /etc/environment using a command of this form:
RUN echo "\nexport http_proxy=http://login:pass@proxy-server.fr:1111\nexport https_proxy=... etc" >> /etc/environment
In order to refresh the environment variables, I followed this Stack Overflow issue and added this line to change the default shell from /bin/sh to /bin/bash (note that this is needed, otherwise you get the error /bin/sh: source: command not found):
SHELL ["/bin/bash", "-c"]
RUN source /etc/environment
Then, to check whether the refresh happened, I just type
env | grep proxy
There is no proxy configuration, and therefore I cannot perform **RUN apt-get update**.
Note that if I run the container and perform this refresh command
source /etc/environment
And then perform apt-get update, everything works fine!
I don't really understand what exactly the problem is. Thank you for any explanation, and thanks for reading.
VERSIONS:
Docker version 20.10.23
Kubuntu 22.04 LTS
Every RUN command in a Dockerfile starts a new shell in a new intermediate container. So when you call RUN source /etc/environment, the environment is set for that shell only; for the next RUN apt-get update, a new shell is started with a fresh environment that doesn't have your http_proxy variable.
The convention in these situations is to use the ENV instruction, just as you mentioned at the start of the question. But if you're sure you want to use the /etc/environment file, you can use it like this:
RUN source /etc/environment && apt-get update
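For reference, a minimal sketch of the ENV convention mentioned above, applied to your case (credentials, hosts, and the no_proxy list are the placeholders from your question):

ENV http_proxy=http://login:pass@proxy-server.fr:1111 \
    https_proxy=http://login:pass@proxy-server.fr:1111 \
    no_proxy=127.0.0.1,localhost
RUN apt-get update

Variables set with ENV persist for every subsequent RUN instruction and in the final image, which is exactly what the /etc/environment approach cannot give you.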
Related
In my Dockerfile, I have:
ENV ENVIRONMENT=$ENVIRONMENT
CMD NODE_ENV=$ENVIRONMENT npm run serve
However, I need to run another command BEFORE serve, and make sure NODE_ENV is set for that one too.
I tried this:
ENV ENVIRONMENT=$ENVIRONMENT
RUN NODE_ENV=$ENVIRONMENT npm run upgrade
CMD NODE_ENV=$ENVIRONMENT npm run serve
However, NODE_ENV doesn't seem to be set for RUN.
What am I missing?
Environment variables set in a RUN statement do not persist.
It's like you open a shell, set the environment variable and close the shell session again. The next shell will not have the environment variable you set in the previous shell session.
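A minimal illustration (FOO is just a throwaway name):

RUN export FOO=bar   # FOO exists only in this layer's shell
RUN echo "FOO=$FOO"  # prints an empty value: new shell, new environment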
How to fix this? Add the NODE_ENV variable as an ENV in your Dockerfile:
ENV ENVIRONMENT=$ENVIRONMENT \
NODE_ENV=$ENVIRONMENT
RUN npm run upgrade
CMD npm run serve
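Note that for $ENVIRONMENT to have a value at build time it must come from a build argument; a minimal sketch, assuming you pass it with --build-arg (the default value here is illustrative):

ARG ENVIRONMENT=development
ENV ENVIRONMENT=${ENVIRONMENT} \
    NODE_ENV=${ENVIRONMENT}

Then build with:

docker build --build-arg ENVIRONMENT=production .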
I developed a few ROS packages and I want to put them in a Docker container, because installing all the ROS packages all the time is tedious. Therefore I created a Dockerfile that uses a base ROS image, installed all the necessary dependencies, copied my workspace, built the workspace in the Docker container, and sourced everything afterward. You can find the Dockerfile here:
FROM ros:kinetic-ros-base
RUN apt-get update && apt-get install locales
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN apt-get update && apt-get install -y \
    && rm -rf /var/lib/apt/lists/*
COPY . /catkin_ws/src/
WORKDIR /catkin_ws
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; catkin_make'
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; source devel/setup.bash'
CMD ["roslaunch", "master_launch", "sim_perception.launch"]
The problem is: when I run the Docker container with the "run" command, Docker doesn't seem to know that I sourced my new ROS workspace, and therefore it cannot automatically launch my launch file. If I run the container with "run -it bash", I can source my workspace again and then roslaunch my .launch file.
So can someone tell me how to write my dockerfile correctly so I launch my .launch file automatically when I run the container? Thanks!
From Docker Docs
Each RUN instruction is run independently and won't affect the next instruction, so when the last line runs, none of the PATH changes from ROS are preserved.
You need to source .bashrc (or whichever environment setup file you need) first.
You can wrap everything you want (the source commands and the roslaunch command) inside a shell script, then just run that file at the end.
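For example, a wrapper along these lines (the file name launch.sh is illustrative; the paths and launch arguments are taken from the question):

#!/bin/bash
# launch.sh: source the ROS environment and the workspace, then launch
source /opt/ros/kinetic/setup.bash
source /catkin_ws/devel/setup.bash
exec roslaunch master_launch sim_perception.launch

And in the Dockerfile:

COPY launch.sh /launch.sh
RUN chmod +x /launch.sh
CMD ["/launch.sh"]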
If you review the convention of ros_entrypoint.sh, you can see how best to source the workspace you would like inside the Docker container. We're all so busy learning how to make Docker and ROS do the real things that it's easy to skip over some of the nuance of this interplay. This sucked forever for me; I hope this is helpful for you.
I looked forever and found what seemed like only bad advice. In the absence of an explicit standard or clear guidance, I've settled into what seems like a sane approach that also lets you control what launches at runtime with environment variables. I now consider this the right solution for my needs.
In the Dockerfile for the image, you set the start/launch behavior towards the end: use a COPY line to insert your own ros_entrypoint.sh (example included), set it as the ENTRYPOINT, and then add a CMD so something runs by default when the container starts.
Note: you'll (obviously?) need to re-run the docker build process for these changes to take effect.
Dockerfile looks like this:
# ... all your other Dockerfile instructions ...
# towards the end
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Example ros_entrypoint.sh:
#!/bin/bash
set -e
# setup ros environment
if [ -z "${SETUP}" ]; then
# basic ros environment
source "/opt/ros/$ROS_DISTRO/setup.bash"
else
# from an environment variable; should be an absolute path to the appropriate workspace's setup.bash
source "$SETUP"
fi
exec "$@"
Used in this way, the container will automatically source either the basic ROS bits, or, if you provide another workspace's setup.bash path in the $SETUP environment variable, that workspace will be used in the container instead.
So a few ways to work with this:
From the command line prior to running docker
export SETUP=/absolute/path/to/the/setup.bash
docker run -it your-docker-image
From the command line (inline)
docker run --env SETUP=/absolute/path/to/the/setup.bash your-docker-image
From docker-compose
service-name:
network_mode: host
environment:
- SETUP=/absolute/path/to/the_workspace/devel/setup.bash #or whatever
command: roslaunch package_name launchfile_that_needed_to_be_sourced.launch
#command: /bin/bash # wake up and do something else
When I try to use an environment variable ($HOME) that I set in the Dockerfile in the script that runs at startup, $HOME is not set. If I run printenv in the container, $HOME is set. So I am confused and not sure what is going on.
I am using the phusion/passenger-customizable image so that I can run a custom Node server via pm2. I need a different version of Node than what is bundled in the Node-specific passenger image.
Dockerfile
# Simplified
FROM phusion/passenger-customizable:0.9.27
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
# Set environment variables needed for the docker image.
ARG HOME=/opt/var/app
ENV HOME $HOME
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
RUN mkdir /etc/service/app
ADD start.sh /etc/service/app/run
RUN chmod a+x /etc/service/app/run
start.sh
echo $HOME
# run some scripts that reference the $HOME directory
What do I need to do to be able to reference an environment variable, set in the Dockerfile, in my startup scripts? Or do I just need to hardcode the paths in the startup script and call it a day?
$HOME is reserved, in some fashion. When running printenv, per @Sebastian's suggestion, all my other variables were there, but not $HOME. I prepended it with the initials of my company and it is working as intended.
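A minimal sketch of that workaround (the ACME_ prefix is just an example; pick your own):

# Set a prefixed variable instead of the reserved HOME
ARG ACME_HOME=/opt/var/app
ENV ACME_HOME=$ACME_HOME

Then reference $ACME_HOME instead of $HOME in start.sh.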
Scenario
I'm trying to set up a simple Docker image (I'm quite new to Docker, so please correct my possible misconceptions) based on the public continuumio/anaconda3 container.
The Dockerfile:
FROM continuumio/anaconda3:latest
# update conda and setup environment
RUN conda update conda -y \
&& conda env list \
&& conda create -n testenv pip -y \
&& source activate testenv \
&& conda env list
Building an image from this with docker build -t test . ends with the error:
/bin/sh: 1: source: not found
when activating the new virtual environment.
Suggestion 1:
Following this answer I tried:
FROM continuumio/anaconda3:latest
# update conda and setup environment
RUN conda update conda -y \
&& conda env list \
&& conda create -y -n testenv pip \
&& /bin/bash -c "source activate testenv" \
&& conda env list
This seems to work at first, as it outputs prepending /opt/conda/envs/testenv/bin to PATH, but conda env list as well as echo $PATH clearly show that it doesn't:
[...]
# conda environments:
#
testenv /opt/conda/envs/testenv
root * /opt/conda
---> 80a77e55a11f
Removing intermediate container 33982c006f94
Step 3 : RUN echo $PATH
---> Running in a30bb3706731
/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
The Dockerfiles work out of the box as an MWE.
I appreciate any ideas. Thanks!
Using the Docker ENV instruction, it is possible to add the virtual environment path persistently to PATH, although this does not change which environment conda env list shows as selected.
See the MWE:
FROM continuumio/anaconda3:latest
# update conda and setup environment
RUN conda update conda -y \
&& conda create -y -n testenv pip
ENV PATH /opt/conda/envs/testenv/bin:$PATH
RUN echo $PATH
RUN conda env list
Method 1: use SHELL with a custom entrypoint script
EDIT: I have developed a new, improved approach which is better than the "conda", "run" syntax.
A sample Dockerfile is available at this gist. It works by leveraging a custom entrypoint script to set up the environment before exec'ing the arguments of the RUN stanza.
Why does this work?
A shell is (put very simply) a process which can act as an entrypoint for arbitrary programs. exec "$@" allows us to launch a new process, inheriting all of the environment of the parent process. In this case, this means we activate conda (which basically mangles a bunch of environment variables), then run /bin/bash -c CONTENTS_OF_DOCKER_RUN.
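A sketch of what that could look like (the env name myenv and the conda install path are assumptions, not the gist verbatim). In the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
SHELL ["/entrypoint.sh", "/bin/bash", "-c"]

entrypoint.sh:

#!/bin/bash
set -e
# activate conda, then run whatever the RUN stanza passed in
source /opt/conda/etc/profile.d/conda.sh
conda activate myenv
exec "$@"    # here "$@" is /bin/bash -c CONTENTS_OF_DOCKER_RUN

The same script can be reused as the runtime ENTRYPOINT, so CMD and docker run arguments also execute inside the activated environment.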
Method 2: SHELL with arguments
Here is my previous approach, courtesy of Itamar Turner-Trauring; many thanks to them!
# Create the environment:
COPY environment.yml .
RUN conda env create -f environment.yml
# Set the default docker build shell to run as the conda wrapped process
SHELL ["conda", "run", "-n", "vigilant_detect", "/bin/bash", "-c"]
# Set your entrypoint to use the conda environment as well
ENTRYPOINT ["conda", "run", "-n", "myenv", "python", "run.py"]
Modifying ENV may not be the best approach, since conda likes to take control of environment variables itself. Additionally, your custom conda env may run activation scripts that further modify the environment.
Why does this work?
This leverages conda run to "add entries to PATH for the environment and run any activation scripts that the environment may contain" before starting the new bash shell.
Using conda inside Docker can be a frustrating experience, since both tools effectively want to monopolize the environment, and in theory you shouldn't ever need conda inside a container. But deadlines and technical debt being what they are, sometimes you just have to get it done, and sometimes conda is the easiest way to provision dependencies (looking at you, GDAL).
Piggybacking on ccauet's answer (which I couldn't get to work), and Charles Duffey's comment about there being more to it than just PATH, here's what will take care of the issue.
When activating an environment, conda sets the following variables, as well as a few that back up default values so they can be restored when deactivating the environment. Those backup variables have been omitted from the Dockerfile, as the root conda environment never needs to be used again; for reference, they are CONDA_PATH_BACKUP, CONDA_PS1_BACKUP, and _CONDA_SET_PROJ_LIB. Conda also sets PS1 in order to show (testenv) at the left of the terminal prompt line, which was likewise omitted. The following statements will do what you want.
ENV PATH /opt/conda/envs/testenv/bin:$PATH
ENV CONDA_DEFAULT_ENV testenv
ENV CONDA_PREFIX /opt/conda/envs/testenv
In order to shrink the number of layers created, you can combine these commands into a single ENV command setting all the variables at once as well.
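For example (same values as above, in a single layer):

ENV PATH=/opt/conda/envs/testenv/bin:$PATH \
    CONDA_DEFAULT_ENV=testenv \
    CONDA_PREFIX=/opt/conda/envs/testenv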
There may be some other variables that need to be set, based on the package. For example,
ENV GDAL_DATA /opt/conda/envs/testenv/share/gdal
ENV CPL_ZIP_ENCODING UTF-8
ENV PROJ_LIB /opt/conda/envs/testenv/share/proj
The easy way to get this information is to call printenv > root_env.txt in the root environment, activate testenv, then call printenv > test_env.txt, and examine diff root_env.txt test_env.txt.
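Spelled out, that workflow looks like this (run inside the container; sorting just makes the diff cleaner):

printenv | sort > root_env.txt
source activate testenv    # or: conda activate testenv, on newer conda
printenv | sort > test_env.txt
diff root_env.txt test_env.txt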
I am working on a task that involves building a Docker image with CentOS as its base using a Dockerfile. One of the steps inside the Dockerfile needs the http_proxy and https_proxy ENV variables to be set in order to work behind the proxy.
As this Dockerfile will be used by multiple teams with different proxies, I want to avoid having to edit it for each team. Instead, I am looking for a solution that allows me to pass ENV variables at build time, e.g.,
sudo docker build -e http_proxy=somevalue .
I'm not sure if there is already an option that provides this. Am I missing something?
Images can be built using build arguments (in Docker 1.9+), which work like environment variables.
Here is the method:
FROM php:7.0-fpm
ARG APP_ENV=local
ENV APP_ENV=${APP_ENV}
RUN cd /usr/local/etc/php && ln -sf php.ini-${APP_ENV} php.ini
and then build a production image:
docker build --build-arg APP_ENV=prod .
For your particular problem:
FROM debian
ARG http_proxy
ENV http_proxy=${http_proxy}
and then run:
docker build --build-arg http_proxy=10.11.24.31 .
Note that if you build your containers with docker-compose, you can specify these build args in the docker-compose.yml file, but not on the command line. However, you can use variable substitution in the docker-compose.yml file, which reads from the host's environment variables.
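A sketch of that docker-compose setup (the service name is illustrative; ${http_proxy} is substituted from the host environment):

services:
  app:
    build:
      context: .
      args:
        http_proxy: ${http_proxy}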
So I had to hunt this down by trial and error. Many people explain that you can pass ARG -> ENV, but it doesn't always work, because it matters greatly whether the ARG is defined before or after the FROM tag.
The example below should make this clear. My main problem originally was that all of my ARGs were defined prior to FROM, which resulted in all of the ENVs always being undefined.
# ARGs prior to the FROM tag are available only to FROM itself (e.g., for a dynamic FROM tag)
ARG NODE_VERSION
FROM node:${NODE_VERSION}-alpine
# ARGs after FROM can be bound to ENVs to make the container's environment dynamic
ARG NPM_AUTH_TOKEN
ARG EMAIL
ARG NPM_REPO
ENV NPM_AUTH_TOKEN=${NPM_AUTH_TOKEN}
ENV EMAIL=${EMAIL}
ENV NPM_REPO=${NPM_REPO}
# for good measure, what do we really have
RUN echo NPM_AUTH_TOKEN: $NPM_AUTH_TOKEN && \
echo EMAIL: $EMAIL && \
echo NPM_REPO: $NPM_REPO && \
echo $HI_5
# remember to change HI_5 every build to break `docker build`'s cache if you want to debug the stdout
..... # rest of whatever you want RUN, CMD, ENTRYPOINT etc..
I faced the same situation. Following Sin30's answer, a clean solution is to use the shell:
CMD ["sh", "-c", "cd /usr/local/etc/php && ln -sf php.ini-$APP_ENV php.ini"]
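Because the expansion now happens when the container starts rather than at build time, the value can be supplied at run time, e.g.:

docker run -e APP_ENV=prod your-image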