I wrote this docker-compose project. The docker-compose.yml looks like this:
version: '3.1'
services:
  db:
    image: mysql
    restart: always
    environment:
      - MYSQL_DATABASE=mgsv
      - MYSQL_USER=mgsv_user
      - MYSQL_PASSWORD=mgsvpass
      - MYSQL_ROOT_PASSWORD=mysql123
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
  www:
    build: ./mGSV
    restart: always
    ports:
      - 8080:8080
The Dockerfile is based on a PHP image and looks like this:
FROM php:5-apache
RUN apt-get update && apt-get install -y --no-install-recommends \
openjdk-7-jdk \
maven \
git \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/qunfengdong/mGSV.git
# Move the folder 'mgsv' to DocumentRoot of Apache web server. By default, the DocumentRoot of Apache is /var/www/ (speak to the system administrator to know the exact DocumentRoot).
RUN cd /var/www/html/mGSV \
&& mkdir tmp \
&& chmod -R 777 tmp
RUN cd /var/www/html/mGSV && sed -i.bak "s|'gsv'|'mgsv_user'|" lib/settings.php \
&& sed -i.bak "s|$database_pass = ''|$database_pass = 'mgsvpass'|" lib/settings.php \
&& sed -i.bak "s|cas-qshare.cas.unt.edu|localhost|" lib/settings.php
RUN cp /var/www/html/mGSV/Arial.ttf /usr/share/fonts/truetype/
RUN cd /var/www/html/mGSV/ws \
&& tar -xzf mgsv-ws-server.tar.gz
RUN cd /var/www/html/mGSV/ws/mgsv-ws-server \
&& mvn package
RUN cp -f /var/www/html/mGSV/ws/mgsv-ws-server/target/ws-server-1.0RC1-jar-with-dependencies.jar /var/www/html/mGSV/ws/
RUN cd /var/www/html/mGSV/ws \
&& echo "mgsv_upload_url=http://localhost/mgsv" > config.properties \
&& echo "ws_publish_url=http\://localhost\:8081/MGSVService" >> config.properties \
&& java -jar ws-server-1.0RC1-jar-with-dependencies.jar &
This is the output I got:
Step 1/11 : FROM php:5-apache
---> 8f4a38cf4542
Step 2/11 : RUN apt-get update && apt-get install -y --no-install-recommends openjdk-7-jdk maven git && rm -rf /var/lib/apt/lists/*
---> Using cache
---> f194797b9362
Step 3/11 : RUN git clone https://github.com/qunfengdong/mGSV.git
---> Using cache
---> 4acd066da444
Step 4/11 : RUN cd /var/www/html/mGSV && mkdir tmp && chmod -R 777 tmp
---> Using cache
---> f766f9ceb7d3
Step 5/11 : RUN cd /var/www/html/mGSV && sed -i.bak "s|'gsv'|'mgsv_user'|" lib/settings.php && sed -i.bak "s|$database_pass = ''|$database_pass = 'mgsvpass'|" lib/settings.php && sed -i.bak "s|cas-qshare.cas.unt.edu|localhost|" lib/settings.php
---> Using cache
---> 007dff8907f4
Step 6/11 : RUN cp /var/www/html/mGSV/Arial.ttf /usr/share/fonts/truetype/
---> Using cache
---> 026049ca32d8
Step 7/11 : RUN cd /var/www/html/mGSV/ws && tar -xzf mgsv-ws-server.tar.gz
---> Using cache
---> 92a0f85b27a0
Step 8/11 : RUN cd /var/www/html/mGSV/ws/mgsv-ws-server && mvn package
---> Using cache
---> 5aa1723f255f
Step 9/11 : RUN cp -f /var/www/html/mGSV/ws/mgsv-ws-server/target/ws-server-1.0RC1-jar-with-dependencies.jar /var/www/html/mGSV/ws/
---> Using cache
---> f0dbd0ac1ddb
Step 10/11 : RUN cd /var/www/html/mGSV/ws && echo "mgsv_upload_url=http://localhost/mgsv" > config.properties && echo "ws_publish_url=http\://localhost\:8081/MGSVService" >> config.properties && java -jar ws-server-1.0RC1-jar-with-dependencies.jar &
---> Using cache
---> 0c86c0adddd5
However, when I start an interactive session, /var/www/html/ is empty:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
php 5-apache 8f4a38cf4542 7 days ago 374MB
$ sudo docker run --entrypoint /bin/bash -i -t 8f4a38cf4542
root@a3908e297bcf:/var/www/html# ls
Why can't I see the /var/www/html/mGSV folder inside the docker container?
Thank you in advance.
Michal
The 8f4a38cf4542 image is the php:5-apache base image you are building FROM before all your additions.
The docker-compose build output should end with a line like Successfully built eccdcc9a9534; that is the image ID you need to copy from your own output and use. You should also be able to find this image in the full image list:
docker images -a
To make life easier, add an image name to the www service so compose tags the build and it is easily accessible:
www:
  build: ./mGSV
  image: user3523406/www
  restart: always
Then run:
sudo docker run --entrypoint /bin/bash -it user3523406/www
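With the image tag in place, a quick way to rebuild and check the result is (a sketch; the service and image names are taken from the snippet above):
docker-compose build www
docker-compose run --rm www ls /var/www/html
Either way, the key point is to run the image that compose actually built (user3523406/www here), not the php:5-apache base image you see in docker images.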
I am creating an nvidia-docker image with the following included in the Dockerfile:
RUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && chmod +x /miniconda.sh && /miniconda.sh -b -p /miniconda && rm /miniconda.sh
ENV PATH=/miniconda/bin:$PATH
# this is stored in cache ---> fa383a2e1344
# check path
RUN /miniconda/bin/conda
I get the following error:
/bin/sh: 1: /miniconda/bin/conda: not found
The command '/bin/sh -c /miniconda/bin/conda' returned a non-zero code: 127
When I test the path using:
nvidia-docker run --rm fa383a2e1344 ls
then /miniconda does not exist, hence the error.
I then altered the Dockerfile to replace /miniconda with an env var path, i.e.:
ENV CONDA_DIR $HOME/miniconda
# Install Miniconda
RUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& chmod +x /miniconda.sh \
&& /miniconda.sh -b -p CONDA_DIR \
&& rm /miniconda.sh
ENV PATH=$CONDA_DIR:$PATH
# check path
RUN $CONDA_DIR/conda
And get the error:
/bin/sh: 1: /miniconda/conda: not found
The command '/bin/sh -c $CONDA_DIR/conda' returned a non-zero code: 127
I was able to get it working by setting the path to the current dir rather than installing into /:
WORKDIR /miniconda
RUN curl -so ./miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& chmod +x ./miniconda.sh \
&& ./miniconda.sh -b -p CONDA_DIR
Here is the build result for reference
docker build - < Dockerfile
Sending build context to Docker daemon 3.072kB
Step 1/5 : FROM node:12.16.0-alpine
---> 466593119d17
Step 2/5 : RUN apk update && apk add --no-cache curl
---> Using cache
---> 1d6830c38dfa
Step 3/5 : WORKDIR /miniconda
---> Using cache
---> 8ee9890a7109
Step 4/5 : WORKDIR /miniconda
---> Running in 63238c179aea
Removing intermediate container 63238c179aea
---> 52f571393bf6
Step 5/5 : RUN curl -so ./miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && chmod +x ./miniconda.sh && ./miniconda.sh -b -p CONDA_DIR
---> Running in b59e945ad7a9
Removing intermediate container b59e945ad7a9
---> 74ce06c9af66
Successfully built 74ce06c9af66
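For reference, a minimal sketch (not from the original post) of the ENV-based variant with the variable actually expanded; the point is that CONDA_DIR needs a leading $ when referenced in a RUN line, otherwise the installer receives the literal string CONDA_DIR as the target directory:
ENV CONDA_DIR=/miniconda
# install Miniconda into $CONDA_DIR; note the $ when the variable is referenced
RUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && chmod +x /miniconda.sh \
    && /miniconda.sh -b -p "$CONDA_DIR" \
    && rm /miniconda.sh
# conda itself lives in $CONDA_DIR/bin
ENV PATH=$CONDA_DIR/bin:$PATH
RUN conda --version
(This assumes a glibc-based base image; the node:12.16.0-alpine image in the build output above would need extra work for Miniconda regardless.)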
Hi, I have a Dockerfile which is failing on the COPY command. It was running fine initially, but then it suddenly started failing during the build process. The Dockerfile basically sets up the dev environment and authenticates with GCP.
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils \
bc \
build-essential \
cmake \
curl \
zlib1g-dev \
libssl-dev \
libsqlite3-dev \
python3-pip \
python3-setuptools \
unzip \
g++ \
git \
python-tk
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& rm -rf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make install \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
pyopenssl \
ndg-httpsclient \
pyasn1
## Download and Install Google Cloud SDK
RUN mkdir -p /usr/local/gcloud \
&& curl https://sdk.cloud.google.com > install.sh \
&& bash install.sh --disable-prompts --install-dir=${DIRECTORY}
# Adding the package path to directory
ENV PATH $PATH:${DIRECTORY}/google-cloud-sdk/bin
# working directory
WORKDIR /usr/src/app
COPY requirements.txt ./ \
testproject-264512-9de8b1b35153.json ./
It fails at this step:
Step 13/21 : COPY requirements.txt ./ testproject-264512-9de8b1b35153.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder942576416/testproject-264512-9de8b1b35153.json: no such file or directory
Any leads in this would be helpful.
How are you running the docker build command?
In the Docker best practices documentation I've read that the build fails if you try to build your image from stdin using -:
Attempting to build a Dockerfile that uses COPY or ADD will fail if this syntax is used. The following example illustrates this:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
docker build -t myimage:latest -<<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
# observe that the build fails
...
Step 2/3 : COPY somefile.txt .
COPY failed: stat /var/lib/docker/tmp/docker-builder249218248/somefile.txt: no such file or directory
I've reproduced the issue... Here is my Dockerfile:
FROM alpine:3.7
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# working directory
WORKDIR /usr/src/app
COPY kk.txt ./ \
kk.2.txt ./
If I create the image by running docker build -t testImage:1 [DOCKERFILE_FOLDER], Docker creates the image and it works correctly.
However, if I try the same command from stdin as:
docker build -t test:2 - <<EOF
FROM alpine:3.7
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF
I get the following error:
Step 1/6 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/6 : ENV PYTHON_VERSION="3.6.5"
---> Using cache
---> 734d2a106144
Step 3/6 : ENV BUCKET_NAME='detection-sandbox'
---> Using cache
---> 18fba29fffdc
Step 4/6 : ENV DIRECTORY='/usr/local/gcloud'
---> Using cache
---> d926a3b4bc85
Step 5/6 : WORKDIR /usr/src/app
---> Using cache
---> 57a1868f5f27
Step 6/6 : COPY kk.txt ./ kk.2.txt ./
COPY failed: stat /var/lib/docker/tmp/docker-builder518467298/kk.txt: no such file or directory
It seems that docker builds images from /var/lib/docker/tmp/ when you build from stdin, and no build context is sent there, so ADD or COPY commands don't work.
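If you still want to feed the Dockerfile on stdin, recent Docker versions let you keep a real build context by passing -f -; a sketch:
# Dockerfile comes from stdin, build context is the current directory
docker build -t test:2 -f- . <<EOF
FROM alpine:3.7
WORKDIR /usr/src/app
COPY kk.txt kk.2.txt ./
EOF
Here COPY succeeds because the files are resolved against the . context instead of the empty stdin context.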
An incorrect source path is a common error.
Use
COPY ./directory/testproject-264512-9de8b1b35153.json /dir/
instead of
COPY testproject-264512-9de8b1b35153.json /dir/
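In other words, COPY sources are resolved relative to the build context passed to docker build, so the layout on the machine running the build has to match. A sketch of a layout that matches the COPY from the question, assuming the build is run from the project root:
.
├── Dockerfile
├── requirements.txt
└── testproject-264512-9de8b1b35153.json

docker build -t myimage .
If the JSON file sits in a subdirectory, the COPY source must include that subdirectory, and the file must not be matched by .dockerignore.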
I'm trying to make a simple GitLab pipeline that builds a Docker image for Alpine Linux + the OpenShift CLI.
This is the code:
FROM frolvlad/alpine-glibc:latest
MAINTAINER Daniel Widerin <daniel@widerin.net>
ENV OC_VERSION=v3.11.0 \
OC_TAG_SHA=0cbc58b \
BUILD_DEPS='tar gzip' \
RUN_DEPS='curl ca-certificates gettext'
RUN apk --no-cache add $BUILD_DEPS $RUN_DEPS && \
curl -sLo /tmp/oc.tar.gz https://github.com/openshift/origin/releases/download/${OC_VERSION}/openshift-origin-client-tools-${OC_VERSION}-${OC_TAG_SHA}-linux-64bit.tar.gz && \
tar xzvf /tmp/oc.tar.gz -C /tmp/ && \
mv /tmp/openshift-origin-client-tools-${OC_VERSION}-${OC_TAG_SHA}-linux-64bit/oc /usr/local/bin/ && \
rm -rf /tmp/oc.tar.gz /tmp/openshift-origin-client-tools-${OC_VERSION}-${OC_TAG_SHA}-linux-64bit && \
apk del $BUILD_DEPS
CMD ["/bin/sh"]
Now, for some reason, when the pipeline runs it gets stuck on the curl part that downloads the OpenShift archive.
Status: Downloaded newer image for frolvlad/alpine-glibc:latest
---> 38dd85a430e8
Step 2/5 : MAINTAINER Daniel Widerin <daniel@widerin.net>
---> Running in bdacc7e92e79
Removing intermediate container bdacc7e92e79
---> c56da0a68f7f
Step 3/5 : ENV OC_VERSION=v3.11.0 OC_TAG_SHA=0cbc58b BUILD_DEPS='tar gzip' RUN_DEPS='curl ca-certificates gettext'
---> Running in cb1e6cdb39ca
Removing intermediate container cb1e6cdb39ca
---> 727952120e67
Step 4/5 : RUN apk --no-cache add $BUILD_DEPS $RUN_DEPS && curl -sLo /tmp/oc.tar.gz https://github.com/openshift/origin/releases/download/${OC_VERSION}/openshift-origin-client-tools-${OC_VERSION}-${OC_TAG_SHA}-linux-64bit.tar.gz && tar xzvf /tmp/oc.tar.gz -C /tmp/ && mv /tmp/openshift-origin-client-tools-${OC_VERSION}-${OC_TAG_SHA}-linux-64bit/oc /usr/local/bin/ && rm -rf /tmp/oc.tar.gz /tmp/openshift-origin-client-tools-${OC_VERSION}-${OC_TAG_SHA}-linux-64bit && apk del $BUILD_DEPS
---> Running in ef344ef4a96b
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
It stays like this for an hour until the pipeline times out.
Tried this same Dockerfile manually and it works fine.
How can I diagnose this issue? How can I find any logs for this?
I found that this issue is related to the Alpine image having networking problems when run in a Docker-in-Docker configuration on a Kubernetes/OpenShift based runner. Adding --network host to the docker build command fixes it:
docker build --network host .
Related GitHub issue: github.com/gliderlabs/docker-alpine/issues/307
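For a GitLab runner that builds with Docker-in-Docker, the flag goes into the build command in .gitlab-ci.yml; a rough sketch (the image and service tags here are assumptions, adjust them to your runner):
build:
  image: docker:19.03
  services:
    - docker:19.03-dind
  script:
    - docker build --network host -t alpine-oc .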
My problem is that I get an error when running my container on an ARM system (a Raspberry Pi with Raspbian). The image was built on that same Raspberry Pi.
This is my Dockerfile:
FROM arm32v7/golang
COPY qemu-arm-static /usr/bin
ENV STATUSOK_VERSION 0.1.1
RUN apt-get update \
&& apt-get install -y unzip \
&& wget https://github.com/sanathp/statusok/releases/download/$STATUSOK_VERSION/statusok_linux.zip \
&& unzip statusok_linux.zip \
&& mv ./statusok_linux/statusok /go/bin/StatusOk \
&& rm -rf ./statusok_linux* \
&& apt-get remove -y unzip git \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
VOLUME /config
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT /docker-entrypoint.sh
I'm able to successfully build this on a Raspberry Pi running Raspbian:
root@raspberrypi:~/armstatusok# docker build . -t armstatusok
Sending build context to Docker daemon 6.656kB
Step 1/7 : FROM arm32v7/golang
---> 8bbfdfd01a06
Step 2/7 : COPY qemu-arm-static /usr/bin
---> Using cache
---> 2572fd1e03a0
Step 3/7 : ENV STATUSOK_VERSION 0.1.1
---> Using cache
---> 25d39a4c6eb5
Step 4/7 : RUN apt-get update && apt-get install -y unzip && wget https://github.com/sanathp/statusok/releases/download/$STATUSOK_VERSION/statusok_linux.zip && unzip statusok_linux.zip && mv ./statusok_linux/statusok /go/bin/StatusOk && rm -rf ./statusok_linux* && apt-get remove -y unzip git && apt-get autoremove -y && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
---> Using cache
---> bfb1cfa9a985
Step 5/7 : VOLUME /config
---> Using cache
---> 3bfbce28329b
Step 6/7 : COPY ./docker-entrypoint.sh /docker-entrypoint.sh
---> Using cache
---> a1795ca4f40c
Step 7/7 : ENTRYPOINT /docker-entrypoint.sh
---> Using cache
---> d0ce74911ba3
Successfully built d0ce74911ba3
Successfully tagged armstatusok:latest
Next step is to run it, and where I get into trouble:
root@raspberrypi:~/armstatusok# docker run --name=armstatusok -v $PWD:/config armstatusok
/docker-entrypoint.sh: 1: /docker-entrypoint.sh: /go/bin/StatusOk: not found
I went into the container, commented out line one of docker-entrypoint.sh, and checked whether /go/bin/StatusOk was actually there, and it was.
My docker-entrypoint.sh:
root@raspberrypi:~/armstatusok# cat docker-entrypoint.sh
/go/bin/StatusOk --config /config/config.json
Now my question is: does anybody have a clue where to start? I also tested this Dockerfile on x86, and there it worked. I only changed the FROM line to the x86 flavour and removed the COPY qemu-arm-static /usr/bin line, since that line is only there to make it work on ARM, according to the documentation.
I copied this Dockerfile and start script verbatim and it builds and runs perfectly for me. I get
Config file not present at the given location: /config/config.json give correct file location using --config parameter
because I don't have access to the config file you're using. But the fact that I get that message means StatusOk is running, so I'm not sure what else to suggest.
The only difference I made was to add a shebang (#!/bin/sh) at the start of the docker-entrypoint.sh file and to make sure it has execute permission: check with ls -al, and if there is no x in the permissions, run chmod +x. I don't know whether that changed how the script tried to access /go/bin/StatusOk.
Full docker-entrypoint.sh contents:
#!/bin/sh
/go/bin/StatusOk --config /config/config.json
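To be sure the script keeps its execute bit inside the image regardless of how it was checked out on the build host, you can also set it during the build; a small sketch:
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
# make the entrypoint executable inside the image
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]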
I am trying to configure and run a certain program using Docker. I am a beginner with Docker, so beware of newbie mistakes!
FROM ubuntu:16.04
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
RUN apt-get update && apt-get install --assume-yes wget sudo && \
wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && \
chmod 755 ./install_ros_kinetic.sh && \
bash ./install_ros_kinetic.sh
RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
RUN cd /home/$USERNAME/catkin_ws/src/
RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git
RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
USER $USERNAME
WORKDIR /home/$USERNAME
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
RUN /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"
This gave the following output:
~/m/rosdocker docker build --rm -f "Dockerfile" -t rosdocker:latest .
Sending build context to Docker daemon 5.632kB
Step 1/15 : FROM ubuntu:16.04
---> b0ef3016420a
Step 2/15 : ENV USERNAME ros
---> Using cache
---> 25bf14574e2b
Step 3/15 : RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
---> Using cache
---> 3a2787196745
Step 4/15 : RUN bash -c 'echo $USERNAME:ros | chpasswd'
---> Using cache
---> fa4bc1d220a8
Step 5/15 : ENV HOME /home/$USERNAME
---> Using cache
---> f987768fa3b1
Step 6/15 : RUN apt-get update && apt-get install --assume-yes wget sudo && wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && chmod 755 ./install_ros_kinetic.sh && bash ./install_ros_kinetic.sh
---> Using cache
---> 9c26b8318f2e
Step 7/15 : RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
---> Using cache
---> 4b4c0abace7f
Step 8/15 : RUN cd /home/$USERNAME/catkin_ws/src/
---> Using cache
---> fb87caedbef8
Step 9/15 : RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git
---> Using cache
---> d2d7f198e018
Step 10/15 : RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
---> Using cache
---> 42ddcbbc19e1
Step 11/15 : USER $USERNAME
---> Using cache
---> 4526fd7b5d75
Step 12/15 : WORKDIR /home/$USERNAME
---> Using cache
---> 0543c327b994
Step 13/15 : RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
---> Using cache
---> dff40263114a
Step 14/15 : RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
---> Using cache
---> fff611e9d9db
Step 15/15 : RUN /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"
---> Running in 7f26a34419a3
/bin/bash: catkin_make: command not found
The command '/bin/sh -c /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"' returned a non-zero code: 127
~/m/rosdocker
I need it to run catkin_make (which is on the path set up by .bashrc).
Exit code 127 from a shell command means "command not found". Is .bashrc executable? Normally it is not; you probably want to source it instead:
source /home/$USERNAME/.bashrc
As Dan Farrel pointed out in his comment, sourcing the file in a RUN command only has an effect within that shell.
To source .bashrc during the build
If you want it to have an effect on later commands in the build, you need to run them all in the same RUN statement. In the snippet below, .bashrc is sourced in the same shell in which catkin_make runs.
RUN . /home/ros/.bashrc && \
cd /home/$USERNAME/catkin_ws && \
catkin_make
To source the .bashrc file when the container starts
What should happen when the container is run using docker run is specified with the ENTRYPOINT statement. If you just want a plain bash prompt, specify /bin/bash. The shell will run as the user specified in the USER statement.
So in summary if you add the following to the end of your Dockerfile
USER ros
ENTRYPOINT /bin/bash
When someone runs the container using docker run -it <containerName>, they will land in a bash shell as the user ros. Bash will automatically source the /home/ros/.bashrc file, and all definitions inside will be available in the shell. (Your RUN statement containing the .bashrc file can be removed.)
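Putting the two parts together, the tail of the Dockerfile could look roughly like this (a sketch using the paths from the question):
# build the workspace; the source and catkin_make must share one RUN (one shell)
RUN /bin/bash -c ". /home/ros/.bashrc && cd /home/ros/catkin_ws && catkin_make"

USER ros
# docker run -it <image> now lands in bash, which sources /home/ros/.bashrc itself
ENTRYPOINT ["/bin/bash"]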