I have a question.
When I add a directory to the $PATH variable, does the system recognize as commands only the executables directly inside that directory, or does it also recognize executables in its subdirectories (and in subdirectories of those subdirectories)?
Thanks in advance!
You can test that yourself:
$ mkdir play
$ cd play
$ echo 'echo "I am the outer script"' > outer_script
$ chmod +x outer_script
$ mkdir inner
$ echo 'echo "I am the inner script"' > inner/inner_script
$ chmod +x inner/inner_script
$ export PATH=$(pwd):$PATH
$ cd ~
To see what we did:
$ tree
.
├── inner
│   └── inner_script
└── outer_script
Try running both of them:
$ outer_script
I am the outer script
$ inner_script
bash: inner_script: command not found
So, the answer is no: only executables located directly in a directory listed in $PATH are found; subdirectories are not searched.
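If you do want the executables inside a subdirectory to be found, that subdirectory has to be added to $PATH explicitly. A minimal sketch, continuing the example above:
$ export PATH=$HOME/play/inner:$PATH
$ inner_script
I am the inner script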
I have this very simple image:
FROM node:11-alpine
WORKDIR /app
COPY src /app/src
RUN cd src \
&& npm i --no-cache \
&& npm run build
CMD cd src \
&& npm run start
Everything is OK during the build; for example, a simple ls -R / reveals the following tree:
/:
app/
/app:
src/
/app/src:
package.json ...
But when I try to start it I find the following structure:
/:
app/
/app/:
src/
/app/src/:
src/ ... more files from the context dir that I never COPYed
/app/src/src/:
package.json ...
If I RUN ls -R / just after npm run build I get the 'good' tree, and even running ls -R / one layer before the CMD shows the same 'good' tree, but anything that runs at CMD time (including the CMD itself) gets me the 'wrong' tree, e.g.:
CMD ls -R / && cd src && npm run start
It shows /app/src/src, just as if it were taking all the contents of the context dir and putting them below WORKDIR/src (i.e. /app/src).
Why is docker doing this?
I'm running
Docker version 18.09.3, build 774a1f4
docker-compose version 1.23.2, build 1110ad0
After fiddling around in a somewhat "cleaner" environment at home, I found out that, as a comment suggests, the culprit was a stale volume: docker-compose was still mounting ./myapp:/app/src even though that mapping was no longer in the volumes section of my docker-compose.yaml file. A simple yes | docker system prune did the trick.
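If you run into the same thing, you can confirm what is actually mounted into the running container before pruning. A short sketch (the container name myapp is an assumption):
$ docker inspect -f '{{ json .Mounts }}' myapp   # reveals any bind mount shadowing /app/src
$ yes | docker system prune                      # clears out the stale leftovers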
I have been trying to make the searchguard setup script init_sg.sh run automatically after elasticsearch starts; I don't want to do it manually with docker exec. Here's what I have tried.
entrypoint.sh:
#! /bin/sh
elasticsearch
source init_sg.sh
Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
# Search Guard plugin
# https://github.com/floragunncom/search-guard/wiki
RUN elasticsearch-plugin install --batch com.floragunn:search-guard-6:6.1.0-20.1 \
&& chmod +x \
plugins/search-guard-6/tools/hash.sh \
plugins/search-guard-6/tools/sgadmin.sh \
&& chown -R elasticsearch config/sg/ \
&& chmod -R go= config/sg/
# This custom entrypoint script is used instead of
# the original's /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["bash","-c","entrypoint.sh"]
However, it throws a cannot run elasticsearch as root error:
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
So I guess I cannot run elasticsearch directly in entrypoint.sh, which is confusing because there's no problem when the Dockerfile is like this:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
....
CMD ["elasticsearch"]
This thread's accepted answer doesn't work. There's no "/run/entrypoint.sh" in the container.
Solution:
Finally I've managed to get it done. Here's my custom entrypoint script that will run the searchguard setup script automatically:
#!/bin/bash
# The first attempt fails because Elasticsearch is not up yet;
# the backgrounded loop keeps retrying every 10 seconds until it succeeds.
source init_sg.sh
while [ $? -ne 0 ]; do
    sleep 10
    source init_sg.sh
done &
# Meanwhile, start Elasticsearch via the image's original entrypoint.
/bin/bash -c "source /usr/local/bin/docker-entrypoint.sh;"
If you have any alternative solutions, please feel free to answer.
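One alternative sketch: instead of retrying blindly, poll Elasticsearch's HTTP port and run the setup script once it answers. This assumes curl is available in the image and that Elasticsearch listens for plain HTTP on localhost:9200 (with Search Guard enforcing TLS on the REST layer you would need curl -ks https://localhost:9200 instead):
#!/bin/bash
# Start Elasticsearch via the stock entrypoint in the background.
/usr/local/bin/docker-entrypoint.sh &
# Wait until the HTTP port answers, then run the Search Guard setup once.
until curl -s -o /dev/null http://localhost:9200; do
    sleep 5
done
source init_sg.sh
# Keep the container in the foreground on the Elasticsearch process.
wait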
I have the following in my Dockerfile (there is much more, but I have pasted the relevant part here):
RUN useradd jenkins
USER jenkins
# Maven settings
RUN mkdir ~/.m2
COPY settings.xml ~/.m2/settings.xml
The docker build goes through fine, and when I run the image I see no errors.
But I do not see the .m2 directory created at /home/jenkins/.m2 in the host filesystem.
I also tried replacing ~ with /home/jenkins, and still I do not see .m2 being created.
What am I doing wrong?
Thanks
I tried something similar and got:
Step 4 : RUN mkdir ~/.m2
---> Running in 9216915b2463
mkdir: cannot create directory '/home/jenkins/.m2': No such file or directory
Your useradd is not enough to create /home/jenkins.
Here is what I do for my user gg:
RUN useradd -d /home/gg -m -s /bin/bash gg
RUN echo gg:gg | chpasswd
RUN echo 'gg ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/gg
RUN chmod 0440 /etc/sudoers.d/gg
USER gg
ENV HOME /home/gg
WORKDIR /home/gg
This creates the home directory for the user gg.
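Applied to the question above: the -m flag is what actually creates /home/jenkins, and COPY does not expand ~, so use absolute paths. A minimal sketch (COPY --chown needs Docker 17.09 or newer):
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
USER jenkins
# ~ is not expanded by COPY, so spell the path out
RUN mkdir /home/jenkins/.m2
COPY --chown=jenkins settings.xml /home/jenkins/.m2/settings.xml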
My Dockerfile has the following entries:
ENV SCPATH /etc/supervisor/conf.d
RUN apt-get -y update
# The daemons
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
The directory structure looks like this:
├── .dockerignore
├── .gitignore
├── Dockerfile
├── Makefile
├── README.md
├── Vagrantfile
├── index.js
├── package.json
└── supervisord
    └── conf.d
        ├── node.conf
        └── supervisord.conf
As per my understanding this should work fine, as
ADD ./supervisord/conf.d/* $SCPATH/
points to a path relative to the Dockerfile's build context.
Still it fails with:
./supervisord/conf.d : no such file or directory exists
I am new to docker, so it might be a very basic thing I am missing. I'd really appreciate any help.
What are your .dockerignore file contents? Are you sure you did not accidentally exclude something below your supervisord directory that the docker daemon needs to build your image?
And: in which folder are you executing the docker build command? Make sure you execute it within the folder that holds the Dockerfile so that the relative paths match.
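For example, if the .dockerignore contained either of these (hypothetical) lines, the directory would be hidden from the build context and the ADD would fail exactly like this:
supervisord
**/conf.d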
Update: I tried to reproduce your problem. What I did from within a temp folder:
mkdir -p a/b/c
echo "test" > a/b/c/test.txt
cat <<'EOF' > Dockerfile   # quote EOF so $MYPATH is not expanded by the host shell
FROM debian
ENV MYPATH /newdir
RUN mkdir $MYPATH
ADD ./a/b/c/* $MYPATH/
CMD cat $MYPATH/test.txt
EOF
docker build -t test .
docker run --rm -it test
That prints test as expected. The important part works: the ADD ./a/b/c/* $MYPATH/. The file is found, as its content test is displayed at runtime.
When I change the path ./a/b/c/* to something else, I get the no such file or directory exists error. The same error shows up when I leave the path as is but invoke docker build from a folder other than the temp folder where I placed the files.
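If you have to build from a different folder, pass the context (and Dockerfile) explicitly; a sketch, assuming the files live in /path/to/temp:
docker build -t test -f /path/to/temp/Dockerfile /path/to/temp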
There is the following code in my Dockerfile:
ENV GOVERS 073fc578434b
RUN cd /usr/local && curl -O http://go.googlecode.com/archive/$GOVERS.zip
RUN cd /usr/local && unzip -q $GOVERS.zip
The above code downloads the zip file to the /usr/local directory and all is OK. But now I do not want to download the zip file; I want to get it from my local PC into /usr/local in the docker container.
Let's say the zip file is named test.zip and is located in the same directory as your Dockerfile. Then you can make use of the COPY instruction in your Dockerfile. You will then have:
COPY test.zip /usr/local/
RUN cd /usr/local && unzip -q test.zip
Note that the ADD instruction only auto-extracts local tar archives in a recognized compression format (gzip, bzip2, or xz); it does not unpack .zip files, so ADD test.zip /usr/local/ would merely copy the archive and you would still need the RUN unzip step.
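If you control the archive format, repacking as a tarball lets ADD do the extraction for you. A minimal sketch, assuming a test.tar.gz next to the Dockerfile:
# a local tar archive is auto-extracted into /usr/local/
ADD test.tar.gz /usr/local/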
Another approach that I am starting to like is using volumes and runtime commands; I don't like having to rebuild the image if I change something incidental.
So I create a mount point in the Dockerfile to transfer stuff back and forth.
In the Dockerfile:
RUN mkdir -p /someplace/dropbox
....
CMD ..... ; eval "$POST_START" ; tail -f /var/log/messages
On the host:
mkdir /dropbox && cp ~/Downloads/fap.zip /dropbox
docker run -v /dropbox:/someplace/dropbox \
    -e POST_START="cp /someplace/dropbox/fap.zip /someplace-else; cd /someplace-else; unzip fap.zip" ..... <image>
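The nice part of this design is that swapping in a new fap.zip only means copying it into /dropbox on the host and restarting the container; the image itself never has to be rebuilt.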