I'm inside my docker container:
This is my working directory
root@19b84a014662:/usr/src/ghost#
I have a script in:
root@19b84a014662:/
I'm able to cd to / and execute the script. But I need to execute the script from my docker-compose file. I tried
./test.sh
But this actually means it's searching in /usr/src/ghost/ for the script instead of /
How can I execute the script, in the / of my container?
Example: I SSH into my container:
root@19b84a014662:/usr/src/ghost# ls
Gruntfile.js LICENSE PRIVACY.md README.md config.example.js config.js content core index.js node_modules npm-shrinkwrap.json package.json
I have a script in the root of my container. I want to execute it with:
./test.sh
Then it only shows me the folders/scripts that are in /usr/src/ghost, not those in /:
root@19b84a014662:/usr/src/ghost# ./
content/ core/ node_modules/
Just replace your ./test.sh with
/test.sh
Because ./ means you're starting from the current directory (which is the working dir /usr/src/ghost/ in this case), whereas / means you're starting from the root directory, and that's what you want to do.
Alternatively, you could switch to the root dir and execute your script in one command using the && operator, as below. But I'd recommend the approach above.
cd / && ./test.sh
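The same idea carries over to docker-compose: because the path is absolute, it works no matter what WORKDIR the image sets. A minimal sketch, assuming a hypothetical service named ghost built from this image:

```yaml
# docker-compose.yml -- "ghost" is a placeholder service name
services:
  ghost:
    build: .
    # absolute path, so the image's WORKDIR (/usr/src/ghost) doesn't matter
    command: /test.sh
```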
Related
I have a large set of test files (3.2 gb) that I only want to add to the container if an environment variable (DEBUG) is set. For testing locally I set these in a docker-compose file.
So far, I've added the test data folder to a .dockerignore file and tried the solution mentioned here in my Dockerfile without any success.
I've also tried running the cp command from within a run_app.sh which I call in my Dockerfile:
cp local/folder app/testdata
but I get cp: cannot stat 'local/folder': No such file or directory, I guess because it's trying to find, inside the container, a folder that only exists on my local machine?
This is my Dockerfile:
RUN mkdir /app
WORKDIR /app
ADD . /app/
ARG DEBUG
RUN if [ "$DEBUG" = "True" ]; then echo "Argument is $DEBUG"; else echo "Argument not provided"; fi
RUN pip install -r requirements.txt
USER nobody
ENV PORT 5000
EXPOSE ${PORT}
CMD /uus/run_app.sh
If it's really just for testing, and it's in a clearly named isolated directory like testdata, you can inject it using a bind mount.
Remove the ARG DEBUG and the build-time option to copy the content into the image. When you run the container, run it with
docker run \
-v $PWD/local/folder:/app/testdata:ro \
...
This makes that host folder appear in that container directory, read-only so you don't accidentally overwrite the test data for later runs.
Note that this hides whatever was in the image on that path before; hence the "if it's in a separate directory, then..." disclaimer.
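Since the question sets things up through docker-compose for local testing, the equivalent bind mount there would be (the service name app is a placeholder):

```yaml
# docker-compose.yml -- "app" is a hypothetical service name
services:
  app:
    build: .
    volumes:
      # host path : container path : read-only
      - ./local/folder:/app/testdata:ro
```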
I have created an image for an automation project. When I run the container it executes all the tests inside the container and then generates a test report. I want to copy this report out before deleting the container.
FROM maven:3.6.0-ibmjava-8-alpine
COPY ./pom.xml .
ADD ./src $HOME/src
COPY ./test-execution.sh /
RUN mvn clean install -Dmaven.test.skip=true -Dassembly.skipAssembly=true
ENTRYPOINT ["/test-execution.sh"]
CMD []
Below is the shell file:
#!/bin/bash
echo "parameters you provided: $@"
mvn test "$@"
cp api-automation:target/*.zip /Users/abcd/Desktop/docker_report
You will want to use the docker cp command. See here for more details.
However, it appears docker cp does not support standard unix globbing patterns (i.e., * in your src path).
So instead you will want to run:
docker cp api-automation:target/ /Users/abcd/Desktop/docker_report
However, then you will have to have a final step to remove all the non-zip files from your docker_report directory.
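Note also that docker cp is a host-side command, so it won't work from inside the container's shell script. A sketch of the host flow, with the cleanup done by a single find command; the directory contents are simulated here so the filtering step can be shown on its own:

```shell
#!/bin/sh
# On the host, after the container has run its tests:
#   docker cp api-automation:target/ "$HOME/Desktop/docker_report"
# Then prune everything that is not a .zip report.
# Simulated below with a temp directory standing in for docker_report:
report_dir=$(mktemp -d)
touch "$report_dir/report.zip" "$report_dir/surefire.txt" "$report_dir/test.log"
find "$report_dir" -type f ! -name '*.zip' -delete
ls "$report_dir"   # only report.zip remains
```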
I'm looking for the best way to copy a folder from my localhost to a Docker container and then launch a bash command inside the container.
I proceed with the following instructions inside my Dockerfile:
WORKDIR /workspace/
COPY /path_in_localhost /Project
RUN ["/bin/bash", "-c", " cd /workspace/Project/ && make"]
The issue is that when Docker comes to the last instruction, it can't find the folder; it's as if the copy didn't work:
/bin/bash: line 0: cd: /workspace/Project: No such file or directory
Any suggestions?
If you want to take advantage of WORKDIR you need to use a relative path, i.e. specify Project without the leading / as the destination.
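A sketch of the corrected Dockerfile (the host path is the one from the question, resolved relative to the build context; the destination is now relative, so it resolves against WORKDIR):

```dockerfile
WORKDIR /workspace/
# relative destination resolves to /workspace/Project
COPY path_in_localhost Project
RUN ["/bin/bash", "-c", "cd /workspace/Project/ && make"]
```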
I have a simple web application that I would like to place in a docker container. The angular application exists in the frontend/ folder, which is within the application/ folder.
When the Dockerfile is in the application/ folder and reads as follows:
FROM node
ADD frontend/ frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
everything runs correctly.
However, when I move the Dockerfile into the frontend/ folder and change it to read
FROM node
ADD . frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
no files are copied and the project does not run.
How can I add every file and folder recursively in the current directory to my docker image?
The Dockerfile that ended up working was
FROM node
ADD . / frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
Shoutout to @Matt for the lead on . / ./, but I think the only reason that didn't work was because for some reason my application will only run when it is inside a directory, not in the root. This might have something to do with @VonC's observation that the node image doesn't have a WORKDIR.
First, try COPY just to test if the issue persists.
Second, check whether any files were actually copied by changing your CMD to ls frontend.
I do not see a WORKDIR in node/7.5/Dockerfile, so frontend could be in /frontend: check ls /frontend too.
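Setting a WORKDIR yourself also avoids the (cd frontend/; …) subshells entirely; a minimal sketch:

```dockerfile
FROM node
# WORKDIR creates the directory and makes it the default
# for the following instructions and at container runtime
WORKDIR /frontend
COPY . .
RUN npm install
CMD ["npm", "start"]
```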
I would like to run a bash file every time I start the docker image, but I haven't been able to figure out how after a few hours. My bash file looks like:
#!/bin/bash
while true;
do
echo "hello world"
sleep 10
done
So what I am thinking is: when I start the container, the bash file will also run inside it continuously, so it will do its job as long as the container is up.
How do I set this up? Should I build this into the docker image, or can I put the bash file in run.sh so it runs when the container starts?
Just copy your script file with COPY or ADD in the Dockerfile and then use the CMD instruction to run it.
For example, if you copy run.sh to /,
then in your Dockerfile's last line just add:
CMD /run.sh
For more info please refer to https://docs.docker.com/engine/reference/builder/ and search for 'CMD'.
Make sure that the file has the right privileges for running (otherwise, after the COPY/ADD, make it executable with RUN chmod +x /run.sh).
Summary:
# Dockerfile
# source image, for example node.js
FROM some_image
# copy the run.sh script from your local file system into the image (to the root dir)
ADD run.sh /
# add execute privileges to the script file (just in case it doesn't have them)
RUN chmod +x /run.sh
# default behaviour when the container starts
CMD /run.sh
Hope that helps.
Your Dockerfile needs to look like this:
FROM <base_image>
ADD yourfile.sh /
CMD ["/yourfile.sh"]