I need to compile gem5 using an environment inside Docker. This is infrequent, and once the compilation is done, I no longer need the Docker environment.
I have a docker image named gerrie/gem5. I want to perform the following process.
Use this image to create a container, mount the local gem5 source code, compile and generate the executable (executables go into the build directory by default), then exit the container and delete it. I also want to be able to see the compilation output so that if the code goes wrong, I can fix it.
But I ran into some problems.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "scons build/X86/gem5.opt"
When I execute the above command, I am dropped into the container's terminal, but the command to compile gem5 (scons build/X86/gem5.opt) is never executed. I thought it might be because of the -it option, but when I remove it, I don't see any output at all.
I replaced the command with the following:
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "echo 'hello'"
But I still don't see any output.
When I went into the container and ran the compilation myself, the build directory was generated, but I found that I can't delete it from outside Docker.
What should I do? Thanks!
Dockerfile:
FROM matthewfeickert/docker-python3-ubuntu:latest
LABEL maintainer="Yujie YujieCui#pku.edu.cn"
USER root
# get dependencies
RUN set -x; \
sudo apt-get update \
&& DEBIAN_FRONTEND=noninteractive sudo apt-get install -y build-essential git-core m4 zlib1g zlib1g-dev libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev swig \
&& sudo -H python -m pip install scons==3.0.1 \
&& sudo -H python -m pip install six
RUN apt-get clean
# checkout repo with mercurial
# WORKDIR /usr/local/src
# RUN git clone https://github.com/gem5/gem5.git
# build it
WORKDIR /usr/local/src/gem5
ENTRYPOINT bash
I also found that when downloading gem5 (perhaps because the repository is too big), it kept failing with the error "fatal: unable to access 'https://github.com/gem5/gem5.git/': GnuTLS recv error (-110): The TLS connection was non-properly terminated."
So I commented out the RUN git clone https://github.com/gem5/gem5.git command.
Your image uses the shell form ENTRYPOINT bash, so the container always runs /bin/sh -c bash and anything you pass after the image name to docker run is ignored; that is why neither the scons command nor the echo ever ran. With -it you simply get an interactive shell, and without a terminal attached bash exits immediately with no output. You could make the entrypoint scons itself.
ENTRYPOINT ["scons"]
Or use the absolute path to the binary. I don't know where it will be installed; you need to check.
ENTRYPOINT ["/usr/local/bin/scons"]
Then you can run
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 build/X86/gem5.opt
If the sole purpose of the image is to invoke scons, that would be fairly idiomatic.
Otherwise, remove the entrypoint. Also note that you don't need to wrap the command in bash -c.
If you have removed the entrypoint, you can run it like this:
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 scons build/X86/gem5.opt
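For reference, a minimal sketch of the tail of the adjusted Dockerfile, assuming you go with the no-entrypoint variant (the dependency installation above it stays unchanged):
# ... dependencies installed as before ...
WORKDIR /usr/local/src/gem5
# No ENTRYPOINT: the command given after the image name in `docker run` is executed directly.
CMD ["bash"]
If the build directory created inside the container ends up owned by root on the host (the deletion problem mentioned in the question), one common option is to run the container as your own user, for example docker run -u "$(id -u):$(id -g)" ..., so the generated files are owned by you.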
Related
I've written the following Dockerfile which is supposed to run an arbitrary command (by providing one through arguments of docker run):
FROM ubuntu:20.04
RUN apt -y update && apt-get -y update
RUN apt install -y python3 git
CMD bash
But when I try to pass a command, e.g. cd workspace, I get the following:
C:\Users\user>docker run -it cloudbuildtoolset:latest cd workspace
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cd": executable file not found in $PATH: unknown.
What am I doing wrong?
Please don't suggest me to restart my machine/docker/whatever
cd is a special built-in utility, in the language of the POSIX shell specification. It's something that changes the behavior of the running shell and not a standalone program. The error message means what it says: there is no /bin/cd or similar executable you can run.
Remember that a Docker container runs a single process, then exits, losing whatever state it has. It might not make sense for that single command to just change the container's working directory.
If you want to run a process inside a container but in a different working directory, you can use the docker run -w option:
docker run -it \
-w /workspace \
cloudbuildtoolset:latest \
the command you want to run
or, equivalently, add a WORKDIR directive to your Dockerfile.
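A minimal sketch of that Dockerfile variant, assuming the same base image as in the question:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3 git
# Later build steps and the container's default working directory start here.
WORKDIR /workspace
CMD ["bash"]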
You can also launch a shell wrapper as the main container process. This would be able to use built-in commands like cd, but it's more complex to use and can introduce quoting issues.
docker run -it cloudbuildtoolset:latest \
/bin/sh -c 'cd /workspace && the command you want to run'
I'm trying to use a Docker container to build a project that uses Rust, building as my own user. I have a Dockerfile that installs Rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged -v /mnt:/mnt -v /dev:/dev --volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw --workdir /home/stefan/<path/to/project> --name <container-name> -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro -u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround possible to be able to map the source code from a folder under $HOME on the host, but keep $HOME from the container?
I'm on Ubuntu 18.04, Docker 19.03.12, on x86-64.
The Dockerfile build steps run as root, so $HOME expands to /root there; your host user doesn't exist inside the container.
Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work - there is no /home/$USER in the container; mapping /etc/passwd and /etc/group in the container teaches it about the user, but does not create any directory. I could've mapped $HOME from the host, but then the container would control the rust versions on the host, and would not be that self contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
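As a rough sketch of that last step (the paths and the exact rustup invocation here are illustrative, not necessarily what was actually used):
# Run inside the container as the first part of the build script.
export CARGO_HOME=/home/stefan/<path/to/project>/.cargo
export RUSTUP_HOME=/home/stefan/<path/to/project>/.rustup
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain "$RUST_VERSION"
export PATH="$CARGO_HOME/bin:$PATH"
cargo build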
While launching a command on my Docker image (run), I get the following error:
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-n\": executable file not found in $PATH": unknown.
The image is a JMeter image that I have created myself:
FROM hauptmedia/java:oracle-java8
MAINTAINER maisie
ENV JMETER_VERSION 5.2.1
ENV JMETER_HOME /opt/jmeter
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
RUN apt-get clean
RUN apt-get update
RUN apt-get -y install ca-certificates
RUN mkdir -p ${JMETER_HOME}
RUN cd ${JMETER_HOME}
RUN wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz
RUN tar -xvzf apache-jmeter-5.2.1.tgz
RUN rm apache-jmeter-5.2.1.tgz
The command that I am launching is:
#!/bin/bash
export volume_path=$(pwd)
export jmeter_path="/opt/apache-jmeter-5.2.1/bin"
docker run --volume ${volume_path}:${jmeter_path} my/jmeter -n -t ${jmeter_path}/TEST.jmx -l ${jmeter_path}/res.jtl
I really can't find any answer to my problem ...
Thank you in advance for any help.
The general form of the docker run command is
docker run [docker options] <image name> [command]
So you are running an image named my/jmeter, and the command you are having it run is -n -t .... You're getting the error because you've only given a list of options and not an actual command.
The first part of this is to include the actual command in your docker run line:
docker run --rm my/jmeter \
jmeter -n ...
There's also going to be a problem with how you install the software in the Dockerfile. (You do not need a docker run --volume to supply software that's already in the image.) Each RUN command starts in a new shell in a new environment (in a new container even), so saying e.g. RUN cd ... in its own line doesn't do anything. You need to use Dockerfile directives like WORKDIR and ENV to change the environment. The jmeter command isn't in a standard binary directory so you'll also have a little trouble running it. I might change:
# ...
# Run all APT commands in a single command
# (Layer caching can break an install if the list of packages changes)
RUN apt-get clean \
&& apt-get update \
&& apt-get -y install ca-certificates
# Download and unpack the JMeter tar file
# This is all in a single RUN command, so
# (1) the `cd` at the start has its (temporary) effect, and
# (2) the tar file isn't committed to an image before you `rm` it
RUN cd /opt \
&& wget ${JMETER_DOWNLOAD_URL} \
&& tar xzf apache-jmeter-${JMETER_VERSION}.tgz \
&& rm apache-jmeter-${JMETER_VERSION}.tgz
# Create a symlink to the jmeter process in a normal bin directory
RUN ln -s /opt/apache-jmeter-${JMETER_VERSION}/bin/jmeter /usr/local/bin
# Indicate the default command to run
CMD jmeter
Finally, there will be questions around where to store data files. It's better to store data outside the application directory; in a Docker context it's common enough to use short (if non-standard) directory paths like /data. Remember that any file path in a docker run command refers to a path in the container, but you need a docker run -v bind-mount option (your original --volume is equivalent) to make it visible on the host. That would give you a final command like:
docker run -v "$PWD:/data" atos/jmeter \
jmeter -n -t /data/TEST.jmx -l /data/res.jtl
I'm new to Docker. I want to create a Docker container with Newman, Jenkins, and Jenkins Job Builder. Please help me.
I built a Docker image based on the official Jenkins image https://hub.docker.com/r/jenkins/jenkins.
I used a Dockerfile. The build was successful, and the Jenkins app also runs successfully.
After running Jenkins, I opened the container as root with
docker exec -u 0 -it jenkins bash
and tried to add a new job with jenkins-job-builder:
jenkins-jobs --conf ./jenkins_jobs.ini update ./jobs.yaml
but I got bash: jenkins-jobs: command not found
Here is my Dockerfile:
FROM jenkins/jenkins
USER root
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get -y install nodejs
RUN npm install -g newman
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python get-pip.py
RUN pip install --user jenkins-job-builder
USER jenkins
When building your image, you get some warnings. This one in particular is interesting:
WARNING: The script jenkins-jobs is installed in '/root/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Simply remove the --user flag from RUN pip install --user jenkins-job-builder and you're fine.
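In other words, the relevant Dockerfile line simply becomes (everything else unchanged):
RUN pip install jenkins-job-builder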
In a Dockerfile, ENTRYPOINT and CMD each run a single command (using /bin/sh -c behind the scenes).
Is there any simple solution to run two commands there without an extra script?
In my case, I want to set up Docker-in-Docker on a Jenkins slave node, so I pass docker.sock into the container, and I want to change its permissions so a normal user can use it; this has to happen before the sshd command.
The normal user is jenkins, which will log into the container via ssh.
$ docker run -d -v /var/run/docker.sock:/docker.sock larrycai/jenkins-slave
In the larrycai/jenkins-slave Dockerfile, I hope to run:
CMD chmod o+rw /docker.sock && /usr/sbin/sshd -D
Currently, jenkins is given sudo permission; see larrycai/jenkins-slave.
I run Docker-in-Docker on a Jenkins slave:
First: my slave knows how to run Docker.
Second: I prepare a Docker image that knows how to run Docker-in-Docker. See a fragment of the Dockerfile:
RUN echo 'deb [trusted=yes] http://myrepo:3142/get.docker.io/ubuntu docker main' > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy iptables ca-certificates lxc apt-transport-https lxc-docker
ADD src/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
VOLUME /var/lib/docker
Third: the Jenkins job running on this slave contains a .sh file with a set of commands to run over the app code, like:
export RAILS_ENV=test
# Bundle install
bundle install
# spec_no_rails
bundle exec rspec spec_no_rails -I spec_no_rails
bundle exec rake db:migrate:reset
bundle exec rake db:test:prepare
etc...
Fourth: a "run shell" build step with something like this:
docker run --privileged -v /etc/localtime:/etc/localtime:ro -v `pwd`:/code myimagewhorundockerindocker /bin/bash -xec 'cd /code && ./myfile.sh'
--privileged is necessary to run Docker-in-Docker
-v /etc/localtime:/etc/localtime:ro to synchronize the host clock with the container clock
-v `pwd`:/code to share the Jenkins workspace (app code), previously cloned from VCS, as /code inside the container
Note: if you have service dependencies, you can use fig with a similar strategy.