I have a question about the Oracle OCI CLI. I created an image (LINK) that contains everything necessary to start working with Oracle Cloud. You only have to modify the config file to start using everything from Oracle Cloud.
Or you can set up your Oracle Cloud environment with:
oci setup config
But I want to execute some OCI commands from a Dockerfile, and it is proving really complex; I can't make it work. I'm doing this:
FROM juliovg/oracle-oci-19
ENV HOME_DIR=/root \
CODE_DIR=/root/sample/code \
BUCKET_NAME=Code
WORKDIR $HOME_DIR
RUN rm -rf $HOME_DIR/.oci
RUN wget "<OCI_FILE_URL_UPLOADED_INTO_A_BUCKET>/my_key.tar.gz"
RUN tar -xvf my_key.tar.gz && rm -rf my_key.tar.gz
RUN mkdir -p $CODE_DIR
RUN cd $CODE_DIR
RUN touch my_file.txt
RUN oci os bucket create -c <MY_COMPARTMENT> --name <NEW_BUCKET_NAME>
And this is the error -->
I need to execute some OCI commands at the beginning with RUN or CMD (I tried both).
Note: OCI_FILE_URL_UPLOADED_INTO_A_BUCKET is an archive that contains the configuration made on another computer; the idea is to share the same key with several users when they use juliovg/oracle-oci-19 for other things.
Seems like a path issue. A Dockerfile RUN command invokes its executable via /bin/sh -c <executable>, so it is better to give the full path of the executable, like RUN /root/bin/oci -v:
FROM juliovg/oracle-oci-19
ENV HOME_DIR=/root \
CODE_DIR=/root/sample/code \
BUCKET_NAME=Code
WORKDIR $HOME_DIR
RUN rm -rf $HOME_DIR/.oci
RUN mkdir -p $CODE_DIR
RUN cd $CODE_DIR
RUN touch my_file.txt
ENV LC_ALL=en_US.utf-8
ENV LANG=en_US.utf-8
RUN /root/bin/oci -v
RUN /root/bin/oci os bucket create -c MY_COMPARTMENT --name NEW_BUCKET_NAME
You should consider that the container may not know the location of the OCI CLI executable, so give the full path to the OCI CLI in the Dockerfile.
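Alternatively, instead of spelling out the full path on every RUN line, you can prepend the CLI's directory to PATH; a minimal sketch, assuming the binary lives in /root/bin as above:
# make the OCI CLI visible to every subsequent RUN step
ENV PATH=/root/bin:$PATH
RUN oci -v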
I've been trying to create a Docker image that executes kubectl with custom OCI variables. It creates the OCI configuration file automatically and then generates the .kube/config file.
I thought of using this because we have more than one cluster, and jumping between them each time is time-consuming and it's easy to make mistakes or confuse them.
Basically I created the Dockerfile with the following entrypoint:
FROM private.repo/oci-image:latest
# Install Kubectl client
RUN apt update && apt install -y curl gettext-base
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.24.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
...
...
...
ENTRYPOINT ["/bin/bash", "-c", "./script.sh"]
This is the script.sh file
#!/bin/sh
set -e
cat $HOME/.oci/config-template | envsubst > $HOME/.oci/config
yes | oci ce cluster create-kubeconfig --profile DEFAULT --cluster-id ${K8S_CLUSTER_ID} --file $HOME/.kube/config --region ${REGION} --token-version 2.0.0 --kube-endpoint PUBLIC_ENDPOINT
exec "$@"
And I have been trying to run the container and pass kubectl commands to it:
docker run -e ... oci-agent:v21 kubectl get nodes
But I am not getting any response. I tried replacing the exec "$@" with exec "kubectl $@", but I obtain the kubectl help instructions, so it's only executing kubectl and not reading my command.
How do I do this properly, please?
Remove the bash -c wrapper from the ENTRYPOINT
ENTRYPOINT ["./script.sh"]
You're already aware that the CMD is passed as arguments to the ENTRYPOINT. With the bash -c wrapper, these arguments are passed as additional arguments to the wrapper shell, not to your script. The wrapper shell always runs ./script.sh with no arguments, because that's the command it was asked to run; the CMD arguments could be accessed as positional arguments $0, $1, ... in that command but this is pretty unusual.
This is the same reason ENTRYPOINT must be a JSON array for it to actually receive the CMD as arguments: Docker turns a string-form ENTRYPOINT (or CMD or RUN) into ["/bin/sh", "-c", "the string"] and arguments aren't actually passed on to the wrapper script. (You should almost never need to use sh -c inside a Dockerfile.)
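Concretely, a minimal sketch of the corrected pieces (the default command is illustrative):
# Dockerfile: exec-form ENTRYPOINT, no shell wrapper
ENTRYPOINT ["./script.sh"]
# optional default; overridden by arguments given to docker run
CMD ["kubectl", "get", "nodes"]
with script.sh ending in exec "$@", so that docker run -e ... oci-agent:v21 kubectl get nodes reaches kubectl as intended.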
I've written the following Dockerfile which is supposed to run an arbitrary command (by providing one through arguments of docker run):
FROM ubuntu:20.04
RUN apt -y update && apt-get -y update
RUN apt install -y python3 git
CMD bash
But when I'm trying to pass the command, e.g. cd workspace, I get the following:
C:\Users\user>docker run -it cloudbuildtoolset:latest cd workspace
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cd": executable file not found in $PATH: unknown.
What am I doing wrong?
Please don't suggest that I restart my machine/Docker/whatever.
cd is a special built-in utility, in the language of the POSIX shell specification. It's something that changes the behavior of the running shell and not a standalone program. The error message means what it says: there is no /bin/cd or similar executable you can run.
Remember that a Docker container runs a single process, then exits, losing whatever state it has. It might not make sense for that single command to just change the container's working directory.
If you want to run a process inside a container but in a different working directory, you can use the docker run -w option
docker run -it \
-w /workspace \
cloudbuildtoolset:latest \
the command you want to run
or, equivalently, add a WORKDIR directive to your Dockerfile.
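For instance, a minimal variant of the Dockerfile above with the working directory baked in:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3 git
# WORKDIR creates /workspace and makes it the default directory for CMD
WORKDIR /workspace
CMD ["bash"]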
You can also launch a shell wrapper as the main container process. This would be able to use built-in commands like cd, but it's more complex to use and can introduce quoting issues.
docker run -it cloudbuildtoolset:latest \
/bin/sh -c 'cd /workspace && the command you want to run'
I'm trying to use a Docker container to build a project that uses rust; I'm trying to build as my user. I have a Dockerfile that installs rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged \
  -v /mnt:/mnt -v /dev:/dev \
  --volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw \
  --workdir /home/stefan/<path/to/project> \
  --name <container-name> \
  -v /etc/group:/etc/group:ro \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/shadow:/etc/shadow:ro \
  -u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a possible workaround that would let me map the source code from a folder under $HOME on the host while keeping $HOME from the container?
I'm on Ubuntu 18.04, Docker 19.03.12, on x86-64.
The Dockerfile expands variables inside the build container, not on your physical machine, so your own user doesn't exist there and $HOME is root's home.
Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But I didn't get very far, because inside the build container the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
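(In principle the user can be created at build time before switching to it; a minimal sketch reusing the ARGs above, with the groupadd/useradd step the snippet was missing:
# create the group and user before the USER directive (names from the ARGs above)
RUN groupadd $BUILD_GROUP \
 && useradd -m -g $BUILD_GROUP $BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
though that bakes a single user into the image.)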
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work: there is no /home/$USER in the container; mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create any directory. I could have mapped $HOME from the host, but then the container would control the rust versions on the host, and would not be as self-contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
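The relocated install looks roughly like this (a sketch; the /opt/rust prefix is illustrative and must be writable by the build user):
# relocate rustup and cargo out of $HOME
export CARGO_HOME=/opt/rust/cargo
export RUSTUP_HOME=/opt/rust/rustup
# install without touching ~/.profile, keeping the pinned toolchain
curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain $RUST_VERSION
export PATH=$CARGO_HOME/bin:$PATH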
While launching a command on my Docker image (run), I get the following error:
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-n\": executable file not found in $PATH": unknown.
The image is an image for JMeter that I have created myself:
FROM hauptmedia/java:oracle-java8
MAINTAINER maisie
ENV JMETER_VERSION 5.2.1
ENV JMETER_HOME /opt/jmeter
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
RUN apt-get clean
RUN apt-get update
RUN apt-get -y install ca-certificates
RUN mkdir -p ${JMETER_HOME}
RUN cd ${JMETER_HOME}
RUN wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz
RUN tar -xvzf apache-jmeter-5.2.1.tgz
RUN rm apache-jmeter-5.2.1.tgz
The command that I am launching is:
#!/bin/bash
export volume_path=$(pwd)
export jmeter_path="/opt/apache-jmeter-5.2.1/bin"
docker run --volume ${volume_path}:${jmeter_path} my/jmeter -n -t ${jmeter_path}/TEST.jmx -l ${jmeter_path}/res.jtl
I really can't find any answer to my problem ...
Thank you in advance for any help.
The general form of the docker run command is
docker run [docker options] <image name> [command]
So you are running an image named my/jmeter, and the command you are asking it to run is -n -t .... You're getting the error you are because you've only given a list of options and not an actual command.
The first part of this is to include the actual command in your docker run line:
docker run --rm my/jmeter \
jmeter -n ...
There's also going to be a problem with how you install the software in the Dockerfile. (You do not need a docker run --volume to supply software that's already in the image.) Each RUN command starts in a new shell in a new environment (in a new container even), so saying e.g. RUN cd ... in its own line doesn't do anything. You need to use Dockerfile directives like WORKDIR and ENV to change the environment. The jmeter command isn't in a standard binary directory so you'll also have a little trouble running it. I might change:
# ...
# Run all APT commands in a single command
# (Layer caching can break an install if the list of packages changes)
RUN apt-get clean \
&& apt-get update \
&& apt-get -y install ca-certificates
# Download and unpack the JMeter tar file
# This is all in a single RUN command, so
# (1) the `cd` only has its (temporary) effect within this command, and
# (2) the tar file isn't committed to an image before you `rm` it
RUN cd /opt \
&& wget ${JMETER_DOWNLOAD_URL} \
&& tar xzf apache-jmeter-${JMETER_VERSION}.tgz \
&& rm apache-jmeter-${JMETER_VERSION}.tgz
# Create a symlink to the jmeter process in a normal bin directory
RUN ln -s /opt/apache-jmeter-${JMETER_VERSION}/bin/jmeter /usr/local/bin
# Indicate the default command to run
CMD jmeter
Finally, there will be questions around where to store data files. It's better to store data outside the application directory; in a Docker context it's common enough to use short (if non-standard) directory paths like /data. Remember that any file path in a docker run command refers to a path in the container, but you need a docker run -v bind-mount option (your original --volume is equivalent) to make it visible on the host. That would give you a final command like:
docker run -v "$PWD:/data" my/jmeter \
jmeter -n -t /data/TEST.jmx -l /data/res.jtl
I have containers for multiple Atlassian products: JIRA, Bitbucket, and Confluence. When I'm trying to access the running containers I'm usually using:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm able to access as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean-up script':
This is the first point of failure; the script runs once a week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines against the Bitbucket container, the result is:
unable to find user root: no matching entries in passwd file
It's failing on the 'docker cp' command, but only for the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/ \
&& sed -i -e 's/^# umask 0027$/umask 0027/' ${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may have removed or disabled the root user.
This issue is caused by a Docker engine bug, but it is tracked privately; Docker is asking users to restart the engine!
It seems that the bug is likely older than two years!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue; I replicated it from my old version 1.10.3 up to at least 1.17.
As mentioned by @sorin, the Docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error