Given a Dockerfile with the content below:
FROM sauloefo/my-ubuntu-22.04:v1
RUN git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.10.2
RUN echo 'source ~/.asdf/asdf.fish' >> ~/.config/fish/config.fish
The following error happens on the docker build call:
> [3/3] RUN echo 'source ~/.asdf/asdf.fish' >> ~/.config/fish/config.fish:
#6 0.372 /bin/sh: 1: cannot create /root/.config/fish/config.fish: Directory nonexistent
But if I create a container directly from the base image (sauloefo/my-ubuntu-22.04:v1) the file exists.
Does anybody know what I am missing?
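A workaround that seems to sidestep the error is creating the directory before appending to the file (a sketch, assuming nothing else in the build is supposed to create it):
RUN mkdir -p ~/.config/fish \
 && echo 'source ~/.asdf/asdf.fish' >> ~/.config/fish/config.fish
but I'd still like to understand why the directory is missing during the build in the first place.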
I'm building a docker image as follows:
TEMP_FILE="/home/username/any/directory/temp"
touch $TEMP_FILE
<secrets> > $TEMP_FILE
export DOCKER_BUILDKIT=1
pushd $PROJECT_ROOT
docker build -t $DOCKER_IMAGE_NAME \
--secret id=netrc,src=$TEMP_FILE \
--build-arg=<...> \
-f Dockerfile .
rm $TEMP_FILE
Currently this works.
I'd now like to use $(mktemp) to create the TEMP_FILE in the /tmp directory. However, when I point TEMP_FILE outside of /home, I get the following error:
could not parse secrets: [id=netrc,src=/tmp/temp-file-name]: failed to stat /tmp/temp-file-name: stat /tmp/temp-file-name: no such file or directory
The script itself has no issue; I can easily find and view the temporary file, for example with cat $TEMP_FILE.
How do I give docker build access to /tmp?
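If it helps narrow things down, a sketch of the mktemp call pointed at a directory the build can already read instead of /tmp (the directory name is illustrative, and this assumes the failure is simply that the build daemon cannot see the host's /tmp):
# illustrative: create the temp file under $HOME rather than /tmp
TEMP_FILE="$(mktemp -p "$HOME/any/directory" temp.XXXXXX)"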
I'm building a docker image and getting the error:
=> ERROR [14/36] RUN --mount=type=secret,id=jfrog-cfg,target=/root/.jfrog/jfrog-cli.conf jfrog rt dl --flat artifact 0.7s
------
> [14/36] RUN --mount=type=secret,id=jfrog-cfg,target=/root/.jfrog/jfrog-cli.conf jfrog rt dl --flat artifact/artifact.tar.gz; set -eux; mkdir -p /usr/local/artifact; tar xzf artifact.tar.gz -C /usr/local/; ln -s /usr/local/artifact /usr/local/artifact;:
#22 0.524 [Error] open /root/.jfrog/jfrog-cli.conf: read-only file system
------
failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = failed to build LLB: executor failed running [/bin/bash -eo pipefail -c jfrog rt dl --flat artifact/${ART_TAG}.tar.gz; set -eux; mkdir -p /usr/local/${ART_TAG}; tar xzf ${ART_TAG}.tar.gz -C /usr/local/; ln -s /usr/local/${ART_VERSION} /usr/local/artifact;]: runc did not terminate sucessfully
The command I use to build the docker image is
DOCKER_BUILDKIT=1 docker build -t imagename . --secret id=jfrog-cfg,src=${HOME}/.jfrog/jfrog-cli.conf
(the JFrog config exists at ${HOME}/.jfrog/jfrog-cli.conf)
JFrog is working and the artifact I'm downloading exists, as I can download it manually outside of Docker.
On Linux, docker is run using the root user, so ${HOME} is /root and not /home/your-user-name or whatever your usual home folder is. Try using explicit full pathnames instead of the env var.
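As a sketch of what that suggestion looks like in practice (substitute your real home directory; the path below is illustrative):
DOCKER_BUILDKIT=1 docker build -t imagename . \
  --secret id=jfrog-cfg,src=/home/your-user-name/.jfrog/jfrog-cli.conf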
Building docker image.......
/usr/bin/env: ‘sh\r’: No such file or directory
execution failed
Complete script:
#!/bin/bash
echo "Building docker image......."
. gradle.properties
IMAGE="$dockerRegistry/$1"
IMAGE_TAG="$releaseVersion-$(git log -1 --pretty=%h)"
if ./gradlew clean :$1:jibDockerBuild -x test; then
echo "Pushing docker image...."
docker tag $IMAGE $IMAGE:$IMAGE_TAG
docker push $IMAGE:$IMAGE_TAG
else
echo "execution failed"
exit 1
fi
exit 0
Try adjusting the line endings of your gradlew file:
dos2unix ./gradlew
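To check before and after converting, something like the following may help (assuming the file utility is available; it reports CRLF terminators on affected files):
file gradlew       # shows "with CRLF line terminators" when the endings are wrong
dos2unix gradlew   # rewrites the file in place with LF endings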
When running an sh script from my Dockerfile, I got the following error:
./upload.sh: 5: ./upload.sh: sudo: not found ./upload.sh: 21:
./upload.sh: Bad substitution
sudo chmod 755 upload.sh # line 5
version=$(git rev-parse --short HEAD)
echo "version $version"
echo "Uploading file"
for path in $(find public/files -name "*.txt"); do
echo "path $path"
WORDTOREMOVE="public/"
echo "WORDTOREMOVE $WORDTOREMOVE"
# cause of the error
newpath=${path//$WORDTOREMOVE/} # Line 21
echo "new path $path"
url=http://localhost:3000/${newpath}
...
echo "Uploading file"
...
done
Dockerfile
FROM node:10-slim
EXPOSE 3000 4001
WORKDIR /prod/code
...
COPY . .
RUN ./upload.sh
RUN npm run build
CMD ./DockerRun.sh
Any idea?
If anyone faces the same issue, here is how I fixed it:
chmod +x upload.sh
git update-index --chmod=+x upload.sh (needed if you already pushed the file to a remote branch before changing its permissions)
The docker image you are using (node:10-slim) has no sudo installed, because this image runs its processes as the root user:
docker run -it node:10-slim bash
root@68dcffceb88c:/# id
uid=0(root) gid=0(root) groups=0(root)
root@68dcffceb88c:/# which sudo
root@68dcffceb88c:/#
When your Dockerfile runs RUN ./upload.sh it will run:
sudo chmod 755 upload.sh
Using sudo inside the container fails because sudo is not installed; there is no need for sudo inside the container, since all of the commands in the build already run as the root user.
Simply remove the sudo from line number 5.
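A sketch of line 5 with sudo dropped:
chmod 755 upload.sh   # line 5: the build already runs as root, so no sudo is needed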
If you wish to update the PATH variable of the running shell, run:
PATH=$PATH:/directorytoadd/bin
This will append the directory "/directorytoadd/bin" to the current path.
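In a Dockerfile, the equivalent persistent change is usually made with ENV; a minimal sketch, using the same illustrative directory:
ENV PATH="${PATH}:/directorytoadd/bin"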
This is what I'm getting when I run terraform plan on AWS with Jenkins. Below is the code that we are using.
Error: error: cannot delete old terraform
Is a directory
Code:
sh '''set +x
curl -L 'https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip' --output terraform.zip
unzip -o terraform.zip
echo "Using $(terraform -version) from: $(which terraform)"
'''
sh "terraform init -backend-config='bucket=${bucketName}'"
Jenkins Error:
+ set +x
after terraform download
Archive: terraform.zip
error: cannot delete old terraform
Is a directory
[Pipeline] End of Pipeline
ERROR: script returned exit code 50
Finished: FAILURE
Please suggest a better solution.
Unzip refuses to overwrite the terraform/ directory that is apparently still lying around in your workspace from a previous run.
Run either sh "rm -rf terraform/" before the unzip, or cleanWs() to clean the whole workspace.
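For example, a sketch of the same sh step with the cleanup added before the unzip (everything else is unchanged from the snippet above):
sh '''set +x
rm -rf terraform
curl -L 'https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip' --output terraform.zip
unzip -o terraform.zip
echo "Using $(terraform -version) from: $(which terraform)"
'''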
Use -f instead of -o:
unzip -f terraform.zip
-f  freshen existing files, create none (i.e. only replace files that already exist, without creating new ones)
-n  never overwrite existing files
-q  quiet mode (-qq => quieter)
-o  overwrite files WITHOUT prompting