I have this Dockerfile:
FROM alpine/git AS git
RUN apk fix && apk --no-cache --update add zip
VOLUME /root
WORKDIR /root
FROM minio/mc AS minio
COPY --from=git /usr/bin/git /usr/bin/git
COPY --from=git /usr/bin/zip /usr/bin/zip
ENTRYPOINT ["sh"]
Basically I need the following tools:
git
minio client
zip
Which, per the docs I read, should work - and it does build without an error.
But when I try to execute anything I get:
Sending build context to Docker daemon 8.192kB
Step 1/8 : FROM alpine/git AS git
---> 22d84a66cda4
Step 2/8 : RUN apk fix && apk --no-cache --update add zip
---> Using cache
---> 5ce4d94085d9
Step 3/8 : VOLUME /root
---> Using cache
---> 89329e40cbba
Step 4/8 : WORKDIR /root
---> Using cache
---> 37f2c9216bb1
Step 5/8 : FROM minio/mc AS minio
---> 396036e5ac42
Step 6/8 : COPY --from=git /usr/bin/git /usr/bin/git
---> 543676360b6d
Step 7/8 : COPY --from=git /usr/bin/zip /usr/bin/zip
---> 936165b36d2e
Step 8/8 : ENTRYPOINT ["sh"]
---> Running in 4ee0c4b491f9
Removing intermediate container 4ee0c4b491f9
---> 1279af2dd755
Successfully built 1279af2dd755
Successfully tagged fabric:latest
[centos@ip-10-6-5-12 ~]$ sudo docker run -it fabric
sh-4.4# which zip
sh: which: command not found
sh-4.4# zip
sh: /usr/bin/zip: No such file or directory
sh-4.4# which git
sh: which: command not found
sh-4.4# git
sh: /usr/bin/git: No such file or directory
sh-4.4# git
sh: /usr/bin/git: No such file or directory
sh-4.4# apt
sh: apt: command not found
sh-4.4# ls -al /usr/bin/git
-rwxr-xr-x. 1 root root 2911912 Oct 19 04:51 /usr/bin/git
So, the file is there and root (me) owns it, yet it says "No such file or directory"? I am perplexed.
Anyone know what's up?
Conclusion
Don't use COPY to add git or zip: they have dependencies, and just copying the git or zip binary is not enough.
Dockerfile
FROM minio/mc AS minio
RUN \
microdnf update --nodocs && \
microdnf install git zip --nodocs && \
microdnf clean all
ENTRYPOINT ["sh"]
Install Log
...
Installing: gzip;1.9-13.el8_5;x86_64;ubi-8-baseos-rpms
Installing: cracklib;2.9.6-15.el8;x86_64;ubi-8-baseos-rpms
Installing: cracklib-dicts;2.9.6-15.el8;x86_64;ubi-8-baseos-rpms
Installing: libpwquality;1.4.4-5.el8;x86_64;ubi-8-baseos-rpms
Installing: pam;1.3.1-22.el8;x86_64;ubi-8-baseos-rpms
Installing: util-linux;2.32.1-38.el8;x86_64;ubi-8-baseos-rpms
Installing: openssh;8.0p1-16.el8;x86_64;ubi-8-baseos-rpms
Installing: openssh-clients;8.0p1-16.el8;x86_64;ubi-8-baseos-rpms
Installing: less;530-1.el8;x86_64;ubi-8-baseos-rpms
Installing: git-core;2.31.1-2.el8;x86_64;ubi-8-appstream-rpms
Installing: git-core-doc;2.31.1-2.el8;noarch;ubi-8-appstream-rpms
Installing: perl-Git;2.31.1-2.el8;noarch;ubi-8-appstream-rpms
Installing: git;2.31.1-2.el8;x86_64;ubi-8-appstream-rpms
Installing: zip;3.0-23.el8;x86_64;ubi-8-baseos-rpms
From the installation log, you can see that git and zip have other dependent packages that need to be installed.
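If you want to see those dependencies directly rather than reading the install log, a sketch is to query the RPM database inside the built image (this assumes the image is tagged fabric as above and that the rpm CLI is present in the UBI-based base image):
docker run --rm fabric -c 'rpm -q --requires git zip'
Everything listed there is what microdnf resolves and installs for you, and exactly what a plain COPY of the binaries misses.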
Run it
$ docker run -it fabric
sh-4.4# git
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
[--super-prefix=<path>] [--config-env=<name>=<envvar>]
<command> [<args>]
These are common Git commands used in various situations:
start a working area (see also: git help tutorial)
clone Clone a repository into a new directory
init Create an empty Git repository or reinitialize an existing one
work on the current change (see also: git help everyday)
add Add file contents to the index
mv Move or rename a file, a directory, or a symlink
restore Restore working tree files
rm Remove files from the working tree and from the index
sparse-checkout Initialize and modify the sparse-checkout
examine the history and state (see also: git help revisions)
bisect Use binary search to find the commit that introduced a bug
diff Show changes between commits, commit and working tree, etc
grep Print lines matching a pattern
log Show commit logs
show Show various types of objects
status Show the working tree status
grow, mark and tweak your common history
branch List, create, or delete branches
commit Record changes to the repository
merge Join two or more development histories together
rebase Reapply commits on top of another base tip
reset Reset current HEAD to the specified state
switch Switch branches
tag Create, list, delete or verify a tag object signed with GPG
collaborate (see also: git help workflows)
fetch Download objects and refs from another repository
pull Fetch from and integrate with another repository or a local branch
push Update remote refs along with associated objects
'git help -a' and 'git help -g' list available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.
See 'git help git' for an overview of the system.
sh-4.4# zip
Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.
Zip 3.0 (July 5th 2008). Usage:
zip [-options] [-b path] [-t mmddyyyy] [-n suffixes] [zipfile list] [-xi list]
The default action is to add or replace zipfile entries from list, which
can include the special name - to compress standard input.
If zipfile and list are omitted, zip compresses stdin to stdout.
-f freshen: only changed files -u update: only changed or new files
-d delete entries in zipfile -m move into zipfile (delete OS files)
-r recurse into directories -j junk (don't record) directory names
-0 store only -l convert LF to CR LF (-ll CR LF to LF)
-1 compress faster -9 compress better
-q quiet operation -v verbose operation/print version info
-c add one-line comments -z add zipfile comment
-# read names from stdin -o make zipfile as old as latest entry
-x exclude the following names -i include only the following names
-F fix zipfile (-FF try harder) -D do not add directory entries
-A adjust self-extracting exe -J junk zipfile prefix (unzipsfx)
-T test zipfile integrity -X eXclude eXtra file attributes
-y store symbolic links as the link instead of the referenced file
-e encrypt -n don't compress these suffixes
-h2 show more help
sh-4.4# exit
Docker image size
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
fabric 1.0.0 d2415a18eb37 8 minutes ago 264MB
fabric latest d2415a18eb37 8 minutes ago 264MB
You've copied binaries that are linked against dynamic libraries that don't exist in your target image:
sh-4.4# type zip
zip is /usr/bin/zip
sh-4.4# ldd /usr/bin/zip
linux-vdso.so.1 (0x00007ffed7553000)
libc.musl-x86_64.so.1 => not found
sh-4.4# type git
git is /usr/bin/git
sh-4.4# ldd /usr/bin/git
linux-vdso.so.1 (0x00007ffdd8da8000)
libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f56c752e000)
libz.so.1 => /lib64/libz.so.1 (0x00007f56c7316000)
libc.musl-x86_64.so.1 => not found
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f56c70f6000)
libc.so.6 => /lib64/libc.so.6 (0x00007f56c6d30000)
/lib/ld-musl-x86_64.so.1 => /lib64/ld-linux-x86-64.so.2 (0x00007f56c7aa6000)
When copying binaries into an image, they either need to be statically linked, or you need to ensure all of the libraries they depend on already exist in the target image. Linux package managers do this for you; in the target image you've picked, that package manager is microdnf:
microdnf install zip git
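If you do copy a binary between images, a quick sanity check (a sketch, reusing the image tag from the question) is to run ldd on the copied binaries inside the target image and look for unresolved libraries:
docker run --rm fabric -c 'ldd /usr/bin/git /usr/bin/zip'
Any "=> not found" line, like libc.musl-x86_64.so.1 above, means the binary cannot run in that image.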
Related
I am trying to fork this docker image so that if anything changes on the original it won't affect me.
I have forked the repo corresponding to that image to my own repo.
I have cloned the repo and am trying to build it:
docker build . -t davcal/gcc-cross-x86_64-elf
I am getting this error:
+ cd /usr/local/src
+ ./build-binutils.sh 2.31.1
/bin/sh: 1: ./build-binutils.sh: not found
The command '/bin/sh -c set -x && cd /usr/local/src && ./build-binutils.sh ${BINUTILS_VERSION} && ./build-gcc.sh ${GCC_VERSION}' returned a non-zero code: 127
What makes no sense to me is that if I use the original image, it builds successfully:
FROM randomdude/gcc-cross-x86_64-elf
...
Maybe Docker Hub stores a pre-built image?
How do I fix this?
Note: I am using Windows. This shouldn't make a difference since the error originates within the container.
Edit
I tried patching the Dockerfile to chmod the sh files executable in case that was causing problems on Windows. Unfortunately, the exact same error occurs.
RUN set -x \
&& chmod +x /usr/local/src/build-binutils.sh \
&& chmod +x /usr/local/src/build-gcc.sh \
&& cd /usr/local/src \
&& ./build-binutils.sh ${BINUTILS_VERSION} \
&& ./build-gcc.sh ${GCC_VERSION}
Edit 2
Following this method, I inspected the container to see if the sh files actually exist. Here is the output.
I ran docker run --rm -it c53693f11514 bash, using the ID of the intermediate image from the previous successful step of the Dockerfile.
This is the output showing that the files do exist:
root@9b8a64ac2090:/# cd usr/local/src
root@9b8a64ac2090:/usr/local/src# ls
binutils-2.31.1 build-binutils.sh build-gcc.sh gcc-8.2.0
From the described symptoms (the file exists, is a shell script, and works on other machines), the "file not found" error is most likely from Windows linefeeds being added to the file. When the Linux kernel processes a shell script, it looks at the first line, the #!/bin/sh or similar, and then finds that interpreter to run the shell script. If that interpreter isn't found, you'll get a "file not found" error.
In this case, the file it's looking for won't be /bin/sh, but instead /bin/sh\r or /bin/sh^M, depending on how you want to represent the carriage return character. You can fix that for single files with a tool like dos2unix, but in general you'll want to fix git itself, since there are likely other files that have had their linefeeds corrupted. For details on adjusting the behavior of git, see this post.
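A sketch of checking and fixing the line endings (assuming dos2unix is available; the paths are the scripts from the question):
# show control characters; a ^M before the end of the first line means CRLF endings
head -n1 build-binutils.sh | cat -A
# convert the scripts in place
dos2unix build-binutils.sh build-gcc.sh
# and/or stop git from converting linefeeds on checkout, then re-clone
git config --global core.autocrlf input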
I am trying to build using:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 AS builder
COPY pythonnet/src/ pythonnet/src
WORKDIR /pythonnet/src/runtime
RUN dotnet build -f netstandard2.0 -p:DefineConstants=\"MONO_LINUX\;XPLAT\;PYTHON3\;PYTHON37\;UCS4\;NETSTANDARD\" Python.Runtime.15.csproj
# copy myApp csproj and restore
COPY src/myApp/*.csproj /src/myApp/
WORKDIR /src/myApp
RUN dotnet restore
# now copy everything else as separate docker step
# (copy to staging folder, remove csproj, and copy down - so we don't overwrite project above)
WORKDIR /
COPY src/myApp/ ./staging/src/myApp
RUN rm ./staging/src/myApp/*.csproj \
&& cp -r ./staging/* ./ \
&& rm -rf ./staging
This was working fine, and in Windows 10 still does, but in CentOS 7 I get:
Step 10/40 : RUN rm ./staging/src/myApp/*.csproj && cp -r ./staging/* ./ && rm -rf ./staging
---> Running in 6b17ae0fae89
cp: cannot stat './staging/src/myApp/myApp.csproj': No such file or directory
Using ls instead of cp throws a similar file not found error, so it looks like Docker still knows about myApp.csproj but cannot see it since it has been removed.
Is there a way around this? I have tried using rsync but ran into similar problems.
I simply ignored the issue by tacking ; exit 0 onto the offending lines. Not great, but it does the job.
EDIT: This worked for me because I cannot upgrade the version of CentOS. If you can, check out Alexander Block's answer.
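For reference, a sketch of what that workaround looks like applied to the RUN step from the question (only the trailing ; exit 0 is new, and it makes the step succeed even when cp fails):
RUN rm ./staging/src/myApp/*.csproj \
&& cp -r ./staging/* ./ \
&& rm -rf ./staging; exit 0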
I don't know specifically how to solve this problem as there's a lot of context in the filesystem that you haven't (and probably can't) share with us.
My suggestion on a strategy is that you:
1. comment out all lines from the failing one 'til the end of the Dockerfile
2. build the partial image
3. docker run -it [image] bash to jump into the image
4. poke around and figure out what's going wrong
5. repeat 1-4 until things work as expected
It's not as fun as a perfectly insightful answer, of course, but this is a relentlessly effective algorithm, even if it's tedious and annoying.
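A sketch of that loop with the Dockerfile from the question (the image tag is just illustrative):
# comment out the failing RUN and everything after it, then:
docker build -t partial-debug .
docker run --rm -it partial-debug bash
# inside the container, poke around where the failing step would have run:
cd /usr/local/src && ls -l
./build-binutils.sh 2.31.1    # re-run the failing command by hand and read the real error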
EDIT
My wild guess is that somehow, some way, the Linux machine doesn't have the file where it's expected, so it doesn't get copied into the image at all, and that's why the docker build process can't find it. But there's no way to know without debugging the build process.
cp -r will stop and fail with that cannot stat <file> message whenever the source is a symbolic link whose target does not exist. It will not copy links to non-existent files.
So my guess is that after you run COPY src/myApp/ ./staging/src/myApp, your file ./staging/src/myApp/myApp.csproj is a symbolic link to a non-existent file. Why the following RUN rm ./staging/src/myApp/*.csproj doesn't remove it and stays silent about that, I don't know.
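One way to check that theory before reaching for workarounds (a sketch; replace <image-id> with the ID of the last successfully built intermediate image from your build output):
docker run --rm -it <image-id> sh -c 'ls -l /staging/src/myApp/'
# a dangling symlink shows up here with an "->" pointing at a path that does not exist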
To help demonstrate my theory, see below for cp failing on a dangling symlink on CentOS 7.
[547] $ docker run --rm -it centos:7
Unable to find image 'centos:7' locally
7: Pulling from library/centos
524b0c1e57f8: Pull complete
Digest: sha256:e9ce0b76f29f942502facd849f3e468232492b259b9d9f076f71b392293f1582
Status: Downloaded newer image for centos:7
[root@a47b77cf2800 /]# ln -s /tmp/foo /tmp/bar
[root@a47b77cf2800 /]# ls -l /tmp/foo
ls: cannot access /tmp/foo: No such file or directory
[root@a47b77cf2800 /]# ls -l /tmp/bar
lrwxrwxrwx 1 root root 8 Jul 6 05:44 /tmp/bar -> /tmp/foo
[root@a47b77cf2800 /]# cp /tmp/foo /tmp/1
cp: cannot stat '/tmp/foo': No such file or directory
[root@a47b77cf2800 /]# cp /tmp/bar /tmp/2
cp: cannot stat '/tmp/bar': No such file or directory
Notice how cp reports that it cannot stat either the missing link target or the symbolic link itself. It's the exact symptom you are seeing.
If you just want to get past this, you can try tar instead of cp or rsync.
Instead of
cp -r ./staging/* ./
use this instead:
tar -C ./staging -cf - . | tar -xf -
tar will happily copy symlinks whose targets don't exist.
You've very likely encountered a kernel bug that was fixed a long time ago in more recent kernels. According to https://de.wikipedia.org/wiki/CentOS, CentOS 7 is based on Linux kernel 3.10, which is already quite old and does not have good Docker support in regard to the storage backend (the overlay filesystem).
CentOS tried to backport the needed fixes and features into 3.10, but seems not to have fully succeeded when it comes to overlay support. There are multiple (slightly different) issues regarding this, which you can find by searching for "CentOS 7 overlay driver" on the internet. All of them have in common that removal of files from parent overlay layers does not work as expected.
To me it looks like rm calls on such files return success even though the files are not fully removed. Directory listings (e.g. by ls, or shell expansion as in your case) then still list the file, while accessing it fails, no matter whether you try to read, write or delete it.
I assume that what you've seen is just another incarnation of these issues. You should either switch to CentOS 8 or upgrade your kernel (which is not officially supported by CentOS, as far as I understand). Or, even more radically, switch to a distribution that is used more often in combination with Docker and generally offers more recent kernels, e.g. Debian or Ubuntu.
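To confirm you are on the affected combination, a quick check (a sketch) is to look at the kernel version and the storage driver Docker is using:
uname -r                             # stock CentOS 7 reports a 3.10.x kernel
docker info --format '{{.Driver}}'   # prints the storage driver in use, e.g. overlay or overlay2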
I'm new to Docker and ran into the following problem:
In my Dockerfile I have these lines:
ADD dir/archive.tgz /dir/
RUN tar -xzf /dir/archive2.tar.gz -C /dir/
RUN ls -l /dir/
RUN ls -l /dir/dir1/
The first ls prints out files correctly and I can see that dir1 was created inside dir by the archive, with permissions drwxr-xr-x. But the second ls gives me:
ls: "cannot access /dir/dir1/: No such file or directory"
I thought that if Docker can see a file, it can access it. Do I need to do some special magic here?
I thought that if Docker can see a file, it can access it.
In a way you are right, but you're also missing a piece of info. Those RUN commands are not necessarily executed sequentially, since Docker operates in layers, and your third RUN command may be executed while your first is skipped (taken from cache). To preserve proper execution order, you need to put them in the same RUN command so they end up in the same layer (and are updated together):
RUN tar -xzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir1/
This is a common issue, most often seen when this is put in a Dockerfile:
RUN apt-get update
RUN apt-get install some-package
Instead of this:
RUN apt-get update && \
apt-get install some-package
Note: This is in line with the best practices for usage of the RUN command in a Dockerfile, documented here: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run, and avoids possible confusion with caches/layers.
To recreate your problem, here is a small test resembling your setup (depending on the actual directory structure in your archive, this may differ):
Dummy archive2.tar.gz with dir/dir1/somefile.txt created:
mkdir -p ~/test-sowf/dir/dir1 && cd ~/test-sowf && echo "Yay" | tee --append dir/dir1/somefile.txt && tar cvzf archive2.tar.gz dir && rm -rf dir
Dockerfile created in ~/test-sowf with the following content:
from ubuntu:latest
COPY archive2.tar.gz /dir/
RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir/dir1/
Build command like so:
docker build -t test-sowf .
Gives following result:
Sending build context to Docker daemon 5.632kB
Step 1/3 : from ubuntu:latest
---> 452a96d81c30
Step 2/3 : COPY archive2.tar.gz /dir/
---> Using cache
---> 852ef4f706d3
Step 3/3 : RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && ls -l /dir/ && ls -l /dir/dir/dir1/
---> Running in b2ab281190a2
dir/
dir/dir1/
dir/dir1/somefile.txt
total 8
-rw-r--r-- 1 root root 177 May 10 15:43 archive2.tar.gz
drwxr-xr-x 3 1000 1000 4096 May 10 15:43 dir
total 4
-rw-r--r-- 1 1000 1000 4 May 10 15:43 somefile.txt
Removing intermediate container b2ab281190a2
---> 05b7dfe52e36
Successfully built 05b7dfe52e36
Successfully tagged test-sowf:latest
Note that the extracted files are owned by 1000:1000, as opposed to root:root for the archive itself, so unless you are running as some other (non-root) user you should not have ownership problems, but, depending on your archive, you might run into path problems (/dir/dir/dir1 as shown here).
Test that the file is correct and contains 'Yay':
docker run --rm --name test-sowf test-sowf:latest cat /dir/dir/dir1/somefile.txt
Clean up the test mess afterwards (deliberately not using rm -rf but cleaning up individual files):
docker rmi test-sowf && cd && rm ~/test-sowf/archive2.tar.gz && rm ~/test-sowf/Dockerfile && rmdir ~/test-sowf
For those using docker-compose:
Sometimes when you volume mount a folder/file from one container to another before it exists, it can have weird permissions after it's created
For example if one container is certbot and another is your webserver, certbot will take time to generate the /etc/letsencrypt folder and its contents
From the webserver you might be able to see the folder or its contents with an ls, but not open them. You can see the behavior with a cat * and you'll get back
cat: <files in question>: No such file or directory
One solution is generating the folder at build time with a RUN mkdir -p /directory/of/choice in the Dockerfile of the container generating the folder/files. Then the folder will exist and Docker will happily mount it to your other container or host machine the way you want it to.
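A minimal sketch of that idea, using the certbot example from above (the image name and path are illustrative; adjust them to your setup):
# Dockerfile for the container that generates the files
FROM certbot/certbot
# create the shared directory at build time so the volume starts from a real, correctly-owned folder
RUN mkdir -p /etc/letsencrypt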
I would like to generate PDFs from my working directory with a Dockerfile that has texlive included.
/***Dockerfile***/
FROM alpine
RUN sed -i -e 's/v3\.4/edge/g' /etc/apk/repositories
RUN echo 'http://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories
RUN apk update\
&& apk add texlive-full
WORKDIR ./mountvolume
/**Build Image**/
docker build -t texlive .
The Docker image works as expected, but when I try
docker run -v $PWD:/mountvolume texlive /bin/sh -c 'pdflatex article.tex'
I get the error:
This is pdfTeX, Version 3.14159265-2.6-1.40.17 (TeX Live 2016/Alpine Linux) (preloaded format=pdflatex)
restricted \write18 enabled.
kpathsea: Running mktexfmt pdflatex.fmt
Can't locate mktexlsr.pl in @INC (@INC contains: /usr/share/tlpkg /usr/share/texmf-dist/scripts/texlive /usr/local/lib/perl5/site_perl /usr/local/share/perl5/site_perl /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5/core_perl /usr/share/perl5/core_perl .) at /usr/bin/mktexfmt line 23.
BEGIN failed--compilation aborted at /usr/bin/mktexfmt line 25.
I can't find the format file pdflatex.fmt!
Your Dockerfile needs to be a bit more elaborate: see issue 4514
It looks like that texlive package isn't enough to get a fully functional TeX system (2015: fmtutil.pl depends on mktexlsr.pl).
texlive has been in the testing repo for a long time.
But it's missing some stuff, the most important being texlive-texmf.
If you want to use texlive today, you must install the package from testing, download texlive-20160523b-texmf.tar.xz, copy its contents to /usr/share/texmf-dist and reinstall the package from testing.
But:
What Alpine needs is to package texlive-texmf too.
But no Alpine Linux developer has looked into this yet.
I am building Docker from this branch of the source code:
https://github.com/boucher/docker/tree/cr-combined
After cloning the code:
git clone -b cr-combined --single-branch https://github.com/boucher/docker.git
cd docker
#make build
#make binary
I then copied the resulting file ./bundles/../docker to the /usr/bin directory.
After reopening the terminal and starting the Docker engine again,
it shows that I am using my own built version, but
this version should have two main docker commands that don't show up in my build:
1- checkpoint
2- restore
Could you please help me and tell me where it went wrong?
Here is what I do:
$ git clone https://github.com/boucher/docker
$ cd docker
$ git checkout cr-combined
$ env AUTO_GOPATH=1 DOCKER_EXPERIMENTAL=1 \
DOCKER_BUILDTAGS='exclude_graphdriver_btrfs \
exclude_graphdriver_devicemapper' ./hack/make.sh binary
$ ./bundles/1.10.0-dev/binary/docker-1.10.0-dev --help | grep checkpoint
checkpoint Checkpoint one or more running containers
restore Restore one or more checkpointed containers
Hope this helps.