I am having trouble making a simple build run on my RHEL server.
When running docker build I get
Step 2/2 : RUN echo "Hello there!"
---> Running in 0d0fd7f69a5f
/bin/sh: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
The command '/bin/sh -c echo "Hello there!"' returned a non-zero code: 127
Dockerfile:
FROM ubuntu:latest
RUN echo "Hello there!"
RHEL 7.7 - Linux 3.10.0-1062.9.1.el7.x86_64
Docker version 1.13.1, build 4ef4b30/1.13.1
The Dockerfile is fine - I can build the image on any other machine so I am wondering where the issue actually is. Thanks!
Thanks to everyone for the suggestions!
Turns out it was an SELinux issue. Seems Docker needs SELinux to be configured properly to run (in my case I chose to disable SELinux using sudo setenforce 0).
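For reference, a persistent equivalent of `setenforce 0` edits /etc/selinux/config. The sketch below assumes the stock file layout and exercises the substitution on a throwaway copy, so nothing on the host actually changes (on a real box the better long-term fix is usually to correct the SELinux policy rather than relax it):

```shell
# Sketch: the persistent version of `setenforce 0`, tried against a scratch
# copy of the config file. On a real RHEL host the file is /etc/selinux/config
# and the change takes effect after a reboot.
conf=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$conf"
grep '^SELINUX=' "$conf"   # -> SELINUX=permissive
rm -f "$conf"
```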
Related
Problem: I am attempting to create and run a Docker container of an Edge Service on a Raspberry Pi using 64 bit Raspbian. Due to other limitations, I am attempting to containerize the program (in Golang) by containerizing the pre-built executable file, rather than building the .go source in the Dockerfile.
My Dockerfile looks like this
FROM golang:alpine
WORKDIR /build
ADD server .
EXPOSE 50051
CMD ["./server"]
Running the built executable on its own works correctly, and when creating the Docker image using the command "sudo docker build -t server:v7 .", the Docker daemon gives no errors, with the finished image being displayed on the list of Docker images as if it were working correctly. However, when I try to run the image, I receive the error "standard_init_linux.go:219: exec user process caused: no such file or directory", rather than the file running.
Some additional notes as per the comments: All of the programs were written in Ubuntu, they were built to an executable on the Raspberry Pi, and the Dockerfile was also written on the Raspberry Pi. When using the command "sudo docker run -it -d : sh", the image will run, and when using the command "sudo docker exec -it sh", the terminal will enter the shell. Attempts to run the program from within the shell similarly fail, yielding the error "sh: error not found".
ldd yields the following results:
/lib/ld-linux-aarch64.so.1 (0x7f8731f000)
libpthread.so.0 => /lib/ld-linux-aarch64.so.1 (0x7f8731f000)
libc.so.6 => /lib/ld-linux-aarch64.so.1 (0x7f8731f000)
And there seems to be no problem with Alpine or with $PATH
Do you build on the Raspberry Pi or on a different machine? If you build on a PC, you need to set the build target architecture to match the Raspberry Pi.
The image "golang:alpine" is available for Raspberry, so this should not be the problem.
The error "user process caused: no such file or directory" is then probably related to the file you are trying to execute.
Make sure the file exists in the directory
Make sure the file is executable
Maybe change the CMD to "ls -la" to see in the docker logs if the file is there and which attributes it has.
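That last suggestion, spelled out as a throwaway variant of the asker's Dockerfile (paths match the question; the CMD is temporary):

```dockerfile
FROM golang:alpine
WORKDIR /build
ADD server .
EXPOSE 50051
# Temporary debug CMD: confirm the file landed in the image and is executable
# (look for the x bits in the docker logs), then restore CMD ["./server"].
CMD ["ls", "-la", "/build"]
```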
I'm trying to build a docker image that installs a package with pip: RUN pip3 install *package* --index-url=*url* --trusted-host=*url*. However, it fails with the following error:
Could not find a version that satisfies the requirement *package* (from versions: )
No matching distribution found for *package*.
However, after I removed the package and successfully built the image, I could install the package from inside the docker container!
The command I used to build the image is: sudo docker build --network=host -t adelai:deploy . -f bernard.Dockerfile.
Please try
docker run --rm -ti python bash
Then run your pip ... inside this container.
The problem is solved: I set the environment variable during build (ARG http_proxy="*url*") and unset it (ENV http_proxy=) just before the installation.
I am not an expert in docker, but my guess is that build-time variables are discarded after the build, which makes the environment differ between the Dockerfile build and the running container.
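The fix described above can be sketched in Dockerfile form (the base image, package name, and URLs are placeholders; the asker's actual values were elided):

```dockerfile
FROM python:3
# Build-time-only proxy: an ARG value exists while `docker build` runs
# but is not baked into the final image the way an ENV value is.
ARG http_proxy="http://proxy.example:8080"
# Clear it again just before the install, as described above, so pip
# talks to the index directly.
ENV http_proxy=
RUN pip3 install somepackage --index-url=https://pypi.example/simple --trusted-host=pypi.example
```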
@Matthias Reissner gives a solid guide, but this answer provides a more detailed way to debug problems during docker builds.
I'm following the instructions on Docker's website on building a parent image. I'm very new to Docker. I'm on a CentOS 7.5.
I ran the mkimage-yum.sh script suggested on the Docker website for CentOS. I didn't understand why the last line of the script, rm -rf "$target" was there because it seems to delete all the work done by the script. So I commented it out and it leaves a directory /tmp/mkimage-yum.sh.ahE8xx, which looks like a minimal linux image with the typical linux file structure (e.g. /usr/,/etc/)
In my home directory, I compiled the program,
main.c :
#include <stdio.h>
#include <stdlib.h>
int main(void){
printf("Hello Docker World!\n");
return 0;
}
Using gcc -static -static-libgcc -static-libstdc++ -o hello main.c, I compiled the code to a statically linked executable as prescribed in the docker webpage.
I created the Dockerfile,
e.g.
FROM scratch
ADD hello /
CMD ["/hello"]
I start up the dockerd server and in a separate terminal I run docker build --tag hello .
The output is :
Sending build context to Docker daemon 864.8 kB
Step 1/3 : FROM scratch
--->
Step 2/3 : ADD hello /
---> Using cache
---> a38d49d40e50
Step 3/3 : CMD /hello
---> Using cache
---> 3bcbb04c367f
Successfully built 3bcbb04c367f
Gee whiz, it looks like it worked! However, I still only see Dockerfile hello main.c in the directory I did this. Docker clearly thinks it did something, but what? It didn't create any new files.
Now I run docker run --rm hello and it outputs Hello Docker World!.
However, I get disconcerting errors from the dockerd server:
ERRO[502548] containerd: deleting container error=exit status 1: "container f336b3a5505879453b4f7a00c06acf274d0a5f8b3d260762273a2d7c0a846141 does not exist\none or more of the container deletions failed\n"
WARN[502548] f336b3a5505879453b4f7a00c06acf274d0a5f8b3d260762273a2d7c0a846141 cleanup: failed to unmount secrets: invalid argument
QUESTIONS :
What exactly did docker build --tag hello . do? I see no output from this.
What are the dockerd errors all about? Maybe looking for the docker image not created by docker build?
How does the mkimage-yum.sh script fit into this? Why does it delete all the work that it does at the end?
When you run --rm with docker run, the container will run then delete itself. If you want to keep the container, remove the --rm from the command.
The command docker build reads the Dockerfile and creates a local image; in your case, built from scratch with the image name hello.
You will not see any additional files created in your folder. To see the created image, run command docker images. Here you should be able to see your image built with tag hello.
When you run docker run <imagename> it starts up a container from the provided image. That's why you see your C program's output.
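On question 3, which the answer above does not address: this is a hedged reading of the script itself rather than anything from the thread, but mkimage-yum.sh tars up $target and pipes it into docker import before the final rm -rf, so by that point the filesystem tree already lives in Docker's own storage and the temp directory is only an intermediate. The tail of the script is roughly (illustrative sketch, not a standalone command):

```shell
# Rough shape of the end of mkimage-yum.sh:
tar --numeric-owner -c -C "$target" . | docker import - "$name:$version"
rm -rf "$target"   # safe: the imported image already holds a copy of the tree
```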
I'm getting an error while running a docker image. It looks like the problem is on my PC.
I'm using MacOS 10.13.6.
I have followed steps to create a docker image.
Sanjeet:server-api sanjeet$ docker build -t apicontainer .
Sending build context to Docker daemon 24.01MB
Step 1/2 : FROM alpine:3.6
---> da579b235e92
Step 2/2 : CMD ["/bin/bash"]
---> Running in f43fa95302d4
Removing intermediate container f43fa95302d4
---> 64d0b47af4df
Successfully built 64d0b47af4df
Successfully tagged apicontainer:latest
Sanjeet:server-api sanjeet$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
apicontainer latest 64d0b47af4df 3 minutes ago 4.03MB
alpine 3.6 da579b235e92 2 weeks ago 4.03MB
Sanjeet:server-api sanjeet$ docker run -it apicontainer
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown.
Sanjeet:server-api sanjeet$ ERRO[0001] error waiting for container: context canceled
Inside Dockerfile
FROM alpine:3.6
CMD ["/bin/bash"]
alpine does not include bash by default.
If you want to include bash, you should add RUN apk add --no-cache bash in your Dockerfile.
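Put together, a minimal corrected Dockerfile for this case would look like (alternatively, change the CMD to /bin/sh, which alpine does ship):

```dockerfile
FROM alpine:3.6
# bash is not in the alpine base image; install it explicitly.
RUN apk add --no-cache bash
CMD ["/bin/bash"]
```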
Issues while running/re-initializing Docker
ERRO[0000] error waiting for the container: context canceled
Steps to Stop the Running Docker Container Forcefully
docker ps
//copy the CONTAINER ID of the running process, ex: CONTAINER ID: 37146513b713
docker kill 37146513b713
docker rm 37146513b713
alpine does not provide glibc. alpine is that small because it uses a stripped-down libc implementation called musl (musl.libc.org).
So we'll check the binary's dynamically linked dependencies using the ldd command.
$ docker run -it <image name> /bin/sh
$ cd /go/bin
$ ldd scratch
Check the linked libraries: do they exist in that version of alpine? If they do not, then from the binary's perspective the file was not found, and it will report File not found.
The next step depends on which libraries were missing; you can look up on the internet how to install them.
Adding RUN apk add --no-cache libc6-compat to your Dockerfile provides glibc compatibility; you will see this line in some Golang alpine-image-based Dockerfiles.
In your case the solution is to either
disable CGO : use CGO_ENABLED=0 while building
or add
RUN apk add --no-cache libc6-compat
to your Dockerfile
or do not use golang:alpine
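For the CGO route, a multi-stage sketch (requires Docker 17.05+; the paths and binary name are illustrative) that builds the static binary inside the golang image and ships it on plain alpine:

```dockerfile
# Build stage: with CGO disabled the binary has no libc dependency at all.
FROM golang:alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

# Run stage: plain alpine (or even scratch) is enough for a static binary.
FROM alpine
COPY --from=build /server /server
CMD ["/server"]
```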
I get this error when the network doesn't exist.
docker: Error response from daemon: network xxx not found.
ERRO[0000] error waiting for container: context canceled
It's quite easy to miss the output line after a long docker command and before the red error message.
I have a simple Java server app with a Gradle build. It works perfectly with gradle run on my host machine. However, I want to build this in a docker image and run as a docker container.
I'm using docker-machine (version 0.13.0):
docker-machine create --driver virtualbox --virtualbox-memory 6000 default
docker-machine start
eval $(docker-machine env default)
I have the following Dockerfile image build script in ./serverapp/Dockerfile:
FROM gradle:4.3-jdk-alpine
ADD . /code
WORKDIR /code
CMD ["gradle", "--stacktrace", "run"]
I can build perfectly:
➜ docker build -t my-server-app .
Sending build context to Docker daemon 310.3kB
Step 1/4 : FROM gradle:4.3-jdk-alpine
---> b803ec92baec
Step 2/4 : ADD . /code
---> Using cache
---> f458b0be79dc
Step 3/4 : WORKDIR /code
---> Using cache
---> d98d04eda627
Step 4/4 : CMD ["gradle", "--stacktrace", "run"]
---> Using cache
---> 869262257870
Successfully built 869262257870
Successfully tagged my-server-app:latest
When I try to run this image:
➜ docker run --rm my-server-app
FAILURE: Build failed with an exception.
* What went wrong:
Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
> Could not create service of type CrossBuildFileHashCache using BuildSessionScopeServices.createCrossBuildFileHashCache().
* Try:
Run with --info or --debug option to get more log output.
* Exception is:
org.gradle.internal.service.ServiceCreationException: Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
at org.gradle.internal.service.DefaultServiceRegistry$FactoryMethodService.invokeMethod(DefaultServiceRegistry.java:797)
<snip>
... 60 more
Caused by: org.gradle.api.UncheckedIOException: Failed to create parent directory '/code/.gradle/4.3' when creating directory '/code/.gradle/4.3/fileHashes'
at org.gradle.util.GFileUtils.mkdirs(GFileUtils.java:271)
at org.gradle.cache.internal.DefaultPersistentDirectoryStore.open(DefaultPersistentDirectoryStore.java:56)
Why would it have trouble creating that directory?
This should be a very easy task, can anyone tell me how they get this simple scenario working?
FYI, running current versions of everything. I'm using Gradle 4.3.1 on my host, and the official Gradle 4.3 base image from docker hub, I'm using the current version of JDK 8 on my host and the current version of docker, docker-machine, and docker-compose as well.
The fix was to specify --chown=gradle permissions on the /code directory in the Dockerfile. Many Docker images are designed to run as root, but the base Gradle image runs as the user gradle.
FROM gradle:4.3-jdk-alpine
ADD --chown=gradle . /code
WORKDIR /code
CMD ["gradle", "--stacktrace", "run"]
Ethan Davis suggested using /home/gradle rather than /code. That would probably work as well, but I didn't think of that.
The docker image maintainer should have a simple getting started type reference example that shows the recommended way to get basic usage.
Looking at how the gradle image is built on top of the openjdk base image, we can see that gradle projects are set up to run in /home/gradle. Check the code out here. gradle run is having trouble running in your new working directory, /code, because the .gradle folder is in /home/gradle. If you copy/add your code into /home/gradle, you should be able to run gradle run. This worked for me.
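A sketch of that variant (the /home/gradle path follows the gradle image's own convention; the project subdirectory name is illustrative):

```dockerfile
FROM gradle:4.3-jdk-alpine
# Copy into the gradle user's home, which the image already owns and expects.
ADD --chown=gradle:gradle . /home/gradle/project
WORKDIR /home/gradle/project
CMD ["gradle", "--stacktrace", "run"]
```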