I am writing a Dockerfile based on the wildfly image. I have isolated the lines below, which are giving me a headache: the curl command doesn't work during the build process. I have already uninstalled and reinstalled Docker, but the error persists. My system is Linux Mint.
In addition, I tried to build the same Dockerfile on a RHEL machine and it worked like a charm.
Here's the Dockerfile:
FROM jboss/wildfly
RUN cd $HOME \
&& curl -O "http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar"
Here's the error output:
Sending build context to Docker daemon 1.03MB
Step 1/6 : FROM jboss/wildfly
---> b695bdcce374
Step 2/6 : RUN cd $HOME && curl -O "http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar"
---> Running in 4fdcef7dbda1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:39 --:--:-- 0
curl: (6) Could not resolve host: central.maven.org; Unknown error
The command '/bin/sh -c cd $HOME && curl -O "http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar"' returned a non-zero code: 6
I could work around the problem by doing something like this:
docker build --add-host central.maven.org:151.101.56.209 .
but I'm not happy with that. I would like to tell Docker to use my DNS instead of setting a fixed IP. That would be more elegant.
The error is a DNS problem: the hostname cannot be resolved. You could try running the container with the --dns option (https://docs.docker.com/engine/reference/run/#network-settings):
docker run --dns=8.8.8.8
which will set the container's /etc/resolv.conf for you.
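Note that docker build does not accept a --dns option the way docker run does; build-time containers use the DNS configuration of the Docker daemon itself. A minimal sketch, assuming your local resolver is reachable at 192.168.1.1 (replace with your own DNS server), is to create or edit /etc/docker/daemon.json:
{
  "dns": ["192.168.1.1", "8.8.8.8"]
}
and then restart the daemon (for example sudo systemctl restart docker) so that both docker run and docker build containers pick up the setting.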
Related
I tried to install a project with Docker, and after running the command:
docker-compose -f docker-compose.yml up -d --build
at step 5 I get a "Connection timed out" error saying:
curl: (7) Failed to connect to download.icu-project.org port 80: Connection timed out
ERROR: Service 'app' failed to build: The command '/bin/sh -c curl -sS -o /tmp/icu.tar.gz -L http://download.icuproject.org/files/icu4c/60.1/icu4c-60_1-src.tgz && tar -zxf /tmp/icu.tar.gz -C /tmp && cd /tmp/icu/source && ./configure --prefix=/usr/local && make && make install' returned a non-zero code: 7
I tried getting into bash, but when I type 'docker-compose ps' I get no containers, so I don't know how to properly fix this.
Has any of you encountered this issue and can share how you fixed it?
As is evident from the ERROR, the URL being accessed as part of the docker-compose build, i.e. http://download.icu-project.org/files/icu4c/60.1/icu4c-60_1-src.tgz, is inaccessible, which is why the build fails with a timeout. You can try accessing the URL from a browser, and yes, it is unreachable.
You must change it to something else, for example a mirror such as http://ftp.lfs-matrix.net/pub/blfs/conglomeration/icu/. While you do that, please make sure that you are indeed downloading the archive from a reliable source.
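For instance, the failing RUN step could be pointed at that mirror instead. A sketch, assuming the mirror serves the tarball under the same file name as the original URL:
RUN curl -sS -o /tmp/icu.tar.gz -L http://ftp.lfs-matrix.net/pub/blfs/conglomeration/icu/icu4c-60_1-src.tgz \
 && tar -zxf /tmp/icu.tar.gz -C /tmp \
 && cd /tmp/icu/source \
 && ./configure --prefix=/usr/local \
 && make && make install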
Using Ubuntu 16.04 and docker 18.03.1-ce
In a Dockerfile I have:
RUN curl -k -o app.jar -kfSL https://mynexus:9006/repository/legacy/1.0.2/app.jar
But when I build that with:
sudo docker build -t myimage .
I get:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: mynexus
The command '/bin/sh -c curl -k -o app.jar -kfSL https://mynexus:9006/repository/legacy/1.0.2/app.jar' returned a non-zero code: 6
If I run the exact same command line from a terminal it works fine:
$ curl -k -o app.jar -kfSL https://mynexus:9006/repository/legacy/1.0.2/app.jar
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3613k 100 3613k 0 0 1413k 0 0:00:02 0:00:02 --:--:-- 1413k
Why can I curl that file from the command line but not from the Dockerfile?
You can see the curl error message by running the last committed image from the build with the curl command:
docker run -it --rm lastCommitedImageHash curl -k -o app.jar -kfSL https://mynexus:9006/repository/legacy/1.0.2/app.jar
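A quick way to confirm that the build-time container really cannot resolve the name (rather than curl misbehaving) is to try the lookup from a throwaway container; a sketch using busybox's nslookup:
docker run --rm busybox nslookup mynexus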
I had the same problem last week - curl worked directly from the host but not when run during a build from a Dockerfile.
Restarting the Docker daemon helped in my case.
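On a systemd-based host (an assumption; adjust to your init system), that restart is simply:
sudo systemctl restart docker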
I haven't found out what caused that state. It was on a machine that also acts as a Kubernetes master, and I did a Kubernetes cluster upgrade last week, which was probably the source of the problem.
Another possible cause in your case: if the mynexus hostname is defined only in the /etc/hosts file on the host machine, it is unknown to the container running during the build.
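In that case you can pass the mapping to the build explicitly with --add-host; a sketch with a placeholder address (10.0.0.5 is hypothetical, substitute your Nexus host's real IP):
sudo docker build --add-host mynexus:10.0.0.5 -t myimage .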
I'm developing behind a company proxy, using Linux Mint Sylvia (Docker was installed via the Ubuntu 16.04.3 Xenial source).
$ docker -v
Docker version 17.12.1-ce, build 7390fc6
I've followed these steps to actually download some images via docker pull.
Control Docker with systemd (HTTP/HTTPS proxy)
My http-proxy.conf:
$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://my_user:my_pass#company_proxy:3128/"
Environment="HTTPS_PROXY=https://my_user:my_pass#company_proxy:3128/"
Environment="NO_PROXY=localhost,127.0.0.0/8"
My /etc/default/docker:
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
export http_proxy="http://my_user:my_pass#company_proxy:3128"
export https_proxy="https://my_user:my_pass#company_proxy:3128"
export HTTP_PROXY="http://my_user:my_pass#company_proxy:3128"
export HTTPS_PROXY="https://my_user:my_pass#company_proxy:3128"
I need to run curl inside a multi-stage Alpine container. For simplicity, I've built this simple image, which is similar to what I'm trying to accomplish and fails with the same error.
FROM alpine:3.7
ENV HTTP_PROXY http://my_user:my_pass#company_proxy:3128
ENV HTTPS_PROXY https://my_user:my_pass#company_proxy:3128
RUN apk add --no-cache curl
CMD ["curl","-v","--tlsv1","https://www.docker.io/"]
Built with
$ docker build --network host --rm -t test/alpine:curl .
Running without --network host.
$ docker run --rm test/alpine:curl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Could not resolve proxy: company_proxy
* Closing connection 0
curl: (5) Could not resolve proxy: company_proxy
Running with --network host.
$ docker run --network host --rm test/alpine:curl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.2.255.0...
* TCP_NODELAY set
* Connected to company_proxy (10.2.255.0) port 3128 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [233 bytes data]
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
I'm a beginner with Docker and have tested this image on 2 Wi-Fi networks (both without a proxy); the containers ran fine. Any hints on what might be causing this SSL error?
Edit: this is my original problem. I have a multi-stage Docker image that runs Go code to curl something from Firebase.
// main.go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	c := exec.Command("curl", "--tlsv1", "-kv", "-X", "PATCH", "-d", `{"something" : "something"}`, `https://<firebase-link>`)
	c.Stdout = os.Stdout
	c.Stderr = os.Stderr
	err := c.Run()
	checkerr(err)
}

func checkerr(err error) {
	if err != nil {
		log.Fatal(err.Error())
	}
}
The original Dockerfile:
# This image only builds the go binaries
FROM golang:1.10-alpine as goalpine-image
ENV HTTP_PROXY http://my_user:my_pass#company_proxy:3128
ENV HTTPS_PROXY https://my_user:my_pass#company_proxy:3128
ENV FULL_PATH /go/src/<project-name>
WORKDIR $FULL_PATH
# Add the source code:
ADD . $FULL_PATH
# Build it:
RUN cd $FULL_PATH \
&& CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/<project-name>
# This image holds the binaries from the previous
FROM alpine
RUN apk add --no-cache bash curl\
&& mkdir build
ENV WORD_DIR=/build
WORKDIR WORK_DIR
COPY --from=goalpine-image /go/src/<project-name>/bin ./
CMD ["./<project-name>"]
I've edited my question to contain more info about my original problem; oddly, the problem also shows up in the toy image. So, if someone ever has this problem again, this is what solved it for me.
The multi-stage Dockerfile. It seems both stages need access to the proxy envs.
# This image only builds the go binaries
FROM golang:1.10-alpine as goalpine-image
ARG http_proxy
ARG https_proxy
ENV HTTP_PROXY $http_proxy
ENV HTTPS_PROXY $https_proxy
# Build envs
ENV FULL_PATH /go/src/<project-name>
WORKDIR $FULL_PATH
# Add the source code:
ADD . $FULL_PATH
# Build it:
RUN cd $FULL_PATH \
&& apk update \
&& apk add --no-cache curl \
&& CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/<project-name>
# This image holds the binaries from the previous stage
FROM alpine:3.7
# Build args must be redeclared in this stage before the ENVs below can see them
ARG http_proxy
ARG https_proxy
ENV HTTP_PROXY $http_proxy
ENV HTTPS_PROXY $https_proxy
RUN apk update \
 && apk add --no-cache bash curl \
 && mkdir build
ENV WORK_DIR=/build
WORKDIR $WORK_DIR
COPY --from=goalpine-image /go/src/<project-name>/bin ./
CMD ["./<project-name>"]
Building:
Make sure to set http_proxy and https_proxy as environment variables; mine are defined in /etc/profile.
docker build --rm --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy --network host -t <project-name>:multi-stage .
Running:
docker container run --rm --network host <project-name>:multi-stage
I need to get some containers into the Dead state, as I want to check whether a script of mine is working. Any advice is welcome. Thank you.
You've asked for a dead container.
TL;DR: This is how to create a dead container
Don't do this at home:
ID=$(docker run --name dead-experiment -d -t alpine sh)
docker kill dead-experiment
test "$ID" != "" && chattr +i -R /var/lib/docker/containers/$ID
docker rm -f dead-experiment
And voilà, Docker could not delete the container's root directory, so the container ends up with status=dead:
docker ps -a -f status=dead
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
616c2e79b75a alpine "sh" 6 minutes ago Dead dead-experiment
Explanation
I've inspected the source code of docker and saw this state transition:
container.SetDead()
// (...)
if err := system.EnsureRemoveAll(container.Root); err != nil {
return errors.Wrapf(err, "unable to remove filesystem for %s", container.ID)
}
// (...)
container.SetRemoved()
So, if Docker cannot remove the container's root directory, the container remains Dead and never reaches the Removed state. That is why I set the immutable attribute on the files (chattr +i), so that not even root can remove them.
PS: to revert the file attributes, run: chattr -i -R /var/lib/docker/containers/$ID
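To clean up the experiment afterwards, clear the immutable attribute first and then remove the container (same $ID and container name as above):
chattr -i -R /var/lib/docker/containers/$ID
docker rm -f dead-experiment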
For Docker 1.12+, the HEALTHCHECK instruction can help you.
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working. This can detect cases such as a web server that is stuck in an infinite loop and unable to handle new connections, even though the server process is still running.
For example, here is a Dockerfile defining my own web app:
FROM nginx:1.13.1
RUN apt-get update \
&& apt-get install -y curl \
&& rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=15s --timeout=3s \
CMD curl -fs http://localhost:80/ || exit 1
This checks every 15 seconds that the web server is able to serve the site’s main page within three seconds.
The command’s exit status indicates the health status of the container. The possible values are:
0: success - the container is healthy and ready for use
1: unhealthy - the container is not working correctly
2: reserved - do not use this exit code
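For instance, a hypothetical healthcheck.sh (not part of the image above) could map a simple check onto these exit codes:
#!/bin/sh
# Report healthy (0) if the main page responds, unhealthy (1) otherwise.
if curl -fs http://localhost:80/ > /dev/null; then
    exit 0
else
    exit 1
fi
Such a script could then be copied into the image and referenced with HEALTHCHECK CMD /usr/local/bin/healthcheck.sh.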
Then use docker build command to build an image:
$ docker build -t myapp:v1 .
And run a container using this image:
$ docker run -d --name healthcheck-demo -p 80:80 myapp:v1
Check the status of the container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b812c8d6f43a myapp:v1 "nginx -g 'daemon ..." 3 seconds ago Up 2 seconds (health: starting) 0.0.0.0:80->80/tcp healthcheck-demo
At the beginning, the status of the container is (health: starting); after a while, it changes to healthy:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2bb640a6036 myapp:v1 "nginx -g 'daemon ..." 2 minutes ago Up 5 minutes (healthy) 0.0.0.0:80->80/tcp healthcheck-demo
It takes retries consecutive failures of the health check (3 by default, configurable with --retries) for the container to be considered unhealthy.
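If you want more tolerance before Docker marks the container unhealthy, the threshold can be raised explicitly; a variation of the instruction above with arbitrary values:
HEALTHCHECK --interval=15s --timeout=3s --retries=5 \
  CMD curl -fs http://localhost:80/ || exit 1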
You can use your own script to replace the command curl -fs http://localhost:80/ || exit 1. What's more, the stdout and stderr of your script can be fetched with the docker inspect command:
$ docker inspect --format '{{json .State.Health}}' healthcheck-demo |python -m json.tool
{
"FailingStreak": 0,
"Log": [
{
"End": "2017-06-09T19:39:58.379906565+08:00",
"ExitCode": 0,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 612 100 612 0 0 97297 0 --:--:-- --:--:-- --:--:-- 99k\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\nnginx.org.<br/>\nCommercial support is available at\nnginx.com.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n",
"Start": "2017-06-09T19:39:58.229550952+08:00"
}
],
"Status": "healthy"
}
Hope this helps!
I am trying to install Kapacitor on a CentOS base, but I am facing problems executing the localinstall command (or so I think) when I build the Dockerfile.
My Dockerfile is as follows:
FROM centos-base:7
ENV CONFIG_HOME /usr/local/bin
RUN curl -O https://dl.influxdata.com/kapacitor/releases/kapacitor-0.13.1.x86_64.rpm
RUN yum localinstall kapacitor-0.13.1.x86_64.rpm
COPY kapacitor.conf $CONFIG_HOME
ENTRYPOINT ["/bin/bash"]
When I build it, I get the following response:
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM centos-base:7
---> 9ab68a0dd16a
Step 2 : ENV CONFIG_HOME /usr/local/bin
---> Running in ef5b7206e55d
---> 7c1b42d279db
Removing intermediate container ef5b7206e55d
Step 3 : RUN curl -O https://dl.influxdata.com/kapacitor/releases/kapacitor-0.13.1.x86_64.rpm
---> Running in 681bb29474f9
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10.8M 100 10.8M 0 0 123k 0 0:01:29 0:01:29 --:--:-- 224k
---> 99b4e77c89f2
Removing intermediate container 681bb29474f9
Step 4 : RUN yum localinstall kapacitor-0.13.1.x86_64.rpm
---> Running in d67ad03f4830
Loaded plugins: fastestmirror, ovl
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Examining kapacitor-0.13.1.x86_64.rpm: kapacitor-0.13.1-1.x86_64
Marking kapacitor-0.13.1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package kapacitor.x86_64 0:0.13.1-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
kapacitor x86_64 0.13.1-1 /kapacitor-0.13.1.x86_64 41 M
Transaction Summary
================================================================================
Install 1 Package
Total size: 41 M
Installed size: 41 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2016-08-31.04-00.gvfpqf.yumtx
The command '/bin/sh -c yum localinstall kapacitor-0.13.1.x86_64.rpm' returned a non-zero code: 1
Where am I going wrong? Can't I execute a localinstall inside a Dockerfile? Thanks!
Replace
RUN yum localinstall kapacitor-0.13.1.x86_64.rpm
with
RUN yum -y localinstall kapacitor-0.13.1.x86_64.rpm
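Docker builds run non-interactively, so yum's "Is this ok [y/d/N]" prompt receives no input and the install aborts; -y answers it automatically. As a sketch, the download and install could also be combined into a single RUN layer, with yum clean all added to keep the image a bit smaller:
RUN curl -O https://dl.influxdata.com/kapacitor/releases/kapacitor-0.13.1.x86_64.rpm \
 && yum -y localinstall kapacitor-0.13.1.x86_64.rpm \
 && yum clean all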