I'm developing behind a company proxy, using Linux Mint Sylvia (Docker was installed via the Ubuntu 16.04.3 Xenial source).
$ docker -v
Docker version 17.12.1-ce, build 7390fc6
I've followed these steps to be able to download images via docker pull:
Control Docker with systemd (HTTP/HTTPS proxy)
My http-proxy.conf:
$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://my_user:my_pass#company_proxy:3128/"
Environment="HTTPS_PROXY=https://my_user:my_pass#company_proxy:3128/"
Environment="NO_PROXY=localhost,127.0.0.0/8"
My /etc/default/docker:
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
export http_proxy="http://my_user:my_pass#company_proxy:3128"
export https_proxy="https://my_user:my_pass#company_proxy:3128"
export HTTP_PROXY="http://my_user:my_pass#company_proxy:3128"
export HTTPS_PROXY="https://my_user:my_pass#company_proxy:3128"
I need to run curl inside a multi-stage Alpine container. For simplicity, I've built this simple image that is similar to what I'm trying to accomplish and produces the same error.
FROM alpine:3.7
ENV HTTP_PROXY http://my_user:my_pass#company_proxy:3128
ENV HTTPS_PROXY https://my_user:my_pass#company_proxy:3128
RUN apk add --no-cache curl
CMD ["curl","-v","--tlsv1","https://www.docker.io/"]
Built with
$ docker build --network host --rm -t test/alpine:curl .
Running without --network host.
$ docker run --rm test/alpine:curl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Could not resolve proxy: company_proxy
* Closing connection 0
curl: (5) Could not resolve proxy: company_proxy
Running with --network host.
$ docker run --network host --rm test/alpine:curl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.2.255.0...
* TCP_NODELAY set
* Connected to company_proxy (10.2.255.0) port 3128 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [233 bytes data]
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
I'm a beginner with Docker and have tested this image on two Wi-Fi networks (both without a proxy); the containers ran fine. Any hints on what might be causing this SSL error?
Edit: this is my original problem. I have a multi-stage Docker image that runs Go code to curl something from Firebase.
// main.go
package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // Shell out to curl with the same flags used in the toy image
    c := exec.Command("curl", "--tlsv1", "-kv", "-X", "PATCH", "-d", `{"something" : "something"}`, `https://<firebase-link>`)
    c.Stdout = os.Stdout
    c.Stderr = os.Stderr
    err := c.Run()
    checkerr(err)
}

// log.Fatal already exits the process, so no follow-up panic is needed.
func checkerr(err error) {
    if err != nil {
        log.Fatal(err.Error())
    }
}
The original Dockerfile:
# This image only builds the go binaries
FROM golang:1.10-alpine as goalpine-image
ENV HTTP_PROXY http://my_user:my_pass#company_proxy:3128
ENV HTTPS_PROXY https://my_user:my_pass#company_proxy:3128
ENV FULL_PATH /go/src/<project-name>
WORKDIR $FULL_PATH
# Add the source code:
ADD . $FULL_PATH
# Build it:
RUN cd $FULL_PATH \
&& CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/<project-name>
# This image holds the binaries from the previous
FROM alpine
RUN apk add --no-cache bash curl \
&& mkdir build
ENV WORK_DIR=/build
WORKDIR $WORK_DIR
COPY --from=goalpine-image /go/src/<project-name>/bin ./
CMD ["./<project-name>"]
I've edited my question to contain more info about my original problem; oddly, the problem persisted in the toy image as well. So, if someone ever has this problem again, this is what solved it for me.
The multi-stage Dockerfile. It seems both stages need access to the proxy envs.
# This image only builds the go binaries
FROM golang:1.10-alpine as goalpine-image
ARG http_proxy
ARG https_proxy
ENV HTTP_PROXY $http_proxy
ENV HTTPS_PROXY $https_proxy
# Build envs
ENV FULL_PATH /go/src/<project-name>
WORKDIR $FULL_PATH
# Add the source code:
ADD . $FULL_PATH
# Build it:
RUN cd $FULL_PATH \
&& apk update \
&& apk add --no-cache curl \
&& CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/<project-name>
# This image holds the binaries from the previous
FROM alpine:3.7
# build args must be redeclared in each stage that uses them
ARG http_proxy
ARG https_proxy
ENV HTTP_PROXY $http_proxy
ENV HTTPS_PROXY $https_proxy
RUN apk update \
&& apk add --no-cache bash curl \
&& mkdir build
ENV WORK_DIR=/build
WORKDIR $WORK_DIR
COPY --from=goalpine-image /go/src/<project-name>/bin ./
CMD ["./<project-name>"]
Building:
Make sure to set http_proxy and https_proxy as environment variables; mine are set in /etc/profile.
docker build --rm --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy --network host -t <project-name>:multi-stage .
Running:
docker container run --rm --network host <project-name>:multi-stage
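One last note: ENV values like these persist in the final image metadata (docker history will show them), so the proxy credentials end up baked into the image. A sketch of an alternative, if that matters to you, is to drop the proxy ENV lines from the second stage and pass them only at run time:
docker container run --rm --network host \
  -e HTTP_PROXY=$http_proxy \
  -e HTTPS_PROXY=$https_proxy \
  <project-name>:multi-stage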
I'm trying to deploy a simple single-endpoint Quarkus app to Google Cloud -> App Engine flex environment. I managed to deploy an uber-jar to the standard environment, according to the documentation.
But I'm struggling to deploy with the flex environment. My understanding, from the same documentation link as above, is that GCP will create a Docker container based on the Dockerfile; in the end, my intention is to deploy a native image to GCP App Engine.
I followed the steps in the link above:
Copy the JVM Dockerfile to the root directory of your project: cp src/main/docker/Dockerfile.jvm Dockerfile
Build your application using mvn clean package
src/main/appengine/app.yaml has the following:
runtime: custom
env: flex
gcloud app deploy
The gcloud log returns the same error for both the native and the normal Dockerfile:
starting build "502dd964-0abf-470e-a4c1-44c89a67a96e"
FETCHSOURCE
Fetching storage object: gs://staging.os-xxxx-quarkus.appspot.com/eu.gcr.io/os-xxxx-quarkus/appengine/default.20210322t183731:latest#1616431057582767
Copying gs://staging.os-xxxx-quarkus.appspot.com/eu.gcr.io/os-xxxx-quarkus/appengine/default.20210322t183731:latest#1616431057582767...
/ [0 files][ 0.0 B/ 230.0 B]
/ [1 files][ 230.0 B/ 230.0 B]
Operation completed over 1 objects/230.0 B.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /Users: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
For completeness, I will include the Dockerfile, but it is the default generated one, without any changes.
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
&& microdnf update \
&& microdnf clean all \
&& mkdir /deployments \
&& chown 1001 /deployments \
&& chmod "g+rwX" /deployments \
&& chown 1001:root /deployments \
&& curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
&& chown 1001 /deployments/run-java.sh \
&& chmod 540 /deployments/run-java.sh \
&& echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=1001 target/quarkus-app/*.jar /deployments/
COPY --chown=1001 target/quarkus-app/app/ /deployments/app/
COPY --chown=1001 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 1001
ENTRYPOINT [ "/deployments/run-java.sh" ]
Thank you for your time
It seems I polluted the space with an embarrassing mistake: for the Google Cloud App Engine flex environment, the app.yaml file should be placed in the root folder; this is also documented.
Moreover, for some reason, max_num_instances should be explicitly set to a value of at most 8, according to the account quota Google documentation; otherwise a RESOURCE_EXHAUSTED exception is thrown.
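For reference, a minimal app.yaml sketch combining both points (the value 8 comes from my account quota, so treat it as an example):
runtime: custom
env: flex
automatic_scaling:
  max_num_instances: 8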
I'd like to serve a TensorFlow model using OpenFaaS. Basically, I'd like to invoke the "serve" function in such a way that TensorFlow Serving exposes my model.
OpenFaaS is running correctly on Kubernetes and I am able to invoke functions via curl or from the UI.
I used the incubator-flask as an example, but I keep receiving 502 Bad Gateway all the time.
The OpenFaaS project looks like the following
serve/
- Dockerfile
stack.yaml
The inner Dockerfile is the following
FROM tensorflow/serving
RUN mkdir -p /home/app
RUN apt-get update \
&& apt-get install curl -yy
RUN echo "Pulling watchdog binary from Github." \
&& curl -sSLf https://github.com/openfaas-incubator/of-watchdog/releases/download/0.4.6/of-watchdog > /usr/bin/fwatchdog \
&& chmod +x /usr/bin/fwatchdog
WORKDIR /root/
# remove unnecessary logs from S3
ENV TF_CPP_MIN_LOG_LEVEL=3
ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
ENV AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
ENV AWS_REGION=${AWS_REGION}
ENV S3_ENDPOINT=${S3_ENDPOINT}
ENV fprocess="tensorflow_model_server --rest_api_port=8501 \
--model_name=${MODEL_NAME} \
--model_base_path=${MODEL_BASE_PATH}"
# Set to true to see request in function logs
ENV write_debug="true"
ENV cgi_headers="true"
ENV mode="http"
ENV upstream_url="http://127.0.0.1:8501"
# gRPC tensorflow serving
# EXPOSE 8500
# REST tensorflow serving
# EXPOSE 8501
RUN touch /tmp/.lock
HEALTHCHECK --interval=5s CMD [ -e /tmp/.lock ] || exit 1
CMD [ "fwatchdog" ]
The stack.yaml file looks like the following:
provider:
  name: faas
  gateway: https://gateway-url:8080
functions:
  serve:
    lang: dockerfile
    handler: ./serve
    image: repo/serve-model:latest
    imagePullPolicy: always
I build the image with faas-cli build -f stack.yaml and then I push it to my docker registry with faas-cli push -f stack.yaml.
When I execute faas-cli deploy -f stack.yaml -e AWS_ACCESS_KEY_ID=... I get Accepted 202 and it appears correctly among my functions. Now I want to invoke TensorFlow Serving on the model I specified in my ENV.
The way I'm trying to make it work is to use curl like this:
curl -d '{"inputs": ["1.0", "2.0", "5.0"]}' -X POST https://gateway-url:8080/function/deploy-model/v1/models/mnist:predict
but I always obtain 502 Bad Gateway.
Does anybody have experience with OpenFaaS and TensorFlow Serving? Thanks in advance.
P.S.
If I run TensorFlow Serving without of-watchdog (basically without the OpenFaaS stuff), the model is served correctly.
Elaborating on the link mentioned by @viveksyngh:
tensorflow-serving-openfaas:
Example of packaging TensorFlow Serving with OpenFaaS to be deployed and managed through OpenFaaS with auto-scaling, scale-from-zero and a sane configuration for Kubernetes.
This example was adapted from: https://www.tensorflow.org/serving
Pre-reqs:
OpenFaaS
OpenFaaS CLI
Docker
Instructions:
Clone the repo
$ mkdir -p ~/dev/
$ cd ~/dev/
$ git clone https://github.com/alexellis/tensorflow-serving-openfaas
Clone the sample model and copy it to the function's build context
$ cd ~/dev/tensorflow-serving-openfaas
$ git clone https://github.com/tensorflow/serving
$ cp -r serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu ./ts-serve/saved_model_half_plus_two_cpu
Edit the Docker Hub username
You need to edit the stack.yml file and replace alexellis2 with your Docker Hub account.
Build the function image
$ faas-cli build
You should now have a Docker image in your local library which you can deploy to a cluster with faas-cli up
Test the function locally
All OpenFaaS images can be run stand-alone without OpenFaaS installed. Let's do a quick test, but replace alexellis2 with your own name.
$ docker run -p 8081:8080 -ti alexellis2/ts-serve:latest
Now in another terminal:
$ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://127.0.0.1:8081/v1/models/half_plus_two:predict
{
    "predictions": [2.5, 3.0, 4.5]
}
From here you can run faas-cli up and then invoke your function from the OpenFaaS UI, CLI or REST API.
$ export OPENFAAS_URL=http://127.0.0.1:8080
$ curl -d '{"instances": [1.0, 2.0, 5.0]}' $OPENFAAS_URL/function/ts-serve/v1/models/half_plus_two:predict
{
    "predictions": [2.5, 3.0, 4.5]
}
I am writing a Dockerfile based on the wildfly image. I have isolated the lines below, which are giving me a headache: the curl command doesn't work during the build process. I have already uninstalled and reinstalled Docker, but the error persists. My system is Linux Mint.
In addition, I tried to build the same Dockerfile on a RHEL machine and it worked like a charm.
Here's the Dockerfile:
FROM jboss/wildfly
RUN cd $HOME \
&& curl -O "http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar"
Here's the error output:
Sending build context to Docker daemon 1.03MB
Step 1/6 : FROM jboss/wildfly
---> b695bdcce374
Step 2/6 : RUN cd $HOME && curl -O "http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar"
---> Running in 4fdcef7dbda1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:39 --:--:-- 0
curl: (6) Could not resolve host: central.maven.org; Unknown error
The command '/bin/sh -c cd $HOME && curl -O "http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar"' returned a non-zero code: 6
I could work around the problem by doing something like this:
docker build --add-host central.maven.org:151.101.56.209 .
but I'm not happy with that. I would like to tell Docker to use my DNS instead of setting a fixed IP. It would be more elegant.
The error is a DNS problem resolving the hostname. You could try running docker with the --dns option (https://docs.docker.com/engine/reference/run/#network-settings)
docker run --dns=8.8.8.8
which will set the container's /etc/resolv.conf for you.
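For docker build specifically, there is no --dns flag; the usual approach is to configure the DNS daemon-wide in /etc/docker/daemon.json and restart Docker (a sketch — replace the placeholder address with your company's DNS server):
{
  "dns": ["10.0.0.2"]
}
$ sudo systemctl restart docker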
I'm working with Hugo, trying to run it inside a Docker container to allow people to easily manage content.
My first task is to get Hugo running so people are able to view the site locally.
Here's my Dockerfile:
FROM alpine:3.3
RUN apk update && apk upgrade && \
apk add --no-cache go bash git openssh && \
mkdir -p /aws && \
apk -Uuv add groff less python py-pip && \
pip install awscli && \
apk --purge -v del py-pip && \
rm /var/cache/apk/* && \
mkdir -p /go/src /go/bin && chmod -R 777 /go
ENV GOPATH /go
ENV PATH /go/bin:$PATH
RUN go get -v github.com/spf13/hugo
RUN git clone http://mygitrepo.com /app
WORKDIR /app
EXPOSE 1313
ENTRYPOINT ["hugo","server"]
I'm checking out the site repo and then running Hugo (hugo server).
I'm then running this container via:
docker run -d -p 1313:1313 --name app app
This reports everything is starting OK; however, when I try to browse locally on localhost:1313 I see nothing.
Any ideas where I'm going wrong?
UPDATE
docker ps gives me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e1f12849044 app "hugo server" 16 minutes ago Up 16 minutes 0.0.0.0:1313->1313/tcp app
And docker logs 9e1 gives me:
Started building sites ...
Built site for language en:
0 draft content
0 future content
0 expired content
25 pages created
0 non-page files copied
0 paginator pages created
0 tags created
0 categories created
total in 64 ms
Watching for changes in /ltec/{data,content,layouts,static,themes}
Serving pages from memory
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
I had the same problem, but this tutorial, http://ahmedalani.com/post/so-recursive-it-hurts/, mentions using the --bind parameter of the hugo server command.
Adding that parameter with the IP 0.0.0.0, we have --bind=0.0.0.0.
That works for me. I think this is natural behavior for every container: localhost is scoped to the container itself, but if you bind to 0.0.0.0 the server becomes visible to the main host.
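In this case, that means changing the last line of the Dockerfile above, e.g.:
ENTRYPOINT ["hugo","server","--bind=0.0.0.0"]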
This is because Docker is actually running in a VM. You need to navigate to the docker-machine ip instead of localhost.
curl $(docker-machine ip):1313
Delete EXPOSE 1313 in your Dockerfile. Dockerfile reference.
I'm trying to base a Dockerfile on another local one.
$ ls -lR
total 0
-rw-r--r-- 1 me me 42 14 avr 10:42 Dockerfile
drwxr-xr-x 3 me me 42 14 avr 10:42 prod
./prod:
total 0
-rw-r--r-- 1 me me 42 14 avr 10:42 Dockerfile
$ cat prod/Dockerfile
FROM ../Dockerfile
...
$ docker build - < prod/Dockerfile
unable to process Dockerfile: unable to parse repository info: repository name component must match "a-z0-9(?:[._]a-z0-9)*"
I know that FROM expects an image and not a path.
How can I extend the parent Dockerfile from prod/Dockerfile?
Dockerfiles don't extend Dockerfiles but images, the FROM line is not an "include" statement.
So, if you want to "extend" another Dockerfile, you need to build the original Dockerfile as an image, and extend that image.
For example:
Dockerfile1:
FROM alpine
RUN echo "foo" > /bar
Dockerfile2:
FROM myimage
RUN echo "bar" > /baz
Build the first Dockerfile (since it's called Dockerfile1, use the -f option, as docker defaults to looking for a file called Dockerfile), and "tag" it as myimage:
docker build -f Dockerfile1 -t myimage .
# Sending build context to Docker daemon 3.072 kB
# Step 1 : FROM alpine
# ---> d7a513a663c1
# Step 2 : RUN echo "foo" > /bar
# ---> Running in d3a3e5a18594
# ---> a42129418da3
# Removing intermediate container d3a3e5a18594
# Successfully built a42129418da3
Then build the second Dockerfile, which extends the image you just built. We tag the resulting image as "myextendedimage":
docker build -f Dockerfile2 -t myextendedimage .
# Sending build context to Docker daemon 3.072 kB
# Step 1 : FROM myimage
# ---> a42129418da3
# Step 2 : RUN echo "bar" > /baz
# ---> Running in 609ae35fe135
# ---> 4ea44560d4b7
# Removing intermediate container 609ae35fe135
# Successfully built 4ea44560d4b7
To check the results, run a container from the image and verify that both files (/bar and /baz) are in the image:
docker run -it --rm myextendedimage sh -c "ls -la ba*"
# -rw-r--r-- 1 root root 4 Apr 14 10:18 bar
# -rw-r--r-- 1 root root 4 Apr 14 10:19 baz
I suggest reading the user guide, which explains how to work with images and containers.
Take a look at multi-stage builds; they could help you:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
https://blog.alexellis.io/mutli-stage-docker-builds/
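For example, a minimal multi-stage sketch (image and file names are placeholders):
FROM golang:1.10-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app

FROM alpine:3.7
COPY --from=builder /bin/app /usr/local/bin/app
CMD ["app"]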
I wrote a simple bash script for this. It works the following way:
Example structure:
|
|_Dockerfile(base)
|_prod
|_Dockerfile(extended)
Dockerfile(extended):
FROM ../Dockerfile
...
Run the script:
./script.sh prod
It merges your base Dockerfile with the extended one and builds the merged file.
Script:
#!/bin/bash
# Read the first line of the extended Dockerfile, e.g. "FROM ../Dockerfile"
fromLine=$(head -n 1 "$1/Dockerfile")
read -a fromLineArray <<< "$fromLine"
# The path to the base Dockerfile is the second word of the FROM line
extPath=${fromLineArray[1]}
# Strip the FROM line, prepend the base Dockerfile, and build the result
tail -n +2 "$1/Dockerfile" > strippedDocker
cat "$1/$extPath" strippedDocker > resDocker
rm strippedDocker
docker build - < resDocker
rm resDocker
I'm using conditionals:
Dockerfile
Install sudo only on local build.
FROM ubuntu:latest
ARG APP_ENVIRONMENT=local
RUN apt-get update && bash -c "set -ex ; \
apt-get install -y $([ ${APP_ENVIRONMENT} = local ] \
&& echo 'curl sudo' \
|| echo 'curl' \
)"
CMD bash -c "set -ex ; \
[ ${APP_ENVIRONMENT} = local ] \
&& { app debug ; exit $? ; } \
|| { app start ; exit $? ; } \
"
Build
# Production
docker build \
-t my-image \
--build-arg APP_ENVIRONMENT='prod' \
.
# Local
docker build \
-t my-image \
.
Docker Compose
version: "3.7"
services:
app:
build:
context: .
args:
APP_ENVIRONMENT: "${APP_ENVIRONMENT:-local}"
If you use Docker 20.10+, you can do this:
# syntax = edrevo/dockerfile-plus
INCLUDE+ ../Dockerfile
RUN ...
The INCLUDE+ instruction gets imported by the first line in the Dockerfile. You can read more about the dockerfile-plus at https://github.com/edrevo/dockerfile-plus
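Note that the # syntax directive relies on BuildKit, so make sure BuildKit is enabled for the build, e.g.:
$ DOCKER_BUILDKIT=1 docker build .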