I am a little stuck on my new adventures with machine learning.
I've been following a Udemy course where the instructor runs everything directly on their machine; however, I'm doing my best to run it all via Docker.
Dockerfile:
FROM tensorflow/tensorflow:2.1.1-gpu
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y espeak ffmpeg libespeak1 alsa-base alsa-utils protobuf-compiler python-pil python-lxml python-tk
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install pandas
RUN python3 -m pip install pillow
COPY . /app
WORKDIR /app/models/research
# Compile all object_detection protos in one step (equivalent to listing each .proto file individually).
RUN protoc object_detection/protos/*.proto --python_out=.
RUN cp object_detection/packages/tf2/setup.py .
RUN python3 -m pip install .
WORKDIR /app
ENV LANG en_US.UTF-8
ENTRYPOINT ["/usr/bin/python3"]
I'm running this through VS Code's Remote Containers extension: https://code.visualstudio.com/docs/remote/containers#:~:text=The%20Visual%20Studio%20Code%20Remote,Studio%20Code's%20full%20feature%20set
devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.194.0/containers/docker-existing-dockerfile
{
"name": "Existing Dockerfile",
// Sets the run context to one level up instead of the .devcontainer folder.
"context": "..",
// Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
"dockerFile": "../Dockerfile",
// Set *default* container specific settings.json values on container create.
"settings": {},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance",
"streetsidesoftware.code-spell-checker",
"editorconfig.editorconfig"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created - for example installing curl.
// "postCreateCommand": "apt-get update && apt-get install -y curl",
// Uncomment when using a ptrace-based debugger like C++, Go, and Rust
// "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
"runArgs": ["--gpus", "all", "--device=/dev/snd:/dev/snd"]
// Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker.
// "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
// Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
// "remoteUser": "vscode"
}
Once I am in the container, if I run nvidia-smi, I get the following output which tells me it sees my GPU.
Tue Oct 26 19:55:24 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 207... Off | 00000000:26:00.0 On | N/A |
| 0% 45C P8 4W / 215W | 37MiB / 7974MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
However, when I try to follow the training and evaluation steps in https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_training_and_evaluation.md#local, I get the following:
Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-10-26 19:05:23.611986: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I had an outdated version of TensorFlow... which does not support CUDA 11.2.
"TensorFlow supports CUDA® 11.2 (TensorFlow >= 2.5.0)" as seen here: https://www.tensorflow.org/install/gpu#software_requirements
I replaced the image tag with latest and all seems to be working as expected, derp.
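For anyone else verifying the fix, a quick sanity check from inside the updated container (list_physical_devices is the TF 2.x API):
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
A non-empty list means TensorFlow can actually see the GPU, not just that nvidia-smi works.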
Related
I am building a Docker container for compiling a mix of Rust, Carbon, and C.
Everything seems to work until I run main.carbon and call the function from my Rust library. Although the import seems valid, I think the issue is in the Rust code.
This is my Dockerfile:
#
# -------- ---------- -----
# | rust | | carbon | | C |
# -------- ---------- -----
# | | |
# | | |
FROM rust as rust
WORKDIR /usr/src/myapp
COPY ./src/lib/ .
RUN cargo build --verbose --release --all-targets --manifest-path /usr/src/myapp/Cargo.toml
# | |
# install carbon | |
# ------| |
# | | |
FROM linuxbrew/brew as brew
RUN brew update
RUN brew install python@3.9
RUN brew install bazelisk
RUN brew install llvm
# | |
# | |
# | |
# | |
FROM brew as carbon
RUN git clone https://github.com/carbon-language/carbon-lang carbon
WORKDIR /home/linuxbrew/carbon
COPY --from=rust /usr/src/myapp/target/release/librust_file_listener.so /home/linuxbrew/carbon/explorer/
SHELL ["/bin/bash", "-c"]
RUN mv -v /home/linuxbrew/carbon/explorer/BUILD /home/linuxbrew/carbon/explorer/BUILD-old
RUN touch ./explorer/BUILD
RUN echo $(pwd)
RUN sed -n '1,17p' ./explorer/BUILD-old >> ./explorer/BUILD
RUN echo ' srcs = ["main.cpp", "librust_file_listener.so"],' >> ./explorer/BUILD
RUN sed -n '19,$p' ./explorer/BUILD-old >> ./explorer/BUILD
RUN cp ./explorer/librust_file_listener.so .
RUN bazel build --verbose_failures //explorer
COPY ./src/main.carbon .
COPY ./src/file-listener.h .
RUN bazel run //explorer -- ./main.carbon
This is my error message:
/root/.cache/bazel/_bazel_root/c2431547aff5b972703b3babc3d841cc/execroot/carbon/bazel-out/k8-fastbuild/bin/explorer/explorer:
error while loading shared libraries: libstd-69edc9ac8de4d39c.so:
cannot open shared object file: No such file or directory
Searching for this error message, the only result was this question by laurent. Maybe it's related!?
FYI, I am on x86_64, not on ARM.
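A possible workaround, assuming the error means the loader cannot find Rust's shared standard library next to the dylib (libstd-<hash>.so ships with the toolchain under $(rustc --print sysroot)/lib; building the library as a cdylib instead would link libstd statically and avoid this entirely). A sketch, not verified against this exact setup:
# in the "rust" stage: keep Rust's shared libstd with the artifact
RUN cp "$(rustc --print sysroot)"/lib/libstd-*.so /usr/src/myapp/target/release/
# in the "carbon" stage: copy it alongside the explorer binary and tell the loader where to look
COPY --from=rust /usr/src/myapp/target/release/libstd-*.so /home/linuxbrew/carbon/explorer/
ENV LD_LIBRARY_PATH=/home/linuxbrew/carbon/explorer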
I want to use docker 19.03 and above in order to have GPU support. I currently have docker 19.03.12 in my system. I can run this command to check that Nvidia drivers are running:
docker run -it --rm --gpus all ubuntu nvidia-smi
Wed Jul 1 14:25:55 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.64 Driver Version: 430.64 CUDA Version: N/A |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 107... Off | 00000000:01:00.0 Off | N/A |
| 26% 54C P5 13W / 180W | 734MiB / 8119MiB | 39% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Also, when run locally, my module works with GPU support just fine. But if I build a docker image and try to run it, I get this message:
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
I am using cuda 9.0 with tensorflow 1.12.0 but I can switch to cuda 10.0 with tensorflow 1.15.
As I understand it, the problem is probably that my Dockerfile was written for an earlier setup, with commands that are not compatible with the new GPU-enabled Docker (19.03 and above).
The actual commands are these:
FROM nvidia/cuda:9.0-base-ubuntu16.04
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-9-0 \
cuda-cublas-9-0 \
cuda-cufft-9-0 \
cuda-curand-9-0 \
cuda-cusolver-9-0 \
cuda-cusparse-9-0 \
libcudnn7=7.0.5.15-1+cuda9.0 \
libnccl2=2.2.13-1+cuda9.0 \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
software-properties-common \
unzip \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && \
apt-get install nvinfer-runtime-trt-repo-ubuntu1604-4.0.1-ga-cuda9.0 && \
apt-get update && \
apt-get install libnvinfer4=4.1.2-1+cuda9.0
I could not find a base Dockerfile for basic GPU usage either.
In this answer there was a proposal for exposing libcuda.so.1 but it did not work in my case.
So, is there any workaround for this problem or a base dockerfile to adjust to?
My system is Ubuntu 16.04.
Edit:
I just noticed that nvidia-smi from within docker does not display any cuda version:
CUDA Version: N/A
in contrast with the one run locally. So this probably means that no cuda is loaded inside docker for some reason.
tldr;
A base Dockerfile which seems to work with docker 19.03+ & cuda 10 is this:
FROM nvidia/cuda:10.0-base
which can be combined with tf 1.14, but for some reason I could not get tf 1.15 to work.
I just used this Dockerfile to test it:
FROM nvidia/cuda:10.0-base
CMD nvidia-smi
longer answer:
Well, after a lot of trial and error (and frustration) I managed to make it work for docker 19.03.12 + cuda 10 (although with tf 1.14, not 1.15).
I took the code from this post and used the base Dockerfiles provided there.
First I tried to check nvidia-smi from within docker using this Dockerfile:
FROM nvidia/cuda:10.0-base
CMD nvidia-smi
$ docker build -t gpu_test .
...
$ docker run -it --gpus all gpu_test
Fri Jul 3 07:31:05 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.64 Driver Version: 430.64 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 107... Off | 00000000:01:00.0 Off | N/A |
| 45% 65C P2 142W / 180W | 8051MiB / 8119MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
which finally seems to find cuda binaries: CUDA Version: 10.1.
Then, I made a minimal Dockerfile to test that the tensorflow binary libraries load successfully within docker:
FROM nvidia/cuda:10.0-base
# The following just declare variables and ultimately select python3/pip3
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install tensorflow_gpu==1.14.0
COPY bashrc /etc/bash.bashrc
RUN chmod a+rwx /etc/bash.bashrc
WORKDIR /src
COPY *.py /src/
ENTRYPOINT ["python3", "tf_minimal.py"]
and tf_minimal.py was simply:
import tensorflow as tf
print(tf.__version__)
and for completeness, here is the bashrc file I am using:
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
export PS1="\[\e[31m\]tf-docker\[\e[m\] \[\e[33m\]\w\[\e[m\] > "
export TERM=xterm-256color
alias grep="grep --color=auto"
alias ls="ls --color=auto"
echo -e "\e[1;31m"
cat<<TF
________ _______________
___ __/__________________________________ ____/__ /________ __
__ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / /
_ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ /
/_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/
TF
echo -e "\e[0;33m"
if [[ $EUID -eq 0 ]]; then
cat <<WARN
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u \$(id -u):\$(id -g) args...
WARN
else
cat <<EXPL
You are running this container as user with ID $(id -u) and group $(id -g),
which should map to the ID and group for your user on the Docker host. Great!
EXPL
fi
# Turn off colors
echo -e "\e[m"
I am trying to figure out how to run Elixir Phoenix on Heroku using Docker. I am pretty much using this Dockerfile: https://github.com/jpiepkow/phoenix-docker/blob/master/Dockerfile
# ---- Build Base Stage ----
FROM elixir:1.9.1-alpine AS app_builder
RUN apk add --no-cache=true \
gcc \
g++ \
git \
make \
musl-dev
RUN mix do local.hex --force, local.rebar --force
# ---- Build Deps Stage ----
FROM app_builder as deps
COPY mix.exs mix.lock ./
ARG MIX_ENV=prod
ENV MIX_ENV=$MIX_ENV
RUN mix do deps.get --only=$MIX_ENV, deps.compile
# ---- Build Release Stage ----
FROM deps as releaser
RUN echo $MIX_ENV
COPY config ./config
COPY lib ./lib
COPY priv ./priv
RUN mix release && \
cat mix.exs | grep app: | sed -e 's/ app: ://' | tr ',' ' ' | sed 's/ //g' > app_name.txt
# ---- Final Image Stage ----
FROM alpine:3.9 as app
RUN apk add --no-cache bash libstdc++ openssl
ENV CMD=start
COPY --from=releaser ./_build .
COPY --from=releaser ./app_name.txt ./app_name.txt
CMD ["sh","-c","./prod/rel/$(cat ./app_name.txt)/bin/$(cat ./app_name.txt) $CMD"]
I have pushed to Heroku and the app is running, but when I try to use the database, it blows up. The logs say the database needs to be migrated, which makes sense since I haven't done it. But now I realize I'm not sure how to do that when mix is not available and I'm using Docker.
Does anyone know how to create and migrate a Postgres database on Heroku when the app is deployed with Docker?
Create the lib/my_app/release.ex mentioned in https://hexdocs.pm/phoenix/releases.html#ecto-migrations-and-custom-commands
bash into the Heroku release container (this is different behavior from, for example, a Rails app):
heroku run bash --app my_app
Two options:
Option 1:
./prod/rel/my_app/bin/my_app start_iex
Then run
MyApp.Release.migrate
Option 2:
./prod/rel/my_app/bin/my_app eval "MyApp.Release.migrate"
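To run the migration automatically on every deploy, a heroku.yml release phase can invoke the same eval command. A sketch, assuming you deploy via heroku.yml container builds (adjust names to your app):
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - ./prod/rel/my_app/bin/my_app eval "MyApp.Release.migrate"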
I am trying to run chromedp in docker.
My main.go:
package main
import (
"context"
"log"
"time"
"github.com/chromedp/chromedp"
)
func main() {
log.SetFlags(log.LstdFlags | log.Llongfile)
ctx, cancel := chromedp.NewContext(
context.Background(),
chromedp.WithLogf(log.Printf),
)
defer cancel()
// create a timeout
ctx, cancel = context.WithTimeout(ctx, 15 * time.Second)
defer cancel()
u := `https://www.whatismybrowser.com/detect/what-is-my-user-agent`
selector := `#detected_value`
log.Println("requesting", u)
log.Println("selector", selector)
var result string
err := chromedp.Run(ctx,
chromedp.Navigate(u),
chromedp.WaitReady(selector),
chromedp.OuterHTML(selector, &result),
)
if err != nil {
log.Fatal(err)
}
log.Printf("result:\n%s", result)
}
Dockerfile:
FROM golang:latest as build-env
RUN mkdir $GOPATH/src/app
WORKDIR $GOPATH/src/app
ENV GO111MODULE=on
COPY go.mod .
COPY go.sum .
COPY main.go .
RUN go mod download
RUN go build -o /root/app
FROM chromedp/headless-shell
COPY --from=build-env /root/app /
CMD ["/app"]
When I run it:
docker-compose build
docker-compose up
It outputs:
app_1 | [1129/192523.576726:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.602779:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 |
app_1 | DevTools listening on ws://0.0.0.0:9222/devtools/browser/3fa247e0-e2fa-484e-8b5f-172b392701bb
app_1 | [1129/192523.836854:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.838804:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.845866:ERROR:egl_util.cc(60)] Failed to load GLES library: /headless-shell/swiftshader/libGLESv2.so: /headless-shell/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
app_1 | [1129/192523.871796:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
app_1 | [1129/192523.897083:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 1 time(s)
app_1 | [1129/192523.926741:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.930111:ERROR:egl_util.cc(60)] Failed to load GLES library: /headless-shell/swiftshader/libGLESv2.so: /headless-shell/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
app_1 | [1129/192523.943794:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
app_1 | [1129/192523.948757:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 2 time(s)
app_1 | [1129/192523.950107:ERROR:browser_gpu_channel_host_factory.cc(138)] Failed to launch GPU process.
app_1 | [1129/192524.013014:ERROR:browser_gpu_channel_host_factory.cc(138)] Failed to launch GPU process.
So it doesn't run my go app. I expected chromedp/headless-shell to contain google-chrome so that my golang app could use it through github.com/chromedp/chromedp.
Update 1
I added the missing directory and symlinks:
RUN mkdir -p /headless-shell/swiftshader/ \
&& cd /headless-shell/swiftshader/ \
&& ln -s ../libEGL.so libEGL.so \
&& ln -s ../libGLESv2.so libGLESv2.so
and now I get the following output; my app is still not running:
app_1 | [1202/071210.095414:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1202/071210.112632:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 |
app_1 | DevTools listening on ws://0.0.0.0:9222/devtools/browser/86e31db1-3a17-4da6-9e2f-696647572492
app_1 | [1202/071210.166158:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1202/071210.186307:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
Update 2
Looks like CMD ["/app"] doesn't run my main.go binary, because none of its log lines are printed (possibly the base image defines its own ENTRYPOINT, so CMD ["/app"] is passed to headless-shell as an argument instead of being executed).
And when I run it manually:
$ /usr/local/bin/docker exec -ti chromedp_docker_app_1 /bin/bash
root@0c417fd159a2:/# /app
2019/12/02 07:40:34 app is running
2019/12/02 07:40:34 /go/src/app/main.go:26: requesting https://www.whatismybrowser.com/detect/what-is-my-user-agent
2019/12/02 07:40:34 /go/src/app/main.go:27: selector #detected_value
2019/12/02 07:40:34 /go/src/app/main.go:35: exec: "google-chrome": executable file not found in $PATH
I see that the google-chrome binary is still not there, hmmm....
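One possible fix, assuming chromedp searches PATH for a handful of browser binary names and the base image ships its browser at /headless-shell/headless-shell: expose that binary under a name chromedp looks for, and start the Go app explicitly so the base image's entrypoint doesn't consume CMD. A sketch for the final stage:
# expose the bundled browser under a name chromedp searches for
RUN ln -s /headless-shell/headless-shell /usr/local/bin/google-chrome
# start the Go app instead of the base image's headless-shell entrypoint
ENTRYPOINT ["/app"]
Alternatively, the Go code can point chromedp at the binary directly with the chromedp.ExecPath allocator option instead of relying on PATH lookup.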
You are missing a few things here. First, you need to run headless Chrome inside your container; you can use the following Dockerfile:
FROM golang:1.12.0-alpine3.9
RUN apk update && apk upgrade && apk add --no-cache bash git && apk add --no-cache chromium
# Installs latest Chromium package.
RUN echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories \
&& echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories \
&& apk add --no-cache \
harfbuzz@edge \
nss@edge \
freetype@edge \
ttf-freefont@edge \
&& rm -rf /var/cache/* \
&& mkdir /var/cache/apk
RUN go get github.com/mafredri/cdp
CMD chromium-browser --headless --disable-gpu --remote-debugging-port=9222 --disable-web-security --safebrowsing-disable-auto-update --disable-sync --disable-default-apps --hide-scrollbars --metrics-recording-only --mute-audio --no-first-run --no-sandbox
I am using cdp, which is more robust and fun for me!
This is the link for CDP: https://github.com/mafredri/cdp
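Whichever client library you use, it helps to sanity-check that the headless browser is actually listening before pointing the Go code at it; the /json/version endpoint is part of the DevTools protocol:
curl http://localhost:9222/json/version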
It's not pretty, but here is a simple Dockerfile that worked for me:
FROM golang:1.16.5 AS build-env
RUN apt update && apt -y upgrade
RUN apt -y install chromium
WORKDIR /app
ADD ./ ./
RUN go mod download
RUN go build -o /docker-gs-ping
CMD [ "/docker-gs-ping" ]
I am trying to cargo build the Azure IoT Edge security daemon code (edgelet) in Docker. This goes smoothly on my Ubuntu machine; however, an issue occurs when I try to compile in Docker.
Here is the issue:
Compiling k8s-openapi v0.4.0
error: inclusive range syntax is experimental (see issue #28237)
--> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/k8s-openapi-0.4.0/build.rs:10:19
|
10 | for v2 in MIN..=MAX {
| ^^^^^^^^^
error: inclusive range syntax is experimental (see issue #28237)
--> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/k8s-openapi-0.4.0/build.rs:32:14
|
32 | for v in MIN..=MAX {
| ^^^^^^^^^
error: inclusive range syntax is experimental (see issue #28237)
--> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/k8s-openapi-0.4.0/build.rs:117:14
|
117 | for v in MIN..=MAX {
| ^^^^^^^^^
error: aborting due to 3 previous errors
error: Could not compile `k8s-openapi`.
Here is a portion of my Dockerfile:
RUN apt-get update && \
apt-get install -y --no-install-recommends --allow-unauthenticated \
curl \
cargo
WORKDIR /usr/app
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
COPY edgelet .
RUN cargo build
Please check the rustc version used in your Docker image against the version of the compiler on your Ubuntu machine.
The most likely reason for that behavior is an old rustc in your Docker image: inclusive range syntax (..=) was only stabilized in Rust 1.26, so anything older rejects it as experimental.
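Concretely for the Dockerfile above, a plausible culprit (an assumption, not verified): the apt-installed cargo pulls in Debian's old rustc and shadows the rustup toolchain, because rustup only edits the shell profile, which RUN steps never source. A sketch:
RUN apt-get update && apt-get install -y --no-install-recommends curl build-essential
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# put rustup's toolchain ahead of any system cargo/rustc
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustc --version   # inclusive ranges (..=) need Rust 1.26+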