How to run chromedp in Docker

I am trying to run chromedp in docker.
My main.go:
package main

import (
    "context"
    "log"
    "time"

    "github.com/chromedp/chromedp"
)

func main() {
    log.SetFlags(log.LstdFlags | log.Llongfile)

    ctx, cancel := chromedp.NewContext(
        context.Background(),
        chromedp.WithLogf(log.Printf),
    )
    defer cancel()

    // create a timeout
    ctx, cancel = context.WithTimeout(ctx, 15*time.Second)
    defer cancel()

    u := `https://www.whatismybrowser.com/detect/what-is-my-user-agent`
    selector := `#detected_value`

    log.Println("requesting", u)
    log.Println("selector", selector)

    var result string
    err := chromedp.Run(ctx,
        chromedp.Navigate(u),
        chromedp.WaitReady(selector),
        chromedp.OuterHTML(selector, &result),
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("result:\n%s", result)
}
Dockerfile:
FROM golang:latest as build-env
RUN mkdir $GOPATH/src/app
WORKDIR $GOPATH/src/app
ENV GO111MODULE=on
COPY go.mod .
COPY go.sum .
COPY main.go .
RUN go mod download
RUN go build -o /root/app
FROM chromedp/headless-shell
COPY --from=build-env /root/app /
CMD ["/app"]
When I run it:
docker-compose build
docker-compose up
It outputs:
app_1 | [1129/192523.576726:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.602779:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 |
app_1 | DevTools listening on ws://0.0.0.0:9222/devtools/browser/3fa247e0-e2fa-484e-8b5f-172b392701bb
app_1 | [1129/192523.836854:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.838804:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.845866:ERROR:egl_util.cc(60)] Failed to load GLES library: /headless-shell/swiftshader/libGLESv2.so: /headless-shell/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
app_1 | [1129/192523.871796:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
app_1 | [1129/192523.897083:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 1 time(s)
app_1 | [1129/192523.926741:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.930111:ERROR:egl_util.cc(60)] Failed to load GLES library: /headless-shell/swiftshader/libGLESv2.so: /headless-shell/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
app_1 | [1129/192523.943794:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
app_1 | [1129/192523.948757:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 2 time(s)
app_1 | [1129/192523.950107:ERROR:browser_gpu_channel_host_factory.cc(138)] Failed to launch GPU process.
app_1 | [1129/192524.013014:ERROR:browser_gpu_channel_host_factory.cc(138)] Failed to launch GPU process.
So it doesn't run my Go app. I expected that chromedp/headless-shell contains google-chrome and that my Go app would be able to use it via github.com/chromedp/chromedp.
Update 1
I added missing directories:
RUN mkdir -p /headless-shell/swiftshader/ \
&& cd /headless-shell/swiftshader/ \
&& ln -s ../libEGL.so libEGL.so \
&& ln -s ../libGLESv2.so libGLESv2.so
and now get the following output; my app is still not running:
app_1 | [1202/071210.095414:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1202/071210.112632:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 |
app_1 | DevTools listening on ws://0.0.0.0:9222/devtools/browser/86e31db1-3a17-4da6-9e2f-696647572492
app_1 | [1202/071210.166158:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1202/071210.186307:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
Update 2
It looks like CMD ["/app"] doesn't run my app, because none of its log lines are printed.
And when I run it manually:
$ /usr/local/bin/docker exec -ti chromedp_docker_app_1 /bin/bash
root@0c417fd159a2:/# /app
2019/12/02 07:40:34 app is running
2019/12/02 07:40:34 /go/src/app/main.go:26: requesting https://www.whatismybrowser.com/detect/what-is-my-user-agent
2019/12/02 07:40:34 /go/src/app/main.go:27: selector #detected_value
2019/12/02 07:40:34 /go/src/app/main.go:35: exec: "google-chrome": executable file not found in $PATH
I see that google-chrome app is still not there, hmmm....
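One common pattern here, sketched below with assumed service names and tag, is to run headless-shell as its own compose service and have the Go app attach to it with chromedp.NewRemoteAllocator instead of launching a local browser:

```yaml
# Hypothetical docker-compose.yml: headless-shell runs as its own service
# and the Go app connects to it over the DevTools protocol. In the Go code
# this pairs with chromedp.NewRemoteAllocator(ctx, "ws://chrome:9222")
# rather than letting chromedp.NewContext launch a browser itself.
services:
  chrome:
    image: chromedp/headless-shell:latest
    ports:
      - "9222:9222"
  app:
    build: .
    depends_on:
      - chrome
```

This sidesteps the problem above: the headless-shell image's entrypoint starts Chrome as the container's main process, so a CMD ["/app"] in that image never runs.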

You are missing a few things here. First, you need to run headless Chrome (Chromium) inside your container. You can use the following Dockerfile:
FROM golang:1.12.0-alpine3.9
RUN apk update && apk upgrade && apk add --no-cache bash git && apk add --no-cache chromium
# Installs latest Chromium package.
RUN echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories \
    && echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories \
    && apk add --no-cache \
    harfbuzz@edge \
    nss@edge \
    freetype@edge \
    ttf-freefont@edge \
    && rm -rf /var/cache/* \
    && mkdir /var/cache/apk
RUN go get github.com/mafredri/cdp
CMD chromium-browser --headless --disable-gpu --remote-debugging-port=9222 --disable-web-security --safebrowsing-disable-auto-update --disable-sync --disable-default-apps --hide-scrollbars --metrics-recording-only --mute-audio --no-first-run --no-sandbox
I am using CDP; it's more robust and fun for me!
This is the link for CDP: https://github.com/mafredri/cdp

It's not pretty, but here is a simple Dockerfile that worked for me:
FROM golang:1.16.5 AS build-env
RUN apt update && apt -y upgrade
RUN apt -y install chromium
WORKDIR /app
ADD ./ ./
RUN go mod download
RUN go build -o /docker-gs-ping
CMD [ "/docker-gs-ping" ]


host not found in upstream error caused by docker-compose

tl;dr: I have no idea what's causing this error. I'm pretty sure it's not the line endings, because I changed them manually with Notepad++ (unless I need to change more than entrypoint.sh, because that's all I changed the line endings on).
Original post below.
I have no idea what is causing this error. When I run docker-compose -f docker-compose-deploy.yml up --build in my command line, I get the following output:
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Starting mygoattheend-copy_app_1 ... done
Starting mygoattheend-copy_proxy_1 ... done
Attaching to mygoattheend-copy_app_1, mygoattheend-copy_proxy_1
app_1 | exec /scripts/entrypoint.sh: no such file or directory
proxy_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
proxy_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
proxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
proxy_1 | 10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
proxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
proxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
proxy_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
mygoattheend-copy_app_1 exited with code 1
proxy_1 | 2022/11/03 18:51:39 [emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/default.conf:9
proxy_1 | nginx: [emerg] host not found in upstream "app" in /etc/nginx/conf.d/default.conf:9
mygoattheend-copy_proxy_1 exited with code 1
Other examples of this error that I found when searching online for nginx: [emerg] host not found in upstream "app" in /etc/nginx/conf.d/default.conf:9 suggest the problem is a missing depends_on:, so I've included my docker-compose files below. But I followed the tutorial exactly and his version worked fine, and my docker-compose-deploy.yml does have its depends_on:.
My full docker-compose.yml is below.
version: '3.7'

services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    environment:
      - DEBUG=1
My full docker-compose-deploy.yml is below
version: '3.7'

services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecretkey123
      - ALLOWED_HOSTS=127.0.0.1,localhost
  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app

volumes:
  static_data:
The image below is my full directory.
The error does mention that it can't find app, which I copy in the main Dockerfile (not the one in the proxy folder).
My main Dockerfile is below.
FROM python:3.8-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
What could be causing this error? What other info do you need to work it out?
I'm following this tutorial (I'm at the very end): https://www.youtube.com/watch?v=nh1ynJGJuT8
update 1 (adding extra info)
my proxy/default.conf is below
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
my proxy/dockerfile is below
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
update 2
this is my whole project uploaded to github https://github.com/tgmjack/help
update 3
Editing the line endings in VS Code didn't appear to work.
update 4
new dockerfile trying dos2unix
FROM python:3.8-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers dos2unix
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["dos2unix", "entrypoint.sh"]
but I still get the same error.
update 5
OK, so I changed the EOL of entrypoint.sh manually with Notepad++, but I still get the same error.
Do I need to apply this to more than just entrypoint.sh?
There are two problems in this setup.
DOS line endings
The first problem is the use of DOS line endings on the entrypoint. Here's what I see when I use od to get a character-by-character dump of the files:
$ od -c scripts/entrypoint.sh | head -n 2
0000000 # ! / b i n / s h \r \n \r \n s e t
0000020 - e \r \n \r \n p y t h o n m a
The issue is that after #!/bin/sh, we have \r\n, when we should have just \n. This is a DOS-style line ending, but Linux expects just \n.
I used a program called dos2unix to replace those lines:
$ dos2unix scripts/entrypoint.sh
dos2unix: converting file scripts/entrypoint.sh to Unix format...
(VS Code can do this too.)
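If dos2unix isn't available, the same conversion can be done with sed; a small sketch (the file content here is a stand-in for the real entrypoint):

```shell
# Create a file with DOS (CRLF) line endings, then strip the trailing \r in place.
printf '#!/bin/sh\r\nset -e\r\n' > entrypoint.sh
sed -i 's/\r$//' entrypoint.sh

# od now shows plain \n line endings.
od -c entrypoint.sh | head -n 2
```

Note that the conversion has to happen at build time (or before the build): a CMD ["dos2unix", "entrypoint.sh"] as in update 4 replaces the container's command entirely, so it converts the file but never runs it.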
When I run it, I get a new error:
app_1 | Traceback (most recent call last):
app_1 | File "manage.py", line 21, in <module>
app_1 | main()
app_1 | File "manage.py", line 17, in main
app_1 | execute_from_command_line(sys.argv)
app_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
app_1 | utility.execute()
app_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 345, in execute
app_1 | settings.INSTALLED_APPS
app_1 | File "/usr/local/lib/python3.8/site-packages/django/conf/__init__.py", line 76, in __getattr__
app_1 | self._setup(name)
app_1 | File "/usr/local/lib/python3.8/site-packages/django/conf/__init__.py", line 63, in _setup
app_1 | self._wrapped = Settings(settings_module)
app_1 | File "/usr/local/lib/python3.8/site-packages/django/conf/__init__.py", line 142, in __init__
app_1 | mod = importlib.import_module(self.SETTINGS_MODULE)
app_1 | File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
app_1 | return _bootstrap._gcd_import(name[level:], package, level)
app_1 | File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
app_1 | File "<frozen importlib._bootstrap>", line 991, in _find_and_load
app_1 | File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
app_1 | File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
app_1 | File "<frozen importlib._bootstrap_external>", line 843, in exec_module
app_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
app_1 | File "/app/app/settings.py", line 31, in <module>
app_1 | ALLOWED_HOSTS.extend(ALLOWED_HOSTS.split(',')) # .split(",") = incase multiple so we seperate them with a comma
app_1 | AttributeError: 'list' object has no attribute 'split'
Wrong variable referenced in settings file
I edited the file app/app/settings.py, and changed this line
ALLOWED_HOSTS.extend(ALLOWED_HOSTS.split(','))
to
ALLOWED_HOSTS.extend(ALLOWED_HOSTS_ENV.split(','))
After this, everything worked.
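The fixed logic boils down to extending the list from the environment string rather than from itself; a minimal standalone sketch (ALLOWED_HOSTS_ENV mirrors the name used in the corrected settings.py):

```python
import os

# Simulate the container environment set in docker-compose-deploy.yml.
os.environ["ALLOWED_HOSTS"] = "127.0.0.1,localhost"

ALLOWED_HOSTS = []
ALLOWED_HOSTS_ENV = os.environ.get("ALLOWED_HOSTS", "")
if ALLOWED_HOSTS_ENV:
    # Split on commas in case multiple hosts are supplied.
    ALLOWED_HOSTS.extend(ALLOWED_HOSTS_ENV.split(","))

print(ALLOWED_HOSTS)  # ['127.0.0.1', 'localhost']
```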
Nginx
The nginx configuration turned out to be fine. The issue was that the app died, so nginx couldn't do a DNS lookup to find the IP address of the app container. Fixing the app container also fixes nginx.

Connecting to existing external Redis from Docker Container

I have a small fastify app that connects to an external Redis server.
I am using the fastify-redis npm package (which uses ioredis under the hood).
fastify-redis is connecting using rediss:// format
REDIS_URL='rediss://:xxxxyyyyxxxyxxxyxyxyxyxyxxy@blahblah-dev.redis.cache.windows.net:6380'
const Fastify = require('fastify')
const fastifyRedis = require('@fastify/redis')

const fastify = Fastify({ logger: true, pluginTimeout: 50000 })

fastify.register(fastifyRedis, {
  url: process.env.REDIS_URL,
  enableAutoPipelining: true,
})
This all works fine when run locally using npm start.
When I dockerise it, though, I get an error that looks like it is caused by not being able to connect to the Redis instance:
redisutils_1 | > node index.js
redisutils_1 |
redisutils_1 | /usr/src/node_modules/ioredis/built/redis/event_handler.js:175
redisutils_1 | self.flushQueue(new errors_1.MaxRetriesPerRequestError(maxRetriesPerRequest));
redisutils_1 | ^
redisutils_1 |
redisutils_1 | MaxRetriesPerRequestError: Reached the max retries per request limit (which is 20). Refer to "maxRetriesPerRequest" option for details.
redisutils_1 | at Socket.<anonymous> (/usr/src/node_modules/ioredis/built/redis/event_handler.js:175:37)
redisutils_1 | at Object.onceWrapper (node:events:628:26)
redisutils_1 | at Socket.emit (node:events:513:28)
redisutils_1 | at TCP.<anonymous> (node:net:313:12)
redisutils_1 |
redisutils_1 | Node.js v18.9.0
What have I missed?
You will most likely need to run your container with --network host, as your container is running inside a private network and can't reach your network to communicate with external services.
I discovered the issue. Two things:
1. (The dumb one) Make sure any environment variable settings (like the rediss URL) are actually being set!
2. The Redis server was refusing the connection due to certificate issues. I was using bullseye-slim and had to change to alpine and add a step to fix that:
FROM node:alpine
RUN apk update && apk add ca-certificates && update-ca-certificates
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["npm", "start"]
Namely
RUN apk update && apk add ca-certificates && update-ca-certificates
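For the first point, the variable has to actually reach the container; a hypothetical compose fragment (the service name is taken from the log, the variable name from the question):

```yaml
services:
  redisutils:
    build: .
    environment:
      # Forwarded from the host shell or a .env file at compose time.
      - REDIS_URL=${REDIS_URL}
```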

error while loading shared libraries: libstd-69edc9ac8de4d39c.so

I am building a Docker container for compiling a mix of Rust, Carbon, and C.
Everything seems to work until I run main.carbon and call the function from my Rust library, although the import seems valid. I think the issue is caused by the Rust code.
This is my Dockerfile:
#
# -------- ---------- -----
# | rust | | carbon | | C |
# -------- ---------- -----
# | | |
# | | |
FROM rust as rust
WORKDIR /usr/src/myapp
COPY ./src/lib/ .
RUN cargo build --verbose --release --all-targets --manifest-path /usr/src/myapp/Cargo.toml
# | |
# install carbon | |
# ------| |
# | | |
FROM linuxbrew/brew as brew
RUN brew update
RUN brew install python@3.9
RUN brew install bazelisk
RUN brew install llvm
# | |
# | |
# | |
# | |
FROM brew as carbon
RUN git clone https://github.com/carbon-language/carbon-lang carbon
WORKDIR /home/linuxbrew/carbon
COPY --from=rust /usr/src/myapp/target/release/librust_file_listener.so /home/linuxbrew/carbon/explorer/
SHELL ["/bin/bash", "-c"]
RUN mv -v /home/linuxbrew/carbon/explorer/BUILD /home/linuxbrew/carbon/explorer/BUILD-old
RUN touch ./explorer/BUILD
RUN echo $(pwd)
RUN sed -n '1,17p' ./explorer/BUILD-old >> ./explorer/BUILD
RUN echo ' srcs = ["main.cpp", "librust_file_listener.so"],' >> ./explorer/BUILD
RUN sed -n '19,$p' ./explorer/BUILD-old >> ./explorer/BUILD
RUN cp ./explorer/librust_file_listener.so .
RUN bazel build --verbose_failures //explorer
COPY ./src/main.carbon .
COPY ./src/file-listener.h .
RUN bazel run //explorer -- ./main.carbon
This is my error message:
/root/.cache/bazel/_bazel_root/c2431547aff5b972703b3babc3d841cc/execroot/carbon/bazel-out/k8-fastbuild/bin/explorer/explorer:
error while loading shared libraries: libstd-69edc9ac8de4d39c.so:
cannot open shared object file: No such file or directory
Searching for this error message, the only result was this question by laurent, which may be related.
FYI, I am on x86_64, not on ARM.
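A libstd-&lt;hash&gt;.so dependency usually means the Rust library was built as a dylib, which links the Rust standard library dynamically. A hedged sketch of a possible Cargo.toml fix (the crate name is inferred from the copied .so; whether cdylib suits this FFI setup is an assumption):

```toml
# Hypothetical Cargo.toml fragment: cdylib statically links the Rust
# standard library into the shared object, so the resulting .so does not
# require libstd-69edc9ac8de4d39c.so at load time.
[lib]
name = "rust_file_listener"
crate-type = ["cdylib"]
```

Alternatively, the directory containing libstd in the Rust toolchain (under `rustc --print sysroot`, in its lib/ subdirectory) can be copied into the final image and added to LD_LIBRARY_PATH.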

Object Detection not detecting GPU with TensorFlow GPU Image

I am a little stuck on my new adventures with machine learning.
I've been following a Udemy course where the instructor is running directly on their machine; however, I'm doing my best to run it all via Docker.
Dockerfile:
FROM tensorflow/tensorflow:2.1.1-gpu
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y espeak ffmpeg libespeak1 alsa-base alsa-utils protobuf-compiler python-pil python-lxml python-tk
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install pandas
RUN python3 -m pip install pillow
COPY . /app
WORKDIR /app/models/research
RUN protoc object_detection/protos/anchor_generator.proto --python_out=.
RUN protoc object_detection/protos/argmax_matcher.proto --python_out=.
RUN protoc object_detection/protos/bipartite_matcher.proto --python_out=.
RUN protoc object_detection/protos/box_coder.proto --python_out=.
RUN protoc object_detection/protos/box_predictor.proto --python_out=.
RUN protoc object_detection/protos/calibration.proto --python_out=.
RUN protoc object_detection/protos/center_net.proto --python_out=.
RUN protoc object_detection/protos/eval.proto --python_out=.
RUN protoc object_detection/protos/faster_rcnn.proto --python_out=.
RUN protoc object_detection/protos/faster_rcnn_box_coder.proto --python_out=.
RUN protoc object_detection/protos/flexible_grid_anchor_generator.proto --python_out=.
RUN protoc object_detection/protos/fpn.proto --python_out=.
RUN protoc object_detection/protos/graph_rewriter.proto --python_out=.
RUN protoc object_detection/protos/grid_anchor_generator.proto --python_out=.
RUN protoc object_detection/protos/hyperparams.proto --python_out=.
RUN protoc object_detection/protos/image_resizer.proto --python_out=.
RUN protoc object_detection/protos/input_reader.proto --python_out=.
RUN protoc object_detection/protos/keypoint_box_coder.proto --python_out=.
RUN protoc object_detection/protos/losses.proto --python_out=.
RUN protoc object_detection/protos/matcher.proto --python_out=.
RUN protoc object_detection/protos/mean_stddev_box_coder.proto --python_out=.
RUN protoc object_detection/protos/model.proto --python_out=.
RUN protoc object_detection/protos/multiscale_anchor_generator.proto --python_out=.
RUN protoc object_detection/protos/optimizer.proto --python_out=.
RUN protoc object_detection/protos/pipeline.proto --python_out=.
RUN protoc object_detection/protos/post_processing.proto --python_out=.
RUN protoc object_detection/protos/preprocessor.proto --python_out=.
RUN protoc object_detection/protos/region_similarity_calculator.proto --python_out=.
RUN protoc object_detection/protos/square_box_coder.proto --python_out=.
RUN protoc object_detection/protos/ssd.proto --python_out=.
RUN protoc object_detection/protos/ssd_anchor_generator.proto --python_out=.
RUN protoc object_detection/protos/string_int_label_map.proto --python_out=.
RUN protoc object_detection/protos/target_assigner.proto --python_out=.
RUN protoc object_detection/protos/train.proto --python_out=.
RUN cp object_detection/packages/tf2/setup.py .
RUN python3 -m pip install .
WORKDIR /app
ENV LANG en_US.UTF-8
ENTRYPOINT ["/usr/bin/python3"]
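The repeated protoc steps above can typically be collapsed into a single layer with a shell glob (a sketch; it assumes the same WORKDIR /app/models/research):

```dockerfile
# One RUN compiles every proto in the directory; the glob expands in the shell.
RUN protoc object_detection/protos/*.proto --python_out=.
```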
I'm running this through VS Code's Remote Containers extension: https://code.visualstudio.com/docs/remote/containers#:~:text=The%20Visual%20Studio%20Code%20Remote,Studio%20Code's%20full%20feature%20set
devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.194.0/containers/docker-existing-dockerfile
{
    "name": "Existing Dockerfile",
    // Sets the run context to one level up instead of the .devcontainer folder.
    "context": "..",
    // Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
    "dockerFile": "../Dockerfile",
    // Set *default* container specific settings.json values on container create.
    "settings": {},
    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "ms-python.python",
        "ms-python.vscode-pylance",
        "streetsidesoftware.code-spell-checker",
        "editorconfig.editorconfig"
    ],
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Uncomment the next line to run commands after the container is created - for example installing curl.
    // "postCreateCommand": "apt-get update && apt-get install -y curl",
    // Uncomment when using a ptrace-based debugger like C++, Go, and Rust
    // "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
    "runArgs": ["--gpus", "all", "--device=/dev/snd:/dev/snd"]
    // Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker.
    // "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
    // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
    // "remoteUser": "vscode"
}
Once I am in the container, if I run nvidia-smi, I get the following output which tells me it sees my GPU.
Tue Oct 26 19:55:24 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 207... Off | 00000000:26:00.0 On | N/A |
| 0% 45C P8 4W / 215W | 37MiB / 7974MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
However, when I follow the training and evaluation steps in https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_training_and_evaluation.md#local I get the following:
Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-10-26 19:05:23.611986: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I had an outdated version of TensorFlow, which does not support CUDA 11.2:
"TensorFlow supports CUDA® 11.2 (TensorFlow >= 2.5.0)", as seen here: https://www.tensorflow.org/install/gpu#software_requirements
I replaced the image version with latest and all is working as expected, derp.
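In Dockerfile terms the fix amounts to a newer base image (the exact tag is an assumption; per the quoted requirement, any TensorFlow >= 2.5.0 GPU image supports CUDA 11.2):

```dockerfile
# CUDA 11.2 (driver 460.x, as shown by nvidia-smi) needs TensorFlow >= 2.5.0.
FROM tensorflow/tensorflow:2.5.0-gpu
```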

"OSError: [Errno 8] Exec format error" when trying to run simple flask app in a docker container

I'm trying to start a simple Flask "Hello world" app in a docker container but I keep getting this error: "OSError: [Errno 8] Exec format error: '/app/app.py'"
My host operating system is Windows 10.
My Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
I have requirements.txt with Flask==1.0.2.
app.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000, debug=True)
and docker-compose.yml:
version: '3'

services:
  app:
    build: .
    command: python app.py
    ports:
      - "8000:8000"
Whole log of container:
app_1 | * Serving Flask app "app" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: Do not use the development server in a production environment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | Traceback (most recent call last):
app_1 | File "app.py", line 9, in <module>
app_1 | app.run(host='0.0.0.0', port=8000, debug=True)
app_1 | File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 943, in run
app_1 | run_simple(host, port, self, **options)
app_1 | File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 988, in run_simple
app_1 | run_with_reloader(inner, extra_files, reloader_interval, reloader_type)
app_1 | File "/usr/local/lib/python3.6/site-packages/werkzeug/_reloader.py", line 332, in run_with_reloader
app_1 | sys.exit(reloader.restart_with_reloader())
app_1 | File "/usr/local/lib/python3.6/site-packages/werkzeug/_reloader.py", line 176, in restart_with_reloader
app_1 | exit_code = subprocess.call(args, env=new_environ, close_fds=False)
app_1 | File "/usr/local/lib/python3.6/subprocess.py", line 287, in call
app_1 | with Popen(*popenargs, **kwargs) as p:
app_1 | File "/usr/local/lib/python3.6/subprocess.py", line 729, in __init__
app_1 | restore_signals, start_new_session)
app_1 | File "/usr/local/lib/python3.6/subprocess.py", line 1364, in _execute_child
app_1 | raise child_exception_type(errno_num, err_msg, err_filename)
app_1 | OSError: [Errno 8] Exec format error: '/app/app.py'
flaskdockerproject_app_1 exited with code 1
UPDATE
After I added the shebang in app.py like @larsks said, I'm getting this error:
"FileNotFoundError: [Errno 2] No such file or directory: '/app/app.py': '/app/app.py'".
All the files are in the container and in the right place.
I hit the same problem (Exec format error, then FileNotFound if I added the shebang).
Adding "RUN chmod 644 app.py" to the Dockerfile fixed it for me, as mentioned here: https://github.com/pallets/werkzeug/issues/1482
Spent the entire day yesterday putting the pieces together on this problem, forum by forum, so I want to share a more detailed answer to this question. While building my image, I was also seeing the following warning message:
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host.
"That warning was added, because the Windows filesystem does not have an option to mark a file as 'executable'. Building a linux image from a Windows machine would therefore break the image if a file has to be marked executable." - https://github.com/moby/moby/issues/20397
So, as Richard Chamberlain points out in the comment above, adding "RUN chmod 644 app.py" ensures that the app.py file is properly marked.
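The mechanism can be sketched with a plain file (the filename here is hypothetical): the Werkzeug reloader re-executes the script directly when the file is marked executable, which requires a valid shebang, while a non-executable file is only ever run through the interpreter:

```shell
printf 'print("hello")\n' > demo.py

# With the exec bit cleared, direct execution is refused by the kernel...
chmod 644 demo.py
./demo.py 2>/dev/null || echo "not directly executable"

# ...but the interpreter does not care about the exec bit or the shebang.
python3 demo.py
```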
Putting all the pieces together, here is the complete Dockerfile that worked for me - Really hope it helps the next person struggling with this issue!
FROM python:3.7-alpine
COPY . /app
WORKDIR /app
RUN apk add --no-cache --virtual .build-deps \
ca-certificates gcc postgresql-dev linux-headers musl-dev \
libffi-dev jpeg-dev zlib-dev \
&& pip install --no-cache -r requirements.txt
RUN chmod 644 app.py
CMD ["python","app.py"]
