Installing Rust for Dockerfile - docker

I'm new to Docker, so apologies for this poor question.
I'm using an M1 Mac, and my dockerfile looks like this:
FROM python:3.8.1-slim
ENV PYTHONUNBUFFERED 1
EXPOSE 8000
WORKDIR /app
COPY ./requirements.txt .
COPY ./src .
RUN pip install --verbose -r requirements.txt
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "8000", "src.main:app"]
When I run docker build -t project . I get an error message that includes:
Cargo, the Rust package manager, is not installed or is not on PATH.
I've tried adding cargo and rust to requirements.txt and playing with ENV PATH, to no avail. On the host machine, which cargo returns:
/opt/homebrew/bin/cargo
Can someone please point me in the right direction?
Edit: I don't know why Rust is required here, but it seems like it isn't uncommon... this is where it shows up in the error:
Running command /usr/local/bin/python /usr/local/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpfzj95ykv
Checking for Rust toolchain....
Edit 2: I reduced the number of packages in requirements.txt, and that seems to have fixed it for now. I'm still annoyed that I can't tell from the error what the issue is, and curious what the fix would be...
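For what it's worth, the "Checking for Rust toolchain" line usually comes from a dependency such as cryptography, which builds a native extension with setuptools-rust; on linux/arm64 (the M1's architecture) there may be no prebuilt wheel for the pinned version, so pip falls back to building from source. A minimal sketch of one fix, assuming a Debian-based python image like the one above, is to install a Rust toolchain before pip runs:
FROM python:3.8.1-slim
ENV PYTHONUNBUFFERED 1
# install build tools plus rustup; rustup puts cargo in /root/.cargo/bin
RUN apt-get update && apt-get install -y --no-install-recommends curl build-essential \
    && curl https://sh.rustup.rs -sSf | sh -s -- -y --profile minimal
ENV PATH="/root/.cargo/bin:${PATH}"
# ...rest of the original Dockerfile unchanged
The alternative is to pin dependency versions that ship a prebuilt wheel for your platform, which is effectively what trimming requirements.txt did.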

Related

Use STAR or POSIX extensions to overcome this limit, how to enable this in pycharm

I am getting the following exception while trying to dockerize my Python project. This is my Dockerfile:
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app
ADD main.py /app
RUN pip3 install -r requirements.txt
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
Failed to deploy '<unknown> Dockerfile: Dockerfile': group id '192360288' is too big ( > 2097151 ).
Use STAR or POSIX extensions to overcome this limit
One workaround is to use the POSIX tar extensions, but how do I solve this in Python?
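For reference, this error comes from the tool that packs the build context into a tar archive: the classic tar header cannot encode a group id larger than 2097151 (octal 7777777). A common workaround, assuming the oversized gid belongs to files in your build context rather than to the base image, is to normalize ownership before building:
# hypothetical fix: give the build context files a small, ordinary gid
# (image name is illustrative)
sudo chown -R "$(id -u)":"$(id -g)" .
docker build -t myproject .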

Alpine docker image __isnan: symbol not found

Here's the Dockerfile I am using to build a Golang application and a worker
FROM golang:1.15 AS build
RUN mkdir -p /go/api/proj
WORKDIR /go/api/proj
COPY go.* ./
RUN go mod download
COPY . .
RUN go mod tidy
RUN go build -o proj ./api/
RUN go build -o worker ./worker/
FROM alpine:3.14
WORKDIR /
RUN apk add libc6-compat cmake
RUN ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2
COPY . .
COPY --from=build /go/api/proj/proj .
COPY --from=build /go/api/proj/worker .
EXPOSE 80
CMD ["./worker"]
I had to add libc6-compat because the Kafka setup in the worker wasn't compatible with Alpine's musl library.
Here's the error I received when trying to run the worker in a Docker container:
Error relocating ./worker: __strdup: symbol not found
Error relocating ./worker: __isnan: symbol not found
Error relocating ./worker: __strndup: symbol not found
Can someone suggest what's going wrong here and a solution for it?
I am using Confluent Kafka in the worker, which may be the reason for this error.
What you are doing here:
RUN ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2
is pretending that musl is glibc. It isn't, and that doesn't work.
From the musl FAQ:
Binary compatibility is much more limited, but it will steadily increase with new versions of musl. At present, some glibc-linked shared libraries can be loaded with musl, but all but the simplest glibc-linked applications will fail if musl is dropped-in in place of /lib/ld-linux.so.2.
Instead of building the worker binary against glibc and then trying to run it with musl, you should build it against musl.
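A sketch of that, assuming the worker uses confluent-kafka-go (which needs cgo, so a plain CGO_ENABLED=0 build won't work): do the build inside an Alpine image so cgo links against musl, and pass the musl build tag that confluent-kafka-go expects on Alpine:
FROM golang:1.15-alpine AS build
# build-base provides gcc and musl-dev for cgo
RUN apk add --no-cache build-base
WORKDIR /go/api/proj
COPY go.* ./
RUN go mod download
COPY . .
RUN go build -tags musl -o worker ./worker/

FROM alpine:3.14
COPY --from=build /go/api/proj/worker /worker
CMD ["/worker"]
With this there is no need for libc6-compat or the ld-linux symlink at all.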

How to rebuild docker container in air gapped environment?

I have a FastAPI application which is to be containerised. I created the Docker image on a system with internet connectivity and saved it as a tar archive. This image was then loaded, using the docker load command, onto a system with Docker installed but no internet connectivity, and it works fine. But now I want to make changes to the application code and rebuild the image; only the app changes have to be pushed. How can this be achieved on this isolated system?
There are two actions during the build that need an internet connection.
The first one is pulling the base image for your Dockerfile.
So for example if your Dockerfile is something like:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
Then you would need the python:3.9 docker image on the system.
This is easily achievable by moving images using docker load as you described in the question.
The second is pip installing packages (in the previous case the step RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt).
To do this install on a system with no internet connection, you would need to download the .whl wheel file for each requirement and install them using the --find-links /path/to/wheel/dir/ (and probably --no-index) flags.
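A sketch of that wheel workflow (paths are illustrative):
# on the machine with internet access: fetch wheels for every requirement
pip download -r requirements.txt -d ./wheels
# copy ./wheels to the isolated system, then install offline from it
pip install --no-index --find-links ./wheels -r requirements.txt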
This can become complicated, but if your dependencies are more or less fixed, you can do the following:
First, on the system that CAN connect to the internet, you build a base image with all your requirements:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
Then you can build this image and load it on the system with no internet. On that system you can create a new Dockerfile that starts from your newly created image and just adds your code:
FROM your-base-image
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
Then rebuilding this image should not need any internet.
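As hypothetical shell commands (image and file names are illustrative), the whole round trip looks like:
# on the connected system: build the base image and export it
docker build -t your-base-image -f Dockerfile.base .
docker save your-base-image -o base.tar
# on the air-gapped system: import the base, then rebuild only the app layer
docker load -i base.tar
docker build -t your-app .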

Flask and React App in single Docker Container

Good day SO,
I know this is bad practice and that I am supposed to have one app per container, but is there a way for me to have two services running concurrently in the same container, and how would I go about writing the Dockerfile for it?
My current Dockerfile for the Flask (Backend) App:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
My React (Frontend) Dockerfile:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
My attempt to launch both apps within the same Docker container was to merge the two Dockerfiles, but the resulting container does not have the data from the first Dockerfile, and I am unsure how to proceed.
My merged Dockerfile:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
I am a beginner in using Docker, and hence I foresee that there will be several problems with this method, such as communication between the two apps (the backend uses port 5000). Any guidance will be greatly appreciated!
A React application doesn't usually have a server per se (development-only Docker setups aside). Instead, you run a tool like Webpack to compile it down to static files, which you can then serve to the browser, which then runs them.
On your host system you'd run something like
yarn build
which produces a build directory; then you'd copy this into your Flask static directory.
If you do this entirely ahead-of-time, then you can run your application out of a Python virtual environment, which will be a much easier development and test setup, and the Dockerfile you show won't change.
If you want to build this entirely in Docker (for example to take advantage of a more Docker-native automated build system) a multi-stage build matches well here. You can use a first stage to build the front-end application, and then COPY that into the final application in the second stage. That looks roughly like:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
COPY ./react_frontend ./
RUN npm run build
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./flask_backend ./
COPY --from=build /app/react_frontend/build/ ./static/
CMD python3 app/webapp/app.py
This approach is not compatible with setups that overwrite Docker image contents using bind mounts. A non-Docker host Node and Python setup will be a much easier development environment, and for this particular setup isn't likely to be substantially different from the Docker setup.
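One thing the multi-stage Dockerfile above assumes is that Flask actually serves the compiled files out of static/. A minimal sketch of that wiring (the file and route names are assumptions, not the asker's code):
from flask import Flask, send_from_directory

app = Flask(__name__, static_folder="static", static_url_path="")

@app.route("/")
def index():
    # hand the browser the compiled React entry point
    return send_from_directory(app.static_folder, "index.html")
Flask then serves the React bundle and the API from one process on one port, which also sidesteps the cross-container communication concern.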

docker can't run a go output file that already exist

I'm building a multi-stage Dockerfile for my go project.
FROM golang:latest as builder
COPY ./go.mod /app/go.mod
COPY ./go.sum /app/go.sum
#exporting go1.11 module support variable
ENV GO111MODULE=on
WORKDIR /app/
#create vendor directory
RUN go mod download
COPY . /app/
RUN go mod vendor
#building source code
RUN go build -mod=vendor -o main -v ./src/
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /app/main /app/main
WORKDIR /app/
ARG port="80"
ENV PORT=$port
EXPOSE $PORT
CMD ["./main"]
When I run the image, it throws this error:
standard_init_linux.go:207: exec user process caused "no such file or directory"
I've verified that the main file exists at /app/main.
I also tried to give it executable permission by adding
chmod +x /app/main
but it still doesn't work.
What can possibly be wrong?
The "latest" version of the golang image is debian based, which uses libc. Alpine uses musl. If you do not compile with CGO_ENABLED=0, networking libraries will link to libc and the no such file or directory error point to a missing library. You can check these shared library links with ldd /app/main. A few solutions I can think of:
compile your program with CGO_ENABLED=0
switch your build image to FROM golang:alpine
change your second stage to be FROM debian
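For the first option, that is a one-line change to the build stage shown above:
# disable cgo so the binary is statically linked and runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -mod=vendor -o main -v ./src/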
