How to update dependencies in Docker using Poetry?

I am trying to update a dependency in Docker using Poetry. I have added
RUN poetry update
RUN poetry install -n
to my Dockerfile, but the package is not updated. There is an import error caused by an older version of Tortoise ORM; upgrading fixes it (verified by running the project outside Docker, in a virtualenv, with the newer package), but the error persists even with these changes to the Dockerfile.
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
RUN pip install poetry
# RUN poetry config virtualenvs.create false
COPY poetry.lock pyproject.toml ./
# for poetry
RUN mkdir -p /app/app/
RUN touch /app/app/__init__.py
RUN poetry update
RUN poetry install -n
COPY ./app /app/app
EXPOSE 8000
This is my Dockerfile, for reference.
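One common explanation: RUN poetry update only rewrites the lock file inside the image, and Docker may serve that layer from cache on rebuilds. A sketch of a fix, assuming a stale poetry.lock is the culprit (tortoise-orm as the PyPI package name is a guess):
poetry update tortoise-orm          # on the host: refresh poetry.lock
docker build --no-cache -t myapp .  # "myapp" is a placeholder tag
With poetry.lock refreshed on the host and committed, RUN poetry install -n in the Dockerfile should be enough on its own, and the RUN poetry update line can be dropped.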

Related

Docker image runs on Heroku server but not locally

I have a Docker image that runs as expected on Heroku, but breaks when I try to run it locally. When I run the image locally, I am not able to stop the container or open the Docker CLI. When I repeatedly hit stop, the container exits with code 137. I build with docker build -t generate_test_service . from the project directory and simply press play in the Docker application.
Dockerfile
# start by pulling the python image
FROM alpine:latest
# copy the requirements file into the image
COPY ./requirements.txt /app/requirements.txt
# switch working directory
WORKDIR /app
# install the dependencies and packages in the requirements file
RUN apk update
RUN apk add py-pip
RUN apk add --no-cache python3-dev
ENV VIRTUAL_ENV=./venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip install --upgrade pip
RUN apk add --update gcc libc-dev linux-headers && rm -rf /var/cache/apk/*
RUN pip install psutil
RUN pip --no-cache-dir install -r requirements.txt
# copy every content from the local file to the image
COPY . /app
# configure the container to run in an executed manner
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]
EXPOSE 4000
app.py
from flask import Flask
from flask import jsonify
import os
app = Flask(__name__)
if __name__ == '__main__':
    port = os.environ.get("PORT", 5000)
    app.run(debug=False, host='0.0.0.0', port=port)
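A hedged note, since no answer is given: exit code 137 is 128 + 9, i.e. the container was killed with SIGKILL (typically docker stop after a timeout, or the out-of-memory killer). Also note the image EXPOSEs 4000 while the app defaults to port 5000. A sketch of running it locally with the port it actually listens on published (the tag matches the build command above):
docker build -t generate_test_service .
docker run --rm -p 5000:5000 generate_test_service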

Build NextJS Docker image with nginx server

I am new to Docker and trying to learn it from its documentation. As I need to create a NextJS build using a Docker image for an nginx server, I have followed the process below:
Install nginx.
Map port 80 to 3000 in the default config.
Symlink the out directory to the base nginx directory.
Use CMD to take care of the production build and the symlinking of the out directory.
FROM node:alpine AS deps
RUN apk add --no-cache libc6-compat git
RUN apt-get install nginx -y
WORKDIR /sample-app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:alpine AS builder
WORKDIR /sample-app
COPY . .
COPY --from=deps /sample-app/node_modules ./node_modules
RUN yarn build
FROM node:alpine AS runner
WORKDIR /sample-app
ENV NODE_ENV production
RUN ls -SF /sample-app/out /usr/share/nginx/html
RUN -p 3000:80 -v /sample-app/out:/usr/share/nginx/html:ro -d nginx
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /sample-app/out
USER nextjs
CMD ["nginx -g daemon=off"]
Running the build with sudo docker build . -t sample-app throws the error: The command '/bin/sh -c apt-get install nginx -y' returned a non-zero code: 127
I do not have much experience with Alpine images, but I think you have to use apk (Alpine Package Keeper) to install packages.
Try apk add nginx instead of apt-get install nginx -y.
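A minimal sketch of the first stage with that change applied, everything else as in the original:
FROM node:alpine AS deps
RUN apk add --no-cache libc6-compat git nginx
WORKDIR /sample-app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile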

How to install Unix ODBC drivers on a Unix Docker instance?

I'm trying to connect my .NET Core application, hosted in a Unix Docker container, to an external Vertica database.
It works fine from a Windows client because there are Vertica drivers for Windows, but there isn't an equivalent Vertica driver for Unix.
When I try to run a Query against Vertica I get the following error:
Dependency unixODBC with minimum version 2.3.1 is required. Unable to
load shared library 'libodbc.so.2' or one of its dependencies. In
order to help diagnose loading problems, consider setting the LD_DEBUG
environment variable: liblibodbc.so.2.so:
My Dockerfile looks like this:
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
ARG DEBIAN_FRONTEND=noninteractive
# Copy csproj and restore as distinct layers
COPY ./*.sln ./
COPY ./MyApp/*.csproj ./MyApp/
RUN dotnet restore MyApp.sln
COPY . ./
RUN dotnet publish MyApp.sln -c Release -f=netcoreapp2.1 -o out
RUN cp /app/MyApp/*.yml /app/MyApp/out
RUN cp /app/*.ini /app/MyApp/out
#ODBC
FROM microsoft/dotnet:aspnetcore-runtime
RUN apt-get update
RUN apt-get install -y apt-utils
RUN curl -O -k https://www.vertica.com/client_drivers/9.1.x/9.1.1-0/vertica-client-9.1.1-0.x86_64.tar.gz
RUN tar vzxf vertica-client-9.1.1-0.x86_64.tar.gz && rm vertica-client-9.1.1-0.x86_64.tar.gz
RUN apt-get install -y unixodbc-dev
ADD odbc.ini /root/odbc.ini
ADD odbcinst.ini /root/odbcinst.ini
ADD vertica.ini /root/vertica.ini
ENV VERTICAINI=/root/vertica.ini
ENV ODBCINI=/root/odbc.ini
RUN echo "$VERTICAINI $ODBCINI"
WORKDIR /app
COPY --from=build-env /app/MyApp/out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
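A hedged sketch of one common fix for the libodbc.so.2 load error, assuming the Debian-based aspnetcore-runtime image: install the unixODBC runtime library itself (unixodbc), not only the -dev headers, and verify the configuration paths with odbcinst:
RUN apt-get update \
 && apt-get install -y --no-install-recommends unixodbc unixodbc-dev \
 && rm -rf /var/lib/apt/lists/*
# odbcinst ships with unixODBC; -j prints the config files actually in use
RUN odbcinst -j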

Monolith docker application with webpack

I am running my monolith application in a Docker container on k8s on GKE.
The application has Python and Node dependencies, as well as webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build and deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes most of the time generating the bundle. To build the Docker image I am already using a high-spec worker.
To reduce time i tried using the Kaniko builder.
Issue:
Docker layer caching works perfectly for the Python code. But when any JS or CSS file changes, a new bundle has to be generated; instead, Docker reuses the cached layer rather than building a new bundle.
Is there any way to force a new bundle build, or to reuse the cache, by passing some value to the Dockerfile?
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
I would suggest creating separate build pipelines for your Docker images, since you know that the npm and pip requirements don't change very often.
This will improve the speed enormously, by reducing the trips to the npm and pip registries.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
You store those builder images in your registry and use them whenever something in the project changes.
Something like this:
First Docker Builder // python-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker Builder // js-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your Application Multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing, already "stoned" in another build process
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
A question: why do you install curl and run the pip install -r requirements.txt command again in the final Docker image?
Triggering an apt-get update and install every time without cleaning the apt cache (/var/cache/apt) produces a bigger image.
As a suggestion, use the docker build command with the --no-cache option to avoid cached results:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as I listed above.
Build Docker images 1 and 2 only when you change your Python pip or Node npm requirements; otherwise keep them fixed for your project.
If any dependency requirement changes, update the builder image involved, then point the multi-stage build at the latest built image.
You should only ever rebuild the source code of your project (CSS, JS, Python); this way you also get reproducible builds.
To optimize your environment and the copying of files across the multi-stage builders, try using virtualenv for the Python build.
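To address the cache question directly, one technique the answer doesn't spell out (a sketch; CACHE_BUST is a hypothetical argument name): a build argument invalidates the cache from the first instruction that uses it, so passing a fresh value forces the bundle step to rerun while earlier layers stay cached:
ARG CACHE_BUST=none
# Referencing the ARG makes this layer (and all later ones) rebuild
# whenever a new value is passed on the command line.
RUN echo "bust=${CACHE_BUST}" && npm run sass && npm run build
Invoked, for example, with the current git revision as the value:
docker build --build-arg CACHE_BUST=$(git rev-parse HEAD) -t app_delivery:YOUR_TAG -f Dockerfile.app .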

Dockerising pelican project

I'm trying to dockerise my pelican site project. I've created a docker-compose.yml file and a Dockerfile.
However, every time I try to build my project (docker-compose up) I get the following errors for both pip install and npm install:
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
...
Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
The directory structure of the project is as follows:
- **Dockerfile**
- **docker-compose.yml**
- content/
- pelican-plugins/
- src/
- Themes/
- Pelican config files
- requirements.txt
- gulpfile.js
- package.json
All the pelican makefiles etc. are in the src directory.
I'm trying to load the content, src, and pelican-plugins directories as volumes so I can modify them on my local machine for the docker container to use.
Here is my Dockerfile:
FROM python:3
WORKDIR /src
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
# Install Node.js 8 and npm 5
RUN apt-get update
RUN apt-get -qq update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install -y nodejs
# Set the locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN npm install
RUN python -m pip install --upgrade pip
RUN pip install -r requirements.txt
ENV SRV_DIR=/src
RUN chmod +x $SRV_DIR
RUN make clean
VOLUME /src/output
RUN make devserver
RUN gulp
And here is my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    ports:
      - "80:80"
    volumes:
      - ./content:/content
      - ./src:/src
      - ./pelican-plugins:/pelican-plugins
volumes:
  logvolume01: {}
It definitely looks like I have set up my volume directories properly in these files...
Thanks in advance!
Your Dockerfile doesn't COPY (or ADD) any files at all, so the /src directory is empty.
You can verify this yourself. When you run docker build it will print out output like:
Step 13/22 : ENV LC_ALL en_US.UTF-8
---> Running in 3ab80c3741f8
Removing intermediate container 3ab80c3741f8
---> d240226b6600
Step 14/22 : RUN npm install
---> Running in 1d31955d5b28
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
The last line in each step with just a hex number is actually a valid image ID that's the final result of running each step, and you can then:
% docker run --rm -it d240226b6600 sh
# pwd
/src
# ls
To fix this you need a line in the Dockerfile like
COPY . .
You probably also need to change into the src subdirectory to run npm install and the like as you've shown your directory layout. This can look like:
WORKDIR /src
COPY . .
# Either put "cd" into the command itself
# (Each RUN command starts a fresh container at the current WORKDIR)
RUN cd src && npm install
# Or change WORKDIRs
WORKDIR /src/src
RUN pip install -r requirements.txt
WORKDIR /src
Remember that everything in the Dockerfile happens before any setting in docker-compose.yml outside the build: block is even considered. Environment variables, volume mounts, and networking options for a container have no effect on the image build sequence.
In terms of Dockerfile style, your VOLUME declaration will have some tricky unexpected side effects and probably is unnecessary; I'd remove it. Your Dockerfile is also missing the CMD that the container should run. You should also combine RUN apt-get update && apt-get install into single commands; the way Docker layer caching works and the way the Debian repositories work, it's very easy to wind up with a cached package index that names files from a week ago that don't exist any more.
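For example, the combined form might look like this (a sketch using the packages from the question's Dockerfile):
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential curl nodejs \
 && rm -rf /var/lib/apt/lists/*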
While the setup you're describing is fairly popular, it also essentially hides everything the Dockerfile does with your local source tree. The npm install you're describing here, for example, will be a no-op because the volume mount will hide /src/src/node_modules. I generally find it easier to just run python, npm, etc. locally while I'm developing, rather than write and debug this 50-line YAML file and run sudo docker-compose up.
