How to install Unix ODBC drivers in a Linux Docker container?

I'm trying to connect my .NET Core application, hosted in a Linux Docker container, to an external Vertica database.
It works fine from a Windows client because there are Vertica drivers for Windows, but I haven't been able to get the Vertica ODBC driver working under Linux.
When I try to run a query against Vertica I get the following error:
Dependency unixODBC with minimum version 2.3.1 is required. Unable to load shared library 'libodbc.so.2' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibodbc.so.2.so:
My Dockerfile looks like this:
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
ARG DEBIAN_FRONTEND=noninteractive
# Copy csproj and restore as distinct layers
COPY ./*.sln ./
COPY ./MyApp/*.csproj ./MyApp/
RUN dotnet restore MyApp.sln
COPY . ./
RUN dotnet publish MyApp.sln -c Release -f=netcoreapp2.1 -o out
RUN cp /app/MyApp/*.yml /app/MyApp/out
RUN cp /app/*.ini /app/MyApp/out
#ODBC
FROM microsoft/dotnet:aspnetcore-runtime
RUN apt-get update
RUN apt-get install -y apt-utils
RUN curl -O -k https://www.vertica.com/client_drivers/9.1.x/9.1.1-0/vertica-client-9.1.1-0.x86_64.tar.gz
RUN tar vzxf vertica-client-9.1.1-0.x86_64.tar.gz && rm vertica-client-9.1.1-0.x86_64.tar.gz
RUN apt-get install -y unixodbc-dev
ADD odbc.ini /root/odbc.ini
ADD odbcinst.ini /root/odbcinst.ini
ADD vertica.ini /root/vertica.ini
ENV VERTICAINI=/root/vertica.ini
ENV ODBCINI=/root/odbc.ini
RUN echo "$VERTICAINI $ODBCINI"
WORKDIR /app
COPY --from=build-env /app/MyApp/out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
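For reference, the odbcinst.ini and odbc.ini files that this Dockerfile copies would typically look something like the sketch below; the driver path and DSN values are assumptions (the Vertica client tarball usually unpacks under opt/vertica, with the ODBC library at lib64/libverticaodbc.so), so adjust them to wherever the tar step above actually extracts the driver.
# /root/odbcinst.ini - registers the driver (library path is an assumption)
[Vertica]
Description = Vertica ODBC driver
Driver = /opt/vertica/lib64/libverticaodbc.so
# /root/odbc.ini - defines a DSN pointing at the driver above (host and database are placeholders)
[VerticaDSN]
Description = External Vertica database
Driver = Vertica
ServerName = vertica.example.com
Database = mydb
Port = 5433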

Related

How to install an rpm package in an ASP.NET 6 app Docker image?

I want to host an ASP.NET 6.0 application in a Docker container on AlmaLinux (or any CentOS-compatible distro). The application also needs the MapR Drill ODBC client, which is available only as an rpm package. The problem is that the base images Microsoft provides are based on Debian, Alpine and Ubuntu only (see here), and none of them lets me install an rpm package.
I also tried converting the rpm to a deb using alien; the conversion succeeded and the package was installed, but for some unknown reason the installation created a 400 MB layer (it takes only 80 MB on CentOS), and the client doesn't appear to work either.
What other options do I have to install an rpm package in an ASP.NET 6 app Docker image?
Here is my Dockerfile, which understandably fails on line 29:
# base
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
# build
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["HelloWorld/HelloWorld.csproj", "HelloWorld/"]
RUN dotnet restore "HelloWorld/HelloWorld.csproj"
COPY . .
WORKDIR "/src/HelloWorld"
RUN dotnet build "HelloWorld.csproj" -c Release -r rhel-x64 --self-contained false -o /app/build
# publish
FROM build AS publish
RUN dotnet publish "HelloWorld.csproj" -c Release -r rhel-x64 --self-contained false -o /app/publish
FROM base AS final
RUN apt update
WORKDIR /app
COPY --from=publish /app/publish .
WORKDIR /mapr
RUN apt-get -yq install wget
RUN wget http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.5.1.1002/maprdrill-1.5.1.1002-1.el7.x86_64.rpm
RUN rpm -i maprdrill-1.5.1.1002-1.el7.x86_64.rpm
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
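One option the post does not cover: keep the Microsoft SDK image for the build and publish stages, but base the final runtime stage on an RPM distribution and install the ASP.NET Core runtime from the distro's packages, so that rpm -i works natively. A rough sketch, assuming the .NET 6 runtime package is available from AlmaLinux 8's AppStream repository:
FROM almalinux:8 AS final
# assumption: the ASP.NET Core 6 runtime is available in the AppStream repo
RUN dnf install -y aspnetcore-runtime-6.0 wget && dnf clean all
WORKDIR /mapr
RUN wget http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.5.1.1002/maprdrill-1.5.1.1002-1.el7.x86_64.rpm \
 && rpm -i maprdrill-1.5.1.1002-1.el7.x86_64.rpm
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
Since the publish step already uses --self-contained false, the framework-dependent output should run on the distro-provided runtime.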

How do I modify my Dockerfile to install wget into a Kubernetes pod?

Right now my Dockerfile builds a dotnet image that is installed/updated and run inside its own pod in a Kubernetes cluster.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
ARG ArtifactPAT
WORKDIR /src
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
COPY /src .
RUN dotnet restore "./sourceCode.csproj" -s "https://api.nuget.org/v3/index.json"
RUN dotnet build "./sourceCode.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "./sourceCode.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "SourceCode.dll"]
EXPOSE 80
The cluster is very bare-bones and does not include either curl or wget. So I need to get wget or curl installed in the pod to execute scripted commands that are set to run automatically after deployment and startup are completed. The command to do the install:
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
within the Dockerfile seems to do nothing to install it in the Kubernetes cluster, because after the build runs and deploys, if I exec into the pod and try to run
wget --help
I find that wget doesn't exist. I do not have a lot of experience building Dockerfiles, so I am truly stumped. And I want this automated in the Dockerfile, as I will not be able to log into environments above our Test environment to perform the install manually.
It's not related to Kubernetes or pods. You can't actually install anything into a Kubernetes pod itself; you install packages into the containers that run in the pod.
Your problem is that you install wget into your build image. When you switch to another image further down, you lose all the installed packages, because those packages belong to the build image; build, base and final are different images. You need to copy files into the final image explicitly, like you already do here:
COPY --from=publish /app .
So add the command below to your final image and you can use wget without any problem:
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
See this link for more info and best practices:
https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/
Everything between:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
WORKDIR /app
and:
FROM base AS final
is irrelevant. With that line, you start constructing a new image from base which was defined in the first block.
(Incidentally, on the next line you duplicate the WORKDIR statement needlessly. Also, final just names this last stage; since nothing ever refers to it - you don't do e.g. COPY --from=final - the alias doesn't really serve a purpose here.)
You need to install wget in either the base image, or in the last defined image which you'll actually be running, at the end.
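Putting the two answers together, a minimal sketch of the corrected final stage (the earlier stages stay as they are) could look like this:
FROM base AS final
# install wget in the image that will actually run in the pod
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
# WORKDIR /app is inherited from base, so it is not repeated here
COPY --from=publish /app .
EXPOSE 80
ENTRYPOINT ["dotnet", "SourceCode.dll"]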

Monolith docker application with webpack

I am running my monolith application in a Docker container on k8s on GKE.
The application has Python and Node dependencies, and uses webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build and deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes the most time, generating the bundle. To build the Docker image I am already using a high-spec worker.
To reduce the time I tried using the Kaniko builder.
Issue:
Because Docker caches layers, the Python part works perfectly. But when there is any change in a JS or CSS file we have to generate the bundle again.
When there is a change in a JS or CSS file, instead of generating a new bundle it uses the cached layer.
Is there any way to separate out building a new bundle vs. using the cache, for example by passing some value to the Dockerfile?
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
I would suggest creating separate build pipelines for your Docker images, for the parts where you know the npm and pip requirements don't change frequently.
This will greatly improve the speed by avoiding repeated access to the npm and pip registries.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
You store those builder images in your registry and reuse them whenever something in the project changes.
Something like this:
First Docker builder // python-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker builder // js-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your Application Multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing to do, the pip dependencies are already baked into this builder image
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
A question: why do you install curl and run the pip install -r requirements.txt command again in the final Docker image?
Triggering an apt-get update and install every time without cleaning the apt cache (/var/cache/apt) produces a bigger image.
As a suggestion, use the docker build command with the --no-cache option to avoid cached results:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as I listed above.
Build the Docker images 1 and 2 only if you change your python-pip and node-npm requirements, otherwise keep them fixed for your project.
If any dependency requirement changes, then update the docker image involved and then the multistage one to point to the latest built image.
You should always build only the source code of your project (CSS, JS, python). In this way, you have also guaranteed reproducible builds.
To optimize your environment and make copying files across the multi-stage builders easier, try using virtualenv for the python build (see the sketch after this list).
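As an illustration of that last remark, here is a hedged sketch of a virtualenv-based python builder, so the final image copies one ready-made directory instead of re-running pip; the /opt/venv path is an arbitrary choice, not something from the original answer.
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
# install everything into an isolated virtualenv that can be copied as a single directory
RUN python -m venv /opt/venv \
 && /opt/venv/bin/pip install -r requirements.txt \
 && /opt/venv/bin/pip install Flask-JWT-Extended==3.20.0
FROM python:3.5-slim
# copy the ready-made environment instead of running pip install again
COPY --from=python-build /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /app
COPY . .
EXPOSE 9595
CMD ["python3", "run.py"]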

Docker - "The command '/bin/sh -c apt-get install nodejs' returned a non-zero code: 100"

I am trying to build a Docker image for my API with the following Dockerfile:
FROM microsoft/dotnet AS build-env
ARG source
RUN echo "source: $source"
WORKDIR /app
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install nodejs
RUN node -v
RUN npm -v
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
#Copy everything else & build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "API_App.dll"]
However, when I run the docker build command, I keep getting the following error:
Unable to locate package nodejs
The command '/bin/sh -c apt-get install nodejs' returned a non-zero code: 100
Can someone tell me why I am getting this error?
Node Version: 8.11.3
npm Version: 5.6.0
You may occasionally experience some cache issues when the live repositories you’re pulling data from have changed.
To fix this, modify the Dockerfile to do a cleanup and update of the sources before you install any new packages.
...
# clean and update sources
RUN apt-get clean && apt-get update
...
This answer comes from a DigitalOcean issue.
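Applied to the Dockerfile above, that suggestion would look roughly like this; note that apt-get install usually also needs the -y flag, otherwise its confirmation prompt aborts a non-interactive docker build:
# clean and refresh the package lists before adding the NodeSource repo
RUN apt-get clean && apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install -y nodejs
RUN node -v
RUN npm -v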

Dockerfile: How can I build my project before copying only the needed files to the image?

FROM golang:1.8
RUN apt-get -y update && apt-get install -y curl
RUN go get -u github.com/gorilla/mux
RUN go get github.com/mattn/go-sqlite3
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y nodejs
COPY . /go/src/beginnerapp
WORKDIR ./src/beginnerapp/beginner-app-react
RUN npm run build
RUN go install beginnerapp/
WORKDIR /go/src/beginnerapp/beginner-app-react
VOLUME /go/src/beginnerapp/local-db
WORKDIR /go/src/beginnerapp
ENTRYPOINT /go/bin/beginnerapp
EXPOSE 8080
At the start, neither the golang project nor the reactjs code exists in the image, and they need to be copied over before being able to build (js) / install (golang). Is there a way I can do that build/install process before copying files over to the image? Ideally I'd only need to copy over the golang executable and the reactjs production build.
Yes, this is possible now using multi-stage builds. The idea is that you can have multiple FROM statements in your Dockerfile, and your main image will be built from the last FROM. Below is a sample pseudo structure:
FROM node:latest as reactbuild
WORKDIR /app
COPY . .
RUN webpack build
FROM golang:latest as gobuild
WORKDIR /app
COPY . .
RUN go build
FROM alpine
WORKDIR /app
COPY --from=gobuild /app/myapp /app/myapp
COPY --from=reactbuild /app/dist /app/dist
Please read the article below for more details:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
