How to add GDAL in docker

I am trying to set up Docker and GeoDjango. When I run docker-compose up I get the following error:
django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library (tried "gdal", "GDAL", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0", "gdal1.11.0", "gdal1.10.0", "gdal1.9.0"). Is GDAL installed? If it is, try setting GDAL_LIBRARY_PATH in your settings.
GDAL is a library that can be found in this image wooyek/geodjango
Dockerfile
FROM wooyek/geodjango
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
docker-compose.yml
services:
  web:
    build: .
    container_name: web
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: mdillon/postgis
    #command: -e POSTGRES_USER=johndoe -e POSTGRES_PASSWORD=myfakedata -e POSTGRES_DB=myfakedata library/postgres
    environment:
      - POSTGRES_USER=johndoe
      - POSTGRES_PASSWORD=myfakedata
      - POSTGRES_DB=myfakedata
    ports:
      - "5435:5432"
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

Try adding the following in your Dockerfile:
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gdal-bin
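Merged into the Dockerfile from the question, the extra line would sit right after the FROM, before any Python dependencies are installed (a sketch; binutils, libproj-dev and gdal-bin are the Debian/Ubuntu system packages GeoDjango expects):
FROM wooyek/geodjango
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
    apt-get install -y binutils libproj-dev gdal-bin
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/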

You can add the following to your Dockerfile:
# Install GDAL dependencies
RUN apt-get install -y libgdal-dev g++ --no-install-recommends && \
apt-get clean -y
# Update C env vars so compiler can find gdal
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
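If the error persists after installing the packages, it usually means Django's library lookup cannot find the shared object under any of the names it tries. You can check where the library actually lives inside the container (a sketch; the service name web matches the compose file above) and then point GDAL_LIBRARY_PATH in settings.py at that path:
docker-compose exec web bash -c 'ldconfig -p | grep -i gdal'
# or, if ldconfig shows nothing:
docker-compose exec web find / -name "libgdal*" -print 2>/dev/null
Whichever path that prints (for example something like /usr/lib/libgdal.so.20) is the value GDAL_LIBRARY_PATH should be set to.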

Related

Docker not loading files from npm run serve

I have a development environment where I run npm run serve in my local terminal and then docker-compose up -d in a different terminal to run the services I need to start my system.
I am attempting to run front-end tests with NightwatchJS inside a running container, and for some reason the test runner is not seeing the files served by npm run serve. When I capture a screenshot from the test runner, the page looks as if I had stopped npm run serve; however, when I visit 127.0.0.1 in my browser, everything loads as usual.
I think my issue is that the test is being run inside of a docker container like so:
docker-compose exec web bash -c "npx nightwatch ...file"
where that specific container is not running npm run serve, but I am confused as to why the site works when I visit it in my own browser. I have tried exposing ports in the Dockerfile, but that does not work.
Can anybody point me in the right direction?
Here is my Dockerfile:
FROM python:3.8.5-slim-buster
# the first 2 prevent Python from writing out pyc files or from buffering stdin/stdout
# the others are Node
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 12.7.0
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# the man1 directory is not present for slim-buster so we add that and then install all of the default system based dependencies
# NOTE...TOP LAYERS ARE CACHED FIRST!!!!
RUN mkdir -p /usr/share/man/man1 \
&& apt-get clean && apt-get update -y && apt-get install pdftk-java curl git -y \
&& curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash \
&& apt-get install zlib1g-dev libjpeg-dev python3-pythonmagick inkscape xvfb poppler-utils libfile-mimeinfo-perl qpdf libimage-exiftool-perl ufraw-batch ffmpeg gcc procps -y \
&& apt-get clean && apt-get autoclean
# SELENIUM
# get wget...
# Adding trusting keys to apt for repositories
RUN apt-get install gnupg -y && apt-get install wget -y \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list' \
&& apt-get update -y \
&& apt-get install google-chrome-stable -y \
&& apt-get install unzip -yqq
# Set up Chromedriver Env Vars
ENV CHROMEDRIVER_VERSION 87.0.4280.20
ENV CHROMEDRIVER_DIR /chromedriver
# make directory for it...
RUN mkdir $CHROMEDRIVER_DIR
# Download and install Chromedriver
RUN wget -q --continue -P $CHROMEDRIVER_DIR "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip" \
&& unzip $CHROMEDRIVER_DIR/chromedriver* -d $CHROMEDRIVER_DIR \
&& rm "$CHROMEDRIVER_DIR/chromedriver_linux64.zip"
# Put Chromedriver into the PATH
ENV PATH $CHROMEDRIVER_DIR:$PATH
# Set display port as an environment variable
ENV DISPLAY=:99
# SELENIUM
## NIGHTMARE
#RUN apt-get install wget -y && wget http://selenium-release.storage.googleapis.com/2.44/selenium-server-standalone-2.44.0.jar -P /bin/
#RUN apt install default-jre -y
#RUN apt-get install -y xvfb x11-xkb-utils xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic x11-apps clang libdbus-1-dev libgtk2.0-dev libnotify-dev libgconf2-dev libasound2-dev libcap-dev libcups2-dev libxtst-dev libxss1 libnss3-dev gcc-multilib g++-multilib
# ensure node is installed, and at the end, make the working directory
RUN . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& mkdir /code
# set working directory to /code...it was just made for this purpose
WORKDIR /code
# possible that these will cache so separate them from COPY . /code/
COPY requirements.txt /code/
# now install, this will normally also cache
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# place this at the end because the code will always change...this will almost never cache...
COPY . /code/
EXPOSE 8001
EXPOSE 8888
Here is my compose file:
version: '3.4'
services:
  redis:
    image: redis
    ports:
      - "6379"
    restart: unless-stopped
    networks:
      main:
        aliases:
          - redis
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file: ./.env
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      main:
        aliases:
          - postgres
  #access by going to localhost:16543
  #when adding a server to the serve list
  #the hostname is postgres
  #the username is postgres
  #the password is postgres
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    env_file: ./.env
    restart: unless-stopped
    ports:
      - "16543:80"
    networks:
      main:
        aliases:
          - pgadmin
  celery:
    build:
      network: host
      context: .
      dockerfile: Dockerfile-dev # use docker-dev because production npm installs and npm builds
    command: python manage.py celery
    env_file: ./.env
    restart: unless-stopped
    volumes:
      - .:/code
      - tmp:/tmp
    links:
      - redis
    depends_on:
      - redis
    networks:
      main:
        aliases:
          - celery
  web:
    build:
      network: host
      context: .
      dockerfile: Dockerfile-dev
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - tmp:/tmp
    ports:
      - "8000:8000"
    env_file: ./.env
    restart: unless-stopped
    links:
      - postgres
      - redis
      - celery
      - pgadmin
    depends_on:
      - postgres
      - redis
      - celery
      - pgadmin
    networks:
      main:
        aliases:
          - web
volumes:
  pgdata:
  tmp:
networks:
  main:

build path either does not exist, is not accessible, or is not a valid URL

I am trying to figure out Docker so I can run my Django REST Framework + Vue.js project in the cloud. I built a Dockerfile and a docker-compose.yml to start an Ubuntu machine and run the PostgreSQL, Vue.js and DRF containers. But when I run docker-compose build I get the following message:
build path either does not exist, is not accessible, or is not a valid URL
Here is my Dockerfile:
RUN apt-get update && apt-get install -y \
gcc \
musl-dev \
node.js \
postgresql-server-dev-10 \
apt-utils \
python3.7 \
python3.7-dev \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN npm install webpack@2.9
WORKDIR /app
COPY requirements.txt /app
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /app
docker-compose.yml:
version: '3.5'
services:
  postgres:
    image: postgres:10
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 8599
      POSTGRES_DB: adserver
    volumes:
      - adserver-data/postgresql/data:/var/lib/postgresql/data
    restart: always
  rest_framework:
    build:
      context: ./app/adserver
      dockerfile: Dockerfile
    depends_on:
      - postgres
    command: ['python manage.py runserver']
    restart: always
  vue:
    build:
      context: ./app/adserver-vue
    depends_on:
      - rest_framework
    command: ['npm run watch']
Please tell me what I am doing wrong.
Verify the folder names: the directory ./app/adserver-vue must exist, with exactly the name used in docker-compose.yml, relative to the compose file.
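To confirm, from the directory that contains docker-compose.yml you can check that both build contexts really exist (assuming the layout the compose file expects):
ls ./app/adserver
ls ./app/adserver-vue
# build context paths are resolved relative to docker-compose.yml,
# and each context must contain the Dockerfile used to build that service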

New code changes exist in live container but are not reflected in the browser

I am using Docker with the open source BI tool Apache Superset. I have added a new file, specifically a .geojson file in the CountryMap directory. Now, when I try to build using docker-compose up --build or make changes in the frontend, Docker is not fully updated, and I get a file not found error when trying to run a query. When I look inside the container via docker exec -it container_id bash, the new file is there.
Dockerfile:
FROM python:3.6-jessie
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash superset
# Configure environment
ENV LANG=C.UTF-8 \
LC_ALL=C.UTF-8
RUN apt-get update -y
# Install dependencies to fix `curl https support error` and `delaying package configuration warning`
RUN apt-get install -y apt-transport-https apt-utils
# Install superset dependencies
# https://superset.incubator.apache.org/installation.html#os-dependencies
RUN apt-get install -y build-essential libssl-dev \
libffi-dev python3-dev libsasl2-dev libldap2-dev libxi-dev
# Install extra useful tool for development
RUN apt-get install -y vim less postgresql-client redis-tools
# Install nodejs for custom build
# https://superset.incubator.apache.org/installation.html#making-your-own-build
# https://nodejs.org/en/download/package-manager/
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt-get install -y nodejs
WORKDIR /home/superset
COPY requirements.txt .
COPY requirements-dev.txt .
COPY contrib/docker/requirements-extra.txt .
RUN pip install --upgrade setuptools pip \
&& pip install -r requirements.txt -r requirements-dev.txt -r requirements-extra.txt \
&& rm -rf /root/.cache/pip
RUN pip install gevent
COPY --chown=superset:superset superset superset
ENV PATH=/home/superset/superset/bin:$PATH \
PYTHONPATH=/home/superset/superset/:$PYTHONPATH
USER superset
RUN cd superset/assets \
&& npm ci \
&& npm run build \
&& rm -rf node_modules
COPY contrib/docker/docker-init.sh .
COPY contrib/docker/docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD ["curl", "-f", "http://localhost:8088/health"]
EXPOSE 8088
docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:3.2
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis:/data
  postgres:
    image: postgres:10
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_USER: superset
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
  superset:
    build:
      context: ../../
      dockerfile: contrib/docker/Dockerfile
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_USER: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      REDIS_HOST: redis
      REDIS_PORT: 6379
      # If using production, comment development volume below
      #SUPERSET_ENV: production
      SUPERSET_ENV: development
      # PYTHONUNBUFFERED: 1
    user: root:root
    ports:
      - 8088:8088
    depends_on:
      - postgres
      - redis
    volumes:
      # this is needed to communicate with the postgres and redis services
      - ./superset_config.py:/home/superset/superset/superset_config.py
      # this is needed for development, remove with SUPERSET_ENV=production
      - ../../superset:/home/superset/superset
volumes:
  postgres:
    external: false
  redis:
    external: false
Why do I get a file not found error?
Try using absolute paths in the volumes:
volumes:
- /home/me/my_project/superset_config.py:/home/superset/superset/superset_config.py
- /home/me/my_project/superset:/home/superset/superset
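You can also ask Compose to print the fully resolved configuration before starting anything; relative host paths in volumes are expanded to absolute ones in the output:
docker-compose config
# the volumes: entries in the output show the absolute host paths that will be mounted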
It is because docker-compose is using its cache: if the Dockerfile and docker-compose.yml have not changed, it does not recreate the containers. To avoid this you should use the following flag:
--force-recreate
    Recreate containers even if their configuration and image haven't changed.
For development purposes I like to use the following switch as well:
-V, --renew-anon-volumes
    Recreate anonymous volumes instead of retrieving data from the previous containers.
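Put together, a development rebuild that avoids stale containers and stale anonymous volumes could look like this:
docker-compose up --build --force-recreate --renew-anon-volumes
# --build               rebuild the images before starting the containers
# --force-recreate      recreate containers even if their configuration and image are unchanged
# --renew-anon-volumes  (-V) recreate anonymous volumes instead of reusing data from previous containers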

Docker - No such file or directory. Copy does not copy all files

The Dockerfile does not copy all of the files from the local directory.
I don’t understand why it copies the backend folder but not all of the files inside it.
Thanks for the help.
structure:
docker/
  django/
    Dockerfile
    backend/
      requirements.txt
      src/
  angular4/
    Dockerfile
    client
  docker-compose.yml
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:9.6
    hostname: db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: alinta
    ports:
      - "5432:5432"
  backend:
    build: ./django
    image: alinta
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/var/www/alinta/
    ports:
      - "8000:8000"
    depends_on:
      - db
  migration:
    image: alinta
    command: python3 manage.py migrate --noinput
    volumes:
      - .:/var/www/alinta/
    depends_on:
      - db
  frontend:
    build: ./angular4
    volumes:
      - .:/var/www/alinta
    ports:
      - "4200:4200"
Dockerfile (django):
FROM postgres:9.6
RUN apt-get update && apt-get install -q -y postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6 postgresql-client-common postgresql-common
#RUN echo postgres:postgres | chpasswd
#RUN pg_createcluster 9.6 main --start
#RUN /etc/init.d/postgresql start
FROM python:3.7
MAINTAINER Nikita Alekseev <nik_alekseev@outlook.com>
# Alinta
# Version: 1.0
# Install Python and Package Libraries
RUN apt-get update && apt-get upgrade -y && apt-get autoremove && apt-get autoclean
RUN apt-get install -y \
libffi-dev \
libssl-dev \
libxml2-dev \
libxslt-dev \
libjpeg-dev \
libfreetype6-dev \
zlib1g-dev \
net-tools \
vim
RUN apt-get install -y \
python3-pip \
python3-dev \
python3-virtualenv \
libpq-dev \
postgresql \
postgresql-contrib \
nginx \
curl
RUN pip3 install virtualenv
# Project Files and Settings
ARG PROJECT=alinta
ARG PROJECT_DIR=/var/www/${PROJECT}
RUN mkdir -p $PROJECT_DIR/backend/src
RUN mkdir -p $PROJECT_DIR/backend/src/static
RUN mkdir -p $PROJECT_DIR/backend/src/media
#WORKDIR $PROJECT_DIR
COPY ./backend /var/www/alinta/
WORKDIR $PROJECT_DIR/backend
RUN virtualenv -p python3.7 --no-site-packages env
RUN /bin/bash -c "source env/bin/activate"
RUN pip3 install -r requirements.txt
WORKDIR $PROJECT_DIR/backend/src
EXPOSE 8000
docker-compose build
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
In your Dockerfile, requirements.txt lives in the backend directory rather than at the level pip is run from, so adding or copying it explicitly will make it visible:
so add:
ADD ./backend/requirements.txt requirements.txt
or
COPY ./backend/requirements.txt requirements.txt
before you run
RUN pip3 install -r requirements.txt
Just copying the backend directory as:
COPY ./backend /var/www/alinta/
does not tell Docker where your requirements.txt should end up relative to the working directory; an explicit ADD or COPY instructs Docker exactly where to place the file.
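A minimal sketch of that part of the Dockerfile with the fix in place (paths as in the question):
COPY ./backend /var/www/alinta/
WORKDIR $PROJECT_DIR/backend
# put requirements.txt into the current working directory so the next step can find it
COPY ./backend/requirements.txt requirements.txt
RUN pip3 install -r requirements.txt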

How to run php-fpm in docker-compose.yml?

I tried to build a container using docker-compose, so I wrote the Dockerfile and docker-compose.yml as follows:
Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y expect
RUN apt-get -y install software-properties-common
RUN apt-add-repository ppa:ondrej/php
RUN apt-get -y install php7.1 php7.1-fpm
RUN apt-get install php7.1-mysql
RUN apt-get -y install nginx
RUN apt-get -y install vim
COPY default /etc/nginx/sites-available/default
COPY www.conf /etc/php/7.1/fpm/pool.d/www.conf
COPY test /var/www/html/test
CMD service php7.1-fpm start && nginx -g "daemon off;"
docker-compose.yml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3011:80"
When I run the following commands, php7.1-fpm starts successfully:
docker-compose build
docker-compose up --force-recreate -d
But I want to move the CMD from the Dockerfile into docker-compose, so I changed the files as follows:
docker-compose.yml
command: service php7.1-fpm start && nginx -g "daemon off;"
But this time php7.1-fpm is not running.
How can I fix this issue so that php7.1-fpm runs when started from docker-compose.yml?
You cannot rely on service php7.1-fpm start in your Dockerfile: a container is just a process, not a real virtual machine, so when the main process goes down everything else in the container goes down with it.
Docker suggests splitting them into different containers: php-fpm in one, nginx in another; one image, one container.
Solution:
docker/php-fpm/Dockerfile
FROM php:7.2-fpm
RUN docker-php-ext-install pdo pdo_mysql mbstring
docker-compose.yml:
version: '2.1'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8001:80
    volumes:
      - ./:/app
      # nginx configs
      - ./docker/nginx/conf/nginx.conf:/etc/nginx/nginx.conf
  php-fpm:
    build: ./docker/php-fpm
    volumes:
      - ./:/app
  php-composer:
    restart: 'no'
    image: composer
    volumes:
      - ./:/app
    command: install
  nodejs:
    restart: 'no'
    image: node:8.9
    volumes:
      - ./:/app
    command: /bin/bash -c "cd /app && npm install && npm run prod"
networks:
  default:
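The compose file above mounts ./docker/nginx/conf/nginx.conf into the nginx container but does not show its contents. A minimal sketch, assuming the PHP code is mounted at /app as in the volumes above (php-fpm:9000 resolves because php-fpm is the compose service name):
events {}
http {
    server {
        listen 80;
        root /app;
        index index.php index.html;
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass php-fpm:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
}
With that in place, docker-compose up -d --build starts nginx and php-fpm as separate containers, which is exactly the split described above.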
