Running Rust sqlx migrations locally with docker-compose - docker

I'm working through Zero to Prod in Rust and I've gone off script a bit. I'm working on dockerizing the whole setup locally including the database. On ENTRYPOINT the container calls a startup script that attempts to call sqlx migrate run, leading to the error ./scripts/init_db.sh: line 10: sqlx: command not found.
I think I've worked out that because I'm using bullseye-slim as the runtime image, the Rust toolchain from the build stage isn't kept around in the final image, which helps with build time and image size.
Is there a way to run sqlx migrations without having Rust, Cargo, etc. installed? Or is there a better way altogether to accomplish this? I'd like to avoid just reinstalling everything in the bullseye-slim image and losing some of the Docker optimization there.
# Dockerfile
# .... chef segment omitted
FROM chef as builder
COPY --from=planner /app/recipe.json recipe.json
# Build our project dependencies, not our application!
RUN cargo chef cook --release --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
COPY . .
ENV SQLX_OFFLINE=true
# Build our project
RUN cargo build --release --bin my_app
FROM debian:bullseye-slim AS runtime
WORKDIR /app
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends openssl ca-certificates \
&& apt-get install -y --no-install-recommends postgresql-client \
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/my_app my_app
COPY configuration configuration
COPY scripts scripts
RUN chmod -R +x scripts
ENTRYPOINT ["./scripts/docker_startup.sh"]
docker-compose.yml looks like this:
version: '3'
services:
  db:
    image: postgres:latest
    environment:
      - POSTGRES_DB=my_app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  app:
    image: my_app
    environment:
      # host is the compose service name (db); credentials join with '@'
      - DATABASE_URL=postgres://postgres:password@db:5432/my_app
    depends_on:
      - db
    ports:
      - "8080:8080"
volumes:
  dbdata:
    driver: local

You can install sqlx-cli with cargo install in your build stage
cargo install sqlx-cli
then copy the binary over to the deployment stage. Note that the installed binary is named sqlx (not sqlx-cli), and COPY --from does not expand the builder's $HOME, so use an absolute path; with the official rust base image, CARGO_HOME is /usr/local/cargo:
COPY --from=builder /usr/local/cargo/bin/sqlx sqlx
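With the binary in place, the startup script can run the migrations before launching the app. A minimal sketch of what scripts/docker_startup.sh could look like, assuming DATABASE_URL is set as in the compose file and your migrations directory is also copied into the runtime image:
#!/usr/bin/env bash
set -euo pipefail

# Wait for Postgres to accept connections (pg_isready comes from the
# postgresql-client package the runtime stage already installs)
until pg_isready -d "$DATABASE_URL"; do
  echo "Waiting for Postgres..."
  sleep 1
done

# Apply pending migrations, then hand off to the app binary
./sqlx migrate run
exec ./my_app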
Or you can run the migrations when your application starts with the migrate! macro
sqlx::migrate!("db/migrations")
.run(&pool)
.await?;
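For context, here is a minimal sketch of wiring that into startup. It assumes a Tokio async main, sqlx's postgres and tokio runtime features, and the db/migrations path used above:
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect(&std::env::var("DATABASE_URL")?)
        .await?;
    // migrate! embeds the migration files into the binary at compile time,
    // so the runtime image needs neither cargo nor the sqlx CLI
    sqlx::migrate!("db/migrations").run(&pool).await?;
    // ... start the application with `pool` ...
    Ok(())
}
Because the migrations are compiled into the binary, this approach sidesteps the missing-CLI problem entirely.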

Related

After I run docker compose up, my Mac returns an error stating that it can't find mix phx.server. How do I show Docker where my mix.exs file is?

When I run docker compose up, I receive an error:
** (Mix) The task "phx.server" could not be found
Note no mix.exs was found in the current directory
I believe it's the very last step I need to run the project. This is a Phoenix/Elixir Docker project. mix.exs is a top-level file in my project, at the same level as my Dockerfile/docker-compose file.
Dockerfile
FROM elixir:1.13.1
# Build Args
ARG PHOENIX_VERSION=1.6.6
ARG NODEJS_VERSION=16.x
# Apt
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN apt-get install -y inotify-tools
# Nodejs
RUN curl -sL https://deb.nodesource.com/setup_${NODEJS_VERSION} | bash
RUN apt-get install -y nodejs
# Phoenix
RUN mix local.hex --force
RUN mix archive.install --force hex phx_new ${PHOENIX_VERSION}
RUN mix local.rebar --force
# App Directory
ENV APP_HOME /app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY . .
# App Port
EXPOSE 4000
# Default Command
CMD ["mix", "phx.server"]
docker-compose.yml
version: "3"
services:
  book-search:
    build: .
    volumes:
      - ./src:/app
    ports:
      - "4000:4000"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_DB: "db"
      POSTGRES_HOST_AUTH_METHOD: "trust"
      POSTGRES_USER: tmclean
      POSTGRES_PASSWORD: tmclean
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      - ./pgdata:/var/lib/postgresql/data
Let me know what other questions I can answer
The problem is in your docker-compose.yml file:
volumes:
  - ./src:/app
You are overwriting /app inside the container with a probably non-existent src directory from the host. Change it to:
volumes:
  - .:/app
and it should work. However, if you do that, there is no point in copying the files in your Dockerfile, so you can also remove the
COPY . .
Alternatively, leave the COPY if you want the source files baked into the image, and remove the volumes section from the book-search service in docker-compose.yml.
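With the first option, the corrected service would look like this (a sketch based on the compose file above):
services:
  book-search:
    build: .
    volumes:
      - .:/app    # mount the project root, where mix.exs lives
    ports:
      - "4000:4000"
    depends_on:
      - db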

docker-compose - cannot use bind mount in folder created when using BUILD

I have a docker-compose file which uses a Dockerfile to build the image. In this image (Dockerfile) I created the folder /workspace, which I'd like to bind mount for persistence in my local filesystem.
After docker-compose up, the folder is empty if I bind mount it; if I do not mount this folder, everything works fine (and the folder exists with all the files I added).
This is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
command: uwsgi --ini /workspace/confs/uwsgi.ini --logger file:/workspace/logs/uswgi.log --processes 1 --workers 1 --plugins-dir=/usr/lib/uwsgi/plugins/ --plugin=python
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
depends_on:
- db
- redis
- memcached
volumes:
- ./workspace:/workspace
networks:
- asyncmail
- traefik
# db, redis and memcached are ommited here
# aditional labels for traefik is also ommited
This is my Dockerfile:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
SHELL ["/bin/bash", "-c"]
RUN mkdir /workspace
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y redis-server python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev uwsgi-plugin-python
ADD myapp /workspace/
WORKDIR /workspace/src/
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r /workspace/src/requirements.txt \
&& ./manage.py collectstatic --noinput"
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
# CMD ["uwsgi", "--ini", "/workspace/confs/uwsgi.ini", "--logger", "file:/workspace/logs/uswgi.log"]
I know there are some things that could be optimized, but when I do a docker-compose up -d, the folder ./workspace is created with only one folder inside, called src. Inside the container, /workspace only has this empty folder too.
If I remove the volumes line in docker-compose, then inside the container the folder /workspace has all the source code of my app.
What am I doing wrong that I can't bind mount the workspace folder?
PS: I know the image I'm using (ubuntu trusty) is old, but my old app only runs on this version.
Am I correct in assuming that the files you want to appear inside /workspace are actually in a folder called "myapp" on your host machine?
(It seems so from this line:)
ADD myapp /workspace/
I think you meant to map that into your Docker container, so under volumes:
volumes:
  - ./myapp:/workspace
Volume maps work one way: the folder inside the container is replaced by the contents of the mapped folder on the host, not the other way around.
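You can see this one-way behavior with any image: mounting an empty host directory over a path hides whatever the image put there. For example (the paths here are arbitrary):
mkdir empty-dir
docker run --rm -v "$PWD/empty-dir:/workspace" ubuntu:trusty ls /workspace
# prints nothing: the image's /workspace content is hidden by the empty bind mount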
I ended up adding the source code directory to the container to fix this problem. @NiRR's answer helped a lot.
The final Dockerfile was changed to no longer include the source code in the image:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ARG DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-c"]
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev
WORKDIR /workspace/src
COPY myapp/src/requirements.txt .
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r requirements.txt"
# To set timezone
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
And I changed the docker-compose to the following final version:
version: "3.9"
services:
web:
build: .
command: ./start.sh
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
volumes:
- ./myapp:/workspace
Now on container start, all the source code from myapp is available inside the container;
everything is under Git control.
If the code changes, we can do a push/pull and docker-compose up -d to restart the container; the new version will already be there.
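The start.sh referenced in command: above isn't shown; a hypothetical version, reusing the uwsgi invocation from the original compose file, might be:
#!/bin/bash
# Hypothetical start.sh: launch uwsgi against the bind-mounted /workspace
exec uwsgi --ini /workspace/confs/uwsgi.ini \
    --logger file:/workspace/logs/uswgi.log \
    --plugins-dir=/usr/lib/uwsgi/plugins/ --plugin=python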

How to deploy dockerized laravel app with elastic beanstalk?

I'm new to Docker. I'm trying to deploy a dockerized Laravel app using Elastic Beanstalk. Current Docker files -
docker-compose.yml -
version: '3'
services:
  # PHP service
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: admin
    container_name: admin-app
    restart: unless-stopped
    working_dir: /usr/share/nginx/app/
    volumes:
      - ./:/usr/share/nginx/app/
    networks:
      - app-network
  nginx:
    image: nginx:stable-alpine
    container_name: admin-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/usr/share/nginx/app/
      - ./nginx/conf.d/:/etc/nginx/conf.d
    networks:
      - app-network
# Docker networks
networks:
  app-network:
    driver: bridge
and Dockerfile
FROM php:7.4-fpm
ARG uid=1000
ARG user=sammy
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
libcurl4-openssl-dev pkg-config libssl-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
RUN pecl install mongodb && docker-php-ext-enable mongodb && \
pecl install xdebug && docker-php-ext-enable xdebug
RUN pecl config-set php_ini /etc/php.ini
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Add user for laravel application
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Copy existing application directory contents
COPY . /usr/share/nginx/app
WORKDIR /usr/share/nginx/app
RUN chown -R $user:$user .
USER $user
RUN chown -R $user:$user storage bootstrap/cache
RUN chmod -R 775 storage bootstrap/cache
RUN composer install
RUN php artisan cache:clear
RUN php artisan view:clear
RUN php artisan config:clear
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
It works fine on my local machine when I run docker compose up -d, but only if I have already run composer install; otherwise it throws an error.
It is OK for development purposes to run composer install once, but for production I don't think it is right to manually run composer install every time a new version is deployed. Doesn't the RUN composer install command in the Dockerfile install the required dependencies? I can see the progress bar of dependencies being installed, but no vendor folder is generated if I ssh into the container. Again, it works fine if I ssh into the instance and manually install the dependencies.
I have also deployed a Node.js app successfully using Elastic Beanstalk. There the dependencies were installed properly by the RUN npm install command in the Dockerfile. I don't see any difference in the process. Do I have to include the vendor folder in the zip file too? Please suggest the correct way to deploy.

How to share Docker Volume between two docker containers?

I have the following problem: I have two Docker containers, one for my app and one for NGINX. Now I want to share uploaded images from my app with the NGINX container. I tried to do that using a volume, but when I restart my app container, the images are lost. What can I do to keep the images, even after I restart or recreate the container?
My configuration:
docker-compose.yml
version: '3'
services:
  # the application
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    environment:
      - DB_USERNAME=postgres
      - DB_PASSWORD=postgres
      - DB_PORT=5432
    volumes:
      - .:/app
      - gallery:/app/public/gallery
    ports:
      - 3000:3000
    depends_on:
      - db
  # the database
  db:
    image: postgres:11.5
    volumes:
      - postgres_data:/var/lib/postgresql/data
  # the nginx server
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    volumes:
      - gallery:/app/public/gallery
    depends_on:
      - app
    ports:
      - 80:80
networks:
  default:
    external:
      name: app-network
volumes:
  gallery:
  postgres_data:
app/Dockerfile:
FROM ruby:2.7.3
RUN apt-get update -qq
RUN apt-get install -y make autoconf libtool make gcc perl gettext gperf && git clone https://github.com/FreeTDS/freetds.git && cd freetds && sh ./autogen.sh && make && make install
# for imagemagick
RUN apt-get install -y imagemagick
# for postgres
RUN apt-get install -y libpq-dev
# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# for a JS runtime
RUN apt-get install -y nodejs
# Setting an environment variable for the Rails app
ENV RAILS_ROOT /var/www/app
RUN mkdir -p $RAILS_ROOT
# Setting the working directory
WORKDIR $RAILS_ROOT
# Setting up the Environment
ENV RAILS_ENV='production'
ENV RACK_ENV='production'
# Adding the Gems
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install --jobs 20 --retry 5 --without development test
# Adding all Project files
COPY . .
RUN bundle exec rake assets:clobber
RUN bundle exec rake assets:precompile
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-p", "3000"]
web/Dockerfile:
# Base Image
FROM nginx
# Dependencies
RUN apt-get update -qq && apt-get -y install apache2-utils
# Establish where Nginx should look for files
ENV RAILS_ROOT /var/www/app
# Working Directory
WORKDIR $RAILS_ROOT
# Creating the Log-Directory
RUN mkdir log
# Copy static assets
COPY public public/
# Copy the NGINX Config-Template
COPY docker/web/nginx.conf /tmp/docker.nginx
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Rather than a volume, you can mount the same host directory into multiple Docker containers simultaneously. As long as the containers are not writing to the same file at the same time (which is not your described use case), you shouldn't have a problem.
For example:
docker run -d --name Web1 -v /home/ubuntu/images:/var/www/images httpd
docker run -d --name Other1 -v /home/ubuntu/images:/etc/app/images my-docker-image:latest
If you would rather use a Docker volume, this article will give you everything you need to know.
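Translated into your compose file, the bind-mount approach would look like this (a sketch; using ./gallery on the host is an assumption):
services:
  app:
    volumes:
      - ./gallery:/app/public/gallery
  web:
    volumes:
      - ./gallery:/app/public/gallery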

GCP Cloud Run error on deploying my Docker image to Google Container Registry / Cloud Run

I'm quite new to Docker and GCP and am trying to find a working way to deploy my Laravel app on GCP.
I already set up CI and selected "cloudbuild.yaml" as the build configuration. I followed innumerable tutorials and read the Google Container docs, so I created a "cloudbuild.yaml" which includes arguments to use my docker-compose.yaml to create the stack of my app (app code, database, server).
During the Google Cloud Build process i get:
Step #0: Creating workspace_app_1 ...
Step #0: Creating workspace_web_1 ...
Step #0: Creating workspace_db_1 ...
Step #0: Creating workspace_app_1 ... done
Step #0: Creating workspace_web_1 ... done
Step #0: Creating workspace_db_1 ... done
Finished Step #0
Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/docker
Step #1: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
docker-compose.yml:
version: "3.8"
volumes:
php-fpm-socket:
db-store:
services:
app:
build:
context: .
dockerfile: ./infra/docker/php/Dockerfile
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
environment:
- DB_CONNECTION=mysql
- DB_HOST=db
- DB_PORT=3306
- DB_DATABASE=${DB_NAME:-laravel_local}
- DB_USERNAME=${DB_USER:-phper}
- DB_PASSWORD=${DB_PASS:-secret}
web:
build:
context: .
dockerfile: ./infra/docker/nginx/Dockerfile
ports:
- ${WEB_PORT:-80}:80
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
db:
build:
context: .
dockerfile: ./infra/docker/mysql/Dockerfile
ports:
- ${DB_PORT:-3306}:3306
volumes:
- db-store:/var/lib/mysql
environment:
- MYSQL_DATABASE=${DB_NAME:-laravel_local}
- MYSQL_USER=${DB_USER:-phper}
- MYSQL_PASSWORD=${DB_PASS:-secret}
- MYSQL_ROOT_PASSWORD=${DB_PASS:-secret}
cloudbuild.yaml
steps:
  # running docker-compose
  - name: 'docker/compose:1.28.4'
    args: ['up', '-d']
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/MY_PROJECT_ID/laravel-docker-1']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'laravel-docker-1', '--image', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '--region', 'europe-west3', '--platform', 'managed']
images:
  - gcr.io/MY_PROJECT_ID/laravel-docker-1
What is wrong in this configuration?
I solved this issue and deployed a running Laravel 8 application to Google Cloud with the following Dockerfile. PS: any optimizations regarding the FROM and RUN steps are appreciated:
#
# PHP Dependencies
#
FROM composer:2.0 as vendor
WORKDIR /app
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
--no-interaction \
--no-plugins \
--no-scripts \
--no-dev \
--prefer-dist
COPY . .
RUN composer dump-autoload
#
# Frontend
#
FROM node:14.9 as frontend
WORKDIR /app
COPY artisan package.json webpack.mix.js package-lock.json ./
RUN npm audit fix
RUN npm cache clean --force
RUN npm cache verify
RUN npm install -f
COPY resources/js ./resources/js
COPY resources/sass ./resources/sass
RUN npm run development
#
# Application
#
FROM php:7.4-fpm
WORKDIR /app
# Install PHP dependencies
RUN apt-get update -y && apt-get install -y build-essential libxml2-dev libonig-dev
RUN docker-php-ext-install pdo pdo_mysql opcache tokenizer xml ctype json bcmath pcntl
# Install Linux and Python dependencies
RUN apt-get install -y curl wget git file ruby-full locales vim
# Run definitions to make Brew work
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
RUN useradd -m -s /bin/zsh linuxbrew && \
usermod -aG sudo linuxbrew && \
mkdir -p /home/linuxbrew/.linuxbrew && \
chown -R linuxbrew: /home/linuxbrew/.linuxbrew
USER linuxbrew
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
USER root
#RUN chown -R $CONTAINER_USER: /home/linuxbrew/.linuxbrew
ENV PATH "$PATH:/home/linuxbrew/.linuxbrew/bin"
#Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt install -y ./google-chrome-stable_current_amd64.deb
# Install Python modules (dependencies) of scraper
RUN brew install python3
RUN pip3 install selenium
RUN pip3 install bs4
RUN pip3 install pandas
# Copy Frontend build
COPY --from=frontend app/node_modules/ ./node_modules/
COPY --from=frontend app/public/js/ ./public/js/
COPY --from=frontend app/public/css/ ./public/css/
COPY --from=frontend app/public/mix-manifest.json ./public/mix-manifest.json
# Copy Composer dependencies
COPY --from=vendor app/vendor/ ./vendor/
COPY . .
RUN cp /app/drivers/chromedriver /usr/local/bin
#COPY .env.prod ./.env
COPY .env.local-docker ./.env
# Copy the scripts to docker-entrypoint-initdb.d which will be executed on container startup
COPY ./docker/ /docker-entrypoint-initdb.d/
COPY ./docker/init_db.sql .
RUN php artisan config:cache
RUN php artisan route:cache
CMD php artisan serve --host=0.0.0.0 --port=8080
EXPOSE 8080
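With this Dockerfile at the repository root, the docker-compose step in cloudbuild.yaml is no longer needed, and Cloud Build can find /workspace/Dockerfile directly. A trimmed sketch (MY_PROJECT_ID and the region are the placeholders from the question):
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/MY_PROJECT_ID/laravel-docker-1']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'laravel-docker-1', '--image', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '--region', 'europe-west3', '--platform', 'managed']
images:
  - gcr.io/MY_PROJECT_ID/laravel-docker-1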
