Setting up Go and Glide in Docker - docker

I am building an app with Go and Glide in Docker, and I also use reflex to trigger recompilation automatically.
I cannot figure out how to make Glide work with Docker.
Dockerfile
FROM golang:1.8.1-alpine
ENV GOBINARIES /go/bin
ENV BUILDPATH /code
ENV REFLEXURL=http://s3.amazonaws.com/wbm-raff/bin/reflex1.8a
ENV REFLEXSHA=19bdbbb68c869f85ee22a6b7fa9c73f8e5b46d0fe7a73df37e028555a6ba03e8
WORKDIR $GOBINARIES
RUN rm -rf /var/cache/apk/*
RUN wget -q "$REFLEXURL" -O reflex
RUN chmod +x /go/bin/reflex
ENV TOOLS /go/_tools
RUN mkdir -p $BUILDPATH
ENV PORT 5000
EXPOSE $PORT
RUN mkdir -p $TOOLS
ADD build.sh $TOOLS
ADD reflex.conf $TOOLS
RUN chown root $TOOLS/build.sh
RUN chmod +x $TOOLS/build.sh
WORKDIR $BUILDPATH
CMD ["reflex","-c","/go/_tools/reflex.conf"]
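As an aside, REFLEXSHA is declared above but never actually checked. If you want the downloaded binary verified, something like this could follow the wget (a sketch, assuming BusyBox sha256sum on Alpine):
RUN echo "$REFLEXSHA  /go/bin/reflex" | sha256sum -c -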
build.sh
#!/bin/sh
set -e
echo "[build.sh:building binary]"
cd $BUILDPATH
glide install -s -v
go build -o /servicebin && rm -rf /tmp/*
echo "[build.sh:launching binary]"
/servicebin
reflex.conf
-sr '\.build$' -- sh -c '/go/_tools/build.sh'
docker-compose.yaml
version: '3'
services:
  logen:
    build:
      context: ./Docker
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - .:/code
Atom on-save plugin configuration file
[
  {
    "srcDir": ".",
    "destDir": ".",
    "files": "**/*.go",
    "command": "echo $(date) - ${srcFile} > .build"
  }
]
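Since reflex only watches files matching \.build$, any write to .build triggers a rebuild; the editor plugin is just one way to produce that write. A manual trigger from the host project root, for example:
echo $(date) > .build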
main.go
package main

import (
    "io"
    "log"
    "net/http"
    "os"

    _ "github.com/astaxie/beego" // blank import: vendored via Glide but not referenced directly (a plain import would not compile)
)

func hello(w http.ResponseWriter, r *http.Request) {
    io.WriteString(w, "Hello world!1")
}

func main() {
    log.SetOutput(os.Stdout)
    port := ":" + os.Getenv("PORT")
    http.HandleFunc("/", hello)
    log.Printf("\n Application is listening on %v\n", port)
    log.Fatal(http.ListenAndServe(port, nil))
}

Actually, I do not need to install Glide in the container at all! Just mount the host machine's vendor folder to $GOPATH/src in docker-compose.yml, and the compile works.
version: '3'
services:
  logen:
    build:
      context: ./Docker
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - ./vendor:/go/src
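To confirm the vendored packages are visible where the Go toolchain expects them, a quick check (using the service name logen from the compose file above):
docker-compose exec logen ls /go/src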

Related

How to deploy dockerized Django+uWSGI+Nginx app to Google App Engine using CircleCI

I have developed a dockerized Django web app using docker-compose, and it runs fine locally.
The problem is that when I define a CI pipeline, specifically CircleCI (I don't know how it works with any other alternative), to upload it to GCloud App Engine, the workflow succeeds, but visiting the URL returns nothing (500 error).
The code I have and run locally is the following. When I set up the CircleCI pipeline, I have no clue how the app.yaml file interacts with it, or what the steps in .circleci/config.yml should be in order to run docker-compose. Any idea or resource I might use?
My Dockerfile:
FROM python:3.9-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
# this allows execute permission on all files inside /scripts/
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
My docker-compose file:
version: '3.9'
services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecret123
      - ALLOWED_HOSTS=127.0.0.1,localhost
  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app
volumes:
  static_data:
Nginx Dockerfile:
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
Nginx default.conf
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
entrypoint.sh
#!/bin/sh
set -e
python manage.py collectstatic --no-input
uwsgi --socket :8000 --master --enable-threads --module app.wsgi
.circleci/config.yml
version: 2.1
workflows:
  version: 2
  build_and_deploy_workflow:
    jobs:
      - build_and_deploy_job:
          filters:
            branches:
              only:
                - master
jobs:
  build_and_deploy_job:
    docker:
      - image: google/cloud-sdk  ## based on Debian
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          name: Install requirements.txt
          command: |
            apt install -y python-pip
            python3 -m pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            apt-get install -y sudo
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker
      - run:
          name: 'Collect static'
          command: |
            docker-compose -f docker-compose-deploy.yml up --build
            # docker-compose build
            # docker-compose run --rm app
            # docker-compose run --rm app sh -c "python manage.py collectstatic"
      - run:
          name: 'Deploy to app engine'
          command: |
            echo ${GCLOUD_SERVICE_KEY} > /tmp/sa_key.json
            gcloud auth activate-service-account --key-file=/tmp/sa_key.json
            rm /tmp/sa_key.json
            gcloud config set project [projectname]
            gcloud config set compute/region [region]
            gcloud app deploy app.yaml
app.yaml (GCloud App Engine):
runtime: python39
#entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application
#entrypoint: gunicorn -b :$PORT app:wsgi
entrypoint: uwsgi --socket :8000 --master --enable-threads --module app.wsgi
handlers:
- url: /static
  static_dir: static/
- url: /.*
  script: auto
Here is a link that could help you, with an example of an app.yaml file for a Python 3 application:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
Code example:
runtime: python39  # or another supported version

instance_class: F2

env_variables:
  BUCKET_NAME: "example-gcs-bucket"

handlers:
# Matches requests to /images/... to files in static/images/...
- url: /images
  static_dir: static/images

- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
For Python 3, the app.yaml is required to contain at least a runtime: python39 entry.
For a brief overview, see defining runtime settings:
https://cloud.google.com/appengine/docs/standard/python3/configuring-your-app-with-app-yaml
To deploy to Google App Engine with CircleCI, I found this article that may help you with your main issue:
https://medium.com/@1555398769574/deploy-to-google-app-engine-with-circleci-or-github-actions-cb1bab15ca80
Code example:
.circleci/config.yaml
version: 2
jobs:
  build:
    working_directory: ~/workspace
    docker:
      - image: circleci/php:7.2-stretch-node-browsers
    steps:
      - checkout
      - run: |
          cp .env.example .env &&
          php artisan key:generate
      - persist_to_workspace:
          root: .
          paths:
            - .
  deploy:
    working_directory: ~/workspace
    docker:
      - image: google/cloud-sdk
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Service Account Key
          command: echo ${GCLOUD_SERVICE_KEY} > ${HOME}/gcloud-service-key.json
      - run:
          name: Set gcloud command
          command: |
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      - run:
          name: deploy to Google App Engine
          command: |
            gcloud app deploy app.yaml
workflows:
  version: 2
  build:
    jobs:
      - build
      - deploy:
          context: gcp
          requires:
            - build
          filters:
            branches:
              only: master
Here is additional documentation on how to create a CI/CD pipeline for Google App Engine with CircleCI 2.0:
https://runzhuoli.me/2018/12/21/ci-cd-gcp-gae-circleci.html

Cannot hot reload my ionic app integrated with Docker

I have this Dockerfile
FROM node:15.11.0-alpine
#ENVIRONNEMENT
ENV GLIB_PACKAGE_BASE_URL https://github.com/sgerrand/alpine-pkg-glibc/releases/download
ENV GLIB_VERSION 2.25-r0
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV GRADLE_HOME /usr/local/gradle
ENV GRADLE_VERSION 4.4
ENV ANDROID_HOME /usr/local/android-sdk-linux
ENV ANDROID_TOOLS_VERSION r25.2.5
ENV ANDROID_API_LEVELS android-26
ENV ANDROID_BUILD_TOOLS_VERSION 26.0.2
ENV IONIC_VERSION 5
ENV PATH ${GRADLE_HOME}/bin:${JAVA_HOME}/bin:${ANDROID_HOME}/tools:$ANDROID_HOME/platform-tools:$PATH
# INSTALL JAVA
RUN apk update ...
# INSTALL IONIC AND CORDOVA
RUN npm install -g cordova ionic@${IONIC_VERSION}
# INSTALL Gradle
RUN mkdir -p ${GRADLE_HOME} ...
# INSTALL ANDROID
RUN mkdir -p ${ANDROID_HOME} ...
# INSTALL GLIBC
RUN curl -L ...
# CONFIGURATION
RUN echo y | android update sdk --no-ui -a --filter platform-tools,${ANDROID_API_LEVELS},build-tools-${ANDROID_BUILD_TOOLS_VERSION}
# Make license agreement
RUN mkdir $ANDROID_HOME/licenses ...
#FILES DELETION
RUN rm -rf /tmp/* /var/cache/apk/*
WORKDIR /usr/app
RUN npm install
COPY ./ /usr/app
I am building it with this command:
docker build -t <image-name> .
I have this docker-compose.yml file:
version: '3.6'
services:
  app:
    container_name: karma5_ionic
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8100:8100'
      - '35729:35729'
    command: ionic serve --external
I run the following command:
sudo docker-compose up -d
The application displays fine in the browser at localhost:8100.
Problem:
When I make changes, there is no hot reload. The browser reports that localhost:35729 might be temporarily down or may have moved permanently to a new web address, or that the connection to localhost:35729 was reset.
The only way I can see changes is if I run docker-compose build and then docker-compose up -d again.
To whom it may concern:
I just needed to add
volumes:
  - "./:/usr/app"
Whole docker-compose.yml file:
version: '3.6'
services:
  app:
    container_name: karma5_ionic
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - "./:/usr/app"
    ports:
      - '8100:8100'
      - '35729:35729'
    command: ionic serve --external
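A common companion to this pattern (not part of the original answer, just a sketch): if node_modules exists only inside the image, an anonymous volume keeps the bind mount from hiding it:
services:
  app:
    volumes:
      - "./:/usr/app"
      - /usr/app/node_modules  # anonymous volume: the image's node_modules stays visible under the bind mount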

SSHFS mount in Dockerfile fails unless it's from ENTRYPOINT

I'm attempting to SSHFS from the container to a remote server, with the mount created during the Dockerfile build.
The mount command works if executed in the already running container, and it works if I make it the entrypoint (but then I have to string the real entrypoint script on the end with a ;, which feels too kludgy).
If I put the command in the Dockerfile with a RUN, it fails with a fuse: device not found, try 'modprobe fuse' first error.
Here's the files...
install.sh
#!/bin/bash
USAGE="install.sh <dir_to_parse> <filetype_to_parse>"
if [ $# -lt 2 ]
then
    echo "$USAGE"
    exit 1
fi
REMOTE_DIR=$1 FILE_EXTENSION=$2 docker-compose -p '' -f docker-compose.yml up -d --build
docker-compose.yml
version: "3"
services:
source.test:
build:
context: .
dockerfile: ./Dockerfile
image: test.source
container_name: test.source
environment:
ELASTIC_HOST: “http://<redacted>:<redacted>”
REMOTE_SERVER: <redacted>
REMOTE_USER: <redacted>
REMOTE_KEY: /etc/ssl/certs/<redacted>
FEEDER_URL: http://<redacted>/api
MONGOHOST: mongo
WALKDIRS: <redacted>
REMOTE_DIR: ${REMOTE_DIR}
FILE_EXTENSION: ${FILE_EXTENSION}
volumes:
- /etc/ssl/certs/:/etc/ssl/certs/
ports:
- 127.0.0.1:6000:80
cap_add:
- SYS_ADMIN
devices:
- "/dev/fuse:/dev/fuse"
security_opt:
- "apparmor:unconfined"
networks:
default:
external:
name: test
Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get -y install \
fuse \
sshfs
COPY <redacted> /etc/ssl/certs/<redacted>
COPY fuse.conf /etc/fuse.conf
RUN chown root:root /etc/fuse.conf
RUN chmod 644 /etc/fuse.conf
RUN mkdir /mnt/filestobeparsed
# Fails with fuse: device not found
RUN sshfs username@<xxx.xxx.xxx.xxx>:/remote/path /mnt/filestobeparsed -o StrictHostKeyChecking=no,IdentityFile=/etc/ssl/certs/<redacted>,auto_cache,reconnect,transform_symlinks,follow_symlinks,allow_other
ENTRYPOINT tail -f /dev/null
# Works but is klugy
#ENTRYPOINT sshfs username@<xxx.xxx.xxx.xxx>:/remote/path /mnt/filestobeparsed -o StrictHostKeyChecking=no,IdentityFile=/etc/ssl/certs/<redacted>,auto_cache,reconnect,transform_symlinks,follow_symlinks,allow_other; tail -f /dev/null
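For what it's worth, the working-but-kludgy ENTRYPOINT can be tidied into a small wrapper script (a sketch built from the mount command above; docker-entrypoint.sh is a hypothetical name):
#!/bin/sh
# docker-entrypoint.sh: run the mount at container start, when /dev/fuse
# and the SYS_ADMIN capability from docker-compose.yml are actually available
set -e
sshfs username@<xxx.xxx.xxx.xxx>:/remote/path /mnt/filestobeparsed \
    -o StrictHostKeyChecking=no,IdentityFile=/etc/ssl/certs/<redacted>,auto_cache,reconnect,transform_symlinks,follow_symlinks,allow_other
exec tail -f /dev/null
Then the two ENTRYPOINT variants in the Dockerfile collapse to:
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]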

Why can't docker-compose build run a command that docker build can?

My Dockerfile:
FROM node:10 AS builder
RUN npm install multi-file-swagger -g
WORKDIR /usr/src/app
COPY swagger/* ./
ARG API_HOST
ENV APP_HOST=$API_HOST
RUN sed -i 's+replace_host+'"$API_HOST"'+g' index.yaml
RUN multi-file-swagger index.yaml > index.json
FROM golang:1.12
WORKDIR /go/src/app
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
VOLUME /go/src/app
EXPOSE 8080
COPY --from=builder /usr/src/app/ swagger/
CMD ["app"]
My docker-compose.yml file:
version: '3.4'
services:
  myapp:
    build:
      context: .
      args:
        - API_HOST=api.my-real-domain.com
    volumes:
      - ./:/go/src/app
    ports:
      - "8080:8080"
If I run docker build only:
docker build -t myapp . --build-arg API_HOST=api.my-real-domain.com
It can run the commands:
RUN sed -i 's+replace_host+'"$API_HOST"'+g' index.yaml
RUN multi-file-swagger index.yaml > index.json
And when I launch the container, I can see that index.json exists.
But if I use docker-compose build and docker-compose up and then check for index.json in the container, I can't find it.
You overwrite all the files in the app folder by using:
volumes:
  - ./:/go/src/app
You need to remove the volumes section from your compose file.
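In other words, the compose file would look something like this (a sketch of the fix, with the bind mount dropped):
version: '3.4'
services:
  myapp:
    build:
      context: .
      args:
        - API_HOST=api.my-real-domain.com
    ports:
      - "8080:8080"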

How to create a file and mount the filesystem as read-only?

I have the following Dockerfile (I've removed what is not relevant):
FROM centos:centos6
ENV TERM=xterm
ARG INSTALL_WKHTMLTOPDF=no
ARG WKHTMLTOPDF_VERSION=latest
ARG INSTALL_PDFTK=no
ARG PDFTK_VERSION=latest
ARG PHP_VERSION=default
...
COPY container-files /
...
EXPOSE 80 9001
WORKDIR /var/www/html
ENTRYPOINT bash -C '/entrypoint.sh';'bash'
The entrypoint.sh is as follow:
#!/bin/bash
set -e

if [ "$UID" == 0 ]; then
    uid=1000;
else
    uid=${UID};
fi

if [ -z "${GID}" ]; then
    gid=1000;
else
    gid=${GID};
fi

echo "UID: $uid"
echo "GID: $gid"

touch /var/log/xdebug.log
chown apache:root /var/log/xdebug.log

rm -f /var/run/apache2/apache2.pid
exec httpd -DFOREGROUND "$@"
And finally the docker-compose.yml file:
version: '3.4'
services:
  erx:
    image: arx_dev
    ports:
      - "80:80"
    environment:
      VHOST_DOCUMENT_ROOT: /var/www/html
    volumes:
      - ./server_logs:/var/log/:ro
After building the image and running docker-compose up -d, the container does not start, because touch can't create the file in a read-only filesystem.
PS F:\Development\docker\rx> docker logs rx_erx_1
UID: 1000
GID: 1000
touch: cannot touch `/var/log/xdebug.log': Read-only file system
PS F:\Development\docker\rx>
How can I create the file and then mount /var/log as read-only? I would like to check some logs from the host directly and avoid bash-ing into the container. Any ideas?
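A minimal sketch of one workaround (an assumption on my part, not taken from this thread): create the log file on the host first (e.g. touch server_logs/xdebug.log) and make the entrypoint skip its log setup when /var/log is not writable:
# entrypoint.sh excerpt (hypothetical): only set up the log file when /var/log is writable,
# so the :ro bind mount no longer aborts startup
if [ -w /var/log ]; then
    touch /var/log/xdebug.log
    chown apache:root /var/log/xdebug.log
fi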
