I have deployed my app to a DigitalOcean (DO) droplet using the Docker container registry. The issue I have now is integrating a Let's Encrypt (SSL) certificate for that Docker image. I don't have any experience with this, but I tried some tutorials. However, I can't get https:// working for my domain and subdomain, e.g. https://dev.example.com
Folder structure of my app is as below (only important paths included):
root
+-- .github/workflows
|     -- main.yml
+-- app
|     -- .next
|     -- node_modules
|     -- public
|     -- src
|     -- Dockerfile
+-- nginx-conf
|     -- nginx.conf
+-- server
      -- node_modules
      -- src
      -- Dockerfile
nginx-conf is the folder I added to set up SSL using a reverse proxy. That file includes this server block:
server {
    listen 8000 default_server;
    listen [::]:8000 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name example.com dev.example.com;

    location / {
        root /var/www/html;
        try_files $uri /index.tsx;
    }

    location /api/ {
        proxy_pass http://server/api/;
    }
}
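For comparison, once certificates exist on the host (for example issued by certbot; the /etc/letsencrypt paths below are certbot's defaults, and the upstream name is a placeholder for however the app container is reachable), an HTTPS setup typically uses two server blocks, with port 80 redirecting:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name example.com dev.example.com;

    # Serve the ACME HTTP-01 challenge; redirect everything else to HTTPS
    location /.well-known/acme-challenge/ { root /var/www/html; }
    location / { return 301 https://$host$request_uri; }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com dev.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://app:3000;   # hypothetical upstream container name
    }

    location /api/ {
        proxy_pass http://server/api/;
    }
}
```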
app/Dockerfile
FROM node:14
WORKDIR /usr/src/app
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV=test
CMD [ "npm", "start" ]
server/Dockerfile
# Create build
FROM node:alpine as build
WORKDIR /server
COPY package*.json ./
COPY tsconfig.json ./
RUN npm i -g typescript ts-node
RUN npm ci
COPY . .
RUN npm run build
# Run Build
FROM node:alpine as prod
WORKDIR /server
COPY --from=build /server/package*.json ./
COPY --from=build /server/build ./
RUN npm ci --only=production
EXPOSE 8000
CMD [ "npm", "run", "prod" ]
The main.yml file builds the server and client apps, then deploys them to the container by SSHing into the DO droplet (only sample code included):
jobs:
  buildClient:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: app
    steps:
      - uses: actions/checkout@v2
      - name: Build client docker image
        run: |
          docker login -u ${{ secrets }} -p ${{ secrets }} registry.digitalocean.com
          docker build -t registry.digitalocean.com/....... .
          docker push registry.digitalocean.com/..........
  deployContainers:
    name: Deploy containers to DO Droplet
    runs-on: ubuntu-latest
    needs: [buildClient, buildServer]
    steps:
      - name: SSH into Droplet
        uses: appleboy/ssh-action@v0.1.4
        with:
          host: ${{ secrets }}
          username: ${{ secrets }}
          key: ${{ secrets.SSH }}
          passphrase: ${{ secrets }}
          script: |
            docker login -u ${{ secrets }} -p ${{ secrets }} registry.digitalocean.com
            docker stop app
            docker rm app
            docker image rm registry.digitalocean.com/.......
            docker pull registry.digitalocean.com/..........
            docker run --name app -p 80:3000 -d registry.digitalocean.com/.........
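As a sketch of the missing SSL step (assumptions: DNS for both names already points at the droplet, port 80 is free while issuing, and the paths are illustrative), certificates can be issued on the droplet with the official certbot image and then mounted into the container, which must also publish 443:

```shell
# One-off issuance with the certbot image (standalone mode binds port 80
# itself, so stop anything already listening there first).
docker run --rm -p 80:80 \
  -v /etc/letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly --standalone \
  -d example.com -d dev.example.com

# Run the app with 80 and 443 published and the certificates mounted read-only.
docker run --name app -p 80:80 -p 443:443 \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  -d registry.digitalocean.com/.........
```

Let's Encrypt certificates expire after 90 days, so the `certonly` step needs to be re-run (or scheduled) for renewal.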
I have a question regarding an approach and how secure it is. Basically I have a Next.js app which I push to Cloud Run via GitHub Actions. In GitHub Actions I have defined secrets, which I pass via the Action's YAML file to Docker. In Docker I expose them as environment variables at build time, and then use next.config.js to make them available to the app at build time. Here are my files.
GitHub Actions YAML file:
name: nextjs-cloud-run
on:
  push:
    branches:
      - main
env:
  GCP_PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
  GCP_REGION: europe-west1
  # project-name but it can be anything you want
  REPO_NAME: some-repo-name
jobs:
  build-and-deploy:
    name: Setup, Build, and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # This step is where our service account will be authenticated
      - uses: google-github-actions/setup-gcloud@v0.2.0
        with:
          project_id: ${{ secrets.GCP_PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEYBASE64 }}
          service_account_email: ${{ secrets.GCP_SA_EMAIL }}
      - name: Enable the necessary APIs and enable docker auth
        run: |-
          gcloud services enable containerregistry.googleapis.com
          gcloud services enable run.googleapis.com
          gcloud --quiet auth configure-docker
      - name: Build, tag image & push
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: "gcr.io/${{ secrets.GCP_PROJECT_ID }}/collect-opinion-frontend:${{ github.sha }}"
          secrets: |
            "NEXT_PUBLIC_STRAPI_URL=${{ secrets.NEXT_PUBLIC_STRAPI_URL }}"
            "NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }}"
            "NEXTAUTH_URL=${{ secrets.NEXTAUTH_URL }}"
      - name: Deploy Image
        run: |-
          gcloud components install beta --quiet
          gcloud beta run deploy $REPO_NAME --image gcr.io/$GCP_PROJECT_ID/$REPO_NAME:$GITHUB_SHA \
            --project $GCP_PROJECT_ID \
            --platform managed \
            --region $GCP_REGION \
            --allow-unauthenticated \
            --quiet
This is my Dockerfile for Next.js:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
# get the environment vars from secrets
RUN --mount=type=secret,id=NEXT_PUBLIC_STRAPI_URL \
    --mount=type=secret,id=NEXTAUTH_SECRET \
    --mount=type=secret,id=NEXTAUTH_URL \
    export NEXT_PUBLIC_STRAPI_URL=$(cat /run/secrets/NEXT_PUBLIC_STRAPI_URL) && \
    export NEXTAUTH_SECRET=$(cat /run/secrets/NEXTAUTH_SECRET) && \
    export NEXTAUTH_URL=$(cat /run/secrets/NEXTAUTH_URL) && \
    yarn build
# RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
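For reference, building this Dockerfile locally also works, but the `--mount=type=secret` lines require BuildKit and one matching `--secret` flag per mounted id (a sketch; the source file paths are placeholders for wherever you keep the values):

```shell
# Each --secret id must match a --mount=type=secret,id=... in the Dockerfile.
DOCKER_BUILDKIT=1 docker build \
  --secret id=NEXT_PUBLIC_STRAPI_URL,src=./secrets/strapi_url.txt \
  --secret id=NEXTAUTH_SECRET,src=./secrets/nextauth_secret.txt \
  --secret id=NEXTAUTH_URL,src=./secrets/nextauth_url.txt \
  -t collect-opinion-frontend .
```

This is what the `secrets:` input of docker/build-push-action is doing on your behalf in the workflow above.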
My next.config.js file:
// next.config.js
module.exports = {
  // ... rest of the configuration.
  output: "standalone",
  env: {
    // Will only be available on the server side
    NEXTAUTH_SECRET: process.env.NEXTAUTH_SECRET,
    NEXTAUTH_URL: process.env.NEXTAUTH_URL, // Pass through env variables
  },
};
My question is: does this create a security issue, in the sense that the environment variables could somehow be accessed from the client?
According to the Next.js documentation that should not be the case, or at least that's how I understand it. Snippet from the site:
Note: environment variables specified in this way will always be included in the JavaScript bundle, prefixing the environment variable name with NEXT_PUBLIC_ only has an effect when specifying them through the environment or .env files.
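Note that the quoted sentence means the values under `env` in next.config.js do end up in the client JavaScript bundle: references to them are inlined at build time. A toy sketch (an illustration of the general inlining idea, not Next.js internals) of why a secret placed there becomes visible client-side:

```javascript
// Toy model of build-time env inlining: the bundler replaces
// `process.env.<NAME>` tokens in the source with string literals.
function inlineEnv(source, env) {
  return Object.entries(env).reduce(
    (code, [name, value]) =>
      code.split(`process.env.${name}`).join(JSON.stringify(value)),
    source
  );
}

const clientSource = 'fetch(process.env.NEXTAUTH_URL + "/api/auth");';
const bundled = inlineEnv(clientSource, { NEXTAUTH_URL: "https://example.com" });
console.log(bundled); // fetch("https://example.com" + "/api/auth");
```

Anyone who downloads the bundle can read the inlined value, which is fine for a public URL but not for something like NEXTAUTH_SECRET.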
I would appreciate it if you could advise me on this.
I am working on a Flask application and setting up a GitHub pipeline. My Docker file has an entry point that runs a couple of commands to upgrade DB and start gunicorn.
This works perfectly fine while running locally but while deploying through GitHub action it just ignores the entry point and does not run those commands.
Here is my Dockerfile:
FROM python:3.10-slim
WORKDIR /opt/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt /opt/app/requirements.txt
RUN chmod +x /opt/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /opt/app/
RUN chmod +x /opt/app/docker-entrypoint.sh
ENTRYPOINT [ "/opt/app/docker-entrypoint.sh" ]
Docker entrypoint content:
#!/bin/sh
echo "********* Upgrading database ************"
flask db upgrade
echo "************** Starting gunicorn server ***************"
gunicorn --bind 0.0.0.0:5000 wsgi:app
echo "************* Started gunicorn server ****************"
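A side note on the script above: gunicorn stays in the foreground, so the final echo is never reached, and it is common to `exec` the server so it runs as PID 1 and receives signals directly. A minimal demonstration of the `exec` semantics involved, in plain sh:

```shell
# `exec` replaces the current shell process, so nothing after it ever runs --
# the same reason a line after a foreground server in an entrypoint is dead code.
sh -c 'echo before; exec echo replaced; echo never-printed'
# prints: before, then replaced
```

Applied to the entrypoint, the last line would become `exec gunicorn --bind 0.0.0.0:5000 wsgi:app` and the trailing echo would be dropped.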
Here is my GitHub Action:
name: CI
on:
  push:
    branches: [master]
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout files
        uses: actions/checkout@v2
  deploy:
    needs: build_and_push
    runs-on: ubuntu-latest
    steps:
      - name: Checkout files
        uses: actions/checkout@v2
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@v0.1.3
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: 22
      - name: Start containers
        run: docker-compose up -d --build
      - name: Check running containers
        run: docker ps
I am very new to Dockerfiles, shell commands, and GitHub Actions, so please suggest if there is a better approach.
Thanks in advance!
Instead of using ENTRYPOINT, use CMD in your Dockerfile. The difference is that CMD is overridden by any arguments passed after the image name to docker run (or by a compose file's command:), whereas overriding an ENTRYPOINT requires the explicit --entrypoint flag (or entrypoint: in compose).
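To illustrate the difference (a sketch, not a change to the asker's file):

```dockerfile
# CMD only: `docker run image sh` replaces the command entirely.
CMD ["/opt/app/docker-entrypoint.sh"]

# ENTRYPOINT + CMD: run arguments replace only CMD, which is appended
# as extra arguments to the entrypoint; the entrypoint itself stays.
ENTRYPOINT ["/opt/app/docker-entrypoint.sh"]
CMD ["--default-arg"]
```

In this case it is worth checking whether the docker-compose file used in deployment sets a command: or entrypoint: that overrides what the Dockerfile declares.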
I'm running a CI pipeline in gitlab-runner, which runs on a Linux machine. In the pipeline I'm trying to build an image. The error I'm getting is:
ci runtime error: rootfs_linux.go:53: mounting "/sys/fs/cgroup" to rootfs "/var/lib/docker/overlay/d6b9ba61e6eed4bcd9c23722ad6f6a9fb05b6c536dbab31f47b835c25c0a343b/merged" caused "no subsystem for mount"
The Dockerfile contains:
# Set the base image to node:16.7.0-buster
FROM node:16.7.0-buster
# Specify where our app will live in the container
WORKDIR /app
# Copy the React App to the container
COPY . /app/
# Prepare the container for building React
RUN npm install
RUN npm install react-scripts@4.0.3 -g
# We want the production version
RUN npm run build
# Prepare nginx
FROM nginx:stable
COPY --from=0 /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
# Fire up nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The gitlab-ci.yml contains:
image: gitlab/dind
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
stages:
  - build
  - docker-build
cache:
  paths:
    - node_modules
yarn-build:
  stage: build
  image: node:latest
  script:
    - unset CI
    - yarn install
    - yarn build
  artifacts:
    expire_in: 1 hour # the artifacts expire from the artifacts folder after 1 hour
    paths:
      - build
docker-build:
  stage: docker-build
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker build -t $CI_REGISTRY_USER/$app_name .
    - docker push $CI_REGISTRY_USER/$app_name
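For what it's worth, this cgroup-mount error is commonly reported when the job itself runs in the gitlab/dind image. A frequently suggested change (an assumption about the runner setup, not a guaranteed fix) is to use the plain docker image as the job image, keep docker:dind as a service, and point DOCKER_HOST at that service:

```yaml
docker-build:
  stage: docker-build
  image: docker:latest          # instead of gitlab/dind as the job image
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_DRIVER: overlay2     # overlay2 rather than the legacy overlay driver
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker build -t $CI_REGISTRY_USER/$app_name .
    - docker push $CI_REGISTRY_USER/$app_name
```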
I'm not sure how to resolve this; I tried upgrading Docker on my Linux machine.
I have a pod with 3 containers in it: client, server, and MongoDB (MERN).
The pod has a port mapped to the host, and the client listens on it -> 8184:3000.
The website comes up and is reachable. The server log says it has connected to MongoDB and is listening on port 3001, as I assigned.
It seems that the client cannot connect to the server side and therefore cannot check the credentials for login, which leads to "wrong password or user" every time.
The whole program works locally on my Windows machine.
Am I missing some part in Docker or in creating the pod? As far as I understood, the containers in a pod should communicate as if they were running on a local network.
This is the gitlab-ci.yml:
stages:
  - build
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
  TAG_NAME_Client: gitlab.comp.com/sdx-licence-manager:$CI_COMMIT_REF_NAME-client
  TAG_NAME_Server: gitlab.comp.com/semdatex/sdx-licence-manager:$CI_COMMIT_REF_NAME-server
cache:
  paths:
    - client/node_modules/
    - server/node_modules/
build_pod:
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman pod rm -f -a
    - podman pod create --name lm-pod-$CI_COMMIT_SHORT_SHA -p 8184:3000
build_db:
  image: mongo:4.4
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA -v ~/lmdb_volume:/data/db:z --name mongo -d mongo
build_server:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd server
    - podman build -t $TAG_NAME_Server .
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Server
build_client:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd client
    - podman build -t $TAG_NAME_Client .
    - podman run -d --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Client
Server Dockerfile:
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 3001
CMD [ "npm", "run", "start" ]
Client Dockerfile:
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN npm install -g npm@7.21.0
COPY . ./
EXPOSE 3000
# start app
CMD [ "npm", "run", "start" ]
Snippet from index.js on the client side, trying to reach the server side to check login credentials:
function Login(props) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');

  async function loginUser(credentials) {
    return fetch('http://127.0.0.1:3001/login', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(credentials),
    }).then((data) => data.json());
  }
}
Actually it has nothing to do with podman. Sorry about that. I added a proxy to my package.json and it redirected the requests correctly:
"proxy": "http://localhost:3001"
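For anyone wondering where that line goes: the proxy field sits at the top level of the client's package.json (a minimal sketch; the script entries are illustrative). The dev server then forwards unrecognized requests to it, so the fetch can use a relative /login path instead of a hard-coded host:

```json
{
  "name": "client",
  "proxy": "http://localhost:3001",
  "scripts": {
    "start": "react-scripts start"
  }
}
```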
I don't know why "ecosystem.config.js" is still included in the npm args. In the ecosystem.config.js file, args only contains "run" and "start", but when the Docker image runs, it looks like it executes npm ecosystem.config.js run start.
Please tell me why.
// dockerfile
FROM node:lts-alpine
RUN npm install pm2 -g
COPY . /usr/src/nuxt/
WORKDIR /usr/src/nuxt/
RUN npm install
EXPOSE 8080
RUN npm run build
# start the app
CMD ["pm2-runtime", "ecosystem.config.js"]
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'webapp',
      exec_mode: 'cluster',
      instances: 2,
      script: 'npm',
      args: ['run', 'start'],
      env: {
        HOST: '0.0.0.0',
        PORT: 8080
      },
      autorestart: true,
      max_memory_restart: '1G'
    }
  ]
}
I struggled with ecosystem.config.js too, so I ended up using the YAML format instead: create process.yaml and enter your config:
apps:
  - script: /app/index.js
    name: 'app'
    instances: 2
    error_file: ./errors.log
    exec_mode: cluster
    env:
      NODE_ENV: production
      PORT: 12345
Then in the Dockerfile:
COPY ./dist/index.js /app/
COPY process.yaml /app/
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
# Expose the listening port of your app
EXPOSE 12345
CMD [ "pm2-runtime", "/app/process.yaml"]
Just change the directories and files to the way you want things set up.