I want to dockerize my app (Prisma 4.9.1, Next.js 12, PostgreSQL). The idea is that you can clone the repo, type docker-compose up, and everything just works.
The problem is: I don't know where to put npx prisma db push. I've already tried multiple locations, but it's not working. Any ideas?
Dockerfile:
FROM node:18 AS dependencies
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn
FROM node:18 AS build
WORKDIR /app
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
RUN npx prisma generate
RUN yarn build:in:docker
FROM node:18 AS deploy
WORKDIR /app
ENV NODE_ENV production
COPY --from=build /app/public ./public
COPY --from=build /app/package.json ./package.json
COPY --from=build /app/.next/standalone ./
COPY --from=build /app/.next/static ./.next/static
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/prisma ./prisma
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
docker-compose.yml
version: '3.9'
services:
  postgres:
    image: postgres:latest
    container_name: postgres
    hostname: myhost
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: splitmate
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
  splitmate-app:
    image: splitmate
    build:
      context: .
      dockerfile: Dockerfile
      target: deploy
    volumes:
      - postgres-data:/app/postgres-data
    environment:
      DATABASE_URL: postgresql://root:password@myhost:5432/splitmate?schema=public&connect_timeout=60
    ports:
      - 3000:3000
volumes:
  postgres-data:
The container gets built and starts. But as soon as the code tries to access the database, I get this error:
features-splitmate-app-1 | Invalid `prisma.account.findUnique()` invocation:
features-splitmate-app-1 |
features-splitmate-app-1 |
features-splitmate-app-1 | The table `public.Account` does not exist in the current database. {
features-splitmate-app-1 | message: '\n' +
features-splitmate-app-1 | 'Invalid `prisma.account.findUnique()` invocation:\n' +
features-splitmate-app-1 | '\n' +
features-splitmate-app-1 | '\n' +
features-splitmate-app-1 | 'The table `public.Account` does not exist in the current database.',
features-splitmate-app-1 | stack: 'Error: \n' +
features-splitmate-app-1 | 'Invalid `prisma.account.findUnique()` invocation:\n' +
features-splitmate-app-1 | '\n' +
features-splitmate-app-1 | '\n' +
features-splitmate-app-1 | 'The table `public.Account` does not exist in the current database.\n' +
features-splitmate-app-1 | ' at RequestHandler.handleRequestError (/app/node_modules/@prisma/client/runtime/index.js:31941:13)\n' +
features-splitmate-app-1 | ' at RequestHandler.handleAndLogRequestError (/app/node_modules/@prisma/client/runtime/index.js:31913:12)\n' +
features-splitmate-app-1 | ' at RequestHandler.request (/app/node_modules/@prisma/client/runtime/index.js:31908:12)\n' +
features-splitmate-app-1 | ' at async PrismaClient._request (/app/node_modules/@prisma/client/runtime/index.js:32994:16)\n' +
features-splitmate-app-1 | ' at async getUserByAccount (/app/node_modules/@next-auth/prisma-adapter/dist/index.js:11:29)',
features-splitmate-app-1 | name: 'Error'
features-splitmate-app-1 | }
I found a solution
Dockerfile:
FROM node:18 AS dependencies
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn
FROM node:18 AS build
WORKDIR /app
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
RUN npx prisma generate
RUN yarn build:in:docker
COPY migrate-and-start.sh .
RUN chmod +x migrate-and-start.sh
FROM node:18 AS deploy
WORKDIR /app
ENV NODE_ENV production
COPY --from=build /app/public ./public
COPY --from=build /app/package.json ./package.json
COPY --from=build /app/.next/standalone ./
COPY --from=build /app/.next/static ./.next/static
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/prisma ./prisma
COPY --from=build /app/migrate-and-start.sh .
EXPOSE 3000
ENV PORT 3000
CMD ["./migrate-and-start.sh"]
migrate-and-start.sh
#!/bin/bash
# regenerate the Prisma client, sync the schema, then start the server
npx prisma generate
npx prisma db push
node server.js
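One thing to watch out for: when both containers start at the same time, Postgres may not yet accept connections when this script runs. A minimal sketch of a retry loop inside migrate-and-start.sh (the 30-attempt limit and 2-second delay are arbitrary values, not part of the setup above):

#!/bin/bash
npx prisma generate
# retry db push until Postgres accepts connections (or give up after 30 tries)
for i in $(seq 1 30); do
  npx prisma db push && break
  echo "Database not ready yet, retrying ($i/30)..."
  sleep 2
done
node server.js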
docker-compose.yml
version: '3.9'
services:
  postgres:
    image: postgres:latest
    container_name: postgres
    hostname: myhost
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: splitmate
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
  splitmate-app:
    image: splitmate
    build:
      context: .
      dockerfile: Dockerfile
      target: deploy
    volumes:
      - postgres-data:/app/postgres-data
    environment:
      DATABASE_URL: postgresql://root:password@myhost:5432/splitmate?schema=public&connect_timeout=60
    ports:
      - 3000:3000
volumes:
  postgres-data:
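Alternatively (untested with this exact setup), Compose can hold back the app until Postgres reports healthy, so prisma db push does not race the database. The healthcheck and the depends_on condition below are additions on top of the compose file above, not part of it:

services:
  postgres:
    # ...unchanged from above...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U root -d splitmate"]
      interval: 5s
      timeout: 5s
      retries: 10
  splitmate-app:
    # ...unchanged from above...
    depends_on:
      postgres:
        condition: service_healthy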
Related
I have a main domain and 3 subdomains running on the server, but when I decided to add more subdomains I got an error (Error: P1001: Can't reach database server at db-subdomen:5436 or 5432). The project is Next.js + Prisma + Docker.
As I understand it, the problem is either in the container's network scope or in the .env file.
The other 3 subdomains have the same Dockerfile and .env contents. The docker-compose file, which is shared by all projects, is the same as well, yet there is still an error. Maybe there is a typo; I don't know anymore, because I did everything the same way as always.
My docker-compose (shared by all projects; this excerpt shows the main domain and one subdomain):
#subdomen
  app-subdomen:
    container_name: app-subdomen
    image: subdomen-image
    build:
      context: subdomen
      dockerfile: Dockerfile
    restart: always
    environment:
      NODE_ENV: production
    networks:
      - subdomen-net
    env_file: subdomen/.env
    ports:
      - 7000:3000
    depends_on:
      - "db-subdomen"
    command: sh -c "sleep 13 && npx prisma migrate deploy && npm start"

  db-subdomen:
    container_name: db-subdomen
    env_file:
      - subdomen/.env
    image: postgres:latest
    restart: always
    volumes:
      - db-subdomen-data:/var/lib/postgresql/data
    networks:
      - subdomen-net

#main domen
  app-main:
    image: main-image
    build:
      context: main
      dockerfile: Dockerfile
    restart: always
    environment:
      NODE_ENV: production
    env_file: main/.env
    ports:
      - 3000:3000
    depends_on:
      - "db-main"
    command: sh -c "sleep 3 && npx prisma migrate deploy && npm start"
    networks:
      - main-net

  db-main:
    env_file:
      - main/.env
    image: postgres:latest
    restart: always
    volumes:
      - db-main-data:/var/lib/postgresql/data
    networks:
      - main-net

volumes:
  db-main-data: {}
  db-subdomen-data: {}

networks:
  main-net:
    name: main-net
  subdomen-net:
    name: subdomen-net
subdomen/.env:
POSTGRES_USER=subdomenUser
POSTGRES_PASSWORD=subdomen
POSTGRES_DB=subdomen-db
SECRET=88xU_X8yfsfdsfsdfsdfsdfsdfdsdc
HOST=https://subdomen.domen.ru
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db-arkont:5436/${POSTGRES_DB}?schema=public
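For what it's worth, inside the Compose network a container reaches Postgres by the service name on the container port (5432), not via a published host port, so db-arkont:5436 looks suspicious here. Assuming the intended database service is db-subdomen from the compose file above, the URL would look roughly like this (a sketch, not a tested fix):

# sketch: service name as host, internal port 5432 (assumptions based on the compose file above)
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db-subdomen:5432/${POSTGRES_DB}?schema=public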
Subdomain Dockerfile (the other 3 subdomain projects have the same one, and there is no problem with them):
FROM node:lts-alpine AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
# Install app dependencies
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:lts-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/prisma ./prisma
ENV NODE_ENV=production
EXPOSE 3000
I try to connect to Redis from my backend, but I keep getting the following error:
...
api-1 | [ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND undefined
api-1 | at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
api-1 | [ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND undefined
api-1 | at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
...
Here is how I config my redis client:
import Redis from "ioredis";

export const redisConfig = () => {
  if (process.env.NODE_ENV === "production") {
    return `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`;
  }
  return "";
};

const redisCli = new Redis(redisConfig());

export default redisCli;
And this is my dockerfile:
# ---- Dependencies ----
FROM node:16-alpine AS base
# minimize image size
RUN apk add --no-cache libc6-compat
RUN npm install -g npm@latest
WORKDIR /app
COPY ./package*.json ./
RUN npm ci
# ---- Builder ----
FROM node:16-alpine AS builder
RUN npm install -g npm@latest
WORKDIR /app
COPY --from=base /app/node_modules ./node_modules
COPY ./src ./src
COPY package*.json tsconfig.json webpack.config.ts ./
RUN npm run build
# ---- Release ----
FROM node:16 AS release
WORKDIR /app
# COPY ./prisma ./prisma
# COPY ./.env ./
# COPY ./deployment ./deployment
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
# RUN npx prisma generate
RUN npm install pm2 -g
EXPOSE 3000
This one is the docker-compose.yml:
version: "3"
services:
api:
build: ./
depends_on:
- redis
links:
- redis
command: sh -c "node dist/server.js"
environment:
- REDIS_HOST=redis
- REDIS_PORT=6379
- NODE_ENV=production
ports:
- 3000:3000
redis:
image: "redis:latest"
I have specified the links in docker-compose, but I'm still receiving the same error.
How can I fix the error? Thanks for any help!
You are receiving this error because your application is probably trying to connect to redis before redis is up and accessible. In your depends_on section, you can say that you want to start your application after your redis service is healthy. To do so, you must also configure a healthcheck to tell when redis is really ready to accept connections (redis-cli ping for example).
Here is an example of configuration that works for me:
version: "3"
services:
api:
build: ./
depends_on:
redis:
condition: service_healthy
links:
- redis
environment:
- REDIS_HOST=redis
- REDIS_PORT=6379
- NODE_ENV=production
redis:
image: redis:latest
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 1s
timeout: 2s
retries: 10
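Independently of the Compose-level fix, ioredis itself can keep retrying until Redis accepts connections. A minimal sketch, assuming the same REDIS_HOST/REDIS_PORT variables as in the question (the backoff values are arbitrary):

import Redis from "ioredis";

// connect by host/port and retry with a capped backoff instead of
// crashing if Redis is not reachable yet
const redisCli = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT),
  retryStrategy: (times) => Math.min(times * 200, 2000),
});

export default redisCli;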
I was able to connect to Redis hosted in Docker with a config like this:

ConfigurationOptions co = new ConfigurationOptions()
{
    SyncTimeout = 500000,
    EndPoints =
    {
        { "127.0.0.1", 49155 }
    },
    AbortOnConnectFail = false, // this prevents that error
    Password = "redispw"
};

RedisConnectorHelper.lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect(co);
});

where 49155 is the Docker-mapped host port.
The following is what I have specified:
docker-compose.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: "yarn start"
    ports:
      - "8080:8080"
    volumes:
      - ./app:/app
Dockerfile
FROM node:12.16.3
WORKDIR /app
COPY app/package.json .
COPY app/yarn.lock .
RUN yarn install
COPY . /app .
EXPOSE 8080
CMD yarn start
azure-pipelines-ci.yml
variables:
  YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn
steps:
  - task: Cache@2
    inputs:
      key: 'yarn | "$(Agent.OS)" | yarn.lock'
      restoreKeys: |
        yarn | "$(Agent.OS)"
        yarn
      path: $(YARN_CACHE_FOLDER)
  - script: |
      docker-compose up -d
The cache task itself restores successfully, but the Docker build speed remains the same.
How can I make it work?
Steps I followed to build using docker-compose:
I set up Python Robot Framework with a Flask-based application, then created a Dockerfile.
Dockerfile
FROM alpine:latest
COPY . /app
WORKDIR /app
RUN ls -la /
RUN apk add --no-cache sqlite py3-pip
RUN pip3 install -r requirements.txt
ENV FLASK_PORT 8181
ENV FLASK_APP demo_app
CMD ["sh", "run.sh"]
COPY testing/ui/config/ /app/tests/config/
COPY testing/ui/pages/ /app/tests/pages/
COPY testing/ui/steps/ /app/tests/steps/
COPY testing/ui/test_data/ /app/tests/test_data/
COPY testing/ui/tests/ /app/tests/tests/
COPY testing/ui/test_suites/ /app/tests/test_suites/
RUN ls -la /
WORKDIR /app/tests/test_suites/
CMD ["sh","run_ui_negative_tests.sh"]
I created a docker-compose file:
version: '3'
services:
  flask:
    hostname: demoapp
    image: demoapp:0.0.1
    build:
      context: .
      dockerfile: ./Dockerfile
    links:
      - chrome
    tty: true
  chrome:
    image: selenium/node-chrome:4.0.0-alpha-7-prerelease-20201009
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
    ports:
      - "5900:5900"
  selenium-hub:
    image: selenium/hub:4.0.0-alpha-7-prerelease-20201009
    container_name: selenium-hub
    ports:
      - "4442:4442"
Error I got
WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
Try adding the path where your ChromeDriver executable is stored:
driver = webdriver.Chrome(executable_path=r'your_path\chromedriver.exe')
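If the tests are meant to run against the selenium/node-chrome container from the compose file rather than a locally installed browser, another option is a Remote WebDriver session pointed at the hub. A sketch, assuming the hub is reachable from the test container at the default port 4444 on the selenium-hub service:

from selenium import webdriver

# drive the browser running in the Selenium grid instead of a local chromedriver
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444/wd/hub",
    options=options,
)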
When I use docker build I receive the error shown in the attached screenshot (error image).
I changed the relative path in the Dockerfile to an absolute one, replacing --from=build-env with bin/Release/netcoreapp3.1/publish/, but when I use docker-compose the error shows up again.
Dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "SmartSchool.WebAPI.dll"]
docker-compose
version: "3.8"
volumes:
SmartSchoolDb:
networks:
mysqlNET:
smartschoolNET:
services:
mysql:
image: "mysql:5.7"
container_name: mysql
ports:
- "3306:3306"
volumes:
- SmartSchoolDb:/var/lib/mysql
networks:
- mysqlNET
environment:
- MYSQL_USER=root
- MYSQL_PASSWORD=test
- MYSQL_ROOT_PASSWORD=test
- MYSQL_ROOT_HOST=%
- bind-address:0.0.0.0
smartschool:
build:
context: .
dockerfile: Dockerfile
container_name: smart
networks:
- mysqlNET
- smartschoolNET
ports:
- 5000:80
environment:
- DBHOST=mysql
depends_on:
- mysql
I added a .dockerignore generated with the VS Code command Ctrl + Shift + P → "Docker: Add Docker Files to Workspace".
I used the .dockerignore below:
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/azds.yaml
**/bin
**/charts
**/docker-compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
README.md
bin/
obj/
out/
TestResults/