Cannot connect to Redis in Docker

I'm trying to connect to Redis from my backend, but I keep getting the following error:
...
api-1 | [ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND undefined
api-1 | at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
api-1 | [ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND undefined
api-1 | at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
...
Here is how I configure my Redis client:
import Redis from "ioredis";

export const redisConfig = () => {
  if (process.env.NODE_ENV === "production") {
    return `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`;
  }
  return "";
};

const redisCli = new Redis(redisConfig());
export default redisCli;
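A side note on the message itself: getaddrinfo ENOTFOUND undefined means the hostname ioredis tried to resolve was the literal string "undefined", which is what the template above produces when REDIS_HOST is not set at runtime. A minimal sketch of a stricter variant, assuming the same environment variables (the thrown error message is only illustrative):
import Redis from "ioredis";

export const redisConfig = () => {
  if (process.env.NODE_ENV === "production") {
    const { REDIS_HOST, REDIS_PORT } = process.env;
    // Fail fast instead of interpolating the literal text "undefined" into the URL.
    if (!REDIS_HOST || !REDIS_PORT) {
      throw new Error("REDIS_HOST and REDIS_PORT must be set in production");
    }
    return `redis://${REDIS_HOST}:${REDIS_PORT}`;
  }
  return "";
};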
And this is my Dockerfile:
# ---- Dependencies ----
FROM node:16-alpine AS base
# minimize image size
RUN apk add --no-cache libc6-compat
RUN npm install -g npm@latest
WORKDIR /app
COPY ./package*.json ./
RUN npm ci
# ---- Builder ----
FROM node:16-alpine AS builder
RUN npm install -g npm@latest
WORKDIR /app
COPY --from=base /app/node_modules ./node_modules
COPY ./src ./src
COPY package*.json tsconfig.json webpack.config.ts ./
RUN npm run build
# ---- Release ----
FROM node:16 AS release
WORKDIR /app
# COPY ./prisma ./prisma
# COPY ./.env ./
# COPY ./deployment ./deployment
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
# RUN npx prisma generate
RUN npm install pm2 -g
EXPOSE 3000
This is my docker-compose.yml:
version: "3"
services:
  api:
    build: ./
    depends_on:
      - redis
    links:
      - redis
    command: sh -c "node dist/server.js"
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - NODE_ENV=production
    ports:
      - 3000:3000
  redis:
    image: "redis:latest"
I have specified the links in docker-compose, but I'm still receiving the same error.
How can I fix this? Thanks for any help!

You are receiving this error because your application is probably trying to connect to Redis before Redis is up and accepting connections. In the depends_on section you can declare that your application should only start once the redis service is healthy. To do that, you also need to configure a healthcheck that tells Docker when Redis is really ready to accept connections (redis-cli ping, for example).
Here is an example of a configuration that works for me:
version: "3"
services:
  api:
    build: ./
    depends_on:
      redis:
        condition: service_healthy
    links:
      - redis
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - NODE_ENV=production
  redis:
    image: redis:latest
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 1s
      timeout: 2s
      retries: 10
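Independently of the compose-level healthcheck, the client itself can be made tolerant of a Redis server that is still starting up. A minimal sketch using ioredis's retryStrategy option, assuming the same REDIS_HOST/REDIS_PORT environment variables as above:
import Redis from "ioredis";

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT) || 6379,
  // Keep retrying with a capped backoff (200 ms, 400 ms, ... up to 2 s)
  // instead of treating early connection failures as fatal.
  retryStrategy: (times) => Math.min(times * 200, 2000),
});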

I was able to connect to the Redis instance hosted in Docker with a configuration like this (StackExchange.Redis):
ConfigurationOptions co = new ConfigurationOptions()
{
    SyncTimeout = 500000,
    EndPoints =
    {
        { "127.0.0.1", 49155 }
    },
    AbortOnConnectFail = false, // this prevents that error
    Password = "redispw"
};

RedisConnectorHelper.lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect(co);
});
where 49155 is the host port that Docker maps to the Redis container.

Related

Docker subdomain (Error: P1001: Can't reach database server at `db-subdomen`:`5436`)

I have a main domain and 3 subdomains already running on the server, but when I decided to add one more subdomain I got an error (Error: P1001: Can't reach database server at db-subdomen:5436 or 5432). The project is Next.js + Prisma + Docker.
As I understand it, the problem is either in the container network or in the .env file.
The other 3 subdomains have the same kind of Dockerfile and .env. The docker-compose file, which is shared by all projects, is also the same, but there is still an error. Maybe there is a typo, I don't know anymore, since I did everything the same as always.
My docker-compose (it is shared by all projects; this example shows the main domain and one subdomain):
# subdomain
app-subdomen:
  container_name: app-subdomen
  image: subdomen-image
  build:
    context: subdomen
    dockerfile: Dockerfile
  restart: always
  environment:
    NODE_ENV: production
  networks:
    - subdomen-net
  env_file: subdomen/.env
  ports:
    - 7000:3000
  depends_on:
    - "db-subdomen"
  command: sh -c "sleep 13 && npx prisma migrate deploy && npm start"
db-subdomen:
  container_name: db-subdomen
  env_file:
    - subdomen/.env
  image: postgres:latest
  restart: always
  volumes:
    - db-subdomen-data:/var/lib/postgresql/data
  networks:
    - subdomen-net
# main domain
app-main:
  image: main-image
  build:
    context: main
    dockerfile: Dockerfile
  restart: always
  environment:
    NODE_ENV: production
  env_file: main/.env
  ports:
    - 3000:3000
  depends_on:
    - "db-main"
  command: sh -c "sleep 3 && npx prisma migrate deploy && npm start"
  networks:
    - main-net
db-main:
  env_file:
    - main/.env
  image: postgres:latest
  restart: always
  volumes:
    - db-main-data:/var/lib/postgresql/data
  networks:
    - main-net
volumes:
  db-main-data: {}
  db-subdomen-data: {}
networks:
  main-net:
    name: main-net
  subdomen-net:
    name: subdomen-net
The subdomain's .env:
POSTGRES_USER=subdomenUser
POSTGRES_PASSWORD=subdomen
POSTGRES_DB=subdomen-db
SECRET=88xU_X8yfsfdsfsdfsdfsdfsdfdsdc
HOST=https://subdomen.domen.ru
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db-arkont:5436/${POSTGRES_DB}?schema=public
Subdomain Dockerfile (the other 3 projects/subdomains have the same one and no problem with it):
FROM node:lts-alpine AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
# Install app dependencies
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:lts-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/prisma ./prisma
ENV NODE_ENV=production
EXPOSE 3000

Docker Compose doesn't work, I get Error: Cannot find module '@vitejs/plugin-react'

To whom it may concern,
I'm trying to run my app with a Docker Compose file and get an error.
This is my docker-compose file:
version: '3.8'
services:
  backend:
    container_name: backend
    build:
      context: .
      dockerfile: ./docker/Dockerfile.backend-dev
    restart: always
    ports:
      - '5000:5000'
    env_file:
      - ./apps/backend/envs/.env.development
    volumes:
      - type: bind
        source: ./apps/backend/src
        target: /app/apps/backend/src
      - /app/apps/backend/node_modules
  frontend:
    container_name: frontend
    build:
      context: .
      dockerfile: ./docker/Dockerfile.frontend-dev
    restart: always
    ports:
      - '3000:3000'
    volumes:
      - type: bind
        source: ./apps/frontend/src
        target: /app/apps/frontend/src
      - /app/apps/frontend/node_modules
And this is the Dockerfile of the frontend:
FROM node:18.14.0
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
WORKDIR /app
COPY ./package.json ./pnpm-workspace.yaml ./.npmrc ./
COPY ./apps/frontend/package.json ./apps/frontend/
RUN pnpm i -w
RUN pnpm --filter frontend i
COPY ./tsconfig.base.json ./
COPY ./apps/frontend/ ./apps/frontend/
CMD ["pnpm", "-F", "frontend", "start" ]
And this is the error:
apps/frontend start$ vite -c ./vite.config.ts
apps/frontend start: failed to load config from /app/apps/frontend/vite.config.ts
apps/frontend start: error when starting dev server:
apps/frontend start: Error: Cannot find module '@vitejs/plugin-react'
Here is my pnpm-workspace.yaml file:
packages:
  - 'apps/*'
The error occurs when running docker-compose up -d.
Please help.

AdonisJS 5 Dockerfile - Cannot find ace

I have a docker setup with AdonisJS 5. I'm currently trying to get it started (and it builds just fine). But as discussed a bit further down, the ace command cannot be found. ace is the CLI package for AdonisJS (think ng for Angular)
This is my Dockerfile:
ARG NODE_IMAGE=node:16.13.1-alpine
FROM $NODE_IMAGE AS base
RUN apk --no-cache add dumb-init
RUN mkdir -p /home/node/app && chown node:node /home/node/app
WORKDIR /home/node/app
USER node
RUN mkdir tmp
FROM base AS dependencies
COPY --chown=node:node ./package*.json ./
RUN npm ci
RUN npm i @adonisjs/cli
COPY --chown=node:node . .
FROM dependencies AS build
RUN node ace build --production
FROM base AS production
ENV NODE_ENV=production
ENV PORT=$PORT
ENV HOST=0.0.0.0
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build .
EXPOSE $PORT
CMD [ "dumb-init", "node", "service/build/server.js" ]
And this is my docker-compose.yml:
version: '3.9'
services:
  postgres:
    container_name: postgres
    image: postgres
    restart: always
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - POSTGRES_USER=${POSTGRES_USER:-user}
    networks:
      - family-service-network
    volumes:
      - fn-db_volume:/var/lib/postgresql/data
  adminer:
    container_name: adminer
    image: adminer
    restart: always
    networks:
      - family-service-network
    ports:
      - 8080:8080
  minio:
    container_name: storage
    image: 'bitnami/minio:latest'
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      - MINIO_ROOT_USER=user
      - MINIO_ROOT_PASSWORD=password
      - MINIO_SERVER_ACCESS_KEY=access-key
      - MINIO_SERVER_SECRET_KEY=secret-key
    networks:
      - family-service-network
    volumes:
      - fn-s3_volume:/var/lib/postgresql/data
  fn_service:
    container_name: fn_service
    restart: always
    build:
      context: ./service
      target: dependencies
    ports:
      - ${PORT:-3333}:${PORT:-3333}
      - 9229:9229
    networks:
      - family-service-network
    env_file:
      - ./service/.env
    volumes:
      - ./:/home/node/app
      - /home/node/app/node_modules
    depends_on:
      - postgres
    command: dumb-init node ace serve --watch --node-args="--inspect=0.0.0.0"
volumes:
  fn-db_volume:
  fn-s3_volume:
networks:
  family-service-network:
When I run this with docker-compose up everything works, except for the fn_service.
I get the error:
Error: Cannot find module '/home/node/app/ace'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
at Function.Module._load (node:internal/modules/cjs/loader:778:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
node:internal/modules/cjs/loader:936
throw err;
^
I followed this tutorial, and I can't seem to find anything by googling. I'm sure it's something minuscule.
Any help would be appreciated.

Docker varnish error 503 backend fetch failed

Here is default.vcl
vcl 4.1;

backend default {
  .host = "127.0.0.1";
  .port = "8080";
}
My docker-compose.yml:
version: "3"
services:
  varnish:
    image: varnish:stable
    container_name: varnish
    volumes:
      - "./default.vcl:/etc/varnish/default.vcl"
    ports:
      - "80:80"
    tmpfs:
      - /var/lib/varnish:exec
    environment:
      - VARNISH_SIZE=2G
    depends_on:
      - "node"
  node:
    build: ./
    container_name: node
    ports:
      - "8080:8000"
My Dockerfile
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
When I run the application through http://localhost:8080/ it works, but when I access it directly on port 80 like http://localhost/ it throws
Error 503 Backend fetch failed
Am I missing something in the port configuration? And how can I check the Varnish log through Docker?

flask_1 | WebDriverException: Message: 'chromedriver' executable needs to be in PATH

Steps I followed, building with docker-compose:
I set up Python Robot Framework with a Flask-based application and created a Dockerfile.
Dockerfile:
FROM alpine:latest
COPY . /app
WORKDIR /app
RUN ls -la /
RUN apk add --no-cache sqlite py3-pip
RUN pip3 install -r requirements.txt
ENV FLASK_PORT 8181
ENV FLASK_APP demo_app
CMD ["sh", "run.sh"]
COPY testing/ui/config/ /app/tests/config/
COPY testing/ui/pages/ /app/tests/pages/
COPY testing/ui/steps/ /app/tests/steps/
COPY testing/ui/test_data/ /app/tests/test_data/
COPY testing/ui/tests/ /app/tests/tests/
COPY testing/ui/test_suites/ /app/tests/test_suites/
RUN ls -la /
WORKDIR /app/tests/test_suites/
CMD ["sh","run_ui_negative_tests.sh"]
I created a docker-compose file:
version: '3'
services:
  flask:
    hostname: demoapp
    image: demoapp:0.0.1
    build:
      context: .
      dockerfile: ./Dockerfile
    links:
      - chrome
    tty: true
  chrome:
    image: selenium/node-chrome:4.0.0-alpha-7-prerelease-20201009
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
    ports:
      - "5900:5900"
  selenium-hub:
    image: selenium/hub:4.0.0-alpha-7-prerelease-20201009
    container_name: selenium-hub
    ports:
      - "4442:4442"
The error I got:
WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
Try adding the path where your chromedriver executable is stored:
driver = webdriver.Chrome(executable_path=r'your_path\chromedriver.exe')
