Problem connecting between containers in Pod - docker

I have a pod with 3 containers in it: client, server, mongodb (MERN).
The pod has a port mapped to the host, and the client listens on it -> 8184:3000.
The website comes up and is reachable. The server log says that it has connected to the mongodb and is listening on port 3001, as I have assigned.
It seems that the client cannot connect to the server side and therefore cannot check the credentials for login, which leads to "wrong password or user" every time.
The whole program works locally on my Windows machine.
Am I missing some part in Docker or in creating the pod? As far as I understood, the containers in a pod should communicate as if they were running on a local network.
This is the .gitlab-ci.yml:
stages:
  - build

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
  TAG_NAME_Client: gitlab.comp.com/sdx-licence-manager:$CI_COMMIT_REF_NAME-client
  TAG_NAME_Server: gitlab.comp.com/semdatex/sdx-licence-manager:$CI_COMMIT_REF_NAME-server

cache:
  paths:
    - client/node_modules/
    - server/node_modules/

build_pod:
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman pod rm -f -a
    - podman pod create --name lm-pod-$CI_COMMIT_SHORT_SHA -p 8184:3000

build_db:
  image: mongo:4.4
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA -v ~/lmdb_volume:/data/db:z --name mongo mongo

build_server:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd server
    - podman build -t $TAG_NAME_Server .
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Server

build_client:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd client
    - podman build -t $TAG_NAME_Client .
    - podman run -d --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Client
Dockerfile (server):
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 3001
CMD [ "npm", "run", "start" ]
Dockerfile (client):
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN npm install -g npm@7.21.0
COPY . ./
EXPOSE 3000
# start app
CMD [ "npm", "run", "start" ]
Snippet from index.js on the client side, trying to reach the server side to check login credentials:
import { useState } from 'react';

function Login(props) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');

  async function loginUser(credentials) {
    return fetch('http://127.0.0.1:3001/login', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(credentials),
    }).then((data) => data.json());
  }
}

Actually it has nothing to do with podman. Sorry about that. I added a proxy to my package.json and it redirected the requests correctly:
"proxy": "http://localhost:3001"
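With the proxy entry in place, the client can drop the hard-coded host and use a relative path, which the dev server forwards to port 3001. A minimal sketch of that change (the `buildLoginRequest` helper is hypothetical, not from the original code):

```javascript
// Hypothetical helper: build the login request against a relative URL so the
// dev server's "proxy" entry can forward it to http://localhost:3001.
function buildLoginRequest(credentials) {
  return {
    url: '/login', // relative path instead of http://127.0.0.1:3001/login
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(credentials),
    },
  };
}

// Usage: const req = buildLoginRequest({ email, password });
//        fetch(req.url, req.options).then((res) => res.json());
```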

Related

Cannot access localhost with dockerized app

macOS Monterey
I have a simple Dockerfile
FROM denoland/deno:1.29.1
EXPOSE 4200
WORKDIR /app
ADD . /app
RUN deno cache ./src/index.ts
CMD ["run", "--allow-net", "--allow-read", "./src/index.ts"]
And the most simple deno code
const handler = (request: Request) => {
  return new Response("heyo", { status: 200 });
};

DenoServer.serve(handler, { hostname: HOST, port: PORT });
Running the application locally works fine and I can reach localhost:4200. However, when I run the app with Docker, the request fails.
I use
docker run --publish 4200:4200 frontend
$ curl http://localhost:4200
curl: (52) Empty reply from server
I can see the container running, and hitting the container's {{ .NetworkSettings.IPAddress }} doesn't work either.
It appears that the necessary environment variables were not included in the docker run command. To specify the host and port, you can use the -e option in the docker command.
docker build -t deno-sample .
docker run -e HOST=localhost -e PORT=4200 -p 4200:4200 deno-sample
For more information, please refer to the Docker documentation at the following link: https://docs.docker.com/engine/reference/commandline/run/
FROM denoland/deno:1.29.1
EXPOSE 4200
WORKDIR /app
COPY . /app
# RUN deno cache ./src/index.ts
RUN ls
CMD ["deno", "run", "--allow-net", "--allow-read", "app.ts"]
Here is my app.ts, in the same path as the Dockerfile:
import { serve } from "https://deno.land/std@0.170.0/http/server.ts";

function requestHandler() {
  console.log(">>>>>");
  return new Response("Hey, I'm a server");
}

serve(requestHandler, { hostname: '0.0.0.0', port: 4200 });
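The common thread in both answers: a server bound to localhost inside a container is not reachable through a published port; binding 0.0.0.0 (as the second answer does) is the usual fix. A sketch of the env-var defaulting logic (the `resolveBind` helper is mine, not from the Deno code):

```javascript
// Hypothetical helper: pick the bind address from environment variables,
// defaulting to 0.0.0.0 so the published port (-p 4200:4200) can reach it.
function resolveBind(env) {
  return {
    hostname: env.HOST || '0.0.0.0', // 127.0.0.1 only accepts in-container traffic
    port: Number(env.PORT || 4200),
  };
}

// Usage: const { hostname, port } = resolveBind(process.env);
```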

docker: No subsystem for mount

I'm running a CI pipeline in gitlab-runner, which is running on a Linux machine.
In the pipeline I'm trying to build an image.
The error I'm getting is:
ci runtime error: rootfs_linux.go:53: mounting "/sys/fs/cgroup" to rootfs "/var/lib/docker/overlay/d6b9ba61e6eed4bcd9c23722ad6f6a9fb05b6c536dbab31f47b835c25c0a343b/merged" caused "no subsystem for mount"
The Dockerfile contains :
# Set the base image to node:16-buster
FROM node:16.7.0-buster
# Specify where our app will live in the container
WORKDIR /app
# Copy the React App to the container
COPY . /app/
# Prepare the container for building React
RUN npm install
RUN npm install react-scripts#4.0.3 -g
# We want the production version
RUN npm run build
# Prepare nginx
FROM nginx:stable
COPY --from=0 /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
# Fire up nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The .gitlab-ci.yml contains:
image: gitlab/dind

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay

stages:
  - build
  - docker-build

cache:
  paths:
    - node_modules

yarn-build:
  stage: build
  image: node:latest
  script:
    - unset CI
    - yarn install
    - yarn build
  artifacts:
    expire_in: 1 hour # the artifacts expire from the artifacts folder after 1 hour
    paths:
      - build

docker-build:
  stage: docker-build
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker build -t $CI_REGISTRY_USER/$app_name .
    - docker push $CI_REGISTRY_USER/$app_name
I can't figure out how to resolve this; I tried upgrading Docker on my Linux machine.
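No answer is recorded for this question here, but this class of error is commonly associated with the deprecated `gitlab/dind` image and the legacy `overlay` storage driver. A sketch of the conventional docker-in-docker job per GitLab's documented pattern (image tags and the TLS path are illustrative, not a confirmed fix for this exact error):

```yaml
docker-build:
  stage: docker-build
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker build -t $CI_REGISTRY_USER/$app_name .
    - docker push $CI_REGISTRY_USER/$app_name
```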

pm2-runtime ecosystem npm script fail

(Google Translate used)
I don't know why "ecosystem.config.js" is still included in the npm args...
In the ecosystem.config.js file, args only contains run and start, but when the Docker container runs, it looks like it executes npm ecosystem.config.js run start.
Please tell me why.
// dockerfile
FROM node:lts-alpine
RUN npm install pm2 -g
COPY . /usr/src/nuxt/
WORKDIR /usr/src/nuxt/
RUN npm install
EXPOSE 8080
RUN npm run build
# start the app
CMD ["pm2-runtime", "ecosystem.config.js"]
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'webapp',
      exec_mode: 'cluster',
      instances: 2,
      script: 'npm',
      args: ['run', 'start'],
      env: {
        HOST: '0.0.0.0',
        PORT: 8080
      },
      autorestart: true,
      max_memory_restart: '1G'
    }
  ]
}
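For reference, pm2 essentially spawns `script` with the `args` array appended, so this config should yield `npm run start`. A tiny illustration of that composition (a simplified model, not pm2's actual source):

```javascript
// Simplified model of how pm2 assembles the child command from an
// ecosystem entry: the script followed by its args array.
function composeCommand(app) {
  return [app.script, ...(app.args || [])].join(' ');
}

// composeCommand({ script: 'npm', args: ['run', 'start'] }) -> "npm run start"
```

If the config file name itself shows up among the npm args, the arguments are being concatenated somewhere outside this model (e.g. by the container's command line), which is the behavior the question describes.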
I struggled with ecosystem.config.js and ended up using the YAML format instead: create process.yaml and enter your config:
apps:
  - script: /app/index.js
    name: 'app'
    instances: 2
    error_file: ./errors.log
    exec_mode: cluster
    env:
      NODE_ENV: production
      PORT: 12345
Then in the Dockerfile:
COPY ./dist/index.js /app/
COPY process.yaml /app/
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
# Expose the listening port of your app
EXPOSE 12345
CMD [ "pm2-runtime", "/app/process.yaml"]
Just change the directories and files to the way you want things set up.

docker-compose port expose not working on mac 10.13.6 (High Sierra)

I have a docker container that hosts a Node application. I am trying to connect to the application at https://localhost:8000, but the connection is refused. I used the docker-compose up -d command to run it.
This is the response I get when I run docker-compose ps:
Name                          Command         State   Ports
-----------------------------------------------------------------------------
freeswitch-console_console_1  nodemon start   Up      0.0.0.0:8000->8000/tcp
My docker-compose file is
version: '3'
services:
  console:
    build: .
    ports:
      - "8000:8000"
    image: console:cp
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
And my Dockerfile is
FROM node:10
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
RUN npm install -g nodemon
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8000
CMD [ "nodemon", "start" ]
I tried the following command and it didn't work either:
docker run -p 8000:8000 console:cp nodemon index.js
Some more details
docker-compose version 1.22.0, build f46880f
Docker version 18.06.0-ce, build 0ffa825
MacOSX 10.13.6 (High Sierra)
Any ideas? Thanks in advance.
In the end I removed the Docker daemon configuration, which was set to the following:
{
  "debug" : true,
  "userland-proxy" : false,
  "experimental" : true
}
It started working. I think it has to do with userland-proxy.
Thanks everybody for help.

Docker images using CircleCI

I am working on integrating a CI/CD pipeline using Docker. How can I use a docker-compose file to build and create a container?
I have tried putting in a Dockerfile and a docker-compose.yml, but neither of them works.
Below is the docker-compose file:
FROM ruby:2.2
EXPOSE 8000
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN bundle install
CMD ["ruby","app.rb"]
RUN docker build -t itsruby:itsruby .
RUN docker run -d itsruby:itsruby
Below is docker-compose.yml
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      - run: CMD ["ruby","app.rb"]
      - run: |
          docker build -t itsruby:itsruby .
          docker run -d itsruby:itsruby
  test:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      - run: CMD ["ruby","app.rb"]
The build keeps failing in CircleCI.
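For reference, a CircleCI config.yml that builds a Docker image needs `setup_remote_docker` before any `docker` commands, and `run` steps take plain shell, not Dockerfile `CMD` syntax. A sketch under those assumptions (not a verified fix for this exact pipeline):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      - setup_remote_docker
      - run: |
          docker build -t itsruby:itsruby .
          docker run -d itsruby:itsruby
```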
