Unable to connect to localhost port running docker image - docker

I'm trying to run a basic HelloWorld express app on my localhost using docker.
Docker version: 19.03.13
Project structure:
my-project
  src
    index.js
  Dockerfile
  package.json
  package-lock.json
Dockerfile:
# Use small base image with nodejs 10
FROM node:10.13-alpine
# set current work dir
WORKDIR /src
# Copy package.json, package-lock.json into current dir
COPY ["package.json", "package-lock.json*", "./"]
# install dependencies
RUN npm install --production
# copy sources
COPY ./src .
# open default port
EXPOSE 3000
# Run app
CMD ["node", "index.js"]
package.json
{
  "name": "knative-serving-helloworld",
  "version": "1.0.0",
  "description": "Simple hello world sample in Node",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "Apache-2.0",
  "dependencies": {
    "express": "^4.16.4"
  }
}
index.js
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  console.log('Hello world received a request.');
  res.send(`Hello world!\n`);
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log('Hello world listening on port', port);
});
Here are the commands I'm running:
>> docker build --tag hello-world:1.0 . // BUILD IMAGE AND GET ID
>> docker run IMAGE_ID // RUN CONTAINER WITH IMAGE_ID
The image seems to build just fine, and when I run it the container starts and logs that it is listening on port 3000. But when I hit localhost:3000 in the browser, it cannot connect.
I'm very new to Docker. What am I doing wrong?

You need to publish your port 3000.
docker run -p 3000:3000 IMAGE_ID
Just exposing the port is not enough; EXPOSE is only metadata, and the container port still needs to be published to a port on the host.
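For example, using the tag from the build step instead of the raw image ID (the curl check is just an illustration):
docker build --tag hello-world:1.0 .
docker run -p 3000:3000 hello-world:1.0
# from another terminal on the host
curl http://localhost:3000
# -> Hello world!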

Use host 0.0.0.0
app.listen(port, '0.0.0.0', () => {
  console.log('Hello world listening on port', port);
});
Also, you need to publish port 3000:
docker run -p 3000:3000 IMAGE_ID
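Note: Node's app.listen(port, callback) with no host argument already binds to all interfaces, so the explicit '0.0.0.0' is usually implicit; publishing the port with -p is the essential part here.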

Related

Docker, can't make axios requests from a server within a docker container

I have the following simple Express server and the following Dockerfile.
axiostest.mjs
import axios from "axios"
import express from "express"

const app = express();

app.get("/", (request, response) => {
  axios.get(`http://localhost:8888/admin_issues`).then(res => {
    console.log(res.data);
    response.send(res.data)
  }).catch(err => {
    console.log("ERRRROR")
    console.log(err);
    response.send(err)
  })
});

app.listen(1112, () => {
  console.log("Listen on the port 1112...");
});
Dockerfile
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 1112
CMD [ "node", "axiostest.mjs" ]
If I run the server normally with
node axiostest.mjs
and then make a Postman call to localhost:1112, it works just fine.
But if I build the Docker image
docker build . -t me/express-test
and then run it
docker run -p 49160:1112 -d me/express-test
a Postman call to localhost:49160 returns
"message": "connect ECONNREFUSED 127.0.0.1:8888"
because axios is failing to connect to 127.0.0.1:8888.
How can I fix this?
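For context: inside the container, 127.0.0.1 refers to the container itself, not the machine it runs on, so the axios call can never reach a service on the host. A minimal sketch of one common workaround, assuming the admin service really does run on the Docker host (host.docker.internal resolves out of the box on Docker Desktop and needs --add-host on Linux; ADMIN_BASE_URL is a made-up variable name):
import axios from "axios"
import express from "express"

const app = express();

// hypothetical env var; falls back to the Docker host alias
const BASE_URL = process.env.ADMIN_BASE_URL || "http://host.docker.internal:8888";

app.get("/", (request, response) => {
  axios.get(`${BASE_URL}/admin_issues`)
    .then(res => response.send(res.data))
    .catch(err => response.status(502).send(err.message));
});

app.listen(1112, () => console.log("Listen on the port 1112..."));
Then run it with the host alias wired up (only needed on Linux):
docker run -p 49160:1112 --add-host=host.docker.internal:host-gateway -d me/express-test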

Problem connecting between containers in a Pod

I have a pod with 3 containers in it: client, server, mongodb (MERN).
The pod has a port mapped to the host, and the client listens on it -> 8184:3000.
The website comes up and is reachable. The server logs say that it has connected to the mongodb and is listening on port 3001, as I have assigned.
It seems that the client cannot connect to the server side and therefore cannot check the credentials for login, which leads to "wrong password or user" every time.
The whole program works locally on my Windows machine.
Am I missing some part in Docker or in creating the pod? As far as I understood, the containers in a pod should communicate as if they were running on a local network.
This is the .gitlab-ci.yml:
stages:
  - build

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
  TAG_NAME_Client: gitlab.comp.com/sdx-licence-manager:$CI_COMMIT_REF_NAME-client
  TAG_NAME_Server: gitlab.comp.com/semdatex/sdx-licence-manager:$CI_COMMIT_REF_NAME-server

cache:
  paths:
    - client/node_modules/
    - server/node_modules/

build_pod:
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman pod rm -f -a
    - podman pod create --name lm-pod-$CI_COMMIT_SHORT_SHA -p 8184:3000

build_db:
  image: mongo:4.4
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA -v ~/lmdb_volume:/data/db:z --name mongo -d mongo

build_server:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd server
    - podman build -t $TAG_NAME_Server .
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Server

build_client:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd client
    - podman build -t $TAG_NAME_Client .
    - podman run -d --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Client
Dockerfile (server):
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 3001
CMD [ "npm", "run", "start" ]
Dockerfile (client):
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN npm install -g npm@7.21.0
COPY . ./
EXPOSE 3000
# start app
CMD [ "npm", "run", "start" ]
Snippet from index.js on the client side, trying to reach the server side to check login credentials:
function Login(props) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');

  async function loginUser(credentials) {
    return fetch('http://127.0.0.1:3001/login', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(credentials),
    })
      .then((data) => data.json());
  }
}
Actually it has nothing to do with podman. Sorry about that. I added a proxy to my package.json and it redirected the requests correctly:
"proxy": "http://localhost:3001"

Cannot find module 'apollo-server' when deploying on AWS EC2 using docker-compose

I have set up a GraphQL server and tried to deploy it on EC2 using docker-compose.
There was an error that "nodemon not found in npm". After adding "--global nodemon" to the Dockerfile, I ran docker-compose down, docker container prune, docker image rm [image name], then docker-compose up --build.
Then there is this error: Cannot find module 'apollo-server'.
docker-compose.yml
version: '3.6'
services:
  graphql-server:
    container_name: backend
    build: ./
    volumes:
      - ./:/src/graphqlserver
    command: npm start
    working_dir: /src/graphqlserver
    ports:
      - "4000:4000"
Dockerfile
FROM node:latest
RUN mkdir -p /src/graphqlserver
WORKDIR /src/graphqlserver
COPY . /src/graphqlserver/
RUN npm install --global nodemon
EXPOSE 4000
CMD [ "npm", "start" ]
server.js
const { ApolloServer } = require('apollo-server');
const typeDefs = require('./typeDefs');
const resolvers = require('./resolvers');

const server = new ApolloServer({ typeDefs, resolvers });

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
package.json
"dependencies": {
"apollo-server": "^2.25.2",
"graphql": "^15.5.1"
},
"devDependencies": {
"nodemon": "^2.0.9"
}
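One likely cause, given the files above: the Dockerfile never runs npm install for the app itself (only the global nodemon install), and the bind mount ./:/src/graphqlserver replaces the image's /src/graphqlserver with the EC2 checkout at runtime, so node_modules must exist there. A minimal sketch of a Dockerfile that installs the dependencies in the image (it assumes the bind mount is removed from docker-compose.yml, since the mount would otherwise hide the installed node_modules):
FROM node:latest
WORKDIR /src/graphqlserver
COPY package*.json ./
# install apollo-server, graphql, and (dev) nodemon from package.json
RUN npm install
COPY . .
EXPOSE 4000
CMD [ "npm", "start" ]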

Vue Vite cannot connect to docker container

Hi, I have installed a fresh Vue 3 + TypeScript + Vite app. My problem: after building the image and spinning up the container, I cannot access localhost:3000; the browser just displays
The connection was reset
Here is how I run the container:
docker run --rm -it -v %cd%/:/app/src -p 3000:3000 myvitets
Dockerfile
FROM node:14-buster-slim
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
EXPOSE 3000
CMD [ "npm", "run", "dev"]
I also added a .dockerignore:
node_modules/
.git
.gitignore
Can someone please help me run my app in the container? Thank you in advance.
I had the same problem and the change below worked for me.
In package.json, change the scripts
From
"dev": "vite"
To
"dev": "vite --host 0.0.0.0"
First: in the package.json, add the --host flag
"scripts": {
"dev": "vite --host",
"build": "vite build",
"preview": "vite preview --port 4173",
"test:unit": "vitest --environment jsdom"
},
Second: in the vite.config.js, add the server port:
// https://vitejs.dev/config/
export default defineConfig({
  server: {
    port: 3000
  },
  plugins: [vue()],
  resolve: {
    alias: {
      '@': fileURLToPath(new URL('./src', import.meta.url))
    }
  }
})
The port should be the same as in the Dockerfile, in your case 3000.

pm2-runtime ecosystem npm script fail

(Google Translate used)
I don't know why "ecosystem.config.js" is still included in the npm args...
In the ecosystem.config.js file, args only has run and start, but when the Docker container runs, it looks like it executes npm ecosystem.config.js run start.
Please tell me why.
# dockerfile
FROM node:lts-alpine
RUN npm install pm2 -g
COPY . /usr/src/nuxt/
WORKDIR /usr/src/nuxt/
RUN npm install
EXPOSE 8080
RUN npm run build
# start the app
CMD ["pm2-runtime", "ecosystem.config.js"]
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'webapp',
      exec_mode: 'cluster',
      instances: 2,
      script: 'npm',
      args: ['run', 'start'],
      env: {
        HOST: '0.0.0.0',
        PORT: 8080
      },
      autorestart: true,
      max_memory_restart: '1G'
    }
  ]
}
I struggled with ecosystem.config.js and ended up using the YAML format instead: create process.yaml and enter your config:
apps:
  - script: /app/index.js
    name: 'app'
    instances: 2
    error_file: ./errors.log
    exec_mode: cluster
    env:
      NODE_ENV: production
      PORT: 12345
Then in the Dockerfile:
COPY ./dist/index.js /app/
COPY process.yaml /app/
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
# Expose the listening port of your app
EXPOSE 12345
CMD [ "pm2-runtime", "/app/process.yaml"]
Just change the directories and files to the way you want things set up.
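A minimal build-and-run to go with it (the image tag myapp is illustrative):
docker build -t myapp .
docker run -p 12345:12345 myapp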
