I have the following simple Express server, with the following Dockerfile.
axiostest.mjs
import axios from "axios"
import express from "express"

const app = express();

app.get("/", (request, response) => {
  axios.get(`http://localhost:8888/admin_issues`).then(res => {
    console.log(res.data);
    response.send(res.data)
  }).catch(err => {
    console.log("ERRRROR")
    console.log(err);
    response.send(err)
  })
});

app.listen(1112, () => {
  console.log("Listen on the port 1112...");
});
Dockerfile
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 1112
CMD [ "node", "axiostest.mjs" ]
If I run the server normally with node axiostest.mjs and then make a Postman call to localhost:1112, it works just fine.
But if I build the Docker image
docker build . -t me/express-test
and then run it
docker run -p 49160:1112 -d me/express-test
a Postman call to localhost:49160 fails with
"message": "connect ECONNREFUSED 127.0.0.1:8888"
because axios cannot connect to 127.0.0.1:8888.
How can I fix this?
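Note: inside the container, 127.0.0.1 refers to the container itself, not to the machine running Docker, so the hard-coded localhost URL can never reach a service on the host. A minimal sketch of one common fix, assuming the target service really does run on the Docker host (UPSTREAM_URL is an illustrative name, not from the original code; host.docker.internal resolves out of the box on Docker Desktop, and on Linux needs the --add-host flag, Docker 20.10+):
// axiostest.mjs (sketch): read the upstream base URL from the environment
// so it can be overridden when running inside a container.
const upstream = process.env.UPSTREAM_URL || "http://localhost:8888"; // UPSTREAM_URL is illustrative
axios.get(`${upstream}/admin_issues`)
docker run -p 49160:1112 \
  --add-host=host.docker.internal:host-gateway \
  -e UPSTREAM_URL=http://host.docker.internal:8888 \
  -d me/express-test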
I have Jest automation with Puppeteer that needs to run as a Docker container, but after the build, when I try to run it, I get this error:
Error: Jest: Got error running globalSetup - /usr/src/app/node_modules/jest-environment-puppeteer/setup.js, reason: Failed to launch the browser process! /usr/src/app/node_modules/puppeteer/.local-chromium/linux-884014/chrome-linux/chrome: error while loading shared libraries: libxshmfence.so.1: cannot open shared object file: No such file or directory
I'm quite new to Docker, so I'm not sure what I'm doing wrong here.
jest.config.ts
module.exports = {
  setupFiles: ['dotenv/config'],
  preset: "jest-puppeteer",
  roots: ['<rootDir>/src'],
  transform: {
    '^.+\\.tsx?$': 'ts-jest'
  },
  testRegex: '(/__tests__/.*|(\\.|/)(test|spec))\\.tsx?$',
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
}
jest-puppeteer.config.js
module.exports = {
  launch: {
    headless: true, // specify whether to launch with a UI
    defaultViewport: null,
    args: ['--start-maximized', '--disable-gpu',
      '--disable-dev-shm-usage', '--disable-setuid-sandbox',
      '--no-sandbox'],
  },
  testEnvironmentOptions: { resources: 'usable' },
};
Dockerfile
FROM node:16.14.0
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD npm run test
I installed Chrome separately into the Docker image and tried to run it, but that didn't work.
I used this Docker image and it fixed the issue:
https://hub.docker.com/r/alekzonder/puppeteer/
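An alternative that stays on the official node image: the error only means Chromium's shared libraries are missing inside the container, so installing them with apt-get should work too. A sketch, assuming the Debian-based node:16.14.0 base from the question; libxshmfence1 is the Debian package that provides libxshmfence.so.1, and the other packages are common Chromium runtime dependencies from Puppeteer's troubleshooting guide (the exact set can vary by Chromium version):
FROM node:16.14.0
# Install the shared libraries Chromium needs at runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
      libxshmfence1 libnss3 libatk-bridge2.0-0 libgtk-3-0 \
      libgbm1 libasound2 libxss1 \
    && rm -rf /var/lib/apt/lists/*
# ...rest of the Dockerfile unchanged (WORKDIR, COPY, npm install, USER node, CMD)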
I have a pod with 3 containers in it: client, server, mongodb (MERN).
The pod has a port mapped to the host, and the client listens on it -> 8184:3000.
The website comes up and is reachable. The server logs say that it has connected to the mongodb and is listening on port 3001, as I have assigned.
It seems that the client cannot connect to the server side and therefore cannot check the credentials for login, which leads to "wrong password or user" every time.
The whole program works locally on my Windows machine.
Am I missing some part in Docker or in creating the pod? As far as I understood, the containers in a pod should communicate as if they were running in a local network.
This is the .gitlab-ci.yml:
stages:
  - build

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
  TAG_NAME_Client: gitlab.comp.com/sdx-licence-manager:$CI_COMMIT_REF_NAME-client
  TAG_NAME_Server: gitlab.comp.com/semdatex/sdx-licence-manager:$CI_COMMIT_REF_NAME-server

cache:
  paths:
    - client/node_modules/
    - server/node_modules/

build_pod:
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman pod rm -f -a
    - podman pod create --name lm-pod-$CI_COMMIT_SHORT_SHA -p 8184:3000

build_db:
  image: mongo:4.4
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA -v ~/lmdb_volume:/data/db:z --name mongo -d mongo

build_server:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd server
    - podman build -t $TAG_NAME_Server .
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Server

build_client:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd client
    - podman build -t $TAG_NAME_Client .
    - podman run -d --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Client
Dockerfile (server):
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 3001
CMD [ "npm", "run", "start" ]
Dockerfile (client):
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN npm install -g npm@7.21.0
COPY . ./
EXPOSE 3000
# start app
CMD [ "npm", "run", "start" ]
Snippet from index.js on the client side, trying to reach the server side to check login credentials:
function Login(props) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');

  async function loginUser(credentials) {
    return fetch('http://127.0.0.1:3001/login', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(credentials),
    })
      .then((data) => data.json());
  }
}
Actually it had nothing to do with Podman. Sorry about that. I added a proxy to my package.json and it redirected the requests correctly:
"proxy": "http://localhost:3001"
I was trying to dockerize my existing simple Vue app, following this tutorial from the Vue website: https://v2.vuejs.org/v2/cookbook/dockerize-vuejs-app.html. I successfully created the image and the container. My problem is that when I edit my code, like a "hello world" in App.vue, it will not automatically update; is this what they call hot reload? Or should I migrate to the latest Vue so that it will work?
docker run -it --name=mynicevue -p 8080:8080 mynicevue/app
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
# RUN npm run build
EXPOSE 8080
CMD [ "http-server", "serve" ]
EDIT:
Still no luck. I commented out the npm run build. I also set up vue.config.js and added this code:
module.exports = {
  devServer: {
    watchOptions: {
      ignored: /node_modules/,
      aggregateTimeout: 300,
      poll: 1000,
    },
  }
};
then I run the container like this
docker run -it --name=mynicevue -v %cd%:/app -p 8080:8080 mynicevue/app
When the app launches in the browser, I get this error in the terminal and the browser shows a white screen:
"GET /" Error (404): "Not found"
Can someone tell me what is wrong or missing in my Dockerfile so that I can run my Vue app using Docker?
Thank you in advance.
Okay, I tried your project locally and here's how to do it.
Dockerfile
FROM node:lts-alpine
# bind your app to the gateway IP
ENV HOST=0.0.0.0
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
EXPOSE 8080
ENTRYPOINT [ "npm", "run", "dev" ]
Use this command to run the docker image after you build it:
docker run -v ${PWD}/src:/app/src -p 8080:8080 -d mynicevue/app
Explanation
It seems that Vue is expecting your app to be bound to your gateway IP when it is served from within a container. Hence ENV HOST=0.0.0.0 inside the Dockerfile.
You need to mount your src directory into the running container's /app/src directory so that changes in your local filesystem are directly reflected and visible in the container itself.
The way to watch for file changes in Vue is to run npm run dev, hence ENTRYPOINT [ "npm", "run", "dev" ] in the Dockerfile.
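A note on why only src is mounted: a full-project bind mount like -v %cd%:/app shadows the node_modules directory that npm install created inside the image, which is a common cause of 404s and white screens. If you do want to mount the whole project, the usual workaround (a sketch, reusing the image name from above) is an extra anonymous volume that keeps the image's node_modules visible:
docker run -v ${PWD}:/app -v /app/node_modules -p 8080:8080 -d mynicevue/app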
If you tried the previous answers and it still doesn't work, try adding watch: { usePolling: true } to your vite.config.js file:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  server: {
    host: true,
    port: 4173,
    watch: {
      usePolling: true
    }
  }
})
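Polling makes the watcher stat files on an interval instead of relying on filesystem events, which often do not propagate across bind mounts. For it to matter, the container still has to see your edits, so the source must be bind-mounted when you run it, e.g. (image name borrowed from the question, port matching the config above):
docker run -v ${PWD}/src:/app/src -p 4173:4173 mynicevue/app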
I'm trying to run a basic HelloWorld express app on my localhost using docker.
Docker version: 19.03.13
Project structure:
my-project
  src
    index.js
  Dockerfile
  package.json
  package-lock.json
Dockerfile:
# Use small base image with nodejs 10
FROM node:10.13-alpine
# set current work dir
WORKDIR /src
# Copy package.json and package-lock.json into the current dir
COPY ["package.json", "package-lock.json*", "./"]
# install dependencies
RUN npm install --production
# copy sources
COPY ./src .
# open default port
EXPOSE 3000
# Run app
CMD ["node", "index.js"]
package.json
{
  "name": "knative-serving-helloworld",
  "version": "1.0.0",
  "description": "Simple hello world sample in Node",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "Apache-2.0",
  "dependencies": {
    "express": "^4.16.4"
  }
}
index.js
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  console.log('Hello world received a request.');
  res.send(`Hello world!\n`);
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log('Hello world listening on port', port);
});
Here are the commands I'm running:
>> docker build --tag hello-world:1.0 . // BUILD IMAGE AND GET ID
>> docker run IMAGE_ID // RUN CONTAINER WITH IMAGE_ID
The image seems to build just fine, and the container appears to start and run without errors, but when I hit localhost:3000 the page is unreachable.
I'm very new to Docker. What am I doing wrong?
You need to publish your port 3000.
docker run -p 3000:3000 IMAGE_ID
Just exposing the port is not enough; it needs to be mapped to a port on the host too.
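For context, EXPOSE in a Dockerfile is only metadata; it does not open or map anything by itself. If you do not want to pick the host port manually, -P publishes all exposed ports to random host ports, and docker port shows the mapping:
docker run -d -P hello-world:1.0
docker port <container-id>   # e.g. 3000/tcp -> 0.0.0.0:49153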
Use host 0.0.0.0:
app.listen(port, '0.0.0.0', () => {
  console.log('Hello world listening on port', port);
});
Also, You need to publish port 3000:
docker run -p 3000:3000 IMAGE_ID
I am working on a Sails application. In it I use MySQL and Mongo adapters to connect to different databases. Both DBs are hosted somewhere on the internet. The application works fine in my local environment, but I am facing an issue once I add the project to a Docker container. I am able to generate the Docker image and run the Docker container. When I call simple routes where no DB connection is involved, everything works, but when I call TestController, which returns data from MongoDB, it gives me ReferenceError: Test is not defined. Here Test is a MongoDB entity.
Dockerfile:
FROM node:latest
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "./"]
RUN npm install --verbose --force && mv node_modules ../
COPY . .
EXPOSE 80
CMD npm start
TestController
/**
 * TestController
 * @description :: Server-side actions for handling incoming requests.
 *
 * @help :: See https://sailsjs.com/docs/concepts/actions
 */
module.exports = {
  index: async function(req, res) {
    var data = await Test.find(); // Here I get the error: Test is not defined.
    res.json(data);
  }
};
Routes.js
'GET /test': {controller:'test', action:'index'}
I found the issue: I was moving node_modules to the parent directory, and that was the problem.
The configuration below works for me.
FROM node:latest
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "./"]
RUN npm install --verbose --force
COPY . .
EXPOSE 80
CMD npm start
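A plausible explanation for why the move broke things: plain require() calls would still resolve from a parent-level node_modules, since Node walks up the directory tree, but Sails discovers installable hooks, including sails-hook-orm (the hook that defines model globals like Test), by scanning the app directory's own node_modules. With the modules moved up a level, the ORM hook never loads and the Test global is never created.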