Docker socket.io

I am trying to run the client from Docker so that it connects to my server running outside the containers, but it does not connect. It only works if I run it locally or if I pass the --network host parameter; the latter is not an option, since I have to launch multiple containers.
This is my code:
var client = require('socket.io-client');

var options = {
  secure: true,
  reconnect: true,
  rejectUnauthorized: false,
  forceNew: true
};

var socket = client.connect('wss://192.168.1.15:8443', options);

var channel = process.env.SESSION;
var canal_1 = channel + '-1';
var canal_2 = channel + '-2';

socket.on('connect', function () {});

socket.on(canal_1, function (data) {
  console.log(data);
});

socket.on(canal_2, function (data) {
  console.log(data);
});

socket.on('disconnect', function () {});
And this is my Dockerfile:
FROM node:12.13-alpine
RUN apk update && apk add --update alpine-sdk wget libxtst-dev libpng-dev python2 xorg-server-dev
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install robotjs
USER node
RUN npm install
COPY --chown=node:node . .
ENTRYPOINT [ "node"]
And this is my package.json:
{
  "name": "nodejs-socket",
  "version": "1.0.0",
  "description": "nodejs socket",
  "author": "Ricardo Jimenez Hurtado <jimenezhurtadoricardo@gmail.com>",
  "license": "MIT",
  "main": "server.js",
  "keywords": [
    "nodejs",
    "express",
    "socket"
  ],
  "dependencies": {
    "express": "^4.16.4",
    "fs": "0.0.1-security",
    "https": "^1.0.0",
    "socket.io-client": "^2.3.0"
  }
}
And this is how I run the Docker container:
docker run -ti -e SESSION=1 node_server client_socket.js
Thanks for all
Best regards

You can achieve what you want here in a number of ways.
The best of which would be to also run your server node in a docker container, and use docker networks to enable communication between the server and clients.
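For instance, if the server were containerized too, a user-defined bridge network would let the clients reach it by container name; a sketch (the network, container, and image names here are illustrative, not from the question):

```shell
# Create a user-defined bridge network; containers attached to it
# can resolve each other by container name
docker network create app-net

# Run the (hypothetical) containerized socket.io server on the network
docker run -d --name socket-server --network app-net my_server_image

# Run each client on the same network; the server is now reachable as
# wss://socket-server:8443 instead of a host LAN IP
docker run -ti -e SESSION=1 --network app-net node_server client_socket.js
```

If the server must stay on the host, the client can instead usually reach it via `host.docker.internal` (on Linux this needs `--add-host=host.docker.internal:host-gateway` on recent Docker versions).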

I solved this in two steps. First, I improved the code so that, using socket.io and socket.io-client, the client joins a room. Second, I revised my network configuration so that my server name can be resolved by all the devices on my networks.
Thanks for all.
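A minimal sketch of the room-based approach mentioned above (assuming socket.io v2 on the server to match the client; the event names and room scheme are assumptions, not the asker's actual code):

```javascript
// Illustrative server-side sketch, not the asker's actual server
const io = require('socket.io')(8443);

io.on('connection', (socket) => {
  // The client sends its SESSION value and joins a matching room
  socket.on('join', (session) => {
    socket.join(session);
  });

  // Broadcast only to sockets in that session's room,
  // instead of using per-socket channel names like canal_1/canal_2
  socket.on('message', ({ session, data }) => {
    io.to(session).emit('message', data);
  });
});
```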

Related

How do I get server.js to do the equivalent of "yarn start" in nodejs to docker conversion?

I am trying to get a Node.js app to run in a container. I have followed the tutorial here: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/ and it works for the example. But I do not know how to convert my startup command "yarn start" so that it is triggered via server.js.
All of the examples of dockerizing a Node.js app that I have found online use this server.js approach. In the tutorial linked above, the container ends up running a simple "Hello World!" example app. I have tried putting
CMD [ "yarn", "start" ]
in my Dockerfile but it does not launch the app.
This is my package.json file:
{
  "name": "my-server",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "start": "concurrently \"cd api && yarn startd\" \"cd admin && yarn startd\"",
    "dev": "concurrently \"cd api && yarn dev\" \"cd admin && yarn dev\"",
    "logs": "heroku logs -t",
    "heroku-postbuild": "cd src/client && npm install && npm run build"
  },
  "devDependencies": {
    "concurrently": "^4.1.0",
    "nodemon": "^1.19.4",
    "prettier": "2.7.1",
    "prettier-plugin-apex": "1.10.0"
  }
}
Question is: How do I get server.js to do the equivalent of "yarn start"?
server.js (modified to open two ports):
'use strict';

const express = require('express');

// Constants
const PORT1 = 3000;
const PORT2 = 4000;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World from my-server');
});

// NOTE: app.listen() accepts a single port; as written, PORT2 is
// silently treated as the host argument. Listening on two ports would
// require two separate listen() calls.
app.listen(PORT1, PORT2, HOST, () => {
  console.log(`Running on http://${HOST}:${PORT1}`);
});
Thank you!

Docker image build fails: "protoc-gen-grpc-web: program not found or is not executable"

I inherited a project with several microservices running on Kubernetes. After copying the repo and running the steps that the previous team outlined, I have an issue building one of the images that I need to deploy. The script for the build is:
cd graph_endpoint
cp ../../Protobufs/Graph_Endpoint/graph_endpoint.proto .
protoc -I. graph_endpoint.proto --js_out=import_style=commonjs:.
protoc -I. graph_endpoint.proto --grpc-web_out=import_style=commonjs,mode=grpcwebtext:.
export NODE_OPTIONS=--openssl-legacy-provider
npx webpack ./test.js --mode development
cp ./dist/graph_endpoint.js ../public/graph_endpoint.js
cd ..
docker build . -t $1/canvas-lti-frontend:v2
docker push $1/canvas-lti-frontend:v2
I'm getting an error from line 4:
protoc-gen-grpc-web: program not found or is not executable
--grpc-web_out: protoc-gen-grpc-web: Plugin failed with status code 1.
Any idea how to fix it? I have no prior experience with Docker.
Here's the Dockerfile:
FROM node:16
# Install app dependencies
COPY package*.json /frontend-app/
WORKDIR /frontend-app
RUN npm install
COPY server.js /frontend-app/
# Bundle app source
COPY public /frontend-app/public
COPY routes /frontend-app/routes
COPY controllers /frontend-app/controllers
WORKDIR /frontend-app
EXPOSE 3000
CMD [ "node", "server.js"]
and package.json:
{
  "name": "frontend",
  "version": "1.0.0",
  "description": "The user-facing application for the Canvas LTI Student Climate Dashboard",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@okta/oidc-middleware": "^4.3.0",
    "@okta/okta-signin-widget": "^5.14.0",
    "express": "^4.18.2",
    "express-session": "^1.17.2",
    "vue": "^2.6.14"
  },
  "devDependencies": {
    "nodemon": "^2.0.20",
    "protoc-gen-grpc-web": "^1.4.1"
  }
}
You don't have protoc-gen-grpc-web installed on the machine on which you're running the build script.
You can download the plugins from the grpc-web repo's releases page.
protoc has a plugin mechanism: it looks for its plugins on the PATH and expects the binaries to be named protoc-gen-{foo}.
However, when you reference the plugin from protoc, you use just {foo}, generally suffixed with _out and sometimes _opt, i.e. protoc ... --{foo}_out --{foo}_opt.
The plugin protoc-gen-grpc-web (once installed and accessible on the host's PATH) is thus referenced as protoc ... --grpc-web_out=...
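An installation sketch for a Linux x86-64 host (the version number and install path are illustrative; pick the current release asset from the grpc-web releases page):

```shell
# Download the protoc-gen-grpc-web plugin binary (example version)
curl -sSL -o protoc-gen-grpc-web \
  https://github.com/grpc/grpc-web/releases/download/1.4.2/protoc-gen-grpc-web-1.4.2-linux-x86_64

# Make it executable and put it on the PATH so protoc can find it
chmod +x protoc-gen-grpc-web
sudo mv protoc-gen-grpc-web /usr/local/bin/

# Now the --grpc-web_out flag resolves to the plugin
protoc -I. graph_endpoint.proto \
  --grpc-web_out=import_style=commonjs,mode=grpcwebtext:.
```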

Combined JSON & Mochawesome test report not generating during Cypress tests in Docker container?

I am running Cypress tests inside a Docker container to generate an HTML test report.
Here is my folder structure:
As you can see in the cypress/reports/mocha folder, there are some JSON test results generated.
All tests are passing & the 3 JSON files there are populated.
Also, notice the empty cypress/reports/mochareports folder. This should contain the combined JSON of all test results, & an HTML test report.
Here is my package.json:
{
  "name": "cypress-docker",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "clean:reports": "mkdir -p cypress/reports && rm -R -f cypress/reports/* && mkdir cypress/reports/mochareports",
    "pretest": "npm run clean:reports",
    "scripts": "cypress run",
    "chrome:scripts": "cypress run --browser chrome ",
    "firefox:scripts": "cypress run --browser firefox ",
    "combine-reports": "mochawesome-merge cypress/reports/mocha/*.json > cypress/reports/mochareports/report.json",
    "generate-report": "marge cypress/reports/mochareports/report.json -f report -o cypress/reports/mochareports",
    "posttest": "npm run combine-reports && npm run generate-report",
    "test": "npm run scripts || npm run posttest",
    "chrome:test": "npm run pretest && npm run chrome:scripts || npm run posttest",
    "firefox:test": "npm run pretest && npm run firefox:scripts || npm run posttest"
  },
  "keywords": [],
  "author": "QA BOX <qabox@gmail.com>",
  "license": "MIT",
  "dependencies": {
    "cypress": "^6.8.0",
    "cypress-multi-reporters": "^1.4.0",
    "mocha": "^8.2.1",
    "mochawesome": "^6.2.1",
    "mochawesome-merge": "^4.2.0",
    "mochawesome-report-generator": "^5.1.0"
  }
}
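Note the `||` chaining in the test scripts above: npm runs script commands through a shell, so the command after `||` executes exactly when the command before it exits non-zero — that is how the report-generation steps are meant to run even when Cypress tests fail. A minimal shell sketch of that behavior (the function name is made up):

```shell
# Hypothetical stand-in for a failing `cypress run`
run_specs() { return 1; }

# `||` only runs the right-hand side when the left-hand side fails,
# mirroring "test": "npm run scripts || npm run posttest"
run_specs || echo "generating reports despite failures"
```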
Here is my cypress.json:
{
  "reporter": "cypress-multi-reporters",
  "reporterOptions": {
    "reporterEnabled": "mochawesome",
    "mochawesomeReporterOptions": {
      "reportDir": "cypress/reports/mocha",
      "quite": true,
      "overwrite": false,
      "html": false,
      "json": true
    }
  }
}
Here are the commands I use to run the tests:
To build the image: docker build -t cyp-dock-mocha-report .
To run the tests: docker-compose run e2e-chrome
Here is my Dockerfile:
FROM cypress/included:6.8.0
RUN mkdir /cypress-docker
WORKDIR /cypress-docker
COPY ./package.json .
COPY ./package-lock.json .
COPY ./cypress.json .
COPY ./cypress ./cypress
RUN npm install
ENTRYPOINT ["npm", "run"]
Here is my docker-compose.yml:
version: "3"
services:
  # this container will run Cypress test using built-in Electron browser
  e2e-electron:
    image: "cyp-dock-mocha-report"
    command: "test"
    volumes:
      - ./cypress/videos:/cypress-docker/cypress/videos
      - ./cypress/reports:/cypress-docker/cypress/reports
  # this container will run Cypress test using Chrome browser
  e2e-chrome:
    image: "cyp-dock-mocha-report"
    command: "chrome:test"
    volumes:
      - ./cypress/videos:/cypress-docker/cypress/videos
      - ./cypress/reports:/cypress-docker/cypress/reports
  # this container will run Cypress test using Firefox browser
  # note that both Chrome and Firefox browsers were pre-installed in the Docker image
  e2e-firefox:
    image: "cyp-dock-mocha-report"
    command: "firefox:test"
    # if you want to debug FF run, pass DEBUG variable like
    environment:
      - DEBUG=cypress:server:browsers:firefox-util,cypress:server:util:process_profiler
    volumes:
      - ./cypress/videos:/cypress-docker/cypress/videos
      - ./cypress/reports:/cypress-docker/cypress/reports
All tests are passing as you can see below:
I don't know why the Mochawesome HTML test report isn't generating, nor the merged JSON.
Can someone please tell me why the merged JSON & the HTML test report aren't being generated in the mochareports folder, & how I can get them to be?
Thanks for giving me a hint on how to use docker compose with this image! One thing worth noting about the package.json scripts: the "marge" in the generate-report script is not a typo for "merge" — marge is the actual CLI name shipped by mochawesome-report-generator:
"generate-report": "marge cypress/reports/mochareports/report.json -f report -o cypress/reports/mochareports"

AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files etc. there, so I don't have to recreate them manually after every deployment.
The problem is that, the way I have it set up, it does not update (or overwrite) the files that are pushed to ECR. So if I make a code change and push it to git, it does the following:
Creates a new image and pushes it to ECR
Creates new containers from the image pushed to ECR (it dynamically assigns a tag to the image)
When I run docker ps on the EC2 instance, I see the new containers, and the container with the code changes is built from the correct image that has just been pushed to ECR. So it seems all is working fine up to this point.
But the code changes don't appear when I refresh the browser, not even after clearing caches.
I am attaching the volume to the folder /var/www/html where my app sits, so from my understanding this code should get replaced during deployment. But the problem is that it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) which sets up its volume in exactly the same way, and the changes I make in the database are persistent even after a new container is created.
Please see my Docker file and taskDefinition.json to see how I deal with volumes.
Dockerfile:
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-pdo \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-xml \
php8-xmlreader \
php8-zlib \
php8-tokenizer \
php8-fileinfo \
php8-json \
php8-xml \
php8-xmlwriter \
php8-simplexml \
php8-dom \
php8-pdo_mysql \
php8-pdo_sqlite \
php8-tokenizer \
php8-pecl-redis \
php8-bcmath \
php8-exif \
supervisor \
nano \
sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
&& npm config set --global loglevel warn \
&& npm install --global marked \
&& npm install --global node-gyp \
&& npm install --global yarn \
# Install node-sass's linux bindings
&& npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
"containerDefinitions": [
{
"name": "fooweb-nginx-php",
"cpu": 100,
"memory": 512,
"links": [
"mysql"
],
"portMappings": [
{
"containerPort": 8080,
"hostPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"environment": [],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-web",
"containerPath": "/var/www/html"
}
]
},
{
"name": "mysql",
"image": "mysql",
"cpu": 50,
"memory": 512,
"portMappings": [
{
"containerPort": 3306,
"hostPort": 4306,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "MYSQL_DATABASE",
"value": "123"
},
{
"name": "MYSQL_PASSWORD",
"value": "123"
},
{
"name": "MYSQL_USER",
"value": "123"
},
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "123"
}
],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-mysql",
"containerPath": "/var/lib/mysql"
}
]
}
],
"family": "art_web_task_definition",
"taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"networkMode": "bridge",
"volumes": [
{
"name": "fooweb-storage-mysql",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
},
{
"name": "fooweb-storage-web",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
}
],
"placementConstraints": [],
"requiresCompatibilities": [
"EC2"
],
"cpu": "1536",
"memory": "1536",
"tags": []
}
So I believe there may be some problem with the way I have set up the volume, or maybe some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how docker volumes work.
It is going to ignore anything in /var/www/html inside the docker image, and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount that as a volume.
If you are specifying a VOLUME instruction in your Dockerfile, then the very first time you ran the container, it would have "initialized" the volume with the files inside the docker image as part of creating the volume. After that, the files in the volume on the host server persist across container restarts/deployments, and any updates to that path in new docker images are ignored.
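This masking behavior can be seen with plain docker commands (the image and volume names here are illustrative):

```shell
# First run: the named volume is empty, so docker seeds it from the
# image's /var/www/html content
docker run --rm -v web-data:/var/www/html my-app:v1 ls /var/www/html

# After building v2 with changed code: the volume already has content,
# so the image's /var/www/html is masked and the old files are served
docker run --rm -v web-data:/var/www/html my-app:v2 ls /var/www/html

# Only removing the volume lets the new image's files appear again
docker volume rm web-data
```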

It seems you are running Vue CLI inside a container

I'm trying to run my Vue.js app using VS Code remote-containers. It's deployed and I can access it via the URL localhost:8080/, but if I update a JS file it does not recompile and does not even hot-reload.
devcontainer.json
{
  "name": "Aquawebvue",
  "dockerFile": "Dockerfile",
  "appPort": [3000],
  "runArgs": ["-u", "node"],
  "settings": {
    "workbench.colorTheme": "Cobalt2",
    "terminal.integrated.automationShell.linux": "/bin/bash"
  },
  "postCreateCommand": "yarn",
  "extensions": [
    "esbenp.prettier-vscode",
    "wesbos.theme-cobalt2"
  ]
}
Dockerfile
FROM node:12.13.0
RUN npm install -g prettier
After opening the container and running 'yarn serve' in the terminal, it builds and deploys successfully, but I get this warning:
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.
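As the warning suggests, one way to address the hot-reload part directly is to tell the dev server which URL the browser should use for the hot-reload socket, e.g. in vue.config.js (a hedged sketch for Vue CLI 4.x; the host and port values are assumptions based on the forwarded port, not from the post):

```javascript
// vue.config.js — illustrative sketch
module.exports = {
  devServer: {
    host: '0.0.0.0',           // listen on all interfaces inside the container
    public: 'localhost:8080'   // URL the browser uses for the hot-reload socket
  }
};
```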
VSCode has a pre-defined .devcontainer directory for Vue projects. It can be found on GitHub. You can install it automatically by running the command Add Development Container Configuration Files... > Show All Definitions > Vue.
Dockerfile
# [Choice] Node.js version: 14, 12, 10
ARG VARIANT=14
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN su node -c "umask 0002 && npm install -g http-server @vue/cli @vue/cli-service-global"
WORKDIR /app
EXPOSE 8080
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
devcontainer.json
{
  "name": "Vue (Community)",
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    // Update 'VARIANT' to pick a Node version: 10, 12, 14
    "args": { "VARIANT": "14" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/zsh"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "octref.vetur"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [
    8080
  ],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node"
}
