I am setting up a JavaScript development environment for myself. I don't know much about it, but I am using Webpack and Babel in the setup. After a lot of digging I finally got Webpack compiling and running, and I could load a simple React page.
Now I have created a Dockerfile to run my work in a container. It's very basic: load the files, expose the port, and run the dev server. But when I try to access the site, I get an ERR_SOCKET_NOT_CONNECTED response. There are no errors from Webpack.
Dockerfile
FROM node:8-alpine
WORKDIR /
EXPOSE 3000
COPY . /frontend
WORKDIR /frontend
CMD [ "npm", "run", "dev" ]
Webpack.dev.js
const webpack = require('webpack');
const webpackDevServer = require('webpack-dev-server');
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
module.exports = {
devtool: 'source-map',
entry: {
app: './src/index.js',
vendor: ['react', 'react-dom', 'react-router-dom', 'semantic-ui-react']
},
output: {
path: __dirname + '/src/dev',
filename: '[name].js'
},
devServer: {
port: 3000
},
resolve: {
alias: {
'../../theme.config$': path.join(__dirname, 'src/semantic-ui/theme.config')
}
},
module: {
rules: [
{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
use: ['babel-loader']
},
{
test: /\.(png|jpg|gif|svg|eot|ttf|woff|woff2)$/,
use: {
loader: 'url-loader',
options: {
limit: 100000,
},
},
},
{
test: /\.less$/,
use: ['style-loader', 'css-loader', 'less-loader']
}
]
},
plugins: [
new webpack.HotModuleReplacementPlugin(),
new webpack.optimize.CommonsChunkPlugin({
name: "vendor",
minChunks: Infinity
}),
new HtmlWebpackPlugin({
template: './src/index.html',
filename: path.resolve(__dirname, './src/dev/index.html'),
})
]
}
To be honest I am not sure what a lot of this config file is about, as I copied it from a work colleague and made whatever tweaks were needed. But once I got Webpack compiling and running I was happy and left it as is.
Here is the command I use to start the container:
docker container run --rm -p 3000:3000 frontend
There are no errors showing on the container from Webpack, and I have exposed the ports correctly, so I don't know why this isn't working, or whether it's a Docker issue or a Webpack issue. Any help would be greatly appreciated.
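For reference, a common cause of this symptom is that webpack-dev-server listens on localhost by default, which Docker's port mapping cannot reach from outside the container. A sketch of a devServer block that listens on all interfaces (this is a guess at the fix, not part of the original config):

```javascript
// Hypothetical tweak to Webpack.dev.js: bind the dev server to all
// interfaces so the container's published port is reachable from the host.
module.exports = {
  devServer: {
    host: '0.0.0.0', // default is localhost, which -p 3000:3000 can't reach
    port: 3000,
  },
};
```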
I'm trying to set up an environment in which Vite's hot reload is available through Traefik's reverse proxy. For this, I noticed that it is necessary to add a certificate in the Vite settings (vite.config.js).
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
// import mkcert from 'vite-plugin-mkcert';
export default defineConfig({
server: {
// https: true,
host: '0.0.0.0',
hmr: {
host: '0.0.0.0'
},
},
plugins: [
laravel({
input: ['resources/css/app.css', 'resources/js/app.js'],
refresh: true,
}),
// mkcert()
],
});
The code above works correctly for localhost. When I use vite-plugin-mkcert I get the following error with npm run dev:
error when starting dev server:
Error: EACCES: permission denied, mkdir '/root/.vite-plugin-mkcert'
I tried installing the package using the --unsafe-perm=true --allow-root options, but it didn't work.
The whole environment is inside docker and other packages don't have the same problem.
My container uses the root user.
Solved in the following way:
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
import fs from 'fs';
export default defineConfig({
server: {
https: {
key: fs.readFileSync('./certs/local-key.pem'),
cert: fs.readFileSync('./certs/local-cert.pem'),
},
host: '0.0.0.0',
hmr: {
host: 'template.docker.localhost'
},
},
plugins: [
laravel({
input: ['resources/css/app.css', 'resources/js/app.js'],
refresh: true,
}),
],
});
Now I don't need the package anymore and hot-reload works with reverse proxy.
Include your host without http/https, and make sure that you have installed mkcert.
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
import react from '@vitejs/plugin-react';
import mkcert from 'vite-plugin-mkcert';
export default defineConfig({
plugins: [
mkcert(),
laravel({
input: 'resources/js/app.jsx',
}),
react(),
],
server: {
host: "testserver.dev",
port: 8000,
https: true,
},
});
After setting it up you need to run npm with the https flag:
npm run dev -- --https
I have a Vue-cli app that I'm trying to convert to vite. I am using Docker to run the server. I looked at a couple tutorials and got vite to run in development mode without errors. However, the browser can't access the port. That is, when I'm on my macbook's command line (outside of Docker) I can't curl it:
$ curl localhost:8080
curl: (52) Empty reply from server
If I try localhost:8081 I get Failed to connect. In addition, if I run the webpack dev server it works normally so I know that my container's port is exposed.
Also, if I run curl in the same virtual machine that is running the vite server it works, so I know that vite is working.
Here are the details:
In package.json:
...
"dev": "vue-cli-service serve",
"vite": "vite",
...
The entire vite.config.ts file:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vitejs.dev/config/
export default defineConfig({
resolve: { alias: { '@': '/src' } },
plugins: [vue()],
server: {
port: 8080
}
})
The command that starts the container:
docker-compose run --publish 8080:8080 --rm app bash
The docker-compose.yml file:
version: '3.7'
services:
app:
image: myapp
build: .
container_name: myapp
ports:
- "8080:8080"
The Dockerfile:
FROM node:16.10.0
RUN npm install -g npm@8.1.3
RUN npm install -g @vue/cli@4.5.15
RUN mkdir /srv/app && chown node:node /srv/app
USER node
WORKDIR /srv/app
The command that I run inside the docker container for vite:
npm run vite
The command that I run inside the docker container for vue-cli:
npm run dev
So, to summarize: my setup works when running the vue-cli dev server but doesn't work when using the vite dev server.
I figured it out. I needed to add a "host" attribute in the config, so now my vite.config.ts file is:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vitejs.dev/config/
export default defineConfig({
resolve: { alias: { '@': '/src' } },
plugins: [vue()],
server: {
host: true,
port: 8080
}
})
You can also start your vite server with:
$ npm run dev -- --host
This passes the --host flag to the vite command line.
You will see output like:
vite v2.7.9 dev server running at:
> Local: http://localhost:3000/
> Network: http://192.168.4.68:3000/
ready in 237ms.
(I'm running a VirtualBox VM - but I think this applies here as well.)
You need to add host 0.0.0.0 to allow any external access:
export default defineConfig({
server: {
host: '0.0.0.0',
watch: {
usePolling: true
}
},})
I have a React app with a custom webpack setup. For development on my own environment I had it set up and working fine, but now I want to create a docker image of it and run my app in production mode. But I have no clue how to do it. I have been looking up information online and found a few tutorials like this. But they used a multi-stage Dockerfile and finished it off with nginx. I am not concerned about that right now; all I want is a simple Dockerfile that will let my production code run in a container.
I ran into different problems along the way but got over them, and now I am at the point where I have a Dockerfile that can create an image of my work and run it successfully in a container. But now the UI won't load. At this stage I am on the verge of giving up, as this is something that seems basic but is proving next to impossible!
Can anyone shed any light on this for me and point me in the right direction?
package.json
{
"version": "1.0.0",
"main": "src/index.js",
"scripts": {
"dev": "webpack-dev-server",
"production": "webpack-dev-server --mode production",
"build": "webpack",
"start": "node server.js"
},
"author": "",
"license": "ISC",
"dependencies": {...},
"devDependencies": {...}
}
webpack.config.js
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const {CleanWebpackPlugin} = require('clean-webpack-plugin');
module.exports = {
mode: "development",
entry: {
app: "./src/index.js"
},
devServer: {
port: "9000",
contentBase: path.join(__dirname, './'), // where dev server will look for static files, not compiled
publicPath: '/', //relative path to output path where devserver will look for compiled files
},
output: {
filename: 'js/[name].bundle.js',
path: path.resolve(__dirname, 'dist'), // base path where to send compiled assets
publicPath: '/' // base path where referenced files will be look for
},
resolve: {
extensions: [".js", ".jsx", ".json"],
alias: {
'@': path.resolve(__dirname, 'src') // shortcut to reference src folder from anywhere
}
},
performance: {
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000
},
module: {
rules: [
{ // config for html
test: /\.html$/i,
loader: "html-loader",
},
{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
use: {
loader: "babel-loader"
}
},
{
test: /\.css$/,
include: path.resolve(__dirname, "src"),
use: [
'style-loader',
'css-loader',
'postcss-loader'
]
},
{ // config for images
test: /\.(png|svg|jpg|jpeg|gif)$/,
use: [
{
loader: 'file-loader',
options: {
outputPath: 'images',
}
}
],
},
{ // config for fonts
test: /\.(woff|woff2|eot|ttf|otf)$/,
use: [
{
loader: 'file-loader',
options: {
outputPath: 'fonts',
}
}
],
}
]
},
plugins: [
new HtmlWebpackPlugin({
template: "./src/index.html",
filename: "index.html",
title: "Candledata"
}),
new CleanWebpackPlugin({
cleanOnceBeforeBuildPatterns: ["css/*.*", "js/*.*", "fonts/*.*", "images/*.*"]
}),
]
}
Dockerfile
FROM node:14-alpine AS build
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . /app
RUN yarn build
EXPOSE 9000
CMD ["yarn", "production"]
The docker image gets created and can be run in a container. When I go to localhost:9000 the page won't load. The developer console doesn't show anything, but the network tab says it failed to get the document.
Any ideas on how I can get this working as expected?
Update #1
Docker commands
docker build -t frontend .
docker run -i --rm -p 9000:9000 --network=whole_network frontend
Update #2
What's the command you used to run the container? Did you use the -p parameter, as in:
docker run --name=myserver -p SERVERPORT:9000 .......
so that the container's port 9000 can be reached through SERVERPORT?
The docker container basically works like an OS environment and webpack is just used to optimize js compilation:
https://webpack.js.org/guides/production/
In the package.json you could use:
"build": "webpack --config=webpack.prod.config.js --progress --watch-poll -p"
so you can specify the webpack configuration file to compile for production.
What is the server you are using? Node? Did you start the node server? Webpack compilation for production is supposed to create just "optimized js code", but you still need a server that will pick up that code, and this is not necessarily the server you used during development.
Interesting article about setting Dev and Production environments
https://www.freecodecamp.org/news/creating-a-production-ready-webpack-4-config-from-scratch/
I have looked at various solutions to no avail.
Testing webpack-dev-server on WSL 2 works fine: when I update the src/main.js file, the browser updates. However, when running inside a docker container (again within WSL 2), the browser does not automatically update on saving changes, though the content does update when I manually refresh the browser.
Docker container ran via
sudo docker run -ti --name justatest -p 3009:8080 -v /home/dev/webpacktest:/home/test node:12 /bin/bash
webpack.dev.config
const path = require("path");
module.exports = {
mode: "development",
entry: {
main: ["./src/main.js"],
},
output: {
filename: "[name].bundle.js",
path: path.resolve(__dirname, "./dist"),
},
devServer: {
contentBase: "./dist",
host: "0.0.0.0",
port: "8080",
},
};
package.json
{
"name": "webpacktest",
"version": "1.0.0",
"scripts": {
"dev": "webpack-dev-server --config webpack.dev.js --hot --port 8080 --host 0.0.0.0"
},
"license": "MIT",
"devDependencies": {
"webpack": "^4.44.2",
"webpack-cli": "^3.3.12",
"webpack-dev-server": "^3.11.0"
}
}
Also, when I run a create-react-app app inside a docker container inside of WSL 2, the browser does refresh on change. How does create-react-app do it?
Rebooted computer and all was fine :/
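If a reboot doesn't cure it, a frequent cause is that inotify file events don't cross the Docker/WSL 2 filesystem boundary, so the dev server never sees the change. A hedged sketch that falls back to polling (using webpack 4 / webpack-dev-server 3 option names, matching the config above):

```javascript
// Hypothetical addition to webpack.dev.config: poll for changes instead of
// relying on filesystem events, which may not propagate into the container.
module.exports = {
  devServer: {
    host: '0.0.0.0',
    port: 8080,
    watchOptions: {
      poll: 1000, // re-scan files every 1000 ms
    },
  },
};
```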
How can I access environment variables in Vue, that are passed to the container at runtime and not during the build?
Stack is as follows:
VueCLI 3.0.5
Docker
Kubernetes
There are suggested solutions on Stack Overflow and elsewhere to use a .env file to pass variables (and using mode), but that happens at build time and gets baked into the docker image.
I would like to pass the variable into Vue at run-time as follows:
Create Kubernetes ConfigMap (I get this right)
Pass ConfigMap value into K8s pod env variable when running deployment yaml file (I get this right)
Read from env variable created above eg. VUE_APP_MyURL and do something with that value in my Vue App (I DO NOT get this right)
I've tried the following in helloworld.vue:
<template>
<div>{{displayURL}}
<p>Hello World</p>
</div>
</template>
<script>
export default {
data() {
return {
displayURL: ""
}
},
mounted() {
console.log("check 1")
this.displayURL=process.env.VUE_APP_ENV_MyURL
console.log(process.env.VUE_APP_ENV_MyURL)
console.log("check 3")
}
}
</script>
I get back "undefined" in the console log and nothing showing on the helloworld page.
I've also tried passing the value into a vue.config file and reading it from there. Same "undefined" result in console.log.
<template>
<div>{{displayURL}}
<p>Hello World</p>
</div>
</template>
<script>
const vueconfig = require('../../vue.config');
export default {
data() {
return {
displayURL: ""
}
},
mounted() {
console.log("check 1")
this.displayURL=vueconfig.VUE_APP_MyURL
console.log(vueconfig.VUE_APP_MyURL)
console.log("check 3")
}
}
</script>
With vue.config looking like this:
module.exports = {
VUE_APP_MyURL: process.env.VUE_APP_ENV_MyURL
}
If I hardcode a value into VUE_APP_MyURL in the vue.config file it shows successfully on the helloworld page.
VUE_APP_ENV_MyURL is successfully populated with the correct value when I interrogate it with kubectl describe pod.
process.env.VUE_APP_MyURL doesn't seem to successfully retrieve the value.
For what it is worth... I am able to use process.env.VUE_APP_3rdURL successfully to pass values into a Node.js app at runtime.
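The underlying reason for the difference: the Vue CLI webpack build inlines `process.env.VUE_APP_*` expressions at build time (via DefinePlugin), so the browser bundle contains literals rather than live lookups, while a Node.js app reads process.env at runtime. A simplified sketch of that substitution (the `inline` helper and the URL value below are illustrative only, not Vue CLI's actual code):

```javascript
// Rough illustration of build-time inlining: the bundler replaces each
// `process.env.X` expression with the literal value X had during the build.
const buildTimeEnv = { VUE_APP_ENV_MyURL: 'http://set-at-build.example' }; // assumed value

function inline(source, env) {
  // Very simplified stand-in for what webpack's DefinePlugin does.
  return source.replace(
    /process\.env\.(\w+)/g,
    (_, name) => JSON.stringify(env[name]),
  );
}

const source = 'this.displayURL = process.env.VUE_APP_ENV_MyURL;';
const bundled = inline(source, buildTimeEnv);
// bundled: this.displayURL = "http://set-at-build.example";
// Changing the variable in the running pod can no longer affect this string.
```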
Create a file config.js with your desired configuration. We will use it later to create a config map that we deploy to Kubernetes. Put it into your Vue.js project where your other JavaScript files are. Although we will exclude it from minification later, it is useful to have it there so that IDE tooling works with it.
const config = (() => {
return {
"VUE_APP_ENV_MyURL": "...",
};
})();
Now make sure that your script is excluded from minification. To do that, create a file vue.config.js with the following content that preserves our config file.
const path = require("path");
module.exports = {
publicPath: '/',
configureWebpack: {
module: {
rules: [
{
test: /config.*config\.js$/,
use: [
{
loader: 'file-loader',
options: {
name: 'config.js'
},
}
]
}
]
}
}
}
In your index.html, add a script block to load the config file manually. Note that the config file won't be there as we just excluded it. Later, we will mount it from a ConfigMap into our container. In this example, we assume that we will mount it into the same directory as our HTML document.
<script src="<%= BASE_URL %>config.js"></script>
Change your code to use our runtime config:
this.displayURL = config.VUE_APP_ENV_MyURL || process.env.VUE_APP_ENV_MyURL
In Kubernetes, create a config map that uses the content of your config file. Of course, you will want to read the content from your actual config file.
apiVersion: v1
kind: ConfigMap
metadata:
...
data:
config.js: |
var config = (() => {
return {
"VUE_APP_ENV_MyURL": "...",
};
})();
Reference the config map in your deployment. This mounts the config map as a file into your container. The mountPath already contains our minified index.html. We mount the config file that we referenced before.
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
volumes:
- name: config-volume
configMap:
name: ...
containers:
- ...
volumeMounts:
- name: config-volume
mountPath: /usr/share/nginx/html/config.js
subPath: config.js
Now you can access the config file at <Base URL>/config.js and you should see the exact content that you put into the ConfigMap entry. Your HTML document loads that config file as it loads the rest of your minified Vue.js code. Voila!
I am adding my working solution here for those who are still having trouble. I do think that @Hendrik M Halkow's answer is more elegant, though I couldn't manage to solve it that way, simply because of my lack of expertise in webpack and Vue. I just couldn't figure out where to put the config file and how to reference it.
My approach is to build for production with the environment variables set to constant (dummy) values, then replace those constants in the image using a custom entrypoint script. The solution goes like this.
I have encapsulated all configs into one file called app.config.js
export const clientId = process.env.VUE_APP_CLIENT_ID
export const baseURL = process.env.VUE_APP_API_BASE_URL
export default {
clientId,
baseURL,
}
This is used in the project just by importing the value from the config file.
import { baseURL } from '@/app.config';
Then I am using standard .env.[profile] files to set environment variables.
e.g. the .env.development
VUE_APP_API_BASE_URL=http://localhost:8085/radar-upload
VUE_APP_CLIENT_ID=test-client
Then for production I set string constants as values.
e.g. the .env.production
VUE_APP_API_BASE_URL=VUE_APP_API_BASE_URL
VUE_APP_CLIENT_ID=VUE_APP_CLIENT_ID
Note that the value can be any unique string here. Just to keep things readable, I use the environment variable's own name as its value. This gets compiled and bundled just as in development mode.
In my Dockerfile, I add an entrypoint that reads those constants and replaces them with the environment variables' values.
My Dockerfile looks like this (this is pretty standard)
FROM node:10.16.3-alpine as builder
RUN mkdir /app
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY . /app/
RUN npm run build --prod
FROM nginx:1.17.3-alpine
# add init script
COPY ./docker/nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY --from=builder /app/dist/ .
COPY ./docker/entrypoint.sh /entrypoint.sh
# expose internal port:80 and run init.sh
EXPOSE 80
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
Then create a ./docker/entrypoint.sh file as below.
#!/bin/sh
ROOT_DIR=/usr/share/nginx/html
# Replace env vars in JavaScript files
echo "Replacing env constants in JS"
for file in $ROOT_DIR/js/app.*.js* $ROOT_DIR/index.html $ROOT_DIR/precache-manifest*.js;
do
echo "Processing $file ...";
sed -i 's|VUE_APP_API_BASE_URL|'${VUE_APP_API_BASE_URL}'|g' $file
sed -i 's|VUE_APP_CLIENT_ID|'${VUE_APP_CLIENT_ID}'|g' $file
done
echo "Starting Nginx"
nginx -g 'daemon off;'
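The shell loop above is just a string substitution over the bundle. The same idea sketched in JavaScript (the values here are made up for illustration):

```javascript
// Replace every occurrence of each placeholder constant in the bundle text
// with the corresponding runtime value, mirroring what the sed loop does.
function replacePlaceholders(bundleText, values) {
  let out = bundleText;
  for (const [placeholder, value] of Object.entries(values)) {
    out = out.split(placeholder).join(value);
  }
  return out;
}

const bundled = 'const baseURL = "VUE_APP_API_BASE_URL";';
const patched = replacePlaceholders(bundled, {
  VUE_APP_API_BASE_URL: 'https://api.example.com', // would come from process.env at runtime
});
// patched is now: const baseURL = "https://api.example.com";
```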
This gives me a runtime-configurable image that I can run in many environments. I know it is a bit of a hack, but I have seen many people do it this way.
Hope this helps someone.
Create config file
In public folder: public/config.js
const config = (() => {
return {
"VUE_CONFIG_APP_API": "...",
};
})();
Update index.html
Update public/index.html to contain the following at the end of head:
<!-- docker configurable variables -->
<script src="<%= BASE_URL %>config.js"></script>
There is no need to update vue.config.js, as we are using the public folder for configuration.
ESLint
ESLint would give us an error for the usage of an undefined variable. Therefore we define a global variable in the .eslintrc.js file:
globals: {
config: "readable",
},
Usage
Eg. in the store src/store/user.js
export const actions = {
async LoadUsers({ dispatch }) {
return await dispatch(
"axios/get",
{
url: config.VUE_CONFIG_APP_API + "User/List",
},
{ root: true }
);
},
...
K8S ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: fe-config
namespace: ...
data:
config.js: |
var config = (() => {
return {
"VUE_CONFIG_APP_API": "...",
};
})();
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
volumes:
- name: config-volume
configMap:
name: fe-config
containers:
- ...
volumeMounts:
- name: config-volume
mountPath: /usr/share/nginx/html/config.js
subPath: config.js
I had the same problem in my current project and found out that it is not possible to access environment variables at runtime at the moment, so I ended up with the solution of creating .env files or local environment variables that, as you said, are used at build time.
I recently created a set of plugins to solve this problem elegantly:
No global namespace pollution.
No import statement is required.
Almost zero configuration.
Out-of-the-box support for Vue CLI (also works with Webpack, Rollup and Vite, CSR, SSR and SSG, and unit testing tools. Powered by Unplugin and Babel).
You can access environment variables (heavily inspired by Vite) this way:
// src/index.js
console.log(`Hello, ${import.meta.env.HELLO}.`);
During the production build, it will be temporarily replaced with a placeholder:
// dist/index.js
console.log(`Hello, ${"__import_meta_env_placeholder__".HELLO}.`);
Finally, you can use the built-in script to replace the placeholders with real environment variables from your system (e.g., read environment variables in your k8s pod):
// dist/index.js
console.log(`Hello, ${{ HELLO: "import-meta-env" }.HELLO}.`);
// > Hello, import-meta-env.
You can see more info at https://iendeavor.github.io/import-meta-env/ .
And there is a Docker setup example and Vue CLI setup example.
Hope this helps someone who needs this.
I got it to work with the solution proposed by @Hendrik M Halkow.
But I stored config.js in the static folder. That way I don't have to worry about excluding it from minification.
Then include it like this:
<script src="<%= BASE_URL %>static/config.js"></script>
and use this volume mount configuration:
...
volumeMounts:
- name: config-volume
mountPath: /usr/share/nginx/html/static/config.js
subPath: config.js
If you are using Vue 3 + Vite 3 + TypeScript, you can do this:
create app.config.ts (not JS)
export const clientId = import.meta.env.VITE_CLIENT_ID
export const baseURL = import.meta.env.VITE_API_BASE_URL
export default {
clientId,
baseURL,
}
Replace the values in the assets subdirectory (improved shell script):
#!/bin/sh
# @see https://stackoverflow.com/questions/18185305/storing-bash-output-into-a-variable-using-eval
ROOT_DIR=/usr/share/nginx/html
# Replace env vars in JavaScript files
echo "Replacing env constants in JS"
keys="VITE_CLIENT_ID
VITE_API_BASE_URL"
for file in $ROOT_DIR/assets/index*.js* ;
do
echo "Processing $file ...";
for key in $keys
do
value=$(eval echo \$$key)
echo "replace $key by $value"
sed -i 's#'"$key"'#'"$value"'#g' $file
done
done
echo "Starting Nginx"
nginx -g 'daemon off;'
In the Dockerfile, don't forget the "RUN chmod u+x" step:
# build
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY ./docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY ./docker/entrypoint.sh /entrypoint.sh
COPY ./docker/entrypoint.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/entrypoint.sh
EXPOSE 80
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
Then you will be able to import it in any TypeScript file in the project:
import { baseURL } from '@/app.config';
See the update of NehaM's response: Pass environment variable into a Vue App at runtime