I'm trying to set up an environment in which Vite's hot reload is available through Traefik's reverse proxy. For this, I noticed that it is necessary to add a certificate in Vite's settings, vite.config.js.
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
// import mkcert from 'vite-plugin-mkcert';
export default defineConfig({
server: {
// https: true,
host: '0.0.0.0',
hmr: {
host: '0.0.0.0'
},
},
plugins: [
laravel({
input: ['resources/css/app.css', 'resources/js/app.js'],
refresh: true,
}),
// mkcert()
],
});
The code above works correctly for localhost. When I use vite-plugin-mkcert I get the following error with npm run dev:
error when starting dev server:
Error: EACCES: permission denied, mkdir '/root/.vite-plugin-mkcert'
I tried installing the package using --unsafe-perm=true --allow-root options, but it didn't work.
The whole environment is inside docker and other packages don't have the same problem.
My container uses the root user.
I solved it in the following way:
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
import fs from 'fs';
export default defineConfig({
server: {
https: {
key: fs.readFileSync('./certs/local-key.pem'),
cert: fs.readFileSync('./certs/local-cert.pem'),
},
host: '0.0.0.0',
hmr: {
host: 'template.docker.localhost'
},
},
plugins: [
laravel({
input: ['resources/css/app.css', 'resources/js/app.js'],
refresh: true,
}),
],
});
Now I don't need the package anymore, and hot reload works with the reverse proxy.
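A variant worth noting: if the proxy terminates TLS itself, Vite's `hmr` block also accepts `protocol` and `clientPort`, so the browser's HMR websocket can be routed through the proxy while the dev server stays plain HTTP. A minimal sketch under that assumption (the hostname matches the `hmr.host` above; port 443 is an assumption about where Traefik listens):

```javascript
// vite.config.js — sketch: let Traefik terminate TLS and route the HMR
// websocket through it, instead of loading certificates into Vite itself.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    host: '0.0.0.0', // listen on all interfaces inside the container
    hmr: {
      protocol: 'wss',                   // the browser connects over TLS
      host: 'template.docker.localhost', // the proxy's hostname (as above)
      clientPort: 443,                   // port the *browser* uses; assumed Traefik entrypoint
    },
  },
});
```

Whether this fits depends on the Traefik router configuration, so treat it as a sketch rather than a drop-in replacement for the certificate approach.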
Include your host without the http/https scheme, and make sure that you have installed mkcert.
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
import react from '@vitejs/plugin-react';
import mkcert from 'vite-plugin-mkcert';
export default defineConfig({
plugins: [
mkcert(),
laravel({
input: 'resources/js/app.jsx',
}),
react(),
],
server: {
host: "testserver.dev",
port: 8000,
https: true,
},
});
After setting up, you need to run the dev server with the --https flag:
npm run dev -- --https
Context:
The app is currently running on a docker container.
There are three containers in total, all of them attached to the same network.
- MariaDB
- Flask app
- Vue app (node-16-buster)
When trying to call an API on my Flask backend, I get this error from axios:
However, when I copy the url and just try curl (from the vue container's terminal) it works like a charm.
No such problems were observed when I ran everything on my local machine.
This is the app's vite.config.js file.
import { fileURLToPath, URL } from 'node:url'
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [vue()],
resolve: {
alias: {
'@': fileURLToPath(new URL('./src', import.meta.url))
}
},
server:{
port: 80,
host: "0.0.0.0"
}
})
I tried fiddling with different docker network configurations, but those yielded no results.
Yes, this indeed turned out to be correct. Since the code runs in the browser, I just had to change the URL to be correct, i.e. pointing to the host where the Docker containers run rather than to the Docker container itself.
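To make the fix concrete: a sketch of building the backend URL from the page's own hostname rather than from a Docker-internal service name (the service name "flask-app" and port 5000 are assumptions for illustration):

```javascript
// Sketch: the browser cannot resolve Docker-internal service names such as
// "flask-app", so build API URLs from the page's own hostname instead —
// that is the machine that actually publishes the container ports.
function backendBaseUrl(pageHostname) {
  // assumed: Flask is published on port 5000 of the Docker host
  return `http://${pageHostname}:5000`;
}

// e.g. in the Vue app:
//   axios.get(`${backendBaseUrl(window.location.hostname)}/api/data`)
console.log(backendBaseUrl('localhost')); // http://localhost:5000
```

The same URL then works both when browsing from the host machine and from other machines on the network, since it follows wherever the page was loaded from.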
I have a Vue-cli app that I'm trying to convert to vite. I am using Docker to run the server. I looked at a couple tutorials and got vite to run in development mode without errors. However, the browser can't access the port. That is, when I'm on my macbook's command line (outside of Docker) I can't curl it:
$ curl localhost:8080
curl: (52) Empty reply from server
If I try localhost:8081 I get Failed to connect. In addition, if I run the webpack dev server it works normally so I know that my container's port is exposed.
Also, if I run curl in the same virtual machine that is running the vite server it works, so I know that vite is working.
Here are the details:
In package.json:
...
"dev": "vue-cli-service serve",
"vite": "vite",
...
The entire vite.config.ts file:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vitejs.dev/config/
export default defineConfig({
resolve: { alias: { '@': '/src' } },
plugins: [vue()],
server: {
port: 8080
}
})
The command that starts the container:
docker-compose run --publish 8080:8080 --rm app bash
The docker-compose.yml file:
version: '3.7'
services:
app:
image: myapp
build: .
container_name: myapp
ports:
- "8080:8080"
The Dockerfile:
FROM node:16.10.0
RUN npm install -g npm@8.1.3
RUN npm install -g @vue/cli@4.5.15
RUN mkdir /srv/app && chown node:node /srv/app
USER node
WORKDIR /srv/app
The command that I run inside the docker container for vite:
npm run vite
The command that I run inside the docker container for vue-cli:
npm run dev
So, to summarize: my setup works when running the vue-cli dev server but doesn't work when using the vite dev server.
I figured it out. I needed to add a "host" attribute in the config, so now my vite.config.ts file is:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vitejs.dev/config/
export default defineConfig({
resolve: { alias: { '@': '/src' } },
plugins: [vue()],
server: {
host: true,
port: 8080
}
})
You can also start your vite server with:
$ npm run dev -- --host
This passes the --host flag to the vite command line.
You will see output like:
vite v2.7.9 dev server running at:
> Local: http://localhost:3000/
> Network: http://192.168.4.68:3000/
ready in 237ms.
(I'm running a VirtualBox VM - but I think this applies here as well.)
You need to add host 0.0.0.0 to allow any external access:
export default defineConfig({
server: {
host: '0.0.0.0',
watch: {
usePolling: true
}
},
})
I have a React app with a custom webpack setup. For development on my environment I had it set up to work fine, but now I want to create a Docker image of it and use my app in production mode. But I have no clue how to do it. I have been looking up information online and found a few tutorials like this, but they used a multi-stage Dockerfile and finished it off with nginx. I am not concerned about that right now; all I want is a simple Dockerfile that will let my production code run in a container.
I ran into different problems along the way but got over them, and now I am at the point where I have a Dockerfile that can create an image of my work and run successfully in a container. But now the UI won't load. At this stage I am on the verge of giving up, as this is something that seems basic but is proving next to impossible!
Can anyone shed any light on this for me and point me in the right direction?
package.json
{
"version": "1.0.0",
"main": "src/index.js",
"scripts": {
"dev": "webpack-dev-server",
"production": "webpack-dev-server --mode production",
"build": "webpack",
"start": "node server.js"
},
"author": "",
"license": "ISC",
"dependencies": {...},
"devDependencies": {...}
}
webpack.config.js
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const {CleanWebpackPlugin} = require('clean-webpack-plugin');
module.exports = {
mode: "development",
entry: {
app: "./src/index.js"
},
devServer: {
port: "9000",
contentBase: path.join(__dirname, './'), // where dev server will look for static files, not compiled
publicPath: '/', //relative path to output path where devserver will look for compiled files
},
output: {
filename: 'js/[name].bundle.js',
path: path.resolve(__dirname, 'dist'), // base path where to send compiled assets
publicPath: '/' // base path where referenced files will be look for
},
resolve: {
extensions: [".js", ".jsx", ".json"],
alias: {
'#': path.resolve(__dirname, 'src') // shortcut to reference src folder from anywhere
}
},
performance: {
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000
},
module: {
rules: [
{ // config for html
test: /\.html$/i,
loader: "html-loader",
},
{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
use: {
loader: "babel-loader"
}
},
{
test: /\.css$/,
include: path.resolve(__dirname, "src"),
use: [
'style-loader',
'css-loader',
'postcss-loader'
]
},
{ // config for images
test: /\.(png|svg|jpg|jpeg|gif)$/,
use: [
{
loader: 'file-loader',
options: {
outputPath: 'images',
}
}
],
},
{ // config for fonts
test: /\.(woff|woff2|eot|ttf|otf)$/,
use: [
{
loader: 'file-loader',
options: {
outputPath: 'fonts',
}
}
],
}
]
},
plugins: [
new HtmlWebpackPlugin({
template: "./src/index.html",
filename: "index.html",
title: "Candledata"
}),
new CleanWebpackPlugin({
cleanOnceBeforeBuildPatterns: ["css/*.*", "js/*.*", "fonts/*.*", "images/*.*"]
}),
]
}
Dockerfile
FROM node:14-alpine AS build
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . /app
RUN yarn build
EXPOSE 9000
CMD ["yarn", "production"]
The Docker image gets created and can be run in a container. When I go to localhost:9000 the page won't load. The developer console doesn't show anything, but the network tab says it failed to get the document.
Any ideas on how I can get this working as expected?
Update #1
Docker commands
docker build -t frontend .
docker run -i --rm -p 9000:9000 --network=whole_network frontend
Update #2
What's the command you used to run the container? Did you use the -p parameter, as in:
docker run --name=myserver -p SERVERPORT:9000 .......
so the docker port 9000 could be seen through SERVERPORT?
The docker container basically works like an OS environment and webpack is just used to optimize js compilation:
https://webpack.js.org/guides/production/
In the package.json you could use:
"build": "webpack --config=webpack.prod.config.js --progress --watch-poll -p"
so you can specify the webpack configuration file to compile for production.
What is the server you are using? Node? Did you start the Node server? Webpack compilation for production is supposed to create just "optimized js code", but you still need a server that will pick up that code, and this is not necessarily the server you used during development.
Interesting article about setting Dev and Production environments
https://www.freecodecamp.org/news/creating-a-production-ready-webpack-4-config-from-scratch/
I am setting up a JavaScript development environment for myself. I don't know much about it, but I am using Webpack and Babel in the setup. After a lot of digging through, I finally got Webpack compiling and running, where I could load a simple React page.
Now I created a Dockerfile to run my work on an image. It's very basic: load the files, expose the port and run the dev server. But when I try to access the site, I get an ERR_SOCKET_NOT_CONNECTED response. There are no errors from Webpack.
Dockerfile
FROM node:8-alpine
WORKDIR /
EXPOSE 3000
COPY . /frontend
WORKDIR /frontend
CMD [ "npm", "run", "dev" ]
Webpack.dev.js
const webpack = require('webpack');
const webpackDevServer = require('webpack-dev-server');
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
module.exports = {
devtool: 'source-map',
entry: {
app: './src/index.js',
vendor: ['react', 'react-dom', 'react-router-dom', 'semantic-ui-react']
},
output: {
path: __dirname + '/src/dev',
filename: '[name].js'
},
devServer: {
port: 3000
},
resolve: {
alias: {
'../../theme.config$': path.join(__dirname, 'src/semantic-ui/theme.config')
}
},
module: {
rules: [
{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
use: ['babel-loader']
},
{
test: /\.(png|jpg|gif|svg|eot|ttf|woff|woff2)$/,
use: {
loader: 'url-loader',
options: {
limit: 100000,
},
},
},
{
test: /\.less$/,
use: ['style-loader', 'css-loader', 'less-loader']
}
]
},
plugins: [
new webpack.HotModuleReplacementPlugin(),
new webpack.optimize.CommonsChunkPlugin({
name: "vendor",
minChunks: Infinity
}),
new HtmlWebpackPlugin({
template: './src/index.html',
filename: path.resolve(__dirname, './src/dev/index.html'),
})
]
}
To be honest, I am not sure what a lot of this config file is about, as I copied it from a work colleague and made whatever tweaks were needed. But once I got Webpack compiling and running I was happy and left it as is.
Here is the command I use to start the container: docker container run --rm -p 3000:3000 frontend
There are no errors showing on the container from Webpack, and I have exposed the ports correctly, so I don't know why this isn't working, or whether it's a Docker issue or a Webpack issue. Any help would be greatly appreciated.