From what I have found, there are several tips for placing dynamic variables in nginx/conf.d/default.conf. Instead, I want to substitute an env variable into apiUrls.js, which is loaded by my Vue files.
// apiUrls.js
export const apiUrls = {
  auth: "http://${IP_ADDRESS}:8826/auth/",
  service1: "http://${IP_ADDRESS}:8826/",
  service2: "http://${IP_ADDRESS}:8827/"
};
// in xxx.vue
let apiUrl = `${apiUrls.auth}login/`;
I tried envsubst in docker-compose.yml as
environment:
  - IP_ADDRESS
command: /bin/sh -c "envsubst < /code/src/apiUrls.js | tee /code/src/apiUrls.js && nginx -g 'daemon off;'"
It appears to work: I went into the docker container and confirmed that apiUrls.js has the value substituted.
However, when I check the frontend app in the browser and inspect the API calls with the Chrome developer tools, it turns out the URL used is still the raw string with the env placeholder, i.e. http://%24%7Bip_address%7D:8826/auth/login.
I suspect that the Vue project is pre-compiled while building the docker image.
Is there a solution?
Or is it possible to pass an env variable declared in nginx/conf.d/default.conf through to apiUrls.js?
Your suspicion is correct. Any JS file will almost certainly get processed into a bundled file by webpack during the Vue build, as opposed to being loaded at runtime.
If you can navigate to the Vue project inside your Docker container and re-run the build, your env variables should be picked up.
Check package.json in the Vue project for the available build scripts. You should see something like:
{
  "name": "vue-example",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint"
  },
  //...
and be able to rebuild it with npm run build
(Also, in the name of stating the obvious, remember to disable the cache in the Chrome dev tools when checking the updated site.)
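For example, here is a sketch of re-running the build inside the container after envsubst has rewritten the file (the service name frontend and the /code project root are assumptions taken from the question, and node plus the project sources must still be present in the image):
# assumed service name and project path; rebuild so the bundle picks up the substituted file
docker-compose exec frontend sh -c "cd /code && npm run build"
nginx will then serve the freshly built bundle containing the substituted IP. If you instead rebuild as part of the image, note that Vue CLI statically inlines environment variables prefixed with VUE_APP_ at build time, which is another way to get the address into the bundle.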
Related
I have a Vue CLI app that I'm trying to convert to Vite. I am using Docker to run the server. I looked at a couple of tutorials and got Vite to run in development mode without errors. However, the browser can't access the port. That is, when I'm on my macbook's command line (outside of Docker) I can't curl it:
$ curl localhost:8080
curl: (52) Empty reply from server
If I try localhost:8081 I get Failed to connect. In addition, if I run the webpack dev server it works normally so I know that my container's port is exposed.
Also, if I run curl in the same virtual machine that is running the vite server it works, so I know that vite is working.
Here are the details:
In package.json:
...
"dev": "vue-cli-service serve",
"vite": "vite",
...
The entire vite.config.ts file:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  resolve: { alias: { '@': '/src' } },
  plugins: [vue()],
  server: {
    port: 8080
  }
})
The command that starts the container:
docker-compose run --publish 8080:8080 --rm app bash
The docker-compose.yml file:
version: '3.7'
services:
  app:
    image: myapp
    build: .
    container_name: myapp
    ports:
      - "8080:8080"
The Dockerfile:
FROM node:16.10.0
RUN npm install -g npm@8.1.3
RUN npm install -g @vue/cli@4.5.15
RUN mkdir /srv/app && chown node:node /srv/app
USER node
WORKDIR /srv/app
The command that I run inside the docker container for vite:
npm run vite
The command that I run inside the docker container for vue-cli:
npm run dev
So, to summarize: my setup works when running the vue-cli dev server but doesn't work when using the vite dev server.
I figured it out. I needed to add a "host" attribute in the config, so now my vite.config.ts file is:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  resolve: { alias: { '@': '/src' } },
  plugins: [vue()],
  server: {
    host: true,
    port: 8080
  }
})
You can also start your vite server with:
$ npm run dev -- --host
This passes the --host flag to the vite command line.
You will see output like:
vite v2.7.9 dev server running at:
> Local: http://localhost:3000/
> Network: http://192.168.4.68:3000/
ready in 237ms.
(I'm running a VirtualBox VM - but I think this applies here as well.)
You need to add host 0.0.0.0 to allow any external access:
export default defineConfig({
  server: {
    host: '0.0.0.0',
    watch: {
      usePolling: true
    }
  },
})
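A quick way to verify the change, assuming the container was started with the --publish 8080:8080 mapping from the question:
# From the macbook (outside Docker): with host: true / '0.0.0.0', Vite listens on the
# container's external interface, so the published port now returns a real response
# instead of "Empty reply from server".
curl -I http://localhost:8080/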
I deployed a MERN app to a DigitalOcean droplet with Docker. If I run my docker-compose.yml file locally on my PC, it works well. I have 2 containers: 1 backend, 1 frontend. If I compose up on the droplet, the frontend seems fine but it can't communicate with the backend.
I use http-proxy-middleware; my setupProxy.js file:
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://0.0.0.0:5001',
      changeOrigin: true,
    })
  );
};
I also tried target: 'http://main-be:5001', as main-be is the name of my backend container, but I get the same error; only the Request URL changes to http://main-be:5001/api/auth/login in the Chrome DevTools Network tab.
My docker-compose.yml file:
version: '3.4'

networks:
  main:

services:
  main-be:
    image: main-be:latest
    container_name: main-be
    ports:
      - '5001:5001'
    networks:
      main:
    volumes:
      - ./backend/config.env:/app/config.env
    command: 'npm run prod'

  main-fe:
    image: main-fe:latest
    container_name: main-fe
    networks:
      main:
    volumes:
      - ./frontend/.env:/app/.env
    ports:
      - '3000:3000'
    command: 'npm start'
My Dockerfile in the frontend folder:
FROM node:12.2.0-alpine
COPY . .
RUN npm ci
CMD ["npm", "start"]
My Dockerfile in the backend folder:
FROM node:12-alpine3.14
WORKDIR /app
COPY . .
RUN npm ci --production
CMD ["npm", "run", "prod"]
backend/package.json file:
"scripts": {
"start": "nodemon --watch --exec node --experimental-modules server.js",
"dev": "nodemon server.js",
"prod": "node server.js"
},
frontend/.env file:
SKIP_PREFLIGHT_CHECK=true
HOST=0.0.0.0
backend/config.env file:
DE_ENV=development
PORT=5001
My deploy.sh script to build images, copy to droplet...
#build and save backend and frontend images
docker build -t main-be ./backend & docker build -t main-fe ./frontend
docker save -o ./main-be.tar main-be & docker save -o ./main-fe.tar main-fe
#deploy services
ssh root@46.111.119.161 "pwd && mkdir -p ~/apps/first && cd ~/apps/first && ls -al && echo 'im in' && rm main-be.tar && rm main-fe.tar &> /dev/null"
#::scp file
#scp ./frontend/.env root@46.111.119.161:~/apps/first/frontend
#upload main-be.tar and main-fe.tar to VM via ssh
scp ./main-be.tar ./main-fe.tar root@46.111.119.161:~/apps/thesis/
scp ./docker-compose.yml root@46.111.119.161:~/apps/first/
ssh root@46.111.119.161 "cd ~/apps/first && ls -1 *.tar | xargs --no-run-if-empty -L 1 docker load -i"
ssh root@46.111.119.161 "cd ~/apps/first && sudo docker-compose up"
frontend/src/utils/axios.js:
import axios from 'axios';
export const baseURL = 'http://localhost:5001';
export default axios.create({ baseURL });
frontend/src/utils/constants.js:
const API_BASE_ORIGIN = `http://localhost:5001`;
export { API_BASE_ORIGIN };
I have been trying for days but can't see where the problem is, so any help is highly appreciated.
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you about one thing. We had a similar issue when setting this up in the beginning: it worked locally in containers but not on our deployment servers, because we forgot a basic thing about web applications.
The frontend application runs in your browser, whereas when you deploy the stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing localhost (or a local IP/DNS) inside your application won't work, because from the browser's point of view there is nothing there. A container that hosts a web application only serves the client files to your browser; the JS and the logic are then executed inside your browser, not inside the container.
I suspect this might be what is affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service (for instance Nginx, Traefik, ...) and have your FE call that proxy (set up a domain and routing); the proxy can then reference your backend by its service name, since it lives in the same environment as the API (see the sketch below).
Expose the HTTP port directly from the container; your FE can then call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this for real use, only for testing direct connectivity without any proxy.)
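To illustrate the first option with the files already in the question, here is a minimal sketch. It assumes the CRA dev server keeps serving the frontend (which is what the npm start command in the compose file does), so the existing setupProxy.js acts as the proxy; the two key points are the compose service name in the proxy target and a relative base URL in the browser code:
// frontend/src/setupProxy.js: the proxy runs inside the frontend container,
// so it can resolve the backend by its compose service name.
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://main-be:5001', // service name from docker-compose.yml
      changeOrigin: true,
    })
  );
};

// frontend/src/utils/axios.js: relative base URL, so the browser calls the
// frontend's own origin and the proxy forwards /api to the backend.
import axios from 'axios';

export const baseURL = '/';

export default axios.create({ baseURL });
This is only a sketch of the idea; a dedicated Nginx or Traefik service in front of both containers is the more robust version of the same pattern.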
I'm running VSCode 1.66.0 and I'm trying to attach the Go debugger to a running instance of a specific Go docker image. I'm able to do this when using 'dlv' but not when using 'dlv dap', and I can't understand why.
I'm following the instructions provided here:
https://vscode-debug-specs.github.io/go/#debugging-running-remote-process
Versions used:
VSCode ver. 1.66.0
dlv ver. 1.8.2
Docker Desktop ver. 4.4.4
My launch.json looks like this:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Remote Process",
      "type": "go",
      "request": "attach",
      "mode": "remote",
      "port": 2345,
      "host": "127.0.0.1",
      "showLog": true,
      "apiVersion": 2,
      "dlvLoadConfig": {
        "followPointers": true,
        "maxVariableRecurse": 1,
        "maxStringLen": 200,
        "maxArrayValues": 64,
        "maxStructFields": -1
      }
    }
  ]
}
My Dockerfile looks like this (as you can see, the resulting image is based on the official image mcr.microsoft.com/vscode/devcontainers/go:0-1.18):
# syntax=docker/dockerfile:experimental
FROM golang:1.18 as builder
[...]
RUN CGO_ENABLED=0 GOOS=linux GOARCH=$ARCH go build -gcflags "all=-N -l" -v -o main .
FROM mcr.microsoft.com/vscode/devcontainers/go:0-1.18
WORKDIR /app
CMD ["./main"]
The command I use to launch the 'dlv' process inside the running docker instance is this:
dlv dap --listen=:2345 --log=true --api-version=2 attach 1 ./app/main
The VSCode UI allegedly allows me to attach via:
> Debug: Select and Start Debugging -> Attach to Remote Process
However, the debugger doesn't really activate: breakpoints are not honored and the 'dlv' command shown above doesn't print anything to its output.
If I remove the 'dap' part from the 'dlv' command:
dlv --listen=:2345 --log=true --headless=true --log-output=debugger,debuglineerr,gdbwire,lldbout,rpc --api-version=2 --accept-multiclient --headless attach 1 ./app/main
then the debugger attaches perfectly and works just fine (this is the legacy mode of the dlv debugger if I understand correctly). What gives? Am I missing something?
I am trying to start my application using http-server through npm start. It works fine on localhost, but when I deploy the same application in a docker container (with the port mapped), it isn't reachable in my local browser.
I have a directory inside my docker container (Ubuntu), which I run with the command
docker run -it -p 8001:5500 ubuntu
I have installed everything inside my container - npm, vim editor, etc.
My package.json file is
{
  "name": "test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "http-server -a localhost -p 5500"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "concurrently": "^5.0.2",
    "http-server": "^0.12.1"
  }
}
After that I ran npm install and then npm start. The command runs and the server starts.
But when I try to open my application in the Chrome browser, it gives me an empty response. When I run the same application directly on localhost, it is accessible.
Why am I not able to access the server through the docker container's port mapping?
localhost inside the container isn't reachable from outside the container. Run your server on 0.0.0.0 inside the container.
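Applied to the package.json above, that would look something like this (a sketch; only the -a flag changes):
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "start": "http-server -a 0.0.0.0 -p 5500"
},
With the container started via docker run -it -p 8001:5500 ..., the app should then be reachable at http://localhost:8001 on the host.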
I can't figure out how to change the default port 5000 in Svelte to some other port when the sample template is installed through degit.
The sveltejs/template uses sirv-cli.
You can add --port or -p in your start:dev script in package.json.
Instead of:
"start:dev": "sirv public --single --dev"
Use:
"start:dev": "sirv public --single --dev --port 5555"
You can see more of the sirv-cli options here:
https://github.com/lukeed/sirv/tree/master/packages/sirv-cli
You can use env vars HOST and PORT.
From https://www.npmjs.com/package/sirv-cli:
Note: The HOST and PORT environment variables will override flag values.
Like this:
HOST=0.0.0.0 PORT=6000 npm run dev
Go to package.json, you will find this line:
"start": "sirv public --no-clear"
Change it to this, or to any other port that you want:
"start": "sirv public --no-clear --port 8089"
Since Svelte now uses Vite, the following applies to both Svelte and SvelteKit.
If you want to change to a fixed port for your project, change the dev script under "scripts" in your package.json file:
"dev": "vite --port 3333",
If you want to change it only when starting the development server:
npm run dev -- --port=3333
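You can also set the port in the Vite config rather than in the script, so it applies however the dev server is started. A sketch for a plain Svelte + Vite project (SvelteKit accepts the same server option in its Vite config):
// vite.config.js
import { defineConfig } from 'vite';
import { svelte } from '@sveltejs/vite-plugin-svelte';

export default defineConfig({
  plugins: [svelte()],
  server: {
    port: 3333 // dev server port instead of the default
  }
});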