Should I use both .dockerignore and Lando's "excludes" parameter? - docker

In a Lando setup my project folder looks like this:
.lando.yml
lando/
  Dockerfile
  .dockerignore
  // ... more config folders
htdocs/
  vendor/
  node_modules/
  var/
  cache/
In my .dockerignore I have this:
# .dockerignore
.DS_Store
**/htdocs/vendor
**/htdocs/var
**/htdocs/node_modules
But Lando also has a parameter called "excludes" where I can define folders to be excluded:
# .lando.yml
name: my-project
recipe: symfony
excludes:
  - htdocs/vendor
  - htdocs/var
  - htdocs/node_modules
config:
  ...
services:
  ...
Now my question is: should I use both .dockerignore and "excludes", or is one option better than the other?

Related

mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory

I am new to using Bitbucket Pipelines. I have an issue with deploying my dist folder to an FTP server. The error "mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory" occurs when I try to deploy the project.
This is my bitbucket-pipelines.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.

image: node:16

pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I have tried deleting LOCAL_PATH from the YAML to see what happens, but first of all I do not understand whether my pipeline even has access to the FTP server. How can I check that? And then, how do I replace the dist folder files on the FTP server? Maybe my bitbucket-pipelines.yml is configured incorrectly?
Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I bet it is interpreting the value you passed not as a glob pattern but literally as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist
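To address the other part of the question (whether the pipeline can reach the FTP server at all), one option is a temporary debug step before the deploy. A minimal sketch, assuming plain FTP rather than SFTP and that curl is available in the node:16 image:
- step:
    name: Debug FTP connectivity
    script:
      # List the remote root directory; the step fails if the host or
      # credentials are wrong, which tells you whether access works at all.
      - curl -v --user "$FTP_USERNAME:$FTP_PASSWORD" "ftp://$FTP_HOST/"
Remove the step again once the connection is confirmed.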

Docker-Compose for ASP.Net Core on Windows Containers

I am trying to build an ASP.Net Core container on a Windows base because I would like to test some AD queries. At the same time I would like to share my development folder with the container so that I can edit files on the fly without recompiling the container each time. On Linux with Laravel it worked quite fine with this:
volumes:
  - ./:/var/www
On Windows my Dockerfile looks like this:
FROM mcr.microsoft.com/dotnet/core/sdk:latest AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:latest
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "project.dll"]
and my docker-compose.yml like this:
version: '3.5'

services:
  #ASP
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    environment:
      - "ASPNETCORE_URLS=http://+:80"
    container_name: app
    working_dir: /app
    #restart: on-failure
    volumes:
      - type: bind
        source: .\
        target: c:\app
    entrypoint: ["dotnet", "project.dll"]
As soon as I add the volumes part I receive the following message:
It was not possible to find any installed .NET Core SDKs
Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from:
https://aka.ms/dotnet-download
Without volumes I can at least run it normally, but trying things out is kind of terrible ;)
Any idea what I am doing wrong?
Update
I found the error: after mounting, the app directory is empty, but it should contain the app files.
C:\app>dir
Volume in drive C has no label.
Volume Serial Number is 76CF-40D6
Directory of C:\app
05/03/2020 11:34 AM <DIR> .
05/03/2020 11:34 AM <DIR> ..
05/01/2020 10:25 PM 162 appsettings.Development.json
05/01/2020 10:25 PM 192 appsettings.json
05/03/2020 11:34 AM 106,534 project.deps.json
05/03/2020 11:34 AM 9,216 project.dll
05/03/2020 11:34 AM 169,984 project.exe
05/03/2020 11:34 AM 1,864 project.pdb
05/03/2020 11:34 AM 224 project.runtimeconfig.json
05/03/2020 11:34 AM 35,840 project.Views.dll
05/03/2020 11:34 AM 3,544 project.Views.pdb
05/03/2020 11:34 AM 490 web.config
05/03/2020 11:34 AM <DIR> wwwroot
10 File(s) 328,050 bytes
3 Dir(s) 21,299,187,712 bytes free
Still my question would be, what do I need to change to have the app folder on my local disk?
Another update:
I got the folder mounted, but unfortunately, unlike with Laravel, I cannot edit the files on the fly, even though I changed everything to Razor. Maybe I have to add another folder to the container somehow... At the moment I am thinking the best way is to install IIS locally...
Update: it finally works. I'll post the solution tomorrow.
OK, so here is an explanation of what I had to change.
Activating Razor
In the project.csproj file:
<PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation" Version="3.1.3" Condition="'$(Configuration)' == 'Debug'" />
In launchSettings.json (in the profile's environmentVariables section):
"ASPNETCORE_HOSTINGSTARTUPASSEMBLIES":"Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation"
Change to Razor
Editing Startup.cs (in ConfigureServices):
services.AddControllersWithViews()
    .AddRazorRuntimeCompilation();

services.Configure<MvcRazorRuntimeCompilationOptions>(options =>
{
    var libraryPath = Path.GetFullPath(
        Path.Combine(HostEnvironment.ContentRootPath, "..", "ProjectFolderName"));
    options.FileProviders.Add(new PhysicalFileProvider(libraryPath));
});
(The important part is the project folder name.)
Add the project folder to the container
volumes:
  - .\bin\Debug\netcoreapp3.1:c:\app:ro
  - .:c:\project

Confluence on Docker runs setup assistant on existing installation after update

A few days ago, my Watchtower updated Confluence on Docker with the 6.15.1-alpine tag. It's hosted using Atlassian's official image. Since this update, Confluence shows the setup screen and I haven't had any chance to get into the admin panel. When continuing the wizard and entering the credentials of the existing installation, it gives an error that an installation already exists and would be overwritten if I continued.
It was a re-push of the exact same 6.15.1 version tag, not a regular version update, so there seems to be no way to go back to the old, working image. Other versions also seem to have been re-pushed; I tried some older ones and also a newer one, without success.
docker-compose.yml
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1-alpine
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./confluence.cfg.xml:/var/atlassian/application-data/confluence/confluence.cfg.xml
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true
I found out that the following changes had been made to the images:
Ownership
The logs threw errors about not being able to write to the log files, because nearly the entire home directory was owned by a user called bin:
root@8ac38faa94f1:/var/atlassian/application-data/confluence# ls -l
total 108
drwx------ 2 bin bin 4096 Aug 19 00:03 analytics-logs
drwx------ 3 bin bin 4096 Jun 15 2017 attachments
drwx------ 2 bin bin 24576 Jan 12 2019 backups
[...]
This could be fixed by executing a chown:
docker exec -it confluence bash
chown confluence:confluence -R /var/atlassian/application-data/confluence
Mounts inside a mount
My docker-compose.yml mounts a volume at /var/atlassian/application-data/confluence, and inside that volume the confluence.cfg.xml file was mounted from the current directory. This somewhat older approach was meant to separate the user data in the volume from configuration files like docker-compose.yml, and also from application configuration such as confluence.cfg.xml.
This no longer seems to work properly with Docker 17.05 and Docker Compose 1.8.0 (at least in combination with Confluence), so I simply removed that second mount and placed the configuration file inside the volume, as sketched below.
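One way to move the file into the named volume is to copy it into the running container; a minimal sketch, assuming the container is already running under the name confluence:
# Copy the config file from the host into the volume-backed home directory.
docker cp ./confluence.cfg.xml confluence:/var/atlassian/application-data/confluence/confluence.cfg.xml
# Fix ownership so Confluence can read and write the file.
docker exec confluence chown confluence:confluence /var/atlassian/application-data/confluence/confluence.cfg.xml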
Atlassian now creates config files dynamically
It was noticeable that my mounted configuration files like confluence.cfg.xml and server.xml were overwritten by Atlassian's container. Their source code shows that they now use Jinja2, a common Python template engine used e.g. in Ansible. A Python script parses those templates on startup and creates Confluence's configuration files, without properly checking whether all of those files already exist.
Mounting them as read-only caused the app to crash, because that case is not handled in their Python script either. By analyzing their templates, I learned that they exposed nearly every config item as an environment variable. Not a bad approach, so I specified my server.xml parameters via env variables instead of mounting the entire file.
In my case, Confluence is behind a Traefik reverse proxy, and it's required to tell Confluence its final application URL for end users:
environment:
  - ATL_proxyName=confluence.my-domain.com
  - ATL_proxyPort=443
  - ATL_tomcat_scheme=https
Final working docker-compose.yml
By applying all modifications above, accessing the existing installation works again using the following docker-compose.yml file:
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
- ATL_proxyName=confluence.my-domain.com
- ATL_proxyPort=443
- ATL_tomcat_scheme=https
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true

ERROR: Service 'api-gateway' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder931060141/api-gateway: no such file or directory

I am having trouble building a Go-based Docker project. My overall directory structure looks like:
api-gateway
├── handler
├── resource
└── Dockerfile
My Dockerfile contains:
FROM alpine:3.2
ADD api-gateway /api-gateway
ADD resource/pri_key.pem resource/pub_key.pem /resource/
#ADD resource/ca-certificates.crt /etc/ssl/certs/
VOLUME /resource/
ENTRYPOINT [ "/api-gateway" ]
Even though I'm using ADD to include a file in the image, I'm still getting an error. api-gateway is a directory that includes the Dockerfile inside.
D:\FileWithDocument\ExtraCodeProject\shop-micro-master>docker-compose up
Building api-gateway
Step 1/5 : FROM alpine:3.2
---> 98f5f2d17bd1
Step 2/5 : ADD api-gateway /api-gateway
ERROR: Service 'api-gateway' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder931060141/api-gateway: no such file or directory
I use Docker Desktop in Windows.
Docker Engine version is:
Client: Docker Engine - Community
Version: 18.09.2
API version: 1.39
Go version: go1.10.8
Git commit: 6247962
Built: Sun Feb 10 04:12:31 2019
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 04:13:06 2019
OS/Arch: linux/amd64
Experimental: true
When I download the GitHub project and run docker build, it still outputs this error.
ERROR: Service 'api-gateway' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder931060141/api-gateway: no such file or directory
When you run docker build, the directory you give it becomes the context directory; you can only refer to file paths within that directory tree, and any file paths in COPY or ADD statements are always relative to that directory. That means if you're running docker build from the directory named api-gateway that contains the Dockerfile, . is the same directory. Your Dockerfile might look more like:
FROM alpine:3.2
# This will create the directory in the image if it
# doesn't already exist.
WORKDIR /api-gateway
# Copy the entire current directory into the image.
# (Prefer COPY to ADD, unless you specifically want
# automatic archive extraction or HTTP fetches.)
COPY . .
# Copy in some additional files.
# (Remember that anyone who has the image can extract any
# file from it: this leaks a private key.)
COPY resource/pri_key.pem resource/pub_key.pem /resource/
COPY resource/ca-certificates.crt /etc/ssl/certs/
# Set the default command to launch.
# (Prefer CMD to ENTRYPOINT: it is easier to get a debugging
# container with a shell, and there is a useful pattern that
# uses an ENTRYPOINT wrapper to do first-time setup before
# launching the CMD.)
CMD ["/api-gateway/handler"]
If you see a "docker-builder12345678/...: no such file or directory" error, you should always interpret the path components after the long number as relative to the directory you passed to docker build.

Pass environment variable into a Vue App at runtime

How can I access environment variables in Vue, that are passed to the container at runtime and not during the build?
Stack is as follows:
VueCLI 3.0.5
Docker
Kubernetes
There are suggested solutions on Stack Overflow and elsewhere to use a .env file to pass variables (and to use mode), but that happens at build time and gets baked into the Docker image.
I would like to pass the variable into Vue at run-time as follows:
Create Kubernetes ConfigMap (I get this right)
Pass ConfigMap value into K8s pod env variable when running deployment yaml file (I get this right)
Read from env variable created above eg. VUE_APP_MyURL and do something with that value in my Vue App (I DO NOT get this right)
I've tried the following in helloworld.vue:
<template>
  <div>{{displayURL}}
    <p>Hello World</p>
  </div>
</template>

<script>
export default {
  data() {
    return {
      displayURL: ""
    }
  },
  mounted() {
    console.log("check 1")
    this.displayURL = process.env.VUE_APP_ENV_MyURL
    console.log(process.env.VUE_APP_ENV_MyURL)
    console.log("check 3")
  }
}
</script>
I get back "undefined" in the console log and nothing showing on the helloworld page.
I've also tried passing the value into a vue.config file and reading it from there. Same "undefined" result in console.log
<template>
  <div>{{displayURL}}
    <p>Hello World</p>
  </div>
</template>

<script>
const vueconfig = require('../../vue.config');

export default {
  data() {
    return {
      displayURL: ""
    }
  },
  mounted() {
    console.log("check 1")
    this.displayURL = vueconfig.VUE_APP_MyURL
    console.log(vueconfig.VUE_APP_MyURL)
    console.log("check 3")
  }
}
</script>
With vue.config looking like this:
module.exports = {
  VUE_APP_MyURL: process.env.VUE_APP_ENV_MyURL
}
If I hardcode a value into VUE_APP_MyURL in the vue.config file it shows successfully on the helloworld page.
VUE_APP_ENV_MyURL is successfully populated with the correct value when I interrogate it with kubectl describe pod.
process.env.VUE_APP_MyURL doesn't seem to successfully retrieve the value.
For what it is worth... I am able to use process.env.VUE_APP_3rdURL successfully to pass values into a Node.js app at runtime.
Create a file config.js with your desired configuration. We will use that later to create a config map that we deploy to Kubernetes. Put it into your Vue.js project where your other JavaScript files are. Although we will exclude it later from minification, it is useful to have it there so that IDE tooling works with it.
const config = (() => {
  return {
    "VUE_APP_ENV_MyURL": "...",
  };
})();
Now make sure that your script is excluded from minification. To do that, create a file vue.config.js with the following content that preserves our config file.
const path = require("path");

module.exports = {
  publicPath: '/',
  configureWebpack: {
    module: {
      rules: [
        {
          test: /config.*config\.js$/,
          use: [
            {
              loader: 'file-loader',
              options: {
                name: 'config.js'
              },
            }
          ]
        }
      ]
    }
  }
}
In your index.html, add a script block to load the config file manually. Note that the config file won't be there as we just excluded it. Later, we will mount it from a ConfigMap into our container. In this example, we assume that we will mount it into the same directory as our HTML document.
<script src="<%= BASE_URL %>config.js"></script>
Change your code to use our runtime config:
this.displayURL = config.VUE_APP_ENV_MyURL || process.env.VUE_APP_ENV_MyURL
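Applied to the helloworld.vue component from the question, the script section might then look like this; a sketch, where config is the global object defined by the config.js that index.html now loads:
export default {
  data() {
    return {
      displayURL: ""
    }
  },
  mounted() {
    // Prefer the value injected at runtime via config.js and fall back
    // to the value baked in at build time.
    this.displayURL = config.VUE_APP_ENV_MyURL || process.env.VUE_APP_ENV_MyURL
  }
}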
In Kubernetes, create a ConfigMap with the content of your config file (read it from your actual config.js, of course).
apiVersion: v1
kind: ConfigMap
metadata:
  ...
data:
  config.js: |
    var config = (() => {
      return {
        "VUE_APP_ENV_MyURL": "...",
      };
    })();
Reference the config map in your deployment. This mounts the config map as a file into your container. The mountPath already contains our minified index.html; we mount the config file that we referenced before next to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      volumes:
        - name: config-volume
          configMap:
            name: ...
      containers:
        - ...
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/config.js
              subPath: config.js
Now you can access the config file at <Base URL>/config.js and you should see the exact content that you put into the ConfigMap entry. Your HTML document loads that config map as it loads the rest of your minified Vue.js code. Voila!
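A quick way to verify the mount without opening the browser; a sketch that assumes the deployment is exposed through a service named my-vue-app on port 80:
# In one terminal: forward the service locally.
kubectl port-forward svc/my-vue-app 8080:80
# In another terminal: fetch the runtime config.
curl http://localhost:8080/config.js
# Expected output: the exact config.js content from the ConfigMap.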
I am adding my working solution here, for those who are still having trouble. I do think that Hendrik M Halkow's answer is more elegant, though I couldn't manage to solve it that way, simply because of my lack of expertise in webpack and Vue. I just couldn't figure out where to put the config file and how to refer to it.
My approach is to build for production with constants (dummy values) in place of the environment variables, then replace those constants in the image using a custom entrypoint script. The solution goes like this.
I have encapsulated all configs into one file called app.config.js:
export const clientId = process.env.VUE_APP_CLIENT_ID
export const baseURL = process.env.VUE_APP_API_BASE_URL

export default {
  clientId,
  baseURL,
}
This is used in the project simply by importing the values from the config file:
import { baseURL } from '#/app.config';
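For example, a hypothetical API client module could consume those values like this (axios is only used for illustration and is not part of the original setup):
// api.js - illustrative only
import axios from 'axios'
import { baseURL, clientId } from '#/app.config'

// All requests share the runtime-configured base URL.
export const api = axios.create({
  baseURL,
  headers: { 'X-Client-Id': clientId },
})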
Then I am using standard .env.[profile] files to set environment variables.
e.g. the .env.development
VUE_APP_API_BASE_URL=http://localhost:8085/radar-upload
VUE_APP_CLIENT_ID=test-client
Then for production I set string constants as values.
e.g. the .env.production
VUE_APP_API_BASE_URL=VUE_APP_API_BASE_URL
VUE_APP_CLIENT_ID=VUE_APP_CLIENT_ID
Please note that the value here can be any unique string. Just to keep readability easier, I am using the environment variable's own name as its value. This will get compiled and bundled just like in development mode.
In my Dockerfile, I add an entrypoint that reads those constants and replaces them with the environment variable values.
My Dockerfile looks like this (it is pretty standard):
FROM node:10.16.3-alpine as builder
RUN mkdir /app
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY . /app/
RUN npm run build --prod
FROM nginx:1.17.3-alpine
# add init script
COPY ./docker/nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY --from=builder /app/dist/ .
COPY ./docker/entrypoint.sh /entrypoint.sh
# expose internal port:80 and run init.sh
EXPOSE 80
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
Then create a ./docker/entrypoint.sh file as below.
#!/bin/sh

ROOT_DIR=/usr/share/nginx/html

# Replace env vars in JavaScript files
echo "Replacing env constants in JS"
for file in $ROOT_DIR/js/app.*.js* $ROOT_DIR/index.html $ROOT_DIR/precache-manifest*.js;
do
  echo "Processing $file ...";
  sed -i 's|VUE_APP_API_BASE_URL|'${VUE_APP_API_BASE_URL}'|g' $file
  sed -i 's|VUE_APP_CLIENT_ID|'${VUE_APP_CLIENT_ID}'|g' $file
done

echo "Starting Nginx"
nginx -g 'daemon off;'
This gives me a runtime-configurable image that I can run in many environments. I know it is a bit of a hack, but I have seen many people do it this way.
Hope this helps someone.
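To illustrate, such an image can then be started with different values per environment; the image name below is just an example:
# The entrypoint script substitutes these values into the bundled JS
# before Nginx starts.
docker run -p 8080:80 \
  -e VUE_APP_API_BASE_URL=https://api.example.com/radar-upload \
  -e VUE_APP_CLIENT_ID=prod-client \
  my-vue-app:latest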
Create config file
In public folder: public/config.js
const config = (() => {
  return {
    "VUE_CONFIG_APP_API": "...",
  };
})();
Update index.html
Update public/index.html to contain the following at the end of the head:
<!-- docker configurable variables -->
<script src="<%= BASE_URL %>config.js"></script>
There is no need to update vue.config.js as we are using the public folder for configuration.
ESLint
ESLint would give us an error about the usage of an undefined variable, therefore we define the global variable in the .eslintrc.js file:
globals: {
  config: "readable",
},
Usage
E.g. in the store src/store/user.js:
export const actions = {
  async LoadUsers({ dispatch }) {
    return await dispatch(
      "axios/get",
      {
        url: config.VUE_CONFIG_APP_API + "User/List",
      },
      { root: true }
    );
  },
  ...
K8S ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fe-config
  namespace: ...
data:
  config.js: |
    var config = (() => {
      return {
        "VUE_CONFIG_APP_API": "...",
      };
    })();
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      volumes:
        - name: config-volume
          configMap:
            name: fe-config
      containers:
        - ...
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/config.js
              subPath: config.js
I had the same problem in my current project and found out that it is not possible to access environment variables at runtime at the moment, so I ended up with the solution of creating .env files or local environment variables that, as you said, are used at build time.
I recently created a set of plugins to solve this problem elegantly:
No global namespace pollution.
No import statement is required.
Almost zero configuration.
Out-of-the-box support for Vue CLI (also works with Webpack, Rollup and Vite, CSR, SSR and SSG, and unit testing tools. Powered by Unplugin and Babel).
You can access environment variables (heavily inspired by Vite) this way:
// src/index.js
console.log(`Hello, ${import.meta.env.HELLO}.`);
During production, it will be temporarily replaced with a placeholder:
// dist/index.js
console.log(`Hello, ${"__import_meta_env_placeholder__".HELLO}.`);
Finally, you can use the built-in script to replace the placeholders with real environment variables from your system (e.g., to read environment variables in your k8s pod):
// dist/index.js
console.log(`Hello, ${{ HELLO: "import-meta-env" }.HELLO}.`);
// > Hello, import-meta-env.
You can see more info at https://iendeavor.github.io/import-meta-env/ .
And there is a Docker setup example and Vue CLI setup example.
Hope this helps someone who needs this.
I got it to work with the solution proposed by Hendrik M Halkow, but I stored the config.js in the static folder. By doing that, I don't have to care about not minifying the file.
Then include it like this:
<script src="<%= BASE_URL %>static/config.js"></script>
and use this volume mount configuration:
...
volumeMounts:
  - name: config-volume
    mountPath: /usr/share/nginx/html/static/config.js
    subPath: config.js
If you are using Vue.js 3 + Vite 3 + TypeScript, you can do this:
Create app.config.ts (not JS):
export const clientId = import.meta.env.VITE_CLIENT_ID
export const baseURL = import.meta.env.VITE_API_BASE_URL

export default {
  clientId,
  baseURL,
}
Replace values in assets subdir: (improved shell script)
#!/bin/sh
# see https://stackoverflow.com/questions/18185305/storing-bash-output-into-a-variable-using-eval

ROOT_DIR=/usr/share/nginx/html

# Replace env vars in JavaScript files
echo "Replacing env constants in JS"
keys="VITE_CLIENT_ID
VITE_API_BASE_URL"

for file in $ROOT_DIR/assets/index*.js* ;
do
  echo "Processing $file ...";
  for key in $keys
  do
    value=$(eval echo \$$key)
    echo "replace $key by $value"
    sed -i 's#'"$key"'#'"$value"'#g' $file
  done
done

echo "Starting Nginx"
nginx -g 'daemon off;'
In the Dockerfile, don't forget the "RUN chmod u+x" step:
# build
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY ./docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY ./docker/entrypoint.sh /entrypoint.sh
COPY ./docker/entrypoint.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/entrypoint.sh
EXPOSE 80
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
Then you are able to import it in any TypeScript file in the project:
import { baseURL } from '#/app.config';
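As with the Vue CLI variant above, the resulting image is configured per environment at startup; a short sketch with an illustrative image name:
# entrypoint.sh rewrites the VITE_* placeholders in dist/assets before Nginx starts.
docker run -p 8080:80 \
  -e VITE_API_BASE_URL=https://api.example.com \
  -e VITE_CLIENT_ID=prod-client \
  my-vite-app:latest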
See also the update of NehaM's response: Pass environment variable into a Vue App at runtime.
