I have a Docker container running via docker-compose, with the command running NestJS in dev mode:
FROM node:16-alpine as base
RUN apk add --no-cache libc6-compat tini
FROM base as dev
ENV NODE_ENV development
USER node
WORKDIR /home/node
# copy all files over
COPY --chown=node:node ./ ./
RUN mkdir -p ./my-app/dist/shared/grpc
RUN chown -R node:node ./my-app/dist
RUN chown -R node:node ./my-app/dist/shared/grpc
My gRPC files are in a shared project. The full structure is:
services/shared
services/my-app
The shared project has the gRPC files inside the directory shared/grpc.
NestJS copies these over to its dist folder when building. As this is dev mode, that happens on every code change (and again whenever the watcher restarts after an error):
my-app/nest-cli.json:
{
  "collection": "@nestjs/schematics",
  "sourceRoot": "src",
  "compilerOptions": {
    "assets": [
      {
        "include": "../../shared/grpc/*.proto",
        "outDir": "./dist/shared/grpc"
      }
    ],
    "watchAssets": true
  },
  "entryFile": "/my-app/src/main"
}
NestJS seems to detect 0 errors, but fails when copying the gRPC files:
my-app_1 | [7:06:11 AM] Found 0 errors. Watching for file changes.
my-app_1 |
my-app_1 | node:fs:1828
my-app_1 | handleErrorFromBinding(ctx);
my-app_1 | ^
my-app_1 |
my-app_1 | Error [ShellJSInternalError]: EPERM: operation not permitted, chmod 'dist/shared/grpc/apps.proto'
my-app_1 | at Object.chmodSync (node:fs:1828:3)
my-app_1 | at copyFileSync (/home/node/my-app/node_modules/shelljs/src/cp.js:78:8)
my-app_1 | at /home/node/my-app/node_modules/shelljs/src/cp.js:298:7
my-app_1 | at Array.forEach (<anonymous>)
my-app_1 | at Object._cp (/home/node/my-app/node_modules/shelljs/src/cp.js:243:11)
my-app_1 | at Object.cp (/home/node/my-app/node_modules/shelljs/src/common.js:384:25)
my-app_1 | at AssetsManager.actionOnFile (/home/node/my-app/node_modules/@nestjs/cli/lib/compiler/assets-manager.js:95:19)
my-app_1 | at FSWatcher.<anonymous> (/home/node/my-app/node_modules/@nestjs/cli/lib/compiler/assets-manager.js:70:47)
my-app_1 | at FSWatcher.emit (node:events:520:28)
my-app_1 | at FSWatcher.emitWithAll (/home/node/my-app/node_modules/chokidar/index.js:540:8) {
my-app_1 | errno: -1,
my-app_1 | syscall: 'chmod',
my-app_1 | code: 'EPERM',
my-app_1 | path: 'dist/shared/grpc/apps.proto'
my-app_1 | }
my-app_1 | error Command failed with exit code 1.
my-app_1 | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
my-app_1 | yarn run v1.22.17
However I can't work out why: the node user has ownership of all the folders it needs to access. Here's my docker-compose:
version: '3.8'
services:
  my-app:
    build:
      context: .
      target: dev
      dockerfile: ./my-app/Dockerfile
    restart: always
    user: node
    ports:
      - 3003:3000
    volumes:
      - ./my-app:/home/node/my-app
      - ./shared:/home/node/shared
    working_dir: /home/node/my-app
    command: yarn run start:dev
volumes:
  my-app:
Edit 1
After removing USER node from the Dockerfile and docker-compose this works. But that means Node is running as root, which is not OK and not a secure solution. I have even tried adding RUN chmod -R 777 /home/node in the Dockerfile and that doesn't work. There must be something behind the scenes in NestJS that needs specific permissions, but I can't figure out what.
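Worth noting why image-time chown/chmod has no effect here: the bind mounts in docker-compose (./my-app and ./shared) are mounted over those paths at runtime, so the files keep the host's UID/GID no matter what the Dockerfile did. A minimal sketch of one common workaround, assuming the container should simply run as the host user's IDs (the values and variable names here are illustrative, not from the original post):

# docker-compose.yml (sketch): run the service as the host user's UID/GID so
# files in the bind mount are writable from inside the container.
services:
  my-app:
    user: "${UID:-1000}:${GID:-1000}"   # UID/GID must be exported on the host, e.g. via .env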
Bit late to the party, but it was my nest-cli.json file that was causing the issue for me.
Under compilerOptions I listed assets that aren't normally watched by Nest (i.e. .proto, or in my case .ejs files). Removing that line fixed it for me, so it had to do with running the app in --watch mode: it needed to hot-reload that file and didn't have permission to copy it.
nest-cli.json:
{
  "collection": "@nestjs/schematics",
  "sourceRoot": "src",
  "compilerOptions": {
    "assets": [
      "app/**/**/*.ejs"
    ],
    "watchAssets": true
  }
}
This is the Dockerfile:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "npm","start" ]
This is the docker-compose file:
userPortal:
  image: userportal:latest
  ports:
    - 3001:3001
  links:
    - apiServer
  command: ["npm", "start"]
This is the docker-compose ps output:
localdeployment_userPortal_1 docker-entrypoint.sh npm start Up 0.0.0.0:3001->3001/tcp
This is the package.json:
"scripts": {
"start": "set PORT=3001 && react-scripts start",
...}
These are the container's logs:
userPortal_1 |
userPortal_1 | > user-portal@0.1.0 start
userPortal_1 | > set PORT=3001 && react-scripts start
userPortal_1 |
userPortal_1 | (node:31) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
userPortal_1 | (Use `node --trace-deprecation ...` to show where the warning was created)
userPortal_1 | (node:31) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
userPortal_1 | Starting the development server...
and this is what I get when I try to access localhost:3001:
https://i.stack.imgur.com/lwaOR.png
When I use npm start without docker it works fine so it is not a proxy problem.
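Worth noting about the start script above: set PORT=3001 is Windows cmd syntax. Inside the Linux container's sh it does not set the PORT environment variable, so react-scripts falls back to its default port 3000 while compose only maps port 3001. A minimal sketch of a portable alternative using the cross-env package (not part of the original post; it would need to be installed first, e.g. npm install --save-dev cross-env):

"scripts": {
  "start": "cross-env PORT=3001 react-scripts start"
}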
After upgrading from Node 14.15 to 16.18.1, I have started getting an Error: EACCES: permission denied, scandir '/root/.npm/_logs' error when trying to run ESLint in the GitHub Actions CI/CD workflow.
Note: I am running this inside a Docker container. I am able to run the ESLint script locally in a container built from the same image, but when running in GitHub Actions, the workflow fails with the following error:
Creating csps-lint ... done
Attaching to csps-lint
csps-lint |
csps-lint | > lint
csps-lint | > bash ./scripts/lint.sh
csps-lint |
csps-lint | Running Prettier to identify code formatting issues...
csps-lint |
csps-lint | Checking formatting...
csps-lint | All matched files use Prettier code style!
csps-lint |
csps-lint | Running ESLint to identify code quality issues...
csps-lint | npm WARN logfile Error: EACCES: permission denied, scandir '/root/.npm/_logs'
csps-lint | npm WARN logfile error cleaning log files [Error: EACCES: permission denied, scandir '/root/.npm/_logs'] {
csps-lint | npm WARN logfile errno: -13,
csps-lint | npm WARN logfile code: 'EACCES',
csps-lint | npm WARN logfile syscall: 'scandir',
csps-lint | npm WARN logfile path: '/root/.npm/_logs'
csps-lint | npm WARN logfile }
csps-lint | npm ERR! code EACCES
csps-lint | npm ERR! syscall mkdir
csps-lint | npm ERR! path /root/.npm/_cacache/tmp
csps-lint | npm ERR! errno -13
csps-lint | npm ERR!
csps-lint | npm ERR! Your cache folder contains root-owned files, due to a bug in
csps-lint | npm ERR! previous versions of npm which has since been addressed.
csps-lint | npm ERR!
csps-lint | npm ERR! To permanently fix this problem, please run:
csps-lint | npm ERR! sudo chown -R 1001:123 "/root/.npm"
csps-lint |
csps-lint | npm ERR! Log files were not written due to an error writing to the directory: /root/.npm/_logs
csps-lint | npm ERR! You can rerun the command with `--loglevel=verbose` to see the logs in your terminal
csps-lint | ESLint found code quality issues. Please address any errors above and try again.
csps-lint |
csps-lint exited with code 1
1
Aborting on container exit...
Error: Process completed with exit code 1.
My GH Actions workflow is this:
name: test-eslint-fix
on:
  push:
    branches:
      - eslint-fix
jobs:
  login:
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
  linting:
    needs: login
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: LINT_COMMAND=lint docker-compose -f docker-comp-lint.yml up --abort-on-container-
Here is the Dockerfile that the image is created from:
# start FROM a base layer of node v16.18.1
FROM node:16.18.1
# confirm installation
RUN node -v
RUN npm -v
# Globally install webpack in the container for webpack dev server
RUN npm install webpack -g
# Install db-migrate
RUN npm install db-migrate@0.11.11 -g
# Set up a WORKDIR for application in the container
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json /usr/src/app/
# npm install to create node_modules in the container
RUN npm install
# EXPOSE your server port
EXPOSE 3000
This is my docker-compose file:
version: '3'
services:
  csps-lint:
    image: <<my image>>
    container_name: csps-lint
    ports:
      - '3002:3002'
    volumes:
      - ./:/usr/src/app
      - node_modules:/usr/src/app/node_modules
    environment:
      - NODE_ENV=test
      - LINT_COMMAND=${LINT_COMMAND}
    command: npm run ${LINT_COMMAND}
volumes:
  node_modules:
And this is the bash script that is running the ESLint command:
# Run Prettier in check mode and store status
echo -e "\033[1;33mRunning Prettier to identify code formatting issues...\033[0m\n"
prettier --check .
PRETTIER_STATUS=$?
# Run ESLint in check mode and store status
echo -e "\n\033[1;33mRunning ESLint to identify code quality issues...\033[0m"
npx eslint .
ESLINT_STATUS=$?
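The fragment above only captures the two exit codes; presumably the rest of lint.sh combines them into a final status (the "ESLint found code quality issues" line in the log suggests as much). A minimal sketch of such an ending, assuming nothing beyond the variables already defined:

# Fail the job if either tool reported problems (sketch, not the original script)
if [ $ESLINT_STATUS -ne 0 ]; then
  echo "ESLint found code quality issues. Please address any errors above and try again."
fi
if [ $PRETTIER_STATUS -ne 0 ] || [ $ESLINT_STATUS -ne 0 ]; then
  exit 1
fi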
I have tried adding the recommended fix from the error message in multiple places, but nothing seems to resolve the issue. I've put this command in the lint.sh script, I've added it to the Dockerfile, and I've tried adding it in the docker-compose file as well, but I'm unable to resolve the issue no matter where I place the command.
sudo chown -R 1001:123 "/root/.npm"
In a few of the instances, there was no access to sudo, so I'm wondering if I somehow need to install that first? Do I even need to be setting these permissions? Is there something that has changed from Node v14 to Node v16 that I'm just overlooking?
I've tried to use the solutions provided in other posts with similar error messages for other packages, but none of the solutions seem to fix this issue.
I feel like there must just be some very small thing I've missed, but the part that I'm really lost on is that I haven't changed this workflow other than upgrading the Node.js version in the Dockerfile. Also, I am able to run this workflow locally without issue in a container created from the same image, but inside GH Actions I can't seem to get past this flow.
When running the following command locally, docker exec csps-lint ls -la /root/.npm/_logs, this is my output:
total 1080
drwxr-xr-x 1 root root 4096 Dec 2 23:02 .
drwxr-xr-x 1 root root 4096 Dec 2 20:15 ..
-rw-r--r-- 1 root root 1589 Nov 15 06:46 2022-11-15T06_46_44_197Z-debug-0.log
-rw-r--r-- 1 root root 1589 Dec 2 20:15 2022-12-02T20_15_05_450Z-debug-0.log
-rw-r--r-- 1 root root 65102 Dec 2 20:15 2022-12-02T20_15_05_684Z-debug-0.log
-rw-r--r-- 1 root root 59037 Dec 2 20:15 2022-12-02T20_15_09_141Z-debug-0.log
-rw-r--r-- 1 root root 943351 Dec 2 20:15 2022-12-02T20_15_14_433Z-debug-0.log
-rw-r--r-- 1 root root 1706 Dec 2 20:23 2022-12-02T20_23_05_075Z-debug-0.log
-rw-r--r-- 1 root root 1756 Dec 2 20:23 2022-12-02T20_23_16_787Z-debug-0.log
-rw-r--r-- 1 root root 1599 Dec 2 23:02 2022-12-02T23_02_23_793Z-debug-0.log
UPDATE: I have discovered that when I run the script locally, the user in the csps-lint container is root, but when it runs in CI/CD it runs as an unnamed user with ID 1001. I'm really unclear why the user would be different when the container is built from the same image no matter where it's being run.
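A quick way to confirm which user the container actually runs as in each environment (using the service name from the compose file above):

# Print the effective user/UID inside the container (sketch)
docker-compose run --rm csps-lint id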
I believe this issue was caused by newer versions of npm no longer running as root. After much back and forth and trying 101 different things, I was able to resolve it by:
Adding the following command to the end of my Dockerfile:
RUN chown -R node:node /usr/src/app
Adding a user key to my docker-compose.yml file:
version: '3'
services:
  csps-lint:
    image: <<my image>>
    user: node
    container_name: csps-lint
    ports:
      - '3002:3002'
    volumes:
      - ./:/usr/src/app
      - node_modules:/usr/src/app/node_modules
    environment:
      - NODE_ENV=test
      - LINT_COMMAND=${LINT_COMMAND}
    command: npm run ${LINT_COMMAND}
volumes:
  node_modules:
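For reference, the tail of the Dockerfile would then look something like this (a sketch based on the Dockerfile shown earlier; only the chown line comes from the fix above):

# Set up a WORKDIR for application in the container
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install
# Give the unprivileged node user ownership of the app directory so npm
# can write its cache and log files when the container runs as node.
RUN chown -R node:node /usr/src/app
EXPOSE 3000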
I'm trying to build a simple Next.js app with Docker Compose, but it keeps failing on docker-compose build with exit code 135. I'm running it on a Mac M1 Pro (if that is relevant).
I couldn't find any resources pointing to an exit code 135 though.
This is the docker-compose.yaml
version: '3'
services:
  next-app:
    image: node:18-alpine
    volumes:
      - ./:/site
    command: >
      sh -c "npm install && npm run build && yarn start -H 0.0.0.0 -p 80"
    working_dir: /site
    ports:
      - 80:80
And the logs:
[+] Running 1/0
⠿ Container next-app Created 0.0s
Attaching to next-app
next-app |
next-app | up to date, audited 551 packages in 3s
next-app |
next-app | 114 packages are looking for funding
next-app | run `npm fund` for details
next-app |
next-app | 5 moderate severity vulnerabilities
next-app |
next-app | To address all issues (including breaking changes), run:
next-app | npm audit fix --force
next-app |
next-app | Run `npm audit` for details.
next-app |
next-app | > marketing-site-v2@0.1.0 build
next-app | > next build
next-app |
next-app | info - Linting and checking validity of types...
next-app |
next-app | ./pages/cloud.tsx
next-app | 130:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 133:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 150:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | ./pages/index.tsx
next-app | 176:10 Warning: Image elements must have an alt prop, either with meaningful text, or an empty string for decorative images. jsx-a11y/alt-text
next-app |
next-app | ./components/main-content-display.tsx
next-app | 129:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | info - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
next-app | info - Creating an optimized production build...
next-app | Bus error
next-app exited with code 135
Without knowing exactly what's in your package.json file, I would try this. (For what it's worth, exit code 135 is 128 + 7, i.e. the process was killed by SIGBUS, which matches the "Bus error" line just before the container exited.)
Spin up your vanilla node:18-alpine image without installing dependencies via the adjusted compose file below.
version: '3'
services:
  next-app:
    image: node:18-alpine
    container_name: my_test_container
    volumes:
      - ./:/site
    command: >
      sh -c "tail -f /dev/null"
    working_dir: /site
    ports:
      - 80:80
The command being used here
sh -c "tail -f /dev/null"
is a popular vanilla option for keeping a container up and running when using compose (and when not executing some other command, e.g. npm start, that would keep the container running otherwise).
I have also added a container_name for reference here.
Next, enter the container and try running each command in your original sh sequentially (starting with npm install). See if one of those commands is a problem.
You can enter the container (using the container_name above) via the command below to test (sh rather than bash, since the alpine image does not ship with bash):
docker container exec -u 0 -it my_test_container sh
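Once inside, the original steps can be run one at a time to isolate the failing one (these are the same commands as in the compose file from the question):

# Inside the container, run each step and note which one crashes:
npm install
npm run build
yarn start -H 0.0.0.0 -p 80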
As an aside, at some point I would pull commands like npm install from your compose file back to a Dockerfile defining your image (here node:18-alpine) and any additional custom installs you need for your application (here contained in package.json).
You used an sh command in docker-compose, which is not good Docker practice.
You need a docker-compose.yml along with a Dockerfile, as shown below.
docker-compose.yml
version: "3"
services:
next-app:
build: .
ports:
- "80:80"
And in the Dockerfile:
FROM node:16.15.1-alpine3.16 as site
WORKDIR /usr/src/site
COPY site/ .
RUN npm install
RUN npm run build
EXPOSE 80
CMD npm start
After these changes, you just need a single command to start the server:
docker-compose up --build -d
For a couple of hours I have been struggling with Docker Compose. I am building an Angular app, and I can see the files in the dist directory. Now I want to share these files with the nginx container. I thought a shared volume would do it. But when I add
services:
  client:
    volumes:
      - static:/app/client/dist
  nginx:
    volumes:
      - static:/usr/share/nginx/html
volumes:
  static:
and try docker-compose up --build
I get this error:
client_1 | EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | Error: EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | at Object.fs.rmdirSync (fs.js:863:18)
client_1 | at rmdirSync (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:276:13)
client_1 | at Object.rimrafSync [as removeSync] (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:252:7)
client_1 | at Class.run (/app/client/node_modules/#angular/cli/tasks/build.js:29:16)
client_1 | at Class.run (/app/client/node_modules/#angular/cli/commands/build.js:250:40)
client_1 | at resolve (/app/client/node_modules/#angular/cli/ember-cli/lib/models/command.js:261:20)
client_1 | at new Promise (<anonymous>)
client_1 | at Class.validateAndRun (/app/client/node_modules/#angular/cli/ember-cli/lib/models/command.js:240:12)
client_1 | at Promise.resolve.then.then (/app/client/node_modules/#angular/cli/ember-cli/lib/cli/cli.js:140:24)
client_1 | at <anonymous>
client_1 | npm ERR! code ELIFECYCLE
client_1 | npm ERR! errno 1
client_1 | npm ERR! app@0.0.0 build: `ng build --prod`
client_1 | npm ERR! Exit status 1
client_1 | npm ERR!
client_1 | npm ERR! Failed at the app@0.0.0 build-prod script.
client_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Any help is fully appreciated.
I believe this is, as the error suggests, a deadlock situation. Your docker-compose file has two services that start approximately, if not exactly, simultaneously, and both of them have some sort of hold on the Docker volume (named "static"). When Angular executes ng build, --deleteOutputPath is set to true by default, and when it attempts to delete the output directory, the error message that you received occurs.
If deleteOutputPath is set to false, the issue should be resolved. Perhaps that is sufficient for your needs. If not, as an alternative, set --outputPath to a temp directory within the project directory and, after Angular builds, move the contents into the Docker volume. If the temp directory path is out/dist and the volume maps to dist, this can be used:
ng build && cp -rf ./out/dist/* ./dist
However, that alternative solution is really just working around the issue. To make note, the docker-compose depends_on key will not help in this situation, as it simply signifies a dependency and says nothing about the "readiness" of the dependent service.
Also to make note, executing docker volume rm <name> is no solution here either: both services have a hold on the volume while one is trying to remove it.
Just a thought (I haven't tested it): another alternative is to delete the contents within the output path instead of the directory itself, with deleteOutputPath set to false, since Angular seems to be attempting to delete the directory itself.
Update:
So removing the contents of the output path seems to work! As I mentioned, set deleteOutputPath to false. Then, in your package.json file, in the scripts object, have something similar to this:
{
  "scripts": {
    "build:production": "rm -rf ./dist/* && ng build --configuration production"
  }
}
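For reference, deleteOutputPath lives under the build options in angular.json; a minimal sketch (the project name and other fields are placeholders):

{
  "projects": {
    "app": {
      "architect": {
        "build": {
          "options": {
            "outputPath": "dist",
            "deleteOutputPath": false
          }
        }
      }
    }
  }
}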
You can try to solve it without using named volumes:
services:
  client:
    volumes:
      - ./static-content:/app/client/dist
  nginx:
    volumes:
      - ./static-content:/usr/share/nginx/html
I had an existing Ionic app which I have dockerized. The build and up commands are successful and I can access the app at http://localhost:8100/ionic-lab. However, hot reload doesn't work: whenever I edit an HTML or CSS file, the changes are not reflected.
My Dockerfile:
FROM node:8
COPY package.json /opt/library/
WORKDIR /opt/library
RUN npm install -g cordova ionic && cordova telemetry off
# && echo n | ionic start dockerized-ionic-app --skip-npm --v2 --ts
RUN npm install && npm cache verify
COPY . /opt/library
#CMD ["ionic", "serve", "--all"]
And docker-compose.yml:
app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  command: ionic serve --lab
Why is it happening? What is missing?
UPDATE:
Output of docker-compose build --no-cache
D:\Development\personal_projects\library>docker-compose build --no-cache
Building app
Step 1/6 : FROM node:8
---> b87c2ad8344d
Step 2/6 : COPY package.json /opt/library/
---> 4422d0333b92
Step 3/6 : WORKDIR /opt/library
Removing intermediate container 1cfdd60477f9
---> 1ca3dc5f5bd6
Step 4/6 : RUN npm install -g cordova ionic && cordova telemetry off
---> Running in d7e9bf4e6d7b
/usr/local/bin/cordova -> /usr/local/lib/node_modules/cordova/bin/cordova
/usr/local/bin/ionic -> /usr/local/lib/node_modules/ionic/bin/ionic
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/ionic/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
+ cordova@8.0.0
+ ionic@3.19.1
added 660 packages in 29.173s
You have been opted out of telemetry. To change this, run: cordova telemetry on.
Removing intermediate container d7e9bf4e6d7b
---> 3fedee0878af
Step 5/6 : RUN npm install && npm cache verify
---> Running in 8d482b23f6bb
> node-sass@4.5.3 install /opt/library/node_modules/node-sass
> node scripts/install.js
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-57_binding.node
Download complete
Binary saved to /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Caching binary to /root/.npm/node-sass/4.5.3/linux-x64-57_binding.node
> uglifyjs-webpack-plugin@0.4.6 postinstall /opt/library/node_modules/uglifyjs-webpack-plugin
> node lib/post_install.js
> node-sass@4.5.3 postinstall /opt/library/node_modules/node-sass
> node scripts/build.js
Binary found at /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Testing binary
Binary is fine
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 548 packages in 30.281s
Cache verified and compressed (~/.npm/_cacache):
Content verified: 1476 (55779072 bytes)
Index entries: 2306
Finished in 9.736s
Removing intermediate container 8d482b23f6bb
---> 5815e391f2c6
Step 6/6 : COPY . /opt/library
---> 5cc9637a678c
Successfully built 5cc9637a678c
Successfully tagged library_app:latest
D:\Development\personal_projects\library>
And output of docker-compose up:
D:\Development\personal_projects\library>docker-compose up
Recreating library_app_1 ... done
Attaching to library_app_1
app_1 | Starting app-scripts server: --address 0.0.0.0 --port 8100 --livereload-port 35729 --dev-logger-port 53703 --nobrowser --lab - Ctrl+C to cancel
app_1 | [14:45:19] watch started ...
app_1 | [14:45:19] build dev started ...
app_1 | [14:45:19] clean started ...
app_1 | [14:45:19] clean finished in 78 ms
app_1 | [14:45:19] copy started ...
app_1 | [14:45:19] deeplinks started ...
app_1 | [14:45:20] deeplinks finished in 60 ms
app_1 | [14:45:20] transpile started ...
app_1 | [14:45:24] transpile finished in 4.54 s
app_1 | [14:45:24] preprocess started ...
app_1 | [14:45:24] preprocess finished in 1 ms
app_1 | [14:45:24] webpack started ...
app_1 | [14:45:24] copy finished in 5.33 s
app_1 | [14:45:31] webpack finished in 6.73 s
app_1 | [14:45:31] sass started ...
app_1 | [14:45:32] sass finished in 1.46 s
app_1 | [14:45:32] postprocess started ...
app_1 | [14:45:32] postprocess finished in 40 ms
app_1 | [14:45:32] lint started ...
app_1 | [14:45:32] build dev finished in 13.64 s
app_1 | [14:45:32] watch ready in 13.73 s
app_1 | [14:45:32] dev server running: http://localhost:8100/
app_1 |
[OK] Development server running!
app_1 | Local: http://localhost:8100
app_1 | External: http://172.17.0.2:8100
app_1 | DevApp: library@8100 on 1643dcb6c0d7
app_1 |
app_1 | [14:45:35] lint finished in 2.51 s
Your Dockerfile and docker-compose do exactly what is needed.
With the - .:/opt/library line the volume gets mounted correctly, and your local changes take effect in the container as well.
If you are on Windows, the problem is that Hyper-V is not capable of propagating local file changes correctly into the container, so the serve program is not able to catch file changes.
The solution is to run the dev server with polling enabled, i.e. ng serve with the poll flag: ng serve --poll 200 --host=0.0.0.0 --port=8100.
--poll 200 actively looks for file changes every 200 ms
--host=0.0.0.0 sets the host; 0.0.0.0 is used to be reachable from other containers
--port=8100 is used to get the same port as ionic serve uses (just for convenience)
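Wired into the compose file from the question, that would look roughly like this (a sketch; whether you call ng directly or go through ionic serve depends on the project setup):

app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  # Poll for changes instead of relying on file-change events, which do not
  # propagate reliably through Windows (Hyper-V) bind mounts:
  command: ng serve --poll 200 --host=0.0.0.0 --port=8100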
You said "hot reload doesn't work", this is correct.
if you re-build docker container then only you will see code changes, because your source code needs to get copy inside your docker-container.
just run docker-compose up -d or rebuild docker container then you should see your code changes.
You are mapping local port 8100 to container port 8100, which is OK. You are running ionic from a container, in an external way.
Try "ionic serve --external".