Dockerized Ionic app hot reload not working

I had an existing Ionic app which I have dockerized. The build and up commands succeed and I can access the app at http://localhost:8100/ionic-lab. However, hot reload doesn't work: whenever I edit an HTML or CSS file, the changes are not reflected.
My Dockerfile:
FROM node:8
COPY package.json /opt/library/
WORKDIR /opt/library
RUN npm install -g cordova ionic && cordova telemetry off
# && echo n | ionic start dockerized-ionic-app --skip-npm --v2 --ts
RUN npm install && npm cache verify
COPY . /opt/library
#CMD ["ionic", "serve", "--all"]
And docker-compose.yml:
app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  command: ionic serve --lab
Why is it happening? What is missing?
UPDATE:
Output of docker-compose build --no-cache
D:\Development\personal_projects\library>docker-compose build --no-cache
Building app
Step 1/6 : FROM node:8
---> b87c2ad8344d
Step 2/6 : COPY package.json /opt/library/
---> 4422d0333b92
Step 3/6 : WORKDIR /opt/library
Removing intermediate container 1cfdd60477f9
---> 1ca3dc5f5bd6
Step 4/6 : RUN npm install -g cordova ionic && cordova telemetry off
---> Running in d7e9bf4e6d7b
/usr/local/bin/cordova -> /usr/local/lib/node_modules/cordova/bin/cordova
/usr/local/bin/ionic -> /usr/local/lib/node_modules/ionic/bin/ionic
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/ionic/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
+ cordova@8.0.0
+ ionic@3.19.1
added 660 packages in 29.173s
You have been opted out of telemetry. To change this, run: cordova telemetry on.
Removing intermediate container d7e9bf4e6d7b
---> 3fedee0878af
Step 5/6 : RUN npm install && npm cache verify
---> Running in 8d482b23f6bb
> node-sass@4.5.3 install /opt/library/node_modules/node-sass
> node scripts/install.js
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-57_binding.node
Download complete
Binary saved to /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Caching binary to /root/.npm/node-sass/4.5.3/linux-x64-57_binding.node
> uglifyjs-webpack-plugin@0.4.6 postinstall /opt/library/node_modules/uglifyjs-webpack-plugin
> node lib/post_install.js
> node-sass@4.5.3 postinstall /opt/library/node_modules/node-sass
> node scripts/build.js
Binary found at /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Testing binary
Binary is fine
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 548 packages in 30.281s
Cache verified and compressed (~/.npm/_cacache):
Content verified: 1476 (55779072 bytes)
Index entries: 2306
Finished in 9.736s
Removing intermediate container 8d482b23f6bb
---> 5815e391f2c6
Step 6/6 : COPY . /opt/library
---> 5cc9637a678c
Successfully built 5cc9637a678c
Successfully tagged library_app:latest
D:\Development\personal_projects\library>
And output of docker-compose up:
D:\Development\personal_projects\library>docker-compose up
Recreating library_app_1 ... done
Attaching to library_app_1
Starting app-scripts server: --address 0.0.0.0 --port 8100 --livereload-port 35729 --dev-logger-port 53703 --nobrowser --lab - Ctrl+C to cancel
app_1 | [14:45:19] watch started ...
app_1 | [14:45:19] build dev started ...
app_1 | [14:45:19] clean started ...
app_1 | [14:45:19] clean finished in 78 ms
app_1 | [14:45:19] copy started ...
app_1 | [14:45:19] deeplinks started ...
app_1 | [14:45:20] deeplinks finished in 60 ms
app_1 | [14:45:20] transpile started ...
app_1 | [14:45:24] transpile finished in 4.54 s
app_1 | [14:45:24] preprocess started ...
app_1 | [14:45:24] preprocess finished in 1 ms
app_1 | [14:45:24] webpack started ...
app_1 | [14:45:24] copy finished in 5.33 s
app_1 | [14:45:31] webpack finished in 6.73 s
app_1 | [14:45:31] sass started ...
app_1 | [14:45:32] sass finished in 1.46 s
app_1 | [14:45:32] postprocess started ...
app_1 | [14:45:32] postprocess finished in 40 ms
app_1 | [14:45:32] lint started ...
app_1 | [14:45:32] build dev finished in 13.64 s
app_1 | [14:45:32] watch ready in 13.73 s
app_1 | [14:45:32] dev server running: http://localhost:8100/
app_1 |
[OK] Development server running!
app_1 | Local: http://localhost:8100
app_1 | External: http://172.17.0.2:8100
app_1 | DevApp: library@8100 on 1643dcb6c0d7
app_1 |
app_1 | [14:45:35] lint finished in 2.51 s

Your Dockerfile and docker-compose.yml do exactly what is needed.
With the - .:/opt/library line the volume is mounted correctly, so your local changes do reach the container as well.
If you are on Windows, the problem is that Hyper-V does not propagate local file changes into the container correctly, so the serve process cannot detect them.
The solution is to use ng serve directly and enable polling by running it with the poll flag: ng serve --poll 200 --host=0.0.0.0 --port=8100.
--poll 200 actively checks for file changes every 200 ms
--host=0.0.0.0 sets the host; 0.0.0.0 makes the server reachable from other containers
--port=8100 uses the same port as ionic serve (just for convenience)
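If you want to keep everything in Compose, the polling command from this answer can go straight into docker-compose.yml. A minimal sketch, assuming the project is actually served through the Angular CLI (an Ionic 3 / app-scripts project would keep its ionic serve command instead):
app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  command: ng serve --poll 200 --host=0.0.0.0 --port=8100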

You said "hot reload doesn't work", and that is expected: you will only see code changes after rebuilding the Docker container, because your source code has to be copied into the container.
Just run docker-compose up -d or rebuild the container, and you should see your code changes.
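For example (a sketch; the --build flag forces the image to be rebuilt before the container starts, so the freshly copied code is picked up):
docker-compose up -d --build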

You are mapping the local 8100 port to the container's 8100 port, which is fine. You are running Ionic from a container, i.e. in an external way.
Try with ionic serve --external

Related

Getting an npm permissions error in CI/CD after upgrading to Node v16 - working fine locally running container from same image

After upgrading from Node 14.15 => 16.18.1, I have started getting an Error: EACCES: permission denied, scandir '/root/.npm/_logs' error when trying to run ESLint in the GitHub Actions CI/CD workflow.
Note: I am running this inside a Docker container. I am able to run the ESLint script locally in a container built from the same image, but when running in GitHub Actions, the workflow fails with the following error:
Creating csps-lint ... done
Attaching to csps-lint
csps-lint |
csps-lint | > lint
csps-lint | > bash ./scripts/lint.sh
csps-lint |
csps-lint | Running Prettier to identify code formatting issues...
csps-lint |
csps-lint | Checking formatting...
csps-lint | All matched files use Prettier code style!
csps-lint |
csps-lint | Running ESLint to identify code quality issues...
csps-lint | npm WARN logfile Error: EACCES: permission denied, scandir '/root/.npm/_logs'
csps-lint | npm WARN logfile error cleaning log files [Error: EACCES: permission denied, scandir '/root/.npm/_logs'] {
csps-lint | npm WARN logfile errno: -13,
csps-lint | npm WARN logfile code: 'EACCES',
csps-lint | npm WARN logfile syscall: 'scandir',
csps-lint | npm WARN logfile path: '/root/.npm/_logs'
csps-lint | npm WARN logfile }
csps-lint | npm ERR! code EACCES
csps-lint | npm ERR! syscall mkdir
csps-lint | npm ERR! path /root/.npm/_cacache/tmp
csps-lint | npm ERR! errno -13
csps-lint | npm ERR!
csps-lint | npm ERR! Your cache folder contains root-owned files, due to a bug in
csps-lint | npm ERR! previous versions of npm which has since been addressed.
csps-lint | npm ERR!
csps-lint | npm ERR! To permanently fix this problem, please run:
csps-lint | npm ERR! sudo chown -R 1001:123 "/root/.npm"
csps-lint |
csps-lint | npm ERR! Log files were not written due to an error writing to the directory: /root/.npm/_logs
csps-lint | npm ERR! You can rerun the command with `--loglevel=verbose` to see the logs in your terminal
csps-lint | ESLint found code quality issues. Please address any errors above and try again.
csps-lint |
csps-lint exited with code 1
1
Aborting on container exit...
Error: Process completed with exit code 1.
My GH Actions workflow is this:
name: test-eslint-fix
on:
  push:
    branches:
      - eslint-fix
jobs:
  login:
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
  linting:
    needs: login
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: LINT_COMMAND=lint docker-compose -f docker-comp-lint.yml up --abort-on-container-
Here is the Dockerfile that the image is created from:
# start FROM a base layer of node v16.18.1
FROM node:16.18.1
# confirm installation
RUN node -v
RUN npm -v
# Globally install webpack in the container for webpack dev server
RUN npm install webpack -g
# Install db-migrate
RUN npm install db-migrate@0.11.11 -g
# Set up a WORKDIR for application in the container
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json /usr/src/app/
# npm install to create node_modules in the container
RUN npm install
# EXPOSE your server port
EXPOSE 3000
This is my docker-compose file:
version: '3'
services:
  csps-lint:
    image: <<my image>>
    container_name: csps-lint
    ports:
      - '3002:3002'
    volumes:
      - ./:/usr/src/app
      - node_modules:/usr/src/app/node_modules
    environment:
      - NODE_ENV=test
      - LINT_COMMAND=${LINT_COMMAND}
    command: npm run ${LINT_COMMAND}
volumes:
  node_modules:
And this is the bash script that is running the ESLint command:
# Run Prettier in check mode and store status
echo -e "\033[1;33mRunning Prettier to identify code formatting issues...\033[0m\n"
prettier --check .
PRETTIER_STATUS=$?
# Run ESLint in check mode and store status
echo -e "\n\033[1;33mRunning ESLint to identify code quality issues...\033[0m"
npx eslint .
ESLINT_STATUS=$?
I have tried adding the recommended fix from the error message in multiple places, but nothing resolves the issue. I've put the command in the lint.sh script, added it to the Dockerfile, and tried adding it to the docker-compose file as well, but I'm unable to resolve the issue no matter where I place it.
sudo chown -R 1001:123 "/root/.npm"
In a few of the instances, there was no access to sudo, so I'm wondering if I somehow need to install that first? Do I even need to be setting these permissions? Is there something that has changed from Node v14 to Node v16 that I'm just overlooking?
I've tried to use the solutions provided in other posts with similar error messages for other packages, but none of the solutions seem to fix this issue.
I feel like there must just be some very small thing I've missed, but the part that I'm really lost on is that I haven't changed this workflow other than upgrading the Node.js version in the Dockerfile. Also, I am able to run this workflow locally without issue in a container created from the same image, but inside GH Actions I can't seem to get past this flow.
When running the following command locally docker exec csps-lint ls -la /root/.npm/_logs, this is my output:
total 1080
drwxr-xr-x 1 root root 4096 Dec 2 23:02 .
drwxr-xr-x 1 root root 4096 Dec 2 20:15 ..
-rw-r--r-- 1 root root 1589 Nov 15 06:46 2022-11-15T06_46_44_197Z-debug-0.log
-rw-r--r-- 1 root root 1589 Dec 2 20:15 2022-12-02T20_15_05_450Z-debug-0.log
-rw-r--r-- 1 root root 65102 Dec 2 20:15 2022-12-02T20_15_05_684Z-debug-0.log
-rw-r--r-- 1 root root 59037 Dec 2 20:15 2022-12-02T20_15_09_141Z-debug-0.log
-rw-r--r-- 1 root root 943351 Dec 2 20:15 2022-12-02T20_15_14_433Z-debug-0.log
-rw-r--r-- 1 root root 1706 Dec 2 20:23 2022-12-02T20_23_05_075Z-debug-0.log
-rw-r--r-- 1 root root 1756 Dec 2 20:23 2022-12-02T20_23_16_787Z-debug-0.log
-rw-r--r-- 1 root root 1599 Dec 2 23:02 2022-12-02T23_02_23_793Z-debug-0.log
UPDATE: So I have discovered that when I run the script in the csps-lint folder locally, the user is root, but when it is run in the CI/CD, it runs as an unnamed user with ID 1001. I'm really unclear why the user would be different when the container is built from the same image no matter where it's being run.
I believe this issue was caused by npm no longer running as root. After much back and forth and trying 101 different things, I was able to resolve it by:
Adding the following command to the end of my Dockerfile:
RUN chown -R node:node /usr/src/app
Adding a user key to my docker-compose.yml file:
version: '3'
services:
  csps-lint:
    image: <<my image>>
    user: node
    container_name: csps-lint
    ports:
      - '3002:3002'
    volumes:
      - ./:/usr/src/app
      - node_modules:/usr/src/app/node_modules
    environment:
      - NODE_ENV=test
      - LINT_COMMAND=${LINT_COMMAND}
    command: npm run ${LINT_COMMAND}
volumes:
  node_modules:
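To confirm which user the service actually runs as (and therefore why /root/.npm is no longer writable), a quick check is to override the command with id. A sketch using the csps-lint service from the compose file above, worth running both locally and in the CI job:
docker-compose run --rm csps-lint id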

Docker-compose up fails with exit code 135

I'm trying to build a simple Next.js app with Docker Compose but it keeps failing on docker-compose build with exit code 135. I'm running it on a Mac M1 Pro (if that is relevant).
I couldn't find any resources pointing to an exit code 135 though.
This is the docker-compose.yaml
version: '3'
services:
  next-app:
    image: node:18-alpine
    volumes:
      - ./:/site
    command: >
      sh -c "npm install && npm run build && yarn start -H 0.0.0.0 -p 80"
    working_dir: /site
    ports:
      - 80:80
And the logs:
[+] Running 1/0
⠿ Container next-app Created 0.0s
Attaching to next-app
next-app |
next-app | up to date, audited 551 packages in 3s
next-app |
next-app | 114 packages are looking for funding
next-app | run `npm fund` for details
next-app |
next-app | 5 moderate severity vulnerabilities
next-app |
next-app | To address all issues (including breaking changes), run:
next-app | npm audit fix --force
next-app |
next-app | Run `npm audit` for details.
next-app |
next-app | > marketing-site-v2@0.1.0 build
next-app | > next build
next-app |
next-app | info - Linting and checking validity of types...
next-app |
next-app | ./pages/cloud.tsx
next-app | 130:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 133:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 150:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | ./pages/index.tsx
next-app | 176:10 Warning: Image elements must have an alt prop, either with meaningful text, or an empty string for decorative images. jsx-a11y/alt-text
next-app |
next-app | ./components/main-content-display.tsx
next-app | 129:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | info - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
next-app | info - Creating an optimized production build...
next-app | Bus error
next-app exited with code 135
Exit code 135 is 128 + 7, i.e. the process was killed by signal 7 (SIGBUS), which matches the "Bus error" line at the end of your log. Without knowing exactly what's in your package.json file, I would try this.
Spin up your vanilla node:18-alpine image without installing dependencies, via the adjusted compose file below.
version: '3'
services:
  next-app:
    image: node:18-alpine
    container_name: my_test_container
    volumes:
      - ./:/site
    command: >
      sh -c "tail -f /dev/null"
    working_dir: /site
    ports:
      - 80:80
The command being used here
sh -c "tail -f /dev/null"
is a popular vanilla option for keeping a container up and running when using Compose (and when not executing some other command, e.g. npm start, that would keep the container running otherwise).
I have also added a container_name for reference here.
Next, enter the container and try running each command in your original sh sequentially (starting with npm install). See if one of those commands is a problem.
You can enter the container (using the container_name above) via the command below to test
docker container exec -u 0 -it my_test_container bash
As an aside, at some point I would pull commands like npm install from your compose file back to a Dockerfile defining your image (here node:18-alpine) and any additional custom installs you need for your application (here contained in package.json).
You used an sh command in docker-compose, which is not good Docker practice.
You need a docker-compose.yml along with a Dockerfile, as shown below.
docker-compose.yml
version: "3"
services:
  next-app:
    build: .
    ports:
      - "80:80"
And in the Dockerfile:
FROM node:16.15.1-alpine3.16 as site
WORKDIR /usr/src/site
COPY site/ .
RUN npm install
RUN npm run build
EXPOSE 80
CMD npm start
After these changes you just need a single command to build and start the server.
docker-compose up --build -d

why does yarn --watch exit (send SIGTERM)

I have a Docker installation that I would like to start with docker compose up (and not have to run two extra TTYs), so I added a Procfile.dev looking like this:
web: bin/rails server -p 3000 -b '0.0.0.0'
js: yarn build_js --watch
css: yarn build_css --watch
The output is, however, less than enjoyable
√ mindling % docker compose up
[+] Running 3/0
⠿ Container mindling_redis Running 0.0s
⠿ Container mindling_db Running 0.0s
⠿ Container mindling_mindling_1 Created 0.0s
Attaching to mindling_db, mindling_1, mindling_redis
mindling_1 | 19:54:04 web.1 | started with pid 16
mindling_1 | 19:54:04 js.1 | started with pid 19
mindling_1 | 19:54:04 css.1 | started with pid 22
mindling_1 | 19:54:06 css.1 | yarn run v1.22.17
mindling_1 | 19:54:06 js.1 | yarn run v1.22.17
mindling_1 | 19:54:06 js.1 | $ esbuild app/javascript/*.* --bundle --outdir=app/assets/builds --watch
mindling_1 | 19:54:06 css.1 | $ tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css --watch
mindling_1 | 19:54:08 js.1 | Done in 2.02s.
mindling_1 | 19:54:08 js.1 | exited with code 0
mindling_1 | 19:54:08 system | sending SIGTERM to all processes
mindling_1 | 19:54:08 web.1 | terminated by SIGTERM
mindling_1 | 19:54:09 css.1 | terminated by SIGTERM
mindling_1 exited with code 0
I've tried running Bash in the application container and calling the Procfile in a TTY by itself; it looks more or less like this:
root@facfb249dc6b:/app# foreman start -f Procfile.dev
20:11:45 web.1 | started with pid 12
20:11:45 js.1 | started with pid 15
20:11:45 css.1 | started with pid 18
20:11:48 css.1 | yarn run v1.22.17
20:11:48 js.1 | yarn run v1.22.17
20:11:48 css.1 | $ tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css --watch
20:11:49 js.1 | $ esbuild app/javascript/*.* --bundle --outdir=app/assets/builds --watch
20:11:50 js.1 | [watch] build finished, watching for changes...
20:11:53 web.1 | => Booting Puma
20:11:53 web.1 | => Rails 7.0.0 application starting in development
20:11:53 web.1 | => Run `bin/rails server --help` for more startup options
20:11:57 web.1 | Puma starting in single mode...
20:11:57 web.1 | * Puma version: 5.5.2 (ruby 3.0.3-p157) ("Zawgyi")
20:11:57 web.1 | * Min threads: 5
20:11:57 web.1 | * Max threads: 5
20:11:57 web.1 | * Environment: development
20:11:57 web.1 | * PID: 22
20:11:57 web.1 | * Listening on http://0.0.0.0:3000
20:11:57 web.1 | Use Ctrl-C to stop
20:11:58 css.1 |
20:11:58 css.1 | Rebuilding...
20:11:59 css.1 | Done in 1066ms.
^C20:13:23 system | SIGINT received, starting shutdown
20:13:23 web.1 | - Gracefully stopping, waiting for requests to finish
20:13:23 web.1 | === puma shutdown: 2021-12-22 20:13:23 +0000 ===
20:13:23 web.1 | - Goodbye!
20:13:23 web.1 | Exiting
20:13:24 system | sending SIGTERM to all processes
20:13:25 web.1 | terminated by SIGINT
20:13:25 js.1 | terminated by SIGINT
20:13:25 css.1 | terminated by SIGINT
root@facfb249dc6b:/app#
What is going on? It works when doing it 'by hand', but if I let docker-compose run it, the processes somehow terminate!?!
I have isolated the issue to the build_css script in package.json (or at least everything keeps going if I comment out that line in Procfile.dev).
All the 'dirty linen'
My package.json looks like this
{
  ...8<...
  "scripts": {
    "build_js": "esbuild app/javascript/*.* --bundle --outdir=app/assets/builds",
    "build_css": "tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css"
  },
  ...8<...
}
My containers are exceptionally boring, looking like almost everybody else's:
FROM ruby:3.0.3
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y nodejs yarn
WORKDIR /app
COPY src/Gemfile /app/Gemfile
COPY src/Gemfile.lock /app/Gemfile.lock
RUN gem install bundler foreman && bundle install
EXPOSE 3000
ENTRYPOINT [ "entrypoint.sh" ]
version: "3.9"
services:
  db:
    build: mysql
    image: mindling_db
    container_name: mindling_db
    command: [ "--default-authentication-plugin=mysql_native_password" ]
    ports:
      - "3306:3306"
    volumes:
      - ~/src/mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: mindling_development
  mindling:
    platform: linux/x86_64
    build: .
    volumes:
      - ./src:/app
    ports:
      - "3000:3000"
    depends_on:
      - db
and finally my entrypoint.sh
#!/usr/bin/env bash
rm -rf /app/tmp/pids/server.pid
foreman start -f Procfile.dev
Allow me to give credit where it is due!! The correct answer was provided by earlopain in this issue on rails/rails.
It's actually an almost embarrassingly easy fix - once you know it :)
Add tty: true to your docker-compose.yml - like this
mindling:
  platform: linux/x86_64
  build: .
  tty: true
  volumes:
    - ./src:/app
  ports:
    - "3000:3000"
  depends_on:
    - db
Thanks Earlopain & @walt_die, you saved my day. I'm writing this answer because I had a bit of explanation which didn't fit in a comment.
Just like yours, when trying to run Rails in Docker using docker-compose, the problem I was facing was that CMD bin/dev in the Dockerfile was constantly crashing, although it worked when run manually via Bash.
The issue was not with tailwindcss but with esbuild. The line js: yarn build --watch in Procfile.dev was failing because it runs esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds --public-path=assets under the hood, and as mentioned by evanw in an esbuild issue, esbuild exits when stdin is closed.
So, the solution of adding tty: true to docker-compose.yml as above works.
Alternatively, removing/commenting out the js: yarn build --watch line in Procfile.dev also works, but then JS changes won't be compiled. So one can jump into a shell in the running container and run yarn build --watch manually.
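For example, a sketch assuming the mindling service from the compose file above (the image's WORKDIR is /app, so the watcher can be started right away):
docker-compose exec mindling bash   # open a shell in the running app container
yarn build --watch                  # run inside the container to rebuild JS on change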

Why can't I deploy a Vue web application with Docker?

Windows 10 Pro
Docker for Windows
I'm following a tutorial that should allow me to build and deploy a simple Vue.js application via Docker. I'm able to deploy the Django app as my backend and access via port 8000, but the frontend deployment fails. To be more specific, the frontend is built and then exits seconds later. Logs/Commands below
UPDATE:
I migrated to Linux and haven't looked back since. No problems in an Ubuntu environment. Docker for Windows might still need some work.
Docker log:
Recreating frontend ... done
Attaching to db, backend, frontend
frontend | Cache verified and compressed (~/.npm/_cacache):
frontend | Content verified: 0 (0 bytes)
frontend | Index entries: 0
frontend | Finished in 0.027s
frontend | npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.9 (node_modules/fsevents):
frontend | npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.9: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
frontend |
frontend | audited 928740 packages in 26.389s
frontend | found 1 high severity vulnerability
frontend | run `npm audit fix` to fix them, or `npm audit` for details
frontend |
frontend | > code@0.1.0 serve /app
frontend | > vue-cli-service serve
frontend |
frontend | /app/node_modules/.bin/vue-cli-service: line 1: XSym: not found
frontend | /app/node_modules/.bin/vue-cli-service: line 2: 0042: not found
frontend | /app/node_modules/.bin/vue-cli-service: line 3: 9b6d22955dddeb6cfbc52c6ade7b03d0: not found
frontend | /app/node_modules/.bin/vue-cli-service: line 4: ../@vue/cli-service/bin/vue-cli-service.js: not found
frontend | npm ERR! code ELIFECYCLE
frontend | npm ERR! syscall spawn
frontend | npm ERR! file sh
frontend | npm ERR! errno ENOENT
frontend | npm ERR! code@0.1.0 serve: `vue-cli-service serve`
frontend | npm ERR! spawn ENOENT
frontend | npm ERR!
frontend | npm ERR! Failed at the code@0.1.0 serve script.
frontend | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
frontend |
frontend | npm ERR! A complete log of this run can be found in:
frontend | npm ERR! /root/.npm/_logs/2019-10-25T14_05_55_326Z-debug.log
frontend exited with code 1
npm ERR! complete log:
0 info it worked if it ends with ok
1 verbose cli [ '/usr/local/bin/node', '/usr/local/bin/npm', 'run', 'serve' ]
2 info using npm@6.11.3
3 info using node@v12.12.0
4 verbose run-script [ 'preserve', 'serve', 'postserve' ]
5 info lifecycle code@0.1.0~preserve: code@0.1.0
6 info lifecycle code@0.1.0~serve: code@0.1.0
7 verbose lifecycle code@0.1.0~serve: unsafe-perm in lifecycle true
8 verbose lifecycle code@0.1.0~serve: PATH: /usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/app/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
9 verbose lifecycle code@0.1.0~serve: CWD: /app
10 silly lifecycle code@0.1.0~serve: Args: [ '-c', 'vue-cli-service serve' ]
11 info lifecycle code@0.1.0~serve: Failed to exec serve script
12 verbose stack Error: code@0.1.0 serve: `vue-cli-service serve`
12 verbose stack spawn ENOENT
12 verbose stack at ChildProcess.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:48:18)
12 verbose stack at ChildProcess.emit (events.js:210:5)
12 verbose stack at maybeClose (internal/child_process.js:1021:16)
12 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:283:5)
13 verbose pkgid code@0.1.0
14 verbose cwd /app
15 verbose Linux 4.9.184-linuxkit
16 verbose argv "/usr/local/bin/node" "/usr/local/bin/npm" "run" "serve"
17 verbose node v12.12.0
18 verbose npm v6.11.3
19 error code ELIFECYCLE
20 error syscall spawn
21 error file sh
22 error errno ENOENT
23 error code@0.1.0 serve: `vue-cli-service serve`
23 error spawn ENOENT
24 error Failed at the code@0.1.0 serve script.
24 error This is probably not a problem with npm. There is likely additional logging output above.
25 verbose exit [ 1, true ]
Frontend project creation commands:
# docker run --rm -it -v ~/dockerproject/frontend:/code node:12.12.0-alpine /bin/sh
# cd code
# npm i -g vue @vue/cli
# vue create .
Vue Project creation prompt:
Vue CLI v4.0.4
? Please pick a preset: Manually select features
? Check the features needed for your project: Babel, PWA, Router, Vuex, CSS Pre-processors, Linter, Unit, E2E
? Use history mode for router? (Requires proper server setup for index fallback in production) Yes
? Pick a CSS pre-processor (PostCSS, Autoprefixer and CSS Modules are supported by default): Sass/SCSS (with dart-sass)
? Pick a linter / formatter config: Airbnb
? Pick additional lint features: (Press <space> to select, <a> to toggle all, <i> to invert selection)Lint on save
? Pick a unit testing solution: Jest
? Pick a E2E testing solution: Nightwatch
? Pick browsers to run end-to-end test on Chrome, Firefox
? Where do you prefer placing config for Babel, PostCSS, ESLint, etc.? In package.json
? Save this as a preset for future projects? No
? Pick the package manager to use when installing dependencies: NPM
Docker-compose Build command:
docker-compose -f docker-compose.yml up --build
Project Folder:
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 10/24/2019 4:41 PM backend
d----- 10/24/2019 3:53 PM env
d----- 10/25/2019 7:33 AM frontend
d----- 10/25/2019 9:05 AM npmlogs
-a---- 10/25/2019 8:58 AM 674 docker-compose.dev.yml
-a---- 10/24/2019 3:53 PM 242 docker-compose.yml
-a---- 10/23/2019 10:07 AM 52 README.md
frontend folder:
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 10/25/2019 7:19 AM node_modules
d----- 10/25/2019 7:13 AM public
d----- 10/25/2019 7:13 AM src
d----- 10/25/2019 7:13 AM tests
-a---- 10/25/2019 7:13 AM 160 .editorconfig
-a---- 10/25/2019 7:21 AM 326 .gitignore
-a---- 10/25/2019 7:22 AM 75 .gitkeep
-a---- 10/25/2019 7:15 AM 76 babel.config.js
-a---- 10/25/2019 7:26 AM 272 Dockerfile
-a---- 10/25/2019 7:15 AM 577145 package-lock.json
-a---- 10/25/2019 7:13 AM 1745 package.json
-a---- 10/25/2019 7:15 AM 423 README.md
-a---- 10/25/2019 7:33 AM 152 start_dev.sh
Dockerfile in frontend folder:
FROM node:12.12.0-alpine
# make the 'app' folder the current working directory
WORKDIR /app/
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# expose port 8080 to the host
EXPOSE 8080
CMD ["sh", "start_dev.sh"]
start_dev.sh in frontend folder:
#!/bin/bash
# https://docs.npmjs.com/cli/cache
npm cache verify
# install project dependencies
npm install
# run the development server
npm run serve
Docker-compose file in root folder:
version: '3'
services:
  db:
    container_name: db
    image: postgres
    networks:
      - main
  backend:
    container_name: backend
    build: ./backend
    command: /start.sh
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    networks:
      - main
    depends_on:
      - db
  frontend:
    container_name: frontend
    build:
      context: ./frontend
    volumes:
      - ./frontend:/app/frontend:ro
      - '/app/node_modules'
      - ./npmlogs:/root/.npm/_logs/
    ports:
      - "8080:8080"
    networks:
      - main
    depends_on:
      - backend
      - db
    environment:
      - NODE_ENV=development
networks:
  main:
    driver: bridge

Sharing files between two containers

For a couple of hours I have been struggling with Docker Compose. I am building an Angular app, and I can see the files in the dist directory. Now I want to share these files with the nginx container. I thought a shared volume would do it. But when I add
services:
  client:
    volumes:
      - static:/app/client/dist
  nginx:
    volumes:
      - static:share/user/nginx/html
volumes:
  static:
and try docker-compose up --build,
I get this error:
client_1 | EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | Error: EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | at Object.fs.rmdirSync (fs.js:863:18)
client_1 | at rmdirSync (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:276:13)
client_1 | at Object.rimrafSync [as removeSync] (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:252:7)
client_1 | at Class.run (/app/client/node_modules/@angular/cli/tasks/build.js:29:16)
client_1 | at Class.run (/app/client/node_modules/@angular/cli/commands/build.js:250:40)
client_1 | at resolve (/app/client/node_modules/@angular/cli/ember-cli/lib/models/command.js:261:20)
client_1 | at new Promise (<anonymous>)
client_1 | at Class.validateAndRun (/app/client/node_modules/@angular/cli/ember-cli/lib/models/command.js:240:12)
client_1 | at Promise.resolve.then.then (/app/client/node_modules/@angular/cli/ember-cli/lib/cli/cli.js:140:24)
client_1 | at <anonymous>
client_1 | npm ERR! code ELIFECYCLE
client_1 | npm ERR! errno 1
client_1 | npm ERR! app@0.0.0 build: `ng build --prod`
client_1 | npm ERR! Exit status 1
client_1 | npm ERR!
client_1 | npm ERR! Failed at the app@0.0.0 build-prod script.
client_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Any help is fully appreciated
I believe this is, as the error suggests, a deadlock situation. Your docker-compose file has two services that start approximately, if not exactly, simultaneously. Both of them have some sort of hold on the Docker volume (named "static"). When Angular executes ng build, --deleteOutputPath is set to true by default, and when it attempts to delete the output directory, the error you received occurs.
If deleteOutputPath is set to false, the issue should be resolved. Perhaps that is sufficient for your needs. If not, as an alternative, set --outputPath to a temp directory within the project directory and, after Angular builds, move the contents into the Docker volume. If the temp directory path is out/dist and the volume maps to dist, this can be used:
ng build && cp -rf ./out/dist/* ./dist
However, that alternative solution is really just working around the issue. Note that the docker-compose depends_on key will not help in this situation, as it simply expresses a dependency and says nothing about the "readiness" of the dependent service.
Also note that executing docker volume rm <name> is no solution here either; remember, both services have a hold on the volume while one of them is trying to remove it.
Just a thought (I haven't tested it): another alternative is to delete the contents within the output path yourself and set deleteOutputPath to false, since Angular seems to be attempting to delete the directory itself.
Update:
So removing the contents in the output path seems to work! As I mentioned, set deleteOutputPath to false, and in your package.json file, in the scripts object, have something similar to this:
{
  "scripts": {
    "build:production": "rm -rf ./dist/* && ng build --configuration production"
  }
}
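For reference, deleteOutputPath lives under the build options in angular.json; a minimal sketch, with my-app standing in for the real project name:
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "deleteOutputPath": false
          }
        }
      }
    }
  }
}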
You can try to solve it without using named volumes:
services:
  client:
    volumes:
      - ./static-content:/app/client/dist
  nginx:
    volumes:
      - ./static-content:/share/user/nginx/html
