Docker-compose up fails with exit code 135

I'm trying to build a simple next.js app with Docker compose but it keeps failing on docker-compose build with an exit code 135. I'm running it on a Mac M1 Pro (if that is relevant).
I couldn't find any resources pointing to an exit code 135 though.
This is the docker-compose.yaml
version: '3'
services:
  next-app:
    image: node:18-alpine
    volumes:
      - ./:/site
    command: >
      sh -c "npm install && npm run build && yarn start -H 0.0.0.0 -p 80"
    working_dir: /site
    ports:
      - 80:80
And the logs:
[+] Running 1/0
⠿ Container next-app Created 0.0s
Attaching to next-app
next-app |
next-app | up to date, audited 551 packages in 3s
next-app |
next-app | 114 packages are looking for funding
next-app | run `npm fund` for details
next-app |
next-app | 5 moderate severity vulnerabilities
next-app |
next-app | To address all issues (including breaking changes), run:
next-app | npm audit fix --force
next-app |
next-app | Run `npm audit` for details.
next-app |
next-app | > marketing-site-v2@0.1.0 build
next-app | > next build
next-app |
next-app | info - Linting and checking validity of types...
next-app |
next-app | ./pages/cloud.tsx
next-app | 130:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 133:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 150:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | ./pages/index.tsx
next-app | 176:10 Warning: Image elements must have an alt prop, either with meaningful text, or an empty string for decorative images. jsx-a11y/alt-text
next-app |
next-app | ./components/main-content-display.tsx
next-app | 129:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | info - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
next-app | info - Creating an optimized production build...
next-app | Bus error
next-app exited with code 135

Without knowing exactly what's in your package.json file, I would try this.
Spin up your vanilla node:18-alpine image without installing dependencies via the adjusted compose file below.
version: '3'
services:
  next-app:
    image: node:18-alpine
    container_name: my_test_container
    volumes:
      - ./:/site
    command: >
      sh -c "tail -f /dev/null"
    working_dir: /site
    ports:
      - 80:80
The command being used here,
sh -c "tail -f /dev/null"
is a popular vanilla option for keeping a container up and running when using Compose (when not executing some other command, e.g. npm start, that would keep the container running otherwise).
I have also added a container_name for reference here.
Next, enter the container and try running each command in your original sh sequentially (starting with npm install). See if one of those commands is a problem.
You can enter the container (using the container_name above) via the command below to test
docker container exec -u 0 -it my_test_container bash
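As background on the exit code itself: statuses above 128 conventionally mean the process was killed by signal (status minus 128), so 135 corresponds to signal 7, SIGBUS, which matches the "Bus error" line in your log. On Apple Silicon this sometimes indicates an amd64 image running under emulation, though memory pressure during a Next.js build can also trigger it. The arithmetic in any POSIX shell:

```shell
# Decode a container exit status: values above 128 mean "killed by signal".
EXIT_STATUS=135
SIGNAL=$((EXIT_STATUS - 128))
echo "terminated by signal $SIGNAL"   # signal 7 is SIGBUS ("Bus error")
```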
As an aside, at some point I would pull commands like npm install from your compose file back to a Dockerfile defining your image (here node:18-alpine) and any additional custom installs you need for your application (here contained in package.json).
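A minimal sketch of that split might look like the following; the paths and scripts are assumed from the compose file above, so adjust them to your project:

```dockerfile
# Dockerfile (illustrative): bake install/build into the image
FROM node:18-alpine
WORKDIR /site
# copy the manifests first so the npm install layer caches independently of source changes
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 80
CMD ["yarn", "start", "-H", "0.0.0.0", "-p", "80"]
```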

You used an sh command in your docker-compose file, which is not good Docker practice.
You need a docker-compose.yml along with a Dockerfile, as shown below.
docker-compose.yml
version: "3"
services:
  next-app:
    build: .
    ports:
      - "80:80"
And the Dockerfile:
FROM node:16.15.1-alpine3.16 as site
WORKDIR /usr/src/site
COPY site/ .
RUN npm install
RUN npm run build
EXPOSE 80
CMD npm start
After these changes, you just need a single command to start the server:
docker-compose up --build -d


Docker container running but can't access it in the browser

This is the dockerfile:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "npm","start" ]
this is the docker-compose file:
userPortal:
  image: userportal:latest
  ports:
    - 3001:3001
  links:
    - apiServer
  command: ["npm", "start"]
this is the docker-compose ps:
localdeployment_userPortal_1 docker-entrypoint.sh npm start Up 0.0.0.0:3001->3001/tcp
this is the package-json:
"scripts": {
  "start": "set PORT=3001 && react-scripts start",
  ...
}
this is the container's logs:
userPortal_1 |
userPortal_1 | > user-portal@0.1.0 start
userPortal_1 | > set PORT=3001 && react-scripts start
userPortal_1 |
userPortal_1 | (node:31) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
userPortal_1 | (Use `node --trace-deprecation ...` to show where the warning was created)
userPortal_1 | (node:31) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
userPortal_1 | Starting the development server...
and this is what I get when I try to access localhost:3001
[1]: https://i.stack.imgur.com/lwaOR.png
When I use npm start without docker it works fine so it is not a proxy problem.

Getting an npm permissions error in CI/CD after upgrading to Node v16 - working fine locally running container from same image

After upgrading from Node 14.15 => 16.18.1, I have started getting an Error: EACCES: permission denied, scandir '/root/.npm/_logs' error when trying to run ESLint in the GitHub Actions CI/CD workflow.
Note: I am running this inside a Docker container. I am able to run the ESLint script locally in a container built from the same image, but when running in GitHub Actions, the workflow fails with the following error:
Creating csps-lint ... done
Attaching to csps-lint
csps-lint |
csps-lint | > lint
csps-lint | > bash ./scripts/lint.sh
csps-lint |
csps-lint | Running Prettier to identify code formatting issues...
csps-lint |
csps-lint | Checking formatting...
csps-lint | All matched files use Prettier code style!
csps-lint |
csps-lint | Running ESLint to identify code quality issues...
csps-lint | npm WARN logfile Error: EACCES: permission denied, scandir '/root/.npm/_logs'
csps-lint | npm WARN logfile error cleaning log files [Error: EACCES: permission denied, scandir '/root/.npm/_logs'] {
csps-lint | npm WARN logfile errno: -13,
csps-lint | npm WARN logfile code: 'EACCES',
csps-lint | npm WARN logfile syscall: 'scandir',
csps-lint | npm WARN logfile path: '/root/.npm/_logs'
csps-lint | npm WARN logfile }
csps-lint | npm ERR! code EACCES
csps-lint | npm ERR! syscall mkdir
csps-lint | npm ERR! path /root/.npm/_cacache/tmp
csps-lint | npm ERR! errno -13
csps-lint | npm ERR!
csps-lint | npm ERR! Your cache folder contains root-owned files, due to a bug in
csps-lint | npm ERR! previous versions of npm which has since been addressed.
csps-lint | npm ERR!
csps-lint | npm ERR! To permanently fix this problem, please run:
csps-lint | npm ERR! sudo chown -R 1001:123 "/root/.npm"
csps-lint |
csps-lint | npm ERR! Log files were not written due to an error writing to the directory: /root/.npm/_logs
csps-lint | npm ERR! You can rerun the command with `--loglevel=verbose` to see the logs in your terminal
csps-lint | ESLint found code quality issues. Please address any errors above and try again.
csps-lint |
csps-lint exited with code 1
1
Aborting on container exit...
Error: Process completed with exit code 1.
My GH Actions workflow is this:
name: test-eslint-fix
on:
  push:
    branches:
      - eslint-fix
jobs:
  login:
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
  linting:
    needs: login
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: LINT_COMMAND=lint docker-compose -f docker-comp-lint.yml up --abort-on-container-
Here is the Dockerfile that the image is created from:
# start FROM a base layer of node v16.18.1
FROM node:16.18.1
# confirm installation
RUN node -v
RUN npm -v
# Globally install webpack in the container for webpack dev server
RUN npm install webpack -g
# Install db-migrate
RUN npm install db-migrate@0.11.11 -g
# Set up a WORKDIR for application in the container
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json /usr/src/app/
# npm install to create node_modules in the container
RUN npm install
# EXPOSE your server port
EXPOSE 3000
This is my docker-compose file:
version: '3'
services:
  csps-lint:
    image: <<my image>>
    container_name: csps-lint
    ports:
      - '3002:3002'
    volumes:
      - ./:/usr/src/app
      - node_modules:/usr/src/app/node_modules
    environment:
      - NODE_ENV=test
      - LINT_COMMAND=${LINT_COMMAND}
    command: npm run ${LINT_COMMAND}
volumes:
  node_modules:
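For context, Compose substitutes ${LINT_COMMAND} from the invoking shell's environment (or a .env file) before the container starts, which is why the workflow sets LINT_COMMAND=lint inline. The expansion itself is ordinary shell behavior:

```shell
# Compose-style variable substitution is plain shell expansion:
LINT_COMMAND=lint
echo "npm run ${LINT_COMMAND}"
```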
And this is the bash script that is running the ESLint command:
# Run Prettier in check mode and store status
echo -e "\033[1;33mRunning Prettier to identify code formatting issues...\033[0m\n"
prettier --check .
PRETTIER_STATUS=$?
# Run ESLint in check mode and store status
echo -e "\n\033[1;33mRunning ESLint to identify code quality issues...\033[0m"
npx eslint .
ESLINT_STATUS=$?
I have tried adding the recommended fix from the error message in multiple places, but nothing seems to resolve the issue. I've put this command in the lint.sh script, I've added it to the Dockerfile, I've tried adding in the docker-compose file as well, but I'm unable to resolve the issue no matter where I place the command.
sudo chown -R 1001:123 "/root/.npm"
In a few of the instances, there was no access to sudo, so I'm wondering if I somehow need to install that first? Do I even need to be setting these permissions? Is there something that has changed from Node v14 to Node v16 that I'm just overlooking?
I've tried to use the solutions provided in other posts with similar error messages for other packages, but none of the solutions seem to fix this issue.
I feel like there must just be some very small thing I've missed, but the part that I'm really lost on is that I haven't changed this workflow other than upgrading the Node.js version in the Dockerfile. Also, I am able to run this workflow locally without issue in a container created from the same image, but inside GH Actions I can't seem to get past this flow.
When running the following command locally docker exec csps-lint ls -la /root/.npm/_logs, this is my output:
total 1080
drwxr-xr-x 1 root root 4096 Dec 2 23:02 .
drwxr-xr-x 1 root root 4096 Dec 2 20:15 ..
-rw-r--r-- 1 root root 1589 Nov 15 06:46 2022-11-15T06_46_44_197Z-debug-0.log
-rw-r--r-- 1 root root 1589 Dec 2 20:15 2022-12-02T20_15_05_450Z-debug-0.log
-rw-r--r-- 1 root root 65102 Dec 2 20:15 2022-12-02T20_15_05_684Z-debug-0.log
-rw-r--r-- 1 root root 59037 Dec 2 20:15 2022-12-02T20_15_09_141Z-debug-0.log
-rw-r--r-- 1 root root 943351 Dec 2 20:15 2022-12-02T20_15_14_433Z-debug-0.log
-rw-r--r-- 1 root root 1706 Dec 2 20:23 2022-12-02T20_23_05_075Z-debug-0.log
-rw-r--r-- 1 root root 1756 Dec 2 20:23 2022-12-02T20_23_16_787Z-debug-0.log
-rw-r--r-- 1 root root 1599 Dec 2 23:02 2022-12-02T23_02_23_793Z-debug-0.log
UPDATE: So I have discovered that when I run the script in the csps-lint folder locally, the user is root, but when it is run in the CI/CD, it runs as an unnamed user with ID 1001. I'm really unclear why the user would be different when the container is built from the same image no matter where it's being run.
I believe this issue was caused by npm updates no longer running in root. After much back and forth and trying 101 different things, I was able to resolve the issue by:
Adding the following command to the end of my Dockerfile:
RUN chown -R node:node /usr/src/app
Adding a user key to my docker-compose.yml file
version: '3'
services:
  csps-lint:
    image: <<my image>>
    user: node
    container_name: csps-lint
    ports:
      - '3002:3002'
    volumes:
      - ./:/usr/src/app
      - node_modules:/usr/src/app/node_modules
    environment:
      - NODE_ENV=test
      - LINT_COMMAND=${LINT_COMMAND}
    command: npm run ${LINT_COMMAND}
volumes:
  node_modules:

Why does Docker not recognize /bin/sh -c as a valid entrypoint?

I have a simple nginx container I'm trying to run from docker-compose:
version: "3.3"
services:
  nginx:
    image: nginx
    privileged: true
    entrypoint: ["/bin/sh -c"]
    command: ["ls -lha ~"]
but it fails with:
docker-compose up -d
ERROR: for junk_nginx_1 Cannot start service nginx: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
I thought it was because /bin/sh doesn't exist in the image, but it certainly does. Removing the -c gives me the following error:
# this time the container runs, this is in the container logs.
/bin/sh: 0: Can't open ls -lha ~
so /bin/sh does exist within the image. What am I doing wrong?
When you use the array form of Compose command: and entrypoint: (and, similarly, the JSON-array form of Dockerfile CMD, ENTRYPOINT, and RUN), you are responsible for breaking up the input into words. Each item in the array is a word, just as though it was quoted in the shell, and includes any spaces, punctuation, and other characters.
So when you say
entrypoint: ["/bin/sh -c"]
That is one word, not a command and its argument, and you are telling the shell to look for an executable program named sh -c (including space hyphen c as part of the filename) in the /bin directory. Since that's not there, you get an error.
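The same word-splitting behavior can be demonstrated in a plain shell: quoting the whole string makes it a single argv entry, so the kernel looks for a file whose name literally contains the space.

```shell
# One word: tries to execute a file literally named "/bin/sh -c" (fails).
"/bin/sh -c" echo hi 2>/dev/null || echo "not found"
# Two words: /bin/sh receives -c as its own argument and runs the script.
/bin/sh -c 'echo hi'
```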
You shouldn't usually need to override entrypoint: in a Compose setup. In your case, the only shell expansion you need is the home directory ~ but that's not well-defined in Docker. You should be able to just write
command: ls -lha /usr/share/nginx/html
or in array form
command: ["ls", "-lha", "/usr/share/nginx/html"]
# (or other YAML syntaxes with fewer quotes or more lines)
or if you really need the sh -c wrapper
command: /bin/sh -c 'ls -lha ~'
command: ["/bin/sh", "-c", "ls -lha ~"]
command:
  - /bin/sh
  - -c
  - >-
    ls -lha ~;
    echo these lines get folded together;
    nginx -g 'daemon off;'
You're using the stock Docker Hub nginx image; also consider whether docker-compose run might be an easier way to run a one-off command:
docker-compose run nginx \
  ls -lha /usr/share/nginx/html
If it's your own image, try hard to avoid needing to override ENTRYPOINT. Make CMD be a complete command; if you need an ENTRYPOINT, a shell script that ends with exec "$@" so that it runs the CMD is a typical pattern.
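A sketch of that pattern (illustrative, not taken from any image above) looks like this:

```shell
#!/bin/sh
# entrypoint.sh (illustrative): one-time setup, then hand off to the CMD.
set -e
echo "running one-time setup..."
# exec replaces this shell with the CMD, so signals reach the real process
exec "$@"
```

With ENTRYPOINT ["./entrypoint.sh"] and CMD ["nginx", "-g", "daemon off;"], the script runs its setup and then becomes the nginx process.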
See entrypoint usage:
entrypoint: ["php", "-d", "memory_limit=-1", "vendor/bin/phpunit"]
Also see command usage:
command: ["bundle", "exec", "thin", "-p", "3000"]
So, your error means your syntax is not OK; the correct one for you is:
version: "3.3"
services:
  nginx:
    image: nginx
    privileged: true
    entrypoint: ["/bin/sh", "-c"]
    command: ["ls", "-lha", "~"]
The execution:
$ docker-compose up
Creating network "20210812_default" with the default driver
Creating 20210812_nginx_1 ... done
Attaching to 20210812_nginx_1
nginx_1 | bin
nginx_1 | boot
nginx_1 | dev
nginx_1 | docker-entrypoint.d
nginx_1 | docker-entrypoint.sh
nginx_1 | etc
nginx_1 | home
nginx_1 | lib
nginx_1 | lib64
nginx_1 | media
nginx_1 | mnt
nginx_1 | opt
nginx_1 | proc
nginx_1 | root
nginx_1 | run
nginx_1 | sbin
nginx_1 | srv
nginx_1 | sys
nginx_1 | tmp
nginx_1 | usr
nginx_1 | var
20210812_nginx_1 exited with code 0

Docker Image Ubuntu 20.04 input stty: 'standard input': Inappropriate ioctl for device

I have built a Docker Compose project which worked without problems in my container up to Ubuntu 18.04.
Now I have updated the project to Ubuntu 20.04 and I get the following error message when a shell script is started as the Docker entrypoint:
#!/usr/bin/env bash
if [ ! -f /usr/local/etc/piler/config-site.php ]
then
  sleep 20
  cd /root/mailpiler/piler/
  cp util/postinstall.sh util/postinstall.sh.bak
  sed -i "s/ SMARTHOST=.*/ SMARTHOST="\"$MAILSERVER_DOMAIN\""/" util/postinstall.sh
  sed -i 's/ WWWGROUP=.*/ WWWGROUP="www-data"/' util/postinstall.sh
  sed -i "s/ "
  echo -e "y\n\n$PILER_DB_HOST\n$MYSQL_DATABASE\n$MYSQL_USER\n$MYSQL_PASSWORD\n$MYSQL_ROOT_PASSWORD\n\n\nY\nY\n" | make postinstall
  # echo -e "y\n$PILER_DB_HOST\n\n$MYSQL_DATABASE\n$MYSQL_USER\n$MYSQL_PASSWORD\n$MYSQL_ROOT_PASSWORD\n\n\n\ny\ny\n" | make postinstall
fi
The command echo -e "y\n\n$PILER_DB_HOST\n$MYSQL_DATABASE\n$MYSQL_USER\n$MYSQL_PASSWORD\n$MYSQL_ROOT_PASSWORD\n\n\nY\nY\n" | make postinstall went without problems so far but since Ubuntu 20.04 the following error is coming:
app_1 | This is the postinstall utility for piler
app_1 | It should be run only at the first install. DO NOT run on an existing piler installation!
app_1 |
app_1 |
app_1 | Continue? [Y/N] [N]
app_1 |
app_1 | Please enter the webserver groupname [www-data]
app_1 | Please enter mysql hostname [localhost]
app_1 | Please enter mysql database [piler]
app_1 | Please enter mysql user name [piler] stty:
'standard input': Inappropriate ioctl for device
'standard input': Inappropriate ioctl for device
app_1 | make: *** [Makefile:126: postinstall] Error 1
The script:
https://bitbucket.org/jsuto/piler/src/master/util/postinstall.sh.in
I'm sure it has something to do with the stty mode and Docker, but I don't know how to fix it.
By the way the script works without dockerized under Ubuntu 20.04 without problems.
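One likely mechanism behind that difference: stty queries terminal attributes on standard input, and when stdin is a pipe rather than a TTY (as in the echo -e ... | make postinstall line) the ioctl fails with exactly this message. It is easy to reproduce outside Docker:

```shell
# stty requires stdin to be a terminal; feeding it a pipe reproduces the error:
echo | stty size
# -> stty: 'standard input': Inappropriate ioctl for device
```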
My Docker-Compose file:
version: "3.2"
services:
  app:
    build:
      context: ./docker/images
    stdin_open: true
    tty: true
    env_file: .env
    depends_on:
      - db
    ports:
      - "25:25"
      - "80:80"
    restart: always
    volumes:
      - ./docker/data/piler-local:/usr/local/etc/piler
      - ./docker/data/piler-data:/var/piler
  db:
    image: mariadb:10.4
    env_file: .env
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - ./docker/data/mysql-data:/var/lib/mysql

Sharing files between two containers

For a couple of hours I have been struggling with Docker Compose. I am building an Angular app, and I can see the files in the dist directory. Now I want to share these files with the nginx container. I thought a shared volume would do it. But when I add
services:
  client:
    volumes:
      - static:/app/client/dist
  nginx:
    volumes:
      - static:share/user/nginx/html
volumes:
  static:
and try docker-compose up --build,
I get this error:
client_1 | EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | Error: EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | at Object.fs.rmdirSync (fs.js:863:18)
client_1 | at rmdirSync (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:276:13)
client_1 | at Object.rimrafSync [as removeSync] (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:252:7)
client_1 | at Object.rimrafSync [as removeSync] (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:252:7)
client_1 | at Class.run (/app/client/node_modules/@angular/cli/tasks/build.js:29:16)
client_1 | at Class.run (/app/client/node_modules/@angular/cli/commands/build.js:250:40)
client_1 | at resolve (/app/client/node_modules/@angular/cli/ember-cli/lib/models/command.js:261:20)
client_1 | at new Promise (<anonymous>)
client_1 | at Class.validateAndRun (/app/client/node_modules/@angular/cli/ember-cli/lib/models/command.js:240:12)
client_1 | at Promise.resolve.then.then (/app/client/node_modules/@angular/cli/ember-cli/lib/cli/cli.js:140:24)
client_1 | at <anonymous>
client_1 | npm ERR! code ELIFECYCLE
client_1 | npm ERR! errno 1
client_1 | npm ERR! app@0.0.0 build: `ng build --prod`
client_1 | npm ERR! Exit status 1
client_1 | npm ERR!
client_1 | npm ERR! Failed at the app@0.0.0 build-prod script.
client_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Any help is much appreciated.
I believe this is, as the error suggests, a deadlock situation. Your docker-compose file has two services that start approximately, if not exactly, simultaneously. Both of them have some sort of hold on the Docker volume (named "static"). When Angular executes ng build, by default --deleteOutputPath is set to true, and when it attempts to delete the output directory, the error message that you received occurs.
If deleteOutputPath is set to false, the issue should be resolved, and perhaps that is sufficient for your needs. If not, as an alternative, set --outputPath to a temp directory within the project directory and, after Angular builds, move the contents into the Docker volume. If the temp directory path is out/dist and the volume maps to dist, this can be used:
ng build && cp -rf ./out/dist/* ./dist
However, that alternative solution is really just working around the issue. Note that the docker-compose depends_on key will not help in this situation, as it only declares a dependency and says nothing about the "readiness" of the dependent service.
Also note that executing docker volume rm <name> is not a solution here: the volume cannot be removed while both services still hold it.
Just a thought, haven't tested, as another alternative solution is to delete the contents within the output path. And set the deleteOutputPath to false, since Angular seems to be attempting to delete the directory itself.
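For reference, the deleteOutputPath switch lives in the build options of angular.json; a sketch, assuming a project named app (adjust the names to your workspace):

```json
{
  "projects": {
    "app": {
      "architect": {
        "build": {
          "options": {
            "outputPath": "dist",
            "deleteOutputPath": false
          }
        }
      }
    }
  }
}
```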
Update:
So removing the contents in the output path seems to work! As I mentioned, set deleteOutputPath to false. And in your package.json file, in the scripts object, having something similar to this:
{
  "scripts": {
    "build:production": "rm -rf ./dist/* && ng build --configuration production"
  }
}
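The reason rm -rf ./dist/* works where Angular's own delete fails is that it clears the directory's contents without removing the directory itself, which is the busy volume mount point:

```shell
# Emptying a mount point is fine; removing the directory is what triggers EBUSY.
mkdir -p dist/sub && touch dist/a.txt
rm -rf ./dist/*      # deletes contents only
ls -A dist           # the directory still exists, now empty
```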
You can try to solve it without using named volumes:
services:
  client:
    volumes:
      - ./static-content:client/app/dist
  nginx:
    volumes:
      - ./static-content:share/user/nginx/html
