I have a simple nginx container I'm trying to run from docker-compose:
version: "3.3"
services:
nginx:
image: nginx
privileged: true
entrypoint: ["/bin/sh -c"]
command: ["ls -lha ~"]
but it fails with:
docker-compose up -d
ERROR: for junk_nginx_1 Cannot start service nginx: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
I thought it was because /bin/sh doesn't exist in the image, but it certainly does. Removing the -c gives me the following error:
# this time the container runs, this is in the container logs.
/bin/sh: 0: Can't open ls -lha ~
So /bin/sh does exist within the image. What am I doing wrong?
When you use the array form of Compose command: and entrypoint: (and, similarly, the JSON-array form of Dockerfile CMD, ENTRYPOINT, and RUN), you are responsible for breaking up the input into words. Each item in the array is one word, as though it were quoted in the shell, and includes any spaces, punctuation, and other characters.
So when you say
entrypoint: ["/bin/sh -c"]
that is one word, not a command and its argument: you are telling Docker to look for an executable program named sh -c (with the space, the hyphen, and the c as part of the filename) in the /bin directory. Since no such file exists, you get the stat /bin/sh -c: no such file or directory error.
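You can check how Docker parsed the arrays: docker inspect on the failed container shows the entrypoint and command it actually stored (output illustrative):
$ docker inspect junk_nginx_1 --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}'
["/bin/sh -c"] ["ls -lha ~"]
Each array has a single item, confirming Docker sees /bin/sh -c as one word.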
You shouldn't usually need to override entrypoint: in a Compose setup. In your case, the only shell expansion you need is the home directory ~, but that's not well-defined in Docker. You should be able to just write
command: ls -lha /usr/share/nginx/html
or in array form
command: ["ls", "-lha", "/usr/share/nginx/html"]
# (or other YAML syntaxes with fewer quotes or more lines)
or, if you really need the sh -c wrapper:
command: /bin/sh -c 'ls -lha ~'
command: ["/bin/sh", "-c", "ls -lha ~"]
command:
  - /bin/sh
  - -c
  - >-
    ls -lha ~;
    echo these lines get folded together;
    nginx -g 'daemon off;'
You're using the stock Docker Hub nginx image; also consider whether docker-compose run might be an easier way to run a one-off command:
docker-compose run nginx \
  ls -lha /usr/share/nginx/html
If it's your own image, try hard to avoid needing to override ENTRYPOINT. Make CMD a complete command; if you need an ENTRYPOINT, a shell script that ends with exec "$@" (so that it runs the CMD) is the typical pattern.
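For example, a minimal sketch of that wrapper pattern (file name and setup step are illustrative):
#!/bin/sh
# entrypoint.sh: do one-time setup, then hand off to the CMD
echo "running first-time setup"
# exec replaces this shell, so the CMD becomes the container's main process
exec "$@"
with, in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
Anything you set in Compose command: then replaces the CMD but still runs through the setup script.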
See entrypoint usage:
entrypoint: ["php", "-d", "memory_limit=-1", "vendor/bin/phpunit"]
Also see command usage:
command: ["bundle", "exec", "thin", "-p", "3000"]
So, the error means your syntax isn't valid; a form that starts successfully is:
version: "3.3"
services:
nginx:
image: nginx
privileged: true
entrypoint: ["/bin/sh", "-c"]
command: ["ls", "-lha", "~"]
The execution:
$ docker-compose up
Creating network "20210812_default" with the default driver
Creating 20210812_nginx_1 ... done
Attaching to 20210812_nginx_1
nginx_1 | bin
nginx_1 | boot
nginx_1 | dev
nginx_1 | docker-entrypoint.d
nginx_1 | docker-entrypoint.sh
nginx_1 | etc
nginx_1 | home
nginx_1 | lib
nginx_1 | lib64
nginx_1 | media
nginx_1 | mnt
nginx_1 | opt
nginx_1 | proc
nginx_1 | root
nginx_1 | run
nginx_1 | sbin
nginx_1 | srv
nginx_1 | sys
nginx_1 | tmp
nginx_1 | usr
nginx_1 | var
20210812_nginx_1 exited with code 0
Note, though, that this output is a plain listing of /, not ls -lha ~: with sh -c, only the first item of command: becomes the script, and the remaining items (-lha and ~) are passed as the positional parameters $0 and $1, which ls never sees.
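You can demonstrate that word-handling with any POSIX sh:
$ sh -c 'echo script=$0 first=$1' foo bar
script=foo first=bar
If you want the flags and ~ to be part of the script, pass the whole thing as a single string, i.e. entrypoint: ["/bin/sh", "-c"] with command: ["ls -lha ~"].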
Related
I have created this Dockerfile:
FROM couchdb:latest
EXPOSE 5984
COPY local.ini /opt/couchdb/etc/
But even though I specified [admins] inside of the local.ini, I still get this error at launch:
[error] 2022-11-06T17:55:49.799365Z nonode@nohost emulator -------- Error in process <0.15793.0> with exit value:
{database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,400}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,375}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,404}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,97}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,198}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,145}]}]}
What do I need to do in order to avoid this error?
CouchDB 3.x does not create the _users and _replicator system databases by itself, so they have to be created once after the node is up; the setup below bakes an admin and those databases into the image.
Dockerfile
FROM couchdb:latest
EXPOSE 5984
COPY setup.sh setup.sh
RUN sh setup.sh
setup.sh
#!/bin/sh -xe
# Generate a random admin password and keep it so we can authenticate below
PASSWORD=$(base32 /dev/random | head -1 | cut -c-24)
cat >/opt/couchdb/etc/local.ini <<EOF
[couchdb]
single_node=true

[admins]
dbadmin = ${PASSWORD}
EOF
# Start CouchDB in the background long enough to create the system databases
nohup bash -c "/docker-entrypoint.sh /opt/couchdb/bin/couchdb &"
sleep 15
# CouchDB 3.x only lets server admins create databases, so authenticate
curl -u "dbadmin:${PASSWORD}" -X PUT http://127.0.0.1:5984/_users
curl -u "dbadmin:${PASSWORD}" -X PUT http://127.0.0.1:5984/_replicator
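A quick smoke test of the resulting image might look like this (image and container names are made up):
$ docker build -t couchdb-admins .
$ docker run -d --name couch -p 5984:5984 couchdb-admins
$ curl http://127.0.0.1:5984/_up
Once the node is serving requests, the unauthenticated /_up endpoint should answer with a JSON status of "ok".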
I'm trying to build a simple next.js app with Docker compose but it keeps failing on docker-compose build with an exit code 135. I'm running it on a Mac M1 Pro (if that is relevant).
I couldn't find any resources pointing to an exit code 135 though.
This is the docker-compose.yaml
version: '3'
services:
  next-app:
    image: node:18-alpine
    volumes:
      - ./:/site
    command: >
      sh -c "npm install && npm run build && yarn start -H 0.0.0.0 -p 80"
    working_dir: /site
    ports:
      - 80:80
And the logs:
[+] Running 1/0
⠿ Container next-app Created 0.0s
Attaching to next-app
next-app |
next-app | up to date, audited 551 packages in 3s
next-app |
next-app | 114 packages are looking for funding
next-app | run `npm fund` for details
next-app |
next-app | 5 moderate severity vulnerabilities
next-app |
next-app | To address all issues (including breaking changes), run:
next-app | npm audit fix --force
next-app |
next-app | Run `npm audit` for details.
next-app |
next-app | > marketing-site-v2@0.1.0 build
next-app | > next build
next-app |
next-app | info - Linting and checking validity of types...
next-app |
next-app | ./pages/cloud.tsx
next-app | 130:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 133:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app | 150:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | ./pages/index.tsx
next-app | 176:10 Warning: Image elements must have an alt prop, either with meaningful text, or an empty string for decorative images. jsx-a11y/alt-text
next-app |
next-app | ./components/main-content-display.tsx
next-app | 129:6 Warning: Do not use <img>. Use Image from 'next/image' instead. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
next-app |
next-app | info - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
next-app | info - Creating an optimized production build...
next-app | Bus error
next-app exited with code 135
Without knowing exactly what's in your package.json file, I would try this. (For what it's worth, exit code 135 is 128 + 7, i.e. the process was killed by SIGBUS, which matches the Bus error line at the end of your log.)
Spin up your vanilla node:18-alpine image without installing dependencies via the adjusted compose file below.
version: '3'
services:
  next-app:
    image: node:18-alpine
    container_name: my_test_container
    volumes:
      - ./:/site
    command: >
      sh -c "tail -f /dev/null"
    working_dir: /site
    ports:
      - 80:80
The command being used here
sh -c "tail -f /dev/null"
is a popular vanilla option for keeping a container up and running when using compose (when not executing some other command, e.g., npm start, that would keep the container running otherwise).
I have also added a container_name for reference here.
Next, enter the container and try running each command from your original sh -c string sequentially (starting with npm install) to see if one of those commands is the problem.
You can enter the container (using the container_name above) via the command below to test; note that node:18-alpine ships sh rather than bash:
docker container exec -u 0 -it my_test_container sh
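Inside the container, replay the original command chain one step at a time, e.g.:
cd /site
npm install
npm run build   # if the Bus error reproduces here, next build is the failing step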
As an aside, at some point I would pull commands like npm install from your compose file back to a Dockerfile defining your image (here node:18-alpine) and any additional custom installs you need for your application (here contained in package.json).
You used an sh command string in docker-compose, which is not good practice with Docker. You need a docker-compose.yml along with a Dockerfile, as below.
docker-compose.yml
version: "3"
services:
next-app:
build: .
ports:
- "80:80"
Dockerfile
FROM node:16.15.1-alpine3.16 as site
WORKDIR /usr/src/site
COPY site/ .
RUN npm install
RUN npm run build
EXPOSE 80
CMD npm start
After these changes you just need a single command to start the server:
docker-compose up --build -d
Here, I deployed 2 containers with the --scale flag:
docker-compose up -d --scale gitlab-runner=2
Two containers are deployed, named scalecontainer_gitlab-runner_1 and scalecontainer_gitlab-runner_2 respectively.
I want to map a different volume for each container:
/srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
Getting this error:
WARNING: The DOCKER_SCALE_NUM variable is not set. Defaulting to a blank string.
Is there any way I can map a different volume for each container?
version: "3.5"
services:
  gitlab-runner:
    image: "gitlab/gitlab-runner:latest"
    restart: unless-stopped
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - /srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
I don't think you can; there's an open feature request for this. Here I will describe an alternative method for getting what you want.
Try creating a symbolic link from within the container that links to the directory you want. You can determine the "number" of the container after it's constructed by reading the container name from the Docker API and taking the final segment. To do this you have to mount the Docker socket into the container, which has big security implications.
Setup
Here is a simple script to get the number of the container (Credit Tony Guo).
get-name.sh
#!/bin/sh
# Ask the Docker API for this container's JSON, via the mounted socket
DOCKERINFO=$(curl -s --unix-socket /run/docker.sock http://docker/containers/$HOSTNAME/json)
# The container "number" is the final _-separated segment of its name
ID=$(python3 -c "import sys, json; print(json.loads(sys.argv[1])[\"Name\"].split(\"_\")[-1])" "$DOCKERINFO")
echo "$ID"
Then we have a simple entrypoint file which gets the container number, creates the specific config directory if it doesn't exist, and links its specific config directory to a known location (/etc/config in this example).
entrypoint.sh
#!/bin/sh
# Get the number of this container
NAME=$(get-name)
CONFIG_DIR="/config/config_${NAME}"
# Create a config dir for this container if none exists
mkdir -p "$CONFIG_DIR"
# Create a sym link from a well known location to our individual config dir
ln -s "$CONFIG_DIR" /etc/config
exec "$#"
Next we have a Dockerfile to build our image; we need to set the entrypoint and install curl and python3 for the scripts to work. Also copy in our get-name.sh script.
Dockerfile
FROM alpine
COPY entrypoint.sh entrypoint.sh
COPY get-name.sh /usr/bin/get-name
RUN apk update && \
apk add \
curl \
python3 \
&& \
chmod +x entrypoint.sh /usr/bin/get-name
ENTRYPOINT ["/entrypoint.sh"]
Last, a simple compose file that specifies our service. Note that the docker socket is mounted, as well as ./config which is where our different config directories go.
docker-compose.yml
version: '3'
services:
  app:
    build: .
    command: tail -f
    volumes:
      - /run/docker.sock:/run/docker.sock:ro
      - ./config:/config
Example
# Start the stack
$ docker-compose up -d --scale app=3
Starting volume-per-scaled-container_app_1 ... done
Starting volume-per-scaled-container_app_2 ... done
Creating volume-per-scaled-container_app_3 ... done
# Check config directory on our host, 3 new directories were created.
$ ls config/
config_1 config_2 config_3
# Check the /etc/config directory in container 1, see that it links to the config_1 directory
$ docker exec volume-per-scaled-container_app_1 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_1
# Container 2
$ docker exec volume-per-scaled-container_app_2 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_2
# Container 3
$ docker exec volume-per-scaled-container_app_3 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_3
Notes
I think gitlab/gitlab-runner has its own entrypoint file, so you may need to chain them; a sketch follows below.
You'll need to adapt this example to your specific setup/locations.
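A hedged sketch of such chaining, assuming the stock image's entrypoint is /usr/bin/dumb-init /entrypoint (verify with docker image inspect gitlab/gitlab-runner:latest --format '{{json .Config.Entrypoint}}'):
#!/bin/sh
# per-container config setup, as above
NAME=$(get-name)
mkdir -p "/config/config_${NAME}"
ln -sfn "/config/config_${NAME}" /etc/gitlab-runner
# then hand off to the image's own entrypoint so the runner starts normally
exec /usr/bin/dumb-init /entrypoint "$@"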
I have the following structure,
cht@dell-laptop [docker-compose-test] $ tree
.
├── config
│ └── file
├── docker-compose.yml
└── Dockerfile
1 directory, 3 files
cht@dell-laptop [docker-compose-test] $ cat Dockerfile
FROM docker/compose:alpine-1.27.4
ADD . /infra
WORKDIR /infra
ENTRYPOINT ["docker-compose", "up"]
cht@dell-laptop [docker-compose-test] $ cat docker-compose.yml
version: "3.8"
services:
test:
image: debian:buster-slim
volumes:
- ./config:/config
command: sh -c "ls -l /config && cat /config/file"
network_mode: host
cht@dell-laptop [docker-compose-test] $ cat config/file
123
docker-compose up works as I would expect, from my host machine,
cht@dell-laptop [docker-compose-test] $ docker-compose up
Starting docker-compose-test_test_1 ... done
Attaching to docker-compose-test_test_1
test_1 | total 4
test_1 | -rw-rw-r-- 1 1000 1000 4 Nov 26 16:52 file
test_1 | 123
docker-compose-test_test_1 exited with code 0
But when I run the same thing from docker/compose:alpine-1.27.4, weirdness happens,
cht@dell-laptop [docker-compose-test] $ docker run -v /var/run/docker.sock:/var/run/docker.sock compose-weird
Starting infra_test_1 ... done
Attaching to infra_test_1
test_1 | total 0
test_1 | cat: /config/file: No such file or directory
infra_test_1 exited with code 1
Same thing happens from inside the container,
cht@dell-laptop [docker-compose-test] $ docker run --entrypoint sh --rm -it -v /var/run/docker.sock:/var/run/docker.sock compose-weird
/infra # docker-compose up
Starting infra_test_1 ... done
Attaching to infra_test_1
test_1 | total 0
test_1 | cat: /config/file: No such file or directory
infra_test_1 exited with code 1
My hunch is that the bind mount of my host's Docker socket is somehow confusing the location of directories. Unfortunately, I can't find any information about what's going wrong using search engines.
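One way to test that hunch: bind-mount source paths are resolved by the Docker daemon on the host, not by the docker-compose process inside the container, and with the short volume syntax a missing host path is silently created as an empty directory. Inspecting the container's mounts should show a host-side source (a sketch; output illustrative):
$ docker inspect infra_test_1 --format '{{ (index .Mounts 0).Source }}'
/infra/config
That is, compose resolved ./config against its own working directory /infra, and the daemon then bind-mounted a freshly created, empty /infra/config on the host.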
Just another topic on this matter, but what's the best way of outputting a docker container command's STDOUT/STDERR to a file, other than running the command as
bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"
What I don't like about the above is that it results in 1 additional process, so I end up with 2 processes instead of 1, and my master cluster process is not the one with PID 1.
If I try
exec node cluster.js >> /var/log/cluster/console.log 2>&1
I get this error:
Error response from daemon: Cannot start container node:
exec: "node cluster.js >> /var/log/cluster/console.log 2>&1": executable file not found in $PATH
I am starting my container via docker-compose:
version: '3'
services:
  node:
    image: custom
    build:
      context: .
      args:
        ENVIRONMENT: production
    restart: always
    volumes:
      - ./logs:/var/log/cluster
    command: bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"
    ports:
      - "443:443"
      - "80:80"
When I run docker-compose exec node ps -fax | grep -v grep | grep node, I get 1 extra process:
1 ? Ss 0:00 bash -c node cluster.js >> /srv/app/cluster/cluster.js
5 ? Sl 0:00 node cluster.js
15 ? Sl 0:01 \_ /usr/local/bin/node /srv/app/cluster/cluster.js
20 ? Sl 0:01 \_ /usr/local/bin/node /srv/app/cluster/cluster.js
As you can see, bash -c starts 1 process which in turn forks the main node process. In a Docker container, the process started by the command gets PID 1, and that's what I want the node process to be; but it ends up as PID 5, 6, etc.
Thanks for the reply. I managed to solve the issue by creating a shell script that starts my node cluster with exec:
#!/bin/bash
# start-cluster.sh (make it executable: chmod +x start-cluster.sh)
exec node cluster.js >> /var/log/cluster/console.log 2>&1
And in docker-compose file:
# docker-compose.yml
command: bash -c "./start-cluster.sh"
Starting the cluster with exec replaces the shell with the node process, so it always has PID 1, and my logs are output to the file.
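A quick check that node really owns PID 1 now (output illustrative):
$ docker-compose exec node ps -o pid,comm
  PID COMMAND
    1 node
   12 ps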