sh: Syntax error: "&&" unexpected from chgrp -R 0 <app/csv> && - docker

I am trying to build the following Dockerfile, but I'm getting the error /bin/sh: 1: Syntax error: "&&" unexpected. It comes from the last && when I try to run chgrp.
Here is my Dockerfile.
# The builder from node image
FROM node:14.5.0-buster as swi
RUN apt-get update && apt-get install
# Move our files into directory name "app"
WORKDIR /app
COPY package.json /app/
RUN cd /app && npm install
COPY . /app
RUN cd /app && npm run build
RUN chgrp -R 0 <app/csv> && \
chmod -R g=u <app/csv>
EXPOSE 5001
CMD [ "node", "server.js" ]
I tried removing the && (since \ already continues the command on the next line), but I'm still getting the same error.

The angle brackets are the problem: sh treats < and > as redirection operators, which is why it trips over the &&. Just use the directory name as-is:
RUN chgrp -R 0 app/csv && \
chmod -R g=u app/csv
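For reference, you can reproduce the same parse error outside Docker, because the shell consumes < and > as redirection operators before chgrp ever runs (exact wording may vary by shell):
$ sh -c 'chgrp -R 0 <app/csv> && echo ok'
sh: 1: Syntax error: "&&" unexpected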

You should remove the angle brackets from the paths. Furthermore, since the WORKDIR is set to /app, we can just run chgrp and chmod on the folder csv. I would also suggest removing the line break and writing it as a one-liner:
RUN chgrp -R 0 csv && chmod -R g=u csv
We can simplify the Dockerfile overall by using the fact that the WORKDIR is set to /app:
# The builder from node image
FROM node:14.5.0-buster as swi
RUN apt-get update && apt-get install
# Move our files into directory name "app"
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
RUN chgrp -R 0 csv && chmod -R g=u csv
EXPOSE 5001
CMD [ "node", "server.js" ]

Related

Next.js build in Dockerfile failing because of missing .next/cache folder

I'm experiencing a strange situation where the same Dockerfile and CodeBuild project config is failing in prod but not in dev.
FROM public.ecr.aws/docker/library/node:14-alpine AS deps
WORKDIR /app
COPY package*.json yarn.lock ./
RUN yarn install --production=false --pure-lockfile --ignore-engines
FROM public.ecr.aws/docker/library/node:14-alpine AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat curl
COPY . .
COPY --from=deps /app/package.json .
COPY --from=deps /app/node_modules ./node_modules
RUN DEBUG=graphql:errors,graphql:queries,cache:redis DEBUG_COLORS=true npm run build --debug
RUN addgroup --system --gid 1001 app_user \
&& adduser --system --uid 1001 --shell /bin/bash -D app_user \
&& chown -R app_user:app_user /app \
&& echo "export PATH=$PATH:/usr/bin/curl" >> /etc/profile
USER app_user
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost:8080/blog || exit 1
EXPOSE 8080 6379
ENTRYPOINT [ "npm", "start"]
I'm getting the error: [Error: ENOENT: no such file or directory, stat '/app/.next/cache']
In detail this is the error output:
Build error occurred
[Error: ENOENT: no such file or directory, stat '/app/.next/cache'] {
errno: -2,
code: 'ENOENT',
syscall: 'stat',
path: '/app/.next/cache'
}
npm ERR! code ELIFECYCLE
I've thought about a permissions issue, but by the time the build command runs, the root user is still in effect.
Docker doesn't create a missing directory automatically, so you need to create .next/cache yourself and give app_user write permission to it.
# COPY commands
RUN mkdir -p .next/cache && chown -R app_user .next/cache && chmod -R 755 .next/cache
# Debug command
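A minimal sketch of where that line would sit in the builder stage above, assuming (as the question notes) that the build still runs as root, so creating the directory is the essential part and the chown/chmod only matter once a non-root user needs to write to it:
COPY --from=deps /app/node_modules ./node_modules
# make sure the Next.js cache directory exists before `next build` tries to stat it
RUN mkdir -p .next/cache
RUN DEBUG=graphql:errors,graphql:queries,cache:redis DEBUG_COLORS=true npm run build --debug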

Docker build command fails on COPY

Hi, I have a Dockerfile which is failing on the COPY command. It was running fine initially, but then it suddenly started failing during the build process. The Dockerfile basically sets up the dev environment and authenticates with GCP.
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils \
bc \
build-essential \
cmake \
curl \
zlib1g-dev \
libssl-dev \
libsqlite3-dev \
python3-pip \
python3-setuptools \
unzip \
g++ \
git \
python-tk
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& rm -rf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make install \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
pyopenssl \
ndg-httpsclient \
pyasn1
## Download and Install Google Cloud SDK
RUN mkdir -p /usr/local/gcloud \
&& curl https://sdk.cloud.google.com > install.sh \
&& bash install.sh --disable-prompts --install-dir=${DIRECTORY}
# Adding the package path to directory
ENV PATH $PATH:${DIRECTORY}/google-cloud-sdk/bin
# working directory
WORKDIR /usr/src/app
COPY requirements.txt ./ \
testproject-264512-9de8b1b35153.json ./
It fails at this step :
Step 13/21 : COPY requirements.txt ./ testproject-264512-9de8b1b35153.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder942576416/testproject-264512-9de8b1b35153.json: no such file or directory
Any leads on this would be helpful.
How are you running the docker build command?
In the Docker best practices I've read that the build fails if you try to build your image from stdin using -:
Attempting to build a Dockerfile that uses COPY or ADD will fail if this syntax is used. The following example illustrates this:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
docker build -t myimage:latest -<<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
# observe that the build fails
...
Step 2/3 : COPY somefile.txt .
COPY failed: stat /var/lib/docker/tmp/docker-builder249218248/somefile.txt: no such file or directory
I've reproduced the issue... Here is my Dockerfile:
FROM alpine:3.7
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# working directory
WORKDIR /usr/src/app
COPY kk.txt ./ \
kk.2.txt ./
If I create the image by running docker build -t testImage:1 [DOCKERFILE_FOLDER], Docker creates the image and everything works correctly.
However, if I try the same command from stdin as:
docker build -t test:2 - <<EOF
FROM alpine:3.7
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF
I get the following error:
Step 1/6 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/6 : ENV PYTHON_VERSION="3.6.5"
---> Using cache
---> 734d2a106144
Step 3/6 : ENV BUCKET_NAME='detection-sandbox'
---> Using cache
---> 18fba29fffdc
Step 4/6 : ENV DIRECTORY='/usr/local/gcloud'
---> Using cache
---> d926a3b4bc85
Step 5/6 : WORKDIR /usr/src/app
---> Using cache
---> 57a1868f5f27
Step 6/6 : COPY kk.txt ./ kk.2.txt ./
COPY failed: stat /var/lib/docker/tmp/docker-builder518467298/kk.txt: no such file or directory
It seems that Docker builds images from /var/lib/docker/tmp/ when you build from stdin, so there is no build context and ADD or COPY commands can't find your local files.
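If you do want to keep piping the Dockerfile through stdin, you can still supply a build context by passing a directory as the context and - as the Dockerfile (-f-), so COPY and ADD work again; for example:
docker build -t test:2 -f- . <<EOF
FROM alpine:3.7
WORKDIR /usr/src/app
COPY kk.txt kk.2.txt ./
EOF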
An incorrect source path is a common error.
Use
COPY ./directory/testproject-264512-9de8b1b35153.json /dir/
instead of
COPY testproject-264512-9de8b1b35153.json /dir/
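Keep in mind that COPY source paths are resolved relative to the build context directory passed to docker build, not relative to your shell's current directory, so the first form assumes a layout like this (directory name hypothetical):
.                                   (the build context passed to docker build .)
  Dockerfile
  requirements.txt
  directory
    testproject-264512-9de8b1b35153.json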

Convert from multi stage build to single

As I'm limited to using Docker 1.x instead of 17.x on my cluster, I need some help converting this multi-stage build into a valid build for the older Docker version.
Could someone help me?
FROM node:9-alpine as deps
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
FROM deps as test
RUN rm -r ./prod_node_modules \
&& npm run lint
FROM node:9-alpine
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
COPY --from=deps /app .
COPY --from=deps /app/prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Currently it gives me an error on "FROM node:9-alpine as deps".
"FROM node:9-alpine as deps" means you are defining an intermediate image that you will be able to COPY from COPY --from=deps.
Having a single image means you don't need to COPY --from anymore, and you don't need "as deps" since everything happens in the same image (which will be bigger as a result)
So:
FROM node:9-alpine
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
RUN rm -r ./prod_node_modules \
&& npm run lint
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
RUN cp -a /app/. .
RUN cp -a /app/prod_node_modules/. ./node_modules/
EXPOSE 3000
CMD ["node", "index.js"]
Only one FROM here.
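Alternatively, since everything now happens in one image, you could drop the cp steps and keep the runtime in /app as well; a hedged sketch of the tail of the Dockerfile:
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
# stay in /app instead of copying everything into /root
WORKDIR /app
EXPOSE 3000
CMD ["node", "index.js"]
The trade-off is that node_modules then still contains the dev dependencies from the final npm install, so the image is larger than the multi-stage original.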

Docker entry point not being found

My docker image keeps returning the following error when I attempt to run it.
/bin/sh: 1: [python,: not found
Here is my Dockerfile:
FROM ubuntu
RUN apt-get update \
&& apt-get install -y \
python3 \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY src/ src/
ENTRYPOINT ["python", "src/main.py"]
Here is my project hierarchy:
docker-flowcell-restore
    docker-flowcell-restore
        requirements.txt
        Dockerfile
        src
            __init__.py
            main.py
Thanks.
Try replacing the ENTRYPOINT with the following one (changing the quotes to " rather than '):
ENTRYPOINT ["python", "src/main.py"]
The issue was fixed by adding:
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT python3 src/main.py
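For context, /bin/sh: 1: [python,: not found is what you see when the JSON exec form isn't recognized (for example because it was written with single quotes), so Docker falls back to the shell form and passes the literal [python, ... text to /bin/sh. Since the image only installs python3, either of these forms (a sketch) works once the sources are copied in:
# exec form: double quotes required, runs python3 directly with no shell
ENTRYPOINT ["python3", "src/main.py"]
# shell form: wrapped in /bin/sh -c
ENTRYPOINT python3 src/main.py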

docker-compose: Service 'web' failed to build

I'm trying to install apache2, libapache2-mod-wsgi-py3 and openssl in the container. I've removed some packages and fixed typos in the Dockerfile, but the error is still there.
When I run docker-compose build, my setup runs fine until it hits the part in the Dockerfile where I'm running this install, and I get this error:
E: Unable to locate package RUN
E: Unable to locate package apt-get
E: Unable to locate package install
ERROR: Service 'web' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y apache2 libapache2-mod-wsgi-py3 curl dpgk-sig RUN apt-get install -yq openssh-server' returned a non-zero code: 100
You can check the whole installation process here, and this is my Dockerfile:
FROM ubuntu:16.04
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN cat /etc/passwd
RUN cat /etc/group
RUN apt-get update && apt-get install -y \
apache2 \
libapache2-mod-wsgi-py3 \
RUN apt-get install -y openssl
RUN mkdir /var/run/sshd
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
EXPOSE 80
ADD config/apache/000-default.conf /etc/apache/sites-available/000-default.conf
ADD config/start.sh /tmp/start.sh
ADD src /var/www
RUN chown -R root:www-data /var/www
RUN chmod u+rwx,g+rx,o+rx /var/www
RUN find /var/www -type d -exec chmod u+rwx,g+rx,o+rx {} +
RUN find /var/www -type f -exec chmod u+rw,g+rw,o+r {} +
#essentially: CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
CMD ["/tmp/start.sh"]
Can someone explain to me why this is happening and how to fix it? Thanks.
Your problem is this line:
libapache2-mod-wsgi-py3 \
The \ is a continuation, so the next thing apt-get sees is RUN and it treats that like a package (which it can't find). Lose the \ and it should work fine.
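For example, that RUN block could be rewritten as a single install with openssl folded into the same package list (a sketch of the fix):
RUN apt-get update && apt-get install -y \
    apache2 \
    libapache2-mod-wsgi-py3 \
    openssl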
