Hi, I have a project with a Dockerfile, and I am trying to build it so I can run the project in the environment it was created for, but I get an error at step 5 of the build. When I look at the Dockerfile, I find the code at that point a bit strange / I don't understand it.
This is the Dockerfile:
FROM node:8.10-alpine
ENV NODE_ENV development
# Create app directory
WORKDIR /var/app
# Install Node packages
COPY package.json package.json
RUN apk install git \
&& npm i \
&& apk del .gyp\
&& mv /var/app/node_modules /node_modules \
&& rm -rf /var/cache/apk/* \
&& apk del git
# Bundle app source
COPY . .
#COPY entrypoint.sh entrypoint.sh
# Expose port
EXPOSE 88
#ENTRYPOINT ["./entrypoint.sh"]
CMD ["npm", "run", "dev"]
This is the error I am getting:
Step 5/8 : RUN apk install git && npm i && apk del .gyp && mv /var/app/node_modules /node_modules && rm -rf /var/cache/apk/* && apk del git
---> Running in 251259cdb8a2
apk-tools 2.7.5, compiled for x86_64.
Then I get a bunch of text that looks like what you get when you pass --help to something, and at the end I get:
This apk has coffee making abilities.
The command '/bin/sh -c apk install git && npm i && apk del .gyp && mv /var/app/node_modules /node_modules && rm -rf /var/cache/apk/* && apk del git' returned a non-zero code: 1
This seems to be the problematic part:
RUN apk install git \
&& npm i \
&& apk del .gyp\
&& mv /var/app/node_modules /node_modules \
&& rm -rf /var/cache/apk/* \
&& apk del git
Just try adding an additional line before using apk and see if it fixes the issue:
RUN echo "ipv6" >> /etc/modules
RUN apk install git \
Note: Breaking the problematic step into multiple steps like
RUN apk install git
RUN npm i
RUN apk del .gyp
RUN mv /var/app/node_modules /node_modules
RUN rm -rf /var/cache/apk/*
RUN apk del git
will help locate the point of failure more accurately.
You have 2 issues. One is with the command apk del .gyp (its return code is different from 0), and the other is that your folder is not mounted correctly.
# apk del .gyp
# echo $?
1
Besides, there is no /var/app/node_modules mounted in the container:
# ls /var/app/node_modules
# ls: /var/app/node_modules: No such file or directory
What you should do is:
Make sure /var/app/node_modules is correctly mounted in the container.
I am not sure what the command apk del .gyp is doing, but you may need to investigate it; it does not seem to work properly. The pattern it usually pairs with is sketched below.
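For reference, here is a minimal sketch of the pattern apk del .gyp normally belongs to, assuming the original intent was to pull in native build tools only for the npm install step (that is my assumption; the Dockerfile above never creates a .gyp virtual package, which is why deleting it returns 1, as the echo $? output shows). Note also that Alpine's package manager has no install subcommand, only apk add, which is why apk install git just prints the help text:
FROM node:8.10-alpine
WORKDIR /var/app
COPY package.json package.json
# install git, plus a temporary virtual package ".gyp" holding the native build tools,
# run npm install, then remove both again to keep the image small
RUN apk add --no-cache git \
&& apk add --no-cache --virtual .gyp python make g++ \
&& npm i \
&& apk del .gyp \
&& apk del git
apk del .gyp only succeeds if a matching apk add --virtual .gyp ... ran earlier in the same image.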
Related
I have a Dockerfile in which I use wget to copy something into the image, but the build is failing with 'wget: command not found'. When I googled, I found suggestions to install wget like below:
RUN apt update && apt upgrade
RUN apt install wget
Dockerfile:
FROM openjdk:17
LABEL maintainer="app"
ARG uname
ARG pwd
RUN useradd -ms /bin/bash -u 1000 user1
COPY . /app
WORKDIR /app
RUN ./gradlew build -PmavenUsername=$uname -PmavenPassword=$pwd
ARG YOURKIT_VERSION=2021.11
ARG POLARIS_YK_DIR=YourKit-JavaProfiler-2019.8
RUN wget https://www.yourkit.com/download/docker/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip --no-check-certificate -P /tmp/ && \
unzip /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip -d /usr/local && \
mv /usr/local/YourKit-JavaProfiler-${YOURKIT_VERSION} /usr/local/$POLARIS_YK_DIR && \
rm /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip
EXPOSE 10001
EXPOSE 8080
EXPOSE 5005
USER 1000
ENTRYPOINT ["sh", "/docker_entrypoint.sh"]
On doing this I am getting the error apt not found. Can someone suggest a solution?
The openjdk image you use is based on Oracle Linux, which uses microdnf rather than apt as its package manager.
To install wget (and unzip which you also need), you can add this to your Dockerfile:
RUN microdnf update \
&& microdnf install --nodocs wget unzip \
&& microdnf clean all \
&& rm -rf /var/cache/yum
The commands clean up the package cache after installing, to keep the image size as small as possible.
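As a hedged example of where that block could go (keeping the rest of your Dockerfile exactly as posted), the install step just needs to come before the RUN wget ... line so that both wget and unzip exist when they are used:
FROM openjdk:17
LABEL maintainer="app"
# install the tools needed by the download step further down, then clean the cache
RUN microdnf update \
&& microdnf install --nodocs wget unzip \
&& microdnf clean all \
&& rm -rf /var/cache/yum
# ... the rest of the original Dockerfile (ARGs, useradd, COPY, gradlew build, the wget/unzip step, EXPOSE, USER, ENTRYPOINT) stays unchanged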
I am new to Docker and AWS. I am trying to create a JMeter image and pass in the JMX script at runtime. For that, I thought copying files from S3 inside the container would be the best fit. So initially I tried to copy the file from S3 to my local host using the command below:
aws s3 cp s3://bucketname/sample.jmx .
I was able to download the file successfully into my local system.
After that I created a Docker image with the latest AWS CLI installed and tried the same; the message shows "download: s3://bucketname/sample.jmx to current folder", but I am not able to see the file.
But on the other hand, I was able to copy a file from the container to S3 using the command:
aws s3 cp /tmp/sample.jmx s3://bucketname/
Further details:
Image based on alpine:3.12.4
Credentials - passed inline with the docker run command like below:
docker run -it --rm -e AWS_DEFAULT_REGION='us-east-2' -e AWS_ACCESS_KEY_ID='aaaaaa' -e AWS_SECRET_ACCESS_KEY='dsfssdfds' dockerimage aws s3 cp s3://bucketname/sample.jmx /tmp
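A guess, since I can't see your setup: in the docker run above, the file is downloaded into /tmp inside the container, and because of --rm that container filesystem is deleted as soon as the command finishes, so there is nothing left to look at afterwards. One way to check this, and to land the file on the host, is to bind-mount a host directory over the container's /tmp (paths here are illustrative):
# mount the current host directory as /tmp inside the container, so the
# downloaded file survives after the container is removed
docker run -it --rm \
-e AWS_DEFAULT_REGION='us-east-2' \
-e AWS_ACCESS_KEY_ID='aaaaaa' \
-e AWS_SECRET_ACCESS_KEY='dsfssdfds' \
-v "$(pwd)":/tmp \
dockerimage aws s3 cp s3://bucketname/sample.jmx /tmp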
Complete Dockerfile:
FROM alpine:3.12.4
ARG JMETER_VERSION="5.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
# Install extra packages
# Set TimeZone, See: https://github.com/gliderlabs/docker-alpine/issues/136#issuecomment-612751142
ARG TZ="Europe/Amsterdam"
ENV TZ ${TZ}
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# TODO: plugins (later)
# && unzip -oq "/tmp/dependencies/JMeterPlugins-*.zip" -d $JMETER_HOME
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
RUN apk update && \
apk add --no-cache python3 py3-pip\
&& pip3 install --upgrade pip
RUN pip3 --no-cache-dir install --upgrade awscli
ENV PATH $PATH:/usr/bin/aws
CMD ["/bin/bash"]
I would really appreciate some help here.
While trying to build the following Dockerfile
FROM ruby:2.7.1-alpine3.12
RUN apk update && \
apk --no-cache add git
# Install Syslog
RUN wget -O /usr/remote_syslog_linux_i386.tar.gz https://github.com/papertrail/remote_syslog2/releases/download/v0.20/remote_syslog_linux_i386.tar.gz && \
tar xzf /usr/remote_syslog_linux_i386.tar.gz -C /usr
I get the following error
However, the following Dockerfile
FROM ruby:2.7.1-alpine3.12
RUN apk update && \
apk --no-cache add git
# Install Syslog
RUN wget -O /usr/remote_syslog_linux_i386.tar.gz https://github.com/papertrail/remote_syslog2/releases/download/v0.20/remote_syslog_linux_i386.tar.gz
RUN tar xzf /usr/remote_syslog_linux_i386.tar.gz -C /usr
doesn't run into any error.
What am I missing?
I'm going crazy trying to ADD a directory from my host machine to my Docker container. When building the container with docker-compose up --build, it seems to ADD just fine, but when I try to access the module in my app.py file, I get a ModuleNotFoundError.
My DockerFile contains the following:
FROM python:3.7-alpine
RUN apk update && \
apk add --virtual build-deps gcc musl-dev && \
apk add --no-cache postgresql-dev && \
apk add alsa-lib-dev && \
apk add pulseaudio-dev && \
apk add postgresql-dev && \
apk add ffmpeg-dev && \
apk add ffmpeg && \
rm -rf /var/cache/apk/*
COPY /scraper/requirements.txt requirements.txt
RUN pip install -r requirements.txt
ADD /common/testmodel /scraper/testmodel
WORKDIR home/scraper/
ENTRYPOINT ["python3", "-u", "app.py"]
CMD gunicorn -b 0.0.0.0:5000 --access-logfile - "app:app"
Then when building the image, the log shows:
Step 6/9 : ADD /common/testmodel home/scraper/testmodel
---> a7b27854d751
My project structure looks like the following:
-common
  -testmodel
    -test.py
-scraper
  -DockerFile
  -requirements
-docker-compose.yml
But in my app.py file, when I run from testmodel.test import TestClass I get ModuleNotFoundError: No module named 'testmodel'
Any help with this problem is greatly appreciated, as it has now taken up a much larger chunk of my day than I ever thought it would. Thank you very much.
I may be missing some context, but I think you have several issues:
You COPY /scraper... and ADD /common... -- are these directories hanging from root on your local machine?
You set WORKDIR after COPY and ADD, but generally (although not required) you'd set it first as a default destination; then you could COPY something . and ADD something . and those destinations (.) would refer to your WORKDIR (see the short sketch after this list)
You use /home/scraper as your WORKDIR but you don't copy and add your files into it. It will be empty at this point.
Your ENTRYPOINT references app.py but your file is called test.py
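As a minimal sketch of that WORKDIR-first idea (paths are illustrative, not taken from your project, and this is not a complete Dockerfile):
FROM python:3.7-alpine
WORKDIR /home/scraper
# with WORKDIR set first, the "." destinations below resolve to /home/scraper
COPY scraper/requirements.txt .
ADD common/testmodel ./testmodel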
One useful debugging tool is to shell into containers to e.g. examine the directory structure to confirm it's as expected. Assuming your image is called scraper, you could:
docker build \
--tag=scraper \
--file=scraper/Dockerfile \
. # Don't forget the period ;-)
Then Alpine's shell is called ash:
docker run \
--interactive \
--tty \
scraper:latest ash
Or, if your Dockerfile has an ENTRYPOINT, then override it using:
docker run \
--interactive \
--tty \
--entrypoint=ash \
scraper:latest
and then you could browse the container's directory structure:
You'll default to /home/scraper (WORKDIR):
/home/scraper # ls -l
total 0
You may examine /scraper using:
/home/scraper # apk add tree
/home/scraper # tree /scraper
/scraper
└── testmodel
└── test.py
1 directory, 1 file
I'm not entirely clear on what the correct solution is for you, but I hope this helps you make progress:
FROM python:3.7-alpine
RUN apk update && \
apk add --virtual build-deps gcc musl-dev && \
apk add --no-cache postgresql-dev && \
apk add alsa-lib-dev && \
apk add pulseaudio-dev && \
apk add postgresql-dev && \
apk add ffmpeg-dev && \
apk add ffmpeg && \
rm -rf /var/cache/apk/*
WORKDIR /home/scraper/
COPY scraper/requirements.txt .
RUN pip install -r requirements.txt
ADD common/testmodel .
ENTRYPOINT ["python3", "-u", "test.py"]
CMD gunicorn -b 0.0.0.0:5000 --access-logfile - "test:app"
As I'm limited to Docker 1.x instead of 17.x on my cluster, I need some help converting this multi-stage build into a valid build for the older Docker version.
Could someone help me?
FROM node:9-alpine as deps
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
FROM deps as test
RUN rm -r ./prod_node_modules \
&& npm run lint
FROM node:9-alpine
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
COPY --from=deps /app .
COPY --from=deps /app/prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Currently it gives me an error on "FROM node:9-alpine as deps".
"FROM node:9-alpine as deps" means you are defining an intermediate image that you will be able to COPY from COPY --from=deps.
Having a single image means you don't need to COPY --from anymore, and you don't need "as deps" since everything happens in the same image (which will be bigger as a result)
So:
FROM node:9-alpine
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
# lint step from the old "test" stage; keep prod_node_modules, it is still needed below
RUN npm run lint
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
# copy the contents of /app here, then replace the dev node_modules with the production ones
RUN cp -r /app/. . \
&& rm -rf ./node_modules \
&& mv ./prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Only one FROM here.
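If image size matters and you'd rather not ship the dev dependencies, another option on pre-17.05 Docker is the old "builder pattern": build everything in a throwaway image, copy the build output to the host, and then build a small runtime image from it. A rough sketch, assuming two hypothetical files Dockerfile.build and Dockerfile.run and that your Docker version supports docker build -f:
# build the image that installs deps and runs lint
docker build -t myapp-build -f Dockerfile.build .
# copy the build output out of a temporary container
docker create --name extract myapp-build
docker cp extract:/app ./build-output
docker rm extract
# build the final, smaller runtime image from the extracted output
docker build -t myapp -f Dockerfile.run .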