Cannot COPY from previous stage in Dockerfile - docker

This looks like a common issue, so I checked a few SO posts, but none of them solved my problem.
Here is my Dockerfile:
# MkDocs container
FROM python:3-alpine AS build-env
RUN apk add bash
RUN pip install --upgrade pip
RUN pip install pymdown-extensions \
&& pip install mkdocs \
&& pip install mkdocs-material \
&& pip install mkdocs-rtd-dropdown \
&& pip install mkdocs-git-revision-date-plugin \
&& pip install mkdocs-git-revision-date-localized-plugin \
&& pip install mkdocs-redirects
# executed at ~/Developer/MkDocs
RUN mkdir -p /home/mkdocs/
WORKDIR /home/mkdocs/
COPY . .
RUN mkdocs build -s
WORKDIR /
# Nginx container
FROM nginx:1.21.6-alpine
RUN apk add bash
EXPOSE 80
RUN cat /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /
RUN mkdir -p /home/mkdocs
COPY --from=build-env /home/mkdocs/site/ /home/mkdocs/
RUN mv /home/mkdocs/* /usr/share/nginx/html/
RUN chown nginx:nginx /usr/share/nginx/html/*
USER nginx:nginx
And here is the command I use to run the container:
docker run -it --name mkdocs -p 8789:80 nginx
Running localhost:8789 only shows the default nginx homepage, not the MkDocs site built in the first stage. I also ran docker exec -it --user root <PID> bash to check the directory /usr/share/nginx/html/, but the copied files are not there.
My other checks:
First, I'm 100% sure that the files built in the first stage work and exist.
Second, this is what completely frustrates me: if I run the container using docker run -it --entrypoint=/bin/bash mkdocs:v1, I can actually see the built MkDocs files:
bash-5.1$ ls /usr/share/nginx/html
404.html 50x.html assets index.html search sitemap.xml sitemap.xml.gz
bash-5.1$
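Worth noting: the docker run command above starts the stock nginx image from Docker Hub, not the image built from this Dockerfile, which would explain both symptoms (the default homepage at localhost:8789, yet the files being present when mkdocs:v1 is started explicitly). A minimal sketch, assuming the multi-stage image is tagged mkdocs:v1:
docker build -t mkdocs:v1 .
docker run -it --name mkdocs -p 8789:80 mkdocs:v1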

Related

Dockerfile for Meteor 2.2 project

I have been trying for almost 3 weeks to build and run a Meteor app (bundle) using Docker. I have tried all the major recommendations in the Meteor forums, Stack Overflow, and the official documentation with no success. First I tried to make the bundle and put it inside the Docker image, with awful results; then I realized that what I need to do is make the bundle inside Docker and use a multi-stage Dockerfile. Here is the one I am using right now:
FROM chneau/meteor:alpine as meteor
USER 0
RUN mkdir -p /build /app
RUN chown 1000 -R /build /app
WORKDIR /app
COPY --chown=1000:1000 ./app .
COPY --chown=1000:1000 ./app/packages.json ./packages/
RUN rm -rf node_modules/*
RUN rm -rf .meteor/local/*
USER 1000
RUN meteor update --packages-only
RUN meteor npm ci
RUN meteor build --architecture=os.linux.x86_64 --directory /build
FROM node:lts-alpine as mid
USER 0
RUN apk add --no-cache python make g++
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=meteor /build/bundle .
USER 1000
WORKDIR /app/programs/server
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm i
FROM node:lts-alpine
USER 0
ENV TZ=America/Santiago
RUN apk add -U --no-cache tzdata && cp /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=mid /app .
USER 1000
ENV LC_ALL=C.UTF-8
ENV ROOT_URL=http://localhost/
ENV MONGO_URL=mongodb://localhost:21027/meteor
ENV PORT=3000
EXPOSE 3000
ENTRYPOINT [ "/usr/local/bin/node", "/app/main.js" ]
If I build with docker build -t my-image:v1 . and then run my app with docker run -d --env-file .dockerenv --publish 0.0.0.0:3000:3000 --name my-bundle my-image:v1, it exposes port 3000, but when I try to navigate with my browser to http://127.0.0.1:3000 it redirects to https://localhost. If I do docker exec -u 0 -it my-bundle sh, then apk add curl, then curl 127.0.0.1:3000, I can see the Meteor app running inside Docker.
Has anyone had this issue before? Maybe I am missing some configuration? My bundle also works fine outside Docker with node main.js in the bundle folder, and I can visit http://127.0.0.1:3000 with my browser.
Thanks in advance
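A hedged guess, not confirmed by the post: Meteor derives absolute URLs (and the force-ssl package's redirects) from ROOT_URL, and the image bakes in ROOT_URL=http://localhost/, so a redirect to https://localhost is at least consistent with a ROOT_URL the browser never uses. Overriding it at run time to the address the browser actually hits might be worth trying:
docker run -d --env-file .dockerenv -e ROOT_URL=http://127.0.0.1:3000/ --publish 0.0.0.0:3000:3000 --name my-bundle my-image:v1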

Build NextJS Docker image with nginx server

I am new to Docker and trying to learn it from its documentation. As I need to create a Next.js build using a Docker image for an nginx server, I have followed the process below:
Install nginx.
Set the port 80 to 3000 mapping in the default config.
Symlink the out directory to the base nginx directory.
Use CMD to take care of the production build and the symlinking of the out directory.
FROM node:alpine AS deps
RUN apk add --no-cache libc6-compat git
RUN apt-get install nginx -y
WORKDIR /sample-app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:alpine AS builder
WORKDIR /sample-app
COPY . .
COPY --from=deps /sample-app/node_modules ./node_modules
RUN yarn build
FROM node:alpine AS runner
WORKDIR /sample-app
ENV NODE_ENV production
RUN ls -SF /sample-app/out /usr/share/nginx/html
RUN -p 3000:80 -v /sample-app/out:/usr/share/nginx/html:ro -d nginx
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /sample-app/out
USER nextjs
CMD ["nginx -g daemon=off"]
While running the docker build command sudo docker build . -t sample-app, it throws the error: The command '/bin/sh -c apt-get install nginx -y' returned a non-zero code: 127
I do not have much experience with Alpine images, but I think you have to use apk (Alpine Package Keeper) for installing packages.
Try apk add nginx instead of apt-get install nginx -y.
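For example, the deps stage could become (a sketch; --no-cache just keeps apk's package index out of the layer):
FROM node:alpine AS deps
RUN apk add --no-cache libc6-compat git nginx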

Docker Toolbox not updating changes even after machine remove

Using Docker Toolbox, I've been trying for the past few days to update my container to run on Heroku.
I can't seem to update the code in the container.
Here are some of the things I've tried:
In the Dockerfile, change COPY . /app to ADD . /app
Removed the docker machine and created a VirtualBox machine:
`docker-machine rm default`
`docker-machine create --driver virtualbox default`
Built/ran the Docker image:
`docker build --no-cache -t appname .`
`docker run -it -p 8888:8080 appname`
Also tried docker build --no-cache .
Dockerfile
FROM python:3.6
# create and set working directory
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# set default environment variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
ENV PORT 8080
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
COPY . /app
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install ta-lib
EXPOSE 8080
CMD gunicorn appname.wsgi:application --bind 0.0.0.0:$PORT
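One thing worth checking, beyond the Dockerfile itself (an assumption, not something from the post): with Docker Toolbox the daemon runs inside the docker-machine VM, so the shell doing the build must be pointed at the recreated machine before building, e.g.:
eval $(docker-machine env default)
docker build --no-cache -t appname .
docker run -it -p 8888:8080 appname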

Docker run Cron in a container with other service using proxy

I have a Django application in a Docker container. I built the image using:
docker build --build-arg http_proxy=$http_proxy \
--build-arg https_proxy=$https_proxy \
--build-arg no_proxy=$no_proxy \
-t <tag> .
And I have my proxy variables set in my current terminal session using
export http_proxy=http://user:pass@proxy.company.com:8099/
export https_proxy=http://user:pass@proxy.company.com:8099/
export no_proxy=*.local,localhost,169.254.169.254,*.abc.company.com,*.cloud.company.com
Below is the Dockerfile:
FROM artifactory.cloud.company.com/amazonlinux:2.0.20181010
ENV PIP_INDEX_URL https://artifactory.cloud.company.com/artifactory/api/pypi/pypi-internalfacing/simple/
RUN yum install -y python3 python3-devel python3-setuptools python3-pip git gcc
RUN pip3 install --upgrade --trusted-host artifactory.cloud.company.com pip setuptools
RUN amazon-linux-extras install nginx1.12
RUN yum install -y python2-pip
RUN pip2 install supervisor -i https://artifactory.cloud.company.com/artifactory/api/pypi/pypi-internalfacing/simple/ --trusted-host artifactory.cloud.company.com
RUN pip3 install uwsgi
RUN pip3 install django requests python-decouple
RUN mkdir -p /ASVDASHBOARD
# Application folder on the server with absolute path.
ADD ./ASVDASHBOARD /ASVDASHBOARD
WORKDIR /ASVDASHBOARD
RUN mkdir -p /etc/supervisor/
RUN cp ASVDASHBOARD_nginx.conf /etc/nginx/conf.d/default.conf
RUN cp ASVDASHBOARD_supervisor.conf /etc/supervisor/supervisord.conf
#RUN cp ASVDASHBOARD_supervisor.conf /etc/supervisord.d/
RUN chown -R nginx:nginx /ASVDASHBOARD && \
mkdir -p /ASVDASHBOARD/logs/ && \
touch /ASVDASHBOARD/logs/dashboard.log
RUN python3 manage.py makemigrations && \
python3 manage.py migrate --run-syncdb && \
python3 manage.py migrate
RUN python3 manage.py update_data
RUN mkdir /run/uwsgi && chmod -R 777 /run/uwsgi
EXPOSE 8080
CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]
I also run the container using
docker run -it -d --name final_dashboardd -e http_proxy -e https_proxy -e no_proxy -p 8080:8080 py37:v3
Now, I want to run a cron job in the container to run this command: python3 /pathtofile/manage.py update_data. To run it manually, I have to attach a shell to the container using exec, set the proxy, and then run the command; that works fine.
How do I set/pass the proxy to run this cron job now?
I tried
*/1 * * * * python3 /pathtofile/manage.py update_data
This didn't work. When I am attached to a terminal with my proxy variables set, the command works, but how do I set up the proxy for cron?
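One approach (a sketch, assuming the same proxy values used at build time): cron does not inherit the shell's exported variables, but Vixie-style crontabs accept VAR=value lines before the schedule entries:
http_proxy=http://user:pass@proxy.company.com:8099/
https_proxy=http://user:pass@proxy.company.com:8099/
no_proxy=*.local,localhost,169.254.169.254,*.abc.company.com,*.cloud.company.com
*/1 * * * * python3 /pathtofile/manage.py update_data >> /var/log/update_data.log 2>&1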

How to copy a folder from docker to host while configuring custom image

I am trying to create my own Docker image. After it runs, an archive folder is created in the container as the result, and I need that archive folder to be copied to my host automatically.
What is important: I have to configure this process in my Dockerfile, before the image is created.
Below you can see what is already in my Dockerfile:
FROM python:3.6.5-slim
RUN apt-get update && apt-get install -y gcc && apt-get autoclean -y
WORKDIR /my-tests
COPY jobfile.py testbed.yaml requirements.txt rabbit.py ./
RUN pip install --upgrade pip wheel setuptools && \
pip install --no-cache-dir -r requirements.txt && \
rm -v requirements.txt
VOLUME /my-tests/archive
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini","--"]
CMD ["easypy", "jobfile.py","-testbed_file","testbed.yaml"]
While running the container, map any folder on the host to the archive folder of the container using -v /usr/host_folder:/my-tests/archive. Anything created inside the container at /my-tests/archive will then be available at /usr/host_folder on the host.
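For example (a sketch; my-tests:latest is an assumed tag for an image built from the Dockerfile above):
docker run --rm -v /usr/host_folder:/my-tests/archive my-tests:latest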
Or use the following command to copy the files using scp. You can create a script which first runs the container, then runs the docker exec command:
docker exec -it <container-name> scp -r /my-tests/archive <host-ip>:/host_path
