Please see the command below:
docker build -t iansbasecontainer:v1 -f DotnetDebug.Dockerfile .
It creates one image.
DotnetDebug.Dockerfile looks like this:
FROM microsoft/aspnetcore:2.0 AS base
# Install the SSHD server
RUN apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& mkdir -p /run/sshd \
&& echo "root:Docker!" | chpasswd
# Copy settings files. See elsewhere to find them.
COPY sshd_config /etc/ssh/sshd_config
COPY authorized_keys root/.ssh/authorized_keys
# Install Visual Studio Remote Debugger
RUN apt-get install zip unzip
RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l ~/vsdbg
EXPOSE 2222
I then run this command:
docker build -t iansimageusesbasecontainer:v1 -f DebugASP.Dockerfile .
However, two images appear.
DebugASP.Dockerfile looks like this:
FROM iansbasecontainer:v1 AS base
WORKDIR /app
MAINTAINER Vladimir Vladimir@akopyan.me
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY ./DebugSample .
RUN dotnet restore
FROM build AS publish
RUN dotnet publish -c Debug -o /app
FROM base AS final
COPY --from=publish /app /app
COPY ./StartSSHAndApp.sh /app
EXPOSE 5000
CMD /app/StartSSHAndApp.sh
#If you wish to only have SSH running and start
#your service when you start debugging
#then use just the SSH server, you don't need the script
#CMD ["/usr/sbin/sshd", "-D"]
Why do two images appear? Please note I am relatively new to Docker, so this may be a simple answer. I have spent the last few hours Googling it.
Also, why are the repository and tag set to <none>?
Why do two images appear?
As mentioned here:
When using multi-stage builds, each stage produces a new image. That image is stored in the local image cache and will be used on subsequent builds (as part of the caching mechanism). You can run each build-stage (and/or tag the stage, if desired).
Read more about multi-stage builds here.
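For example, you can build and tag just one stage of DebugASP.Dockerfile explicitly with --target (the tag name below is just an example):
docker build --target base -t iansdebugbase:v1 -f DebugASP.Dockerfile .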
Docker produces intermediate (aka <none>:<none>) images for each layer, which are later used for the final image. You can actually see them if you execute the docker images -a command.
But what you see is called a dangling image. It happens because some intermediate image is no longer used by the final image. In the case of multi-stage builds, the images for the previous stages are not used in the final image, so they become dangling.
Dangling images are useless and take up space, so it's recommended to get rid of them regularly (this is called pruning). You can do that with the command:
docker image prune
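Before pruning, you can list just the dangling images to see what would be removed:
docker images --filter "dangling=true"
Adding -f to docker image prune skips the confirmation prompt.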
Related
I've been trying to get this running for many, MANY hours. I've been scouring the Docker docs, GitHub repos and other resources, but I can't get it working for some reason.
My dockerfile:
FROM mattrayner/lamp:latest-1804
WORKDIR /app
RUN wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
yes A | unzip /tmp/lwt.zip &&\
rm /tmp/lwt.zip &&\
mv connect_xampp.inc.php connect.inc.php
EXPOSE 80
CMD ["/run.sh"]
It builds normally without any errors, but when I run the image nothing appears in the /app directory and I just get the basic Welcome to LAMP page in my browser.
Though, if I do docker run -p "80:80" -it -v ${PWD}/app:/app mattrayner/lamp:latest-1804 /bin/bash, cd /app, and copy and paste
wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
yes A | unzip /tmp/lwt.zip &&\
rm /tmp/lwt.zip &&\
mv connect_xampp.inc.php connect.inc.php
it still doesn't work, BUT if I exit and run the same docker run command again, it works.
The Docker LAMP instructions also tell you to do exactly what I have done:
FROM mattrayner/lamp:latest-1804
# Your custom commands
CMD ["/run.sh"]
As I followed these instructions I thought that everything would work nicely.
What's the catch here? It probably has something to do with the intermediate containers, but I can't work it out (I'm not a DevOps engineer or developer by trade, just a hobbyist).
That happens because you're doing this:
You download a file (wget ...) into the /app dir of your Docker image.
After that, when you mount the volume, you overwrite this /app dir with the contents of your ${PWD}/app.
If you install something into a directory during docker build, don't mount a volume onto that same path.
If you need something in the same path, you can mount specific files, but not the whole dir, or it will hide what you had in your Docker image when the container is created.
You can do the wget somewhere else, or download it into your ${PWD}/app on the host and then mount that.
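A minimal sketch of that second option: download and unpack on the host into ${PWD}/app first, and only then mount it (the layout inside the zip is assumed to match what the Dockerfile expects):
# Download and unpack LWT into the host directory that will be mounted
wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip
yes A | unzip /tmp/lwt.zip -d ./app
rm /tmp/lwt.zip
mv ./app/connect_xampp.inc.php ./app/connect.inc.php
# The mounted directory now already contains the files, so nothing gets hidden
docker run -p "80:80" -v ${PWD}/app:/app mattrayner/lamp:latest-1804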
I'm a beginner with Docker, and I'm trying to build an image in two stages, as explained here: https://docs.docker.com/develop/develop-images/multistage-build/
You can selectively copy artifacts from one stage to another
Looking at the examples given there, I had thought that one could build some files during a first stage, and then make them available for the next one:
FROM golang:1.7.3 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
(Example taken from the above-linked page)
Isn't that what the COPY app.go . and the COPY --from=builder /go/src/github.com/alexellis/href-counter/app . are supposed to do?
I probably have a complete misunderstanding of what is going on, because when I try to do something similar (see below), it seems that the COPY command in the first stage is not able to see the files that have just been built (I can confirm that they have actually been built using a RUN ls step, but then I get an lstat <the file>: no such file or directory error).
And indeed, most other information I can gather regarding COPY (except the examples in the above link) rather suggests that COPY is actually meant to copy files from the directory where the docker build command was launched, not from within the build environment.
Here is my Dockerfile:
FROM haskell:8.6.5 as haskell
RUN git clone https://gitlab+deploy-token-75:sakyTxfe-PxPHDwqsoGm@gitlab.pasteur.fr/bli/bioinfo_utils.git
WORKDIR bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell
RUN stack --resolver ghc-8.6.5 build && \
stack --resolver ghc-8.6.5 install --local-bin-path .
RUN pwd; echo "---"; ls
COPY remove-duplicates-from-sorted-fastq .
FROM python:3.7-buster
RUN python3.7 -m pip install snakemake
RUN mkdir -p /opt/bin
COPY --from=haskell /bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell/remove-duplicates-from-sorted-fastq /opt/bin/remove-duplicates-from-sorted-fastq
CMD ["/bin/bash"]
And here is how the build ends when I run docker build . from the directory containing the Dockerfile:
Step 5/11 : RUN pwd; echo "---"; ls
---> Running in 28ff49fe9150
/bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell
---
LICENSE
Setup.hs
install.sh
remove-duplicates-from-sorted-fastq
remove-duplicates-from-sorted-fastq.cabal
src
stack.yaml
---> f551efc6bba2
Removing intermediate container 28ff49fe9150
Step 6/11 : COPY remove-duplicates-from-sorted-fastq .
lstat remove-duplicates-from-sorted-fastq: no such file or directory
How am I supposed to proceed to have the built file available for the next stage?
Well, apparently, I was misled by the COPY step used in the first stage of the doc example. In my case it is actually useless, and I can just COPY --from=haskell in my second stage, without any COPY in the first stage.
The following Dockerfile builds without issues:
FROM haskell:8.6.5 as haskell
RUN git clone https://gitlab+deploy-token-75:sakyTxfe-PxPHDwqsoGm@gitlab.pasteur.fr/bli/bioinfo_utils.git
WORKDIR bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell
RUN stack --resolver ghc-8.6.5 build && \
stack --resolver ghc-8.6.5 install --local-bin-path .
FROM python:3.7-buster
RUN python3.7 -m pip install snakemake
RUN mkdir -p /opt/bin
COPY --from=haskell /bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell/remove-duplicates-from-sorted-fastq /opt/bin/remove-duplicates-from-sorted-fastq
CMD ["/bin/bash"]
I have a Dockerfile with the following:
FROM i386/alpine
WORKDIR /tmp/src/mybuild
ADD . /tmp/src/mybuild
FROM travisci/travis-build
My goal is to end up with a 32 bit image which contains the stuff from both images. After running sudo docker build --rm --tag travis-32 ., the "image" is built, but when I run sudo docker run -it travis-32 /bin/bash, I end up in a bash terminal and typing uname -m gives me x86_64, which is clearly not 32 bit as I was expecting.
How can I make this work?
What's wrong with the Dockerfile:
The Dockerfile mentioned in the question is a multi-stage build file. Every stage starts with FROM; the idea is to copy the artifacts from one build stage into the next, with the goal of ending up with a smaller Docker image to use in production.
In the mentioned Dockerfile:
The first stage copies some files into an i386/alpine image:
FROM i386/alpine
WORKDIR /tmp/src/mybuild
ADD . /tmp/src/mybuild
Then everything done there is ignored, and another image is used instead:
FROM travisci/travis-build
So the end result is an exact copy of travisci/travis-build.
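If the goal is a 32-bit final image, the i386/alpine stage has to come last, and anything needed from travisci/travis-build has to be copied in explicitly; a rough sketch (the copied path is a placeholder, not a real file from that image):
FROM travisci/travis-build AS travis
FROM i386/alpine
WORKDIR /tmp/src/mybuild
ADD . /tmp/src/mybuild
# COPY --from=travis /path/to/needed/file /usr/local/bin/needed-file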
Regarding the 32-bit/64-bit question:
Usually, what is compiled for 32-bit works only under 32-bit, and in order for it to run under 64-bit you need to compile it for 64-bit (except for some languages like Go, where you can define the target platform), so you need to be careful.
Example:
Take a look, and notice how the artifacts are moved from one stage to another using the COPY --from= statement:
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
More info can be found in the official docs: docker multistage-builds
hope this helps.
I did a basic search in the community and could not find a suitable answer, so I am asking here. Sorry if it was asked earlier.
Basically, I am working on a project where we change code at regular intervals, so we need to build the Docker image every time; because of that, we have to install the dependencies from requirements.txt from scratch, which takes around 10 minutes each time.
How can I make changes directly to a Docker image, and how do I configure the entrypoint (in the Dockerfile) so that it reflects changes in a pre-built Docker image?
You don't edit an image once it's been built. You always run docker build from the start; it always runs in a clean environment.
The flip side of this is that Docker caches built images. If you had image 01234567, ran RUN pip install -r requirements.txt, and got image 2468ace0 out, then the next time you run docker build it will see the same source image and the same command, skip doing the work, and jump directly to the output image. COPYing or ADDing files that have changed invalidates the cache for all subsequent steps.
So the standard pattern is
# arbitrary choice of language
FROM node:10
WORKDIR /app
# Copy in _only_ the requirements and package lock files
COPY package.json yarn.lock ./
# Install dependencies (once)
RUN yarn install
# Copy in the rest of the application and build it
COPY src/ src/
RUN yarn build
# Standard application metadata
EXPOSE 3000
CMD ["yarn", "start"]
If you only change something in your src tree, docker build will skip up to the COPY step, since the package.json and yarn.lock files haven't changed.
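Since the question is about requirements.txt, a minimal sketch of the same pattern for a pip-based project (base image, file names and start command are assumptions) would be:
FROM python:3.8
WORKDIR /app
# Copy in only the requirements file first so the pip layer stays cached
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Copy in the rest of the application; changing it won't invalidate the pip layer
COPY . .
CMD ["python", "main.py"]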
In my case I was facing the same thing: after minor changes, I was building the image again and again.
My old Dockerfile:
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
COPY . .
So what I did was create a base image Dockerfile first, like this (I left out the last line and did not copy my code):
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
and then built this image using:
docker build -t my_base_img:latest -f base_dockerfile .
Then the final Dockerfile:
FROM my_base_img:latest
WORKDIR /app
COPY . .
And since, from this image, I was not able to bring up the container (there were issues with my copied Python code), I edited the code inside the container to fix the issues; this way I avoided the task of building images again and again.
When my code was fixed, I copied the changes from the container back to my code base and then, finally, created the final image.
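Copying the fixed files back out of the running container can be done with docker cp (the container name and paths here are placeholders):
docker cp my_container:/app/fixed_module.py ./fixed_module.py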
There are 4 steps:
Start the image you want to edit (e.g. docker run ...)
Modify the running container by shelling into it with docker exec -it <container-id> bash (you can get the container ID with docker ps)
Make any modifications (install new things, make a directory or file)
In a new terminal tab/window run docker commit c7e6409a22bf my-new-image (substituting in the container id of the container you want to save)
An example
# Run an existing image
docker run -dt existing_image
# See that it's running
docker ps
# CONTAINER ID   IMAGE            COMMAND   CREATED         STATUS
# c7e6409a22bf   existing-image   "R"       6 minutes ago   Up 6 minutes
# Shell into it
docker exec -it c7e6409a22bf bash
# Make a new directory for demonstration purposes
# (note that this is inside the existing image)
mkdir NEWDIRECTORY
# Open another terminal tab/window, and save the running container you modified
docker commit c7e6409a22bf my-new-image
# Inspect to ensure it saved correctly
docker image ls
# REPOSITORY       TAG      IMAGE ID       CREATED         SIZE
# existing-image   latest   a7dde5d84fe5   7 minutes ago   888MB
# my-new-image     latest   d57fd15d5a95   2 minutes ago   888MB
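To double-check, you could start a container from the committed image and look for the directory created earlier (image name as above):
docker run --rm -it my-new-image bash
# inside the container:
ls -d NEWDIRECTORY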
I am trying to add Glide to my Golang project, but I can't get my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $$GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per @craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh: fresh: not found.
The end goal is to be able to mount a volume (for the live-reload) and use it in docker-compose, so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between CMD, RUN and ENTRYPOINT: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion, installing packages and libraries can happen with RUN.
For starting your binary or commands I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could also use CMD for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Something like this might work (I didn't test this part):
CMD ["$GOPATH/bin/fresh", "-c", "/go/src/runner.conf", "/go/src/main.go"]