From the following image: https://registry.hub.docker.com/u/cloudesire/activemq/dockerfile/
If I wanted to override the ACTIVEMQ_VERSION environment variable in my child Dockerfile, I assumed I would be able to do something like the following:
FROM cloudesire/activemq:latest
MAINTAINER abc <abc@xyz.co.uk>
ENV ACTIVEMQ_VERSION 5.9.1
ADD ./src/main/resources/* /opt/activemq/conf/
However, this does not seem to work. Admittedly I am new to Docker and have obviously misunderstood something. Could someone please explain why this does not work, and how/if I can achieve it another way?
That won't work. The ACTIVEMQ_VERSION variable has already been used by the cloudesire/activemq:latest image build to populate its image layers. All the ActiveMQ installation files for version 5.11.1 are already extracted into their corresponding directories.
In your Dockerfile you can only build on top of what has already been built there and add your own files; your build will not re-run the build instructions described in their Dockerfile.
If you need your own cloudesire/activemq image based on version 5.9.1, you need to clone their Dockerfile, adjust the version there, and build it locally, so that you can base your other Dockerfile on it.
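For example, a minimal sketch of that workflow (mylocal/activemq is a placeholder tag; everything else in the cloned Dockerfile stays as-is):
# in your clone of their Dockerfile, change only the version line
ENV ACTIVEMQ_VERSION 5.9.1
Build it locally:
docker build -t mylocal/activemq:5.9.1 .
Then base your own Dockerfile on the locally built image:
FROM mylocal/activemq:5.9.1
ADD ./src/main/resources/* /opt/activemq/conf/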
Hopefully someone can help me see the wood for the trees as they say!
I am no Linux expert and therefore I am probably missing something very obvious.
I have a Dockerfile which contains the following:
FROM node:9.8.0-alpine as node-webapi
EXPOSE 3000
LABEL authors="David Sheardown"
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . /home/vsts/work/1/s/
CMD ["node", "index.js"]
I then have an Azure pipeline setup as the following image shows:
My issue seems to be that the build process cannot find the Dockerfile itself:
##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/s/**/Dockerfile was found.
Again, apologies in advance for my lack of Linux knowledge... there is something silly I have done or not done ;)
P.S.: I forgot to mention that in Azure Pipelines I am using the "Hosted Linux Preview" agent.
-- UPDATE --
This is the get sources stage:
I would recommend specifying the exact path to where the Dockerfile resides in your repository, for example:
Dockerfile: subpath/Dockerfile
You're misusing this absolute path, both within the Dockerfile and in the docker build task:
/home/vsts/work/1/s/
That is a path that exists on the build agent (not within the Dockerfile), but it may or may not exist on any given pipeline run. If the agent happens to use work directory 2, or 3, or any other number, your path will be invalid; the same goes if you run this pipeline on a different type of agent.
If you want to use a dockerfile in your checked out code, then you should do so by using a relative path (based on the root of your code repository), for example:
buildinfo/docker/Dockerfile
Note: that was just an example, to show the kind of path you should use; here you should be using the actual relative path in your actual code repo.
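For instance, a minimal sketch of the corresponding build step in azure-pipelines.yml, assuming the Docker@2 task (input names vary slightly between task versions):
- task: Docker@2
  inputs:
    command: build
    Dockerfile: buildinfo/docker/Dockerfile
    buildContext: .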
I'm working on building a website in Go, which is hosted on my home server via docker.
What I'm trying to do:
I make changes to my website/server locally, then push them to GitHub. I'd like to write a Dockerfile that pulls this data from my GitHub and builds the image, which my docker-compose file will then use to create the container.
Unfortunately, all of my attempts have been somewhat close but wrong.
FROM golang:1.8-onbuild
MAINTAINER <my info>
RUN go get <my github url>
ENV webserver_path /website/
ENV PATH $PATH: webserver_path
COPY website/ .
RUN go build .
ENTRYPOINT ./website
EXPOSE <ports>
This file is kind of a combination of a few small guides I found through Google searches, but none quite gave me the information I needed and it never quite worked.
I'm hoping somebody with decent Docker experience can put a Dockerfile together for me to use as a guide, so I can find what I'm doing wrong. I think what I'm looking for can be done in only a few lines, and mine is a little more verbose than needed.
ADDITIONAL BUT PROBABLY UNNECESSARY INFORMATION BELOW
Project layout:
Data: where my Go files are. (Sidenote: this was throwing errors when I tried to build the image, something about not being in the environment path. Not sure if that is helpful.)
Static: CSS, JS, Images
TPL: go template files
Main.go: launches server/website
There are several strategies:
Using a pre-built app. Build your app with the
go build command for the target OS and architecture (using the GOOS and GOARCH environment variables, for example), then use the COPY instruction to move the built binary (together with assets and templates) into your WORKDIR, and finally run it via CMD or ENTRYPOINT (the latter is preferable). A Dockerfile for this approach will look like:
FROM scratch
ENV PORT 8000
EXPOSE $PORT
COPY advent /
CMD ["/advent"]
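Note that for the scratch image above the binary must be a statically linked Linux build, or the container will not start; a plausible host-side build command (the advent name comes from the example above) is:
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o advent .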
Build via Dockerfile. A typical Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
# Copy the local package files to the container's workspace.
ADD . /go/src/github.com/golang/example/outyet
# Build the outyet command inside the container.
# (You may fetch or manage dependencies here,
# either manually or with a tool like "godep".)
RUN go install github.com/golang/example/outyet
# Run the outyet command by default when the container starts.
ENTRYPOINT /go/bin/outyet
# Document that the service listens on port 8080.
EXPOSE 8080
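To build and run it, for example:
docker build -t outyet .
docker run --publish 8080:8080 --name outyet --rm outyet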
Using GitHub. Build your app and push it to Docker Hub as a ready-to-use image.
GitHub supports webhooks, which can be used to do all sorts of things automagically when you push to a git repo. Since you're already running a web server on your home box, why don't you have GitHub send a POST request to it when it receives a commit on master, and have your home box re-download the git repo and restart web services from that?
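A minimal sketch of such a receiver in Go (the repository path, compose setup, and port are hypothetical placeholders; a real handler should also verify GitHub's X-Hub-Signature-256 header before running anything):
package main

import (
    "log"
    "net/http"
    "os/exec"
)

// redeploy pulls the latest code and rebuilds/restarts the container.
// /srv/website and the docker-compose setup are placeholders.
func redeploy(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "POST only", http.StatusMethodNotAllowed)
        return
    }
    cmd := exec.Command("sh", "-c",
        "cd /srv/website && git pull && docker-compose up -d --build")
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Printf("redeploy failed: %v\n%s", err, out)
        http.Error(w, "redeploy failed", http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusNoContent)
}

func main() {
    http.HandleFunc("/webhook", redeploy)
    log.Fatal(http.ListenAndServe(":9000", nil))
}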
I was able to solve my issue by creating an automated build through Docker Hub, and just using this for my Dockerfile:
FROM golang:onbuild
EXPOSE <ports>
It isn't exactly the correct answer to my question, but it is an effective workaround. The automated build connects with my github repo the way I was hoping my dockerfile would.
I'm trying to build a Ruby on Rails project, using Ruby 1.9.3, on a Debian image.
After I've built it using a Dockerfile, it appears that a directory is missing, so the container doesn't start. Can I add it manually? I've tried to use "docker run -it sh" to run it as a shell, but for some reason, after I add a directory with mkdir, it vanishes when I exit.
I'm kinda new to this stuff (just did some tutorials), so apologies for any mixed-up details.
You are going to need to add the dir and then commit the changes in the container to make a new image out of it; the directory will then exist in the new image. Changes made inside a running container live only in that container's writable layer and never modify the image itself, which is why your mkdir does not survive. It's much better to use a repeatable Dockerfile to create the image.
Documentation for Dockerfiles -> https://docs.docker.com/engine/reference/builder/
Have a look at the documentation for commit here -> https://docs.docker.com/engine/reference/commandline/commit/
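A sketch of both approaches (image and directory names are placeholders):
# one-off: create the directory in a running container, then commit it as a new image
docker run -it --name tmp my-image sh
mkdir -p /opt/app/missing-dir    # run inside the container's shell
exit
docker commit tmp my-image:fixed
# repeatable: add one line to the Dockerfile instead and rebuild
RUN mkdir -p /opt/app/missing-dir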
I have a fairly simple Dockerfile and now would like to build a docker image using rules_docker.
Trying to use container_image, it seems like I cannot use the Dockerfile as input. Is there any way to build with a Dockerfile?
Update: There is now a rule called dockerfile_image. Read here for more details: https://github.com/bazelbuild/rules_docker/blob/master/contrib/dockerfile_build.bzl#L15
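A minimal sketch of loading and using that rule in a BUILD file (based on the file linked above; attribute names may differ between rules_docker versions):
load("@io_bazel_rules_docker//contrib:dockerfile_build.bzl", "dockerfile_image")

dockerfile_image(
    name = "my_dockerfile_image",
    dockerfile = ":Dockerfile",
)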
I think it's disallowed by design due to the non-hermetic nature of Dockerfiles: we can RUN any command in a Dockerfile, including non-hermetic ones that can't always be reproduced.
Further discussion here:
https://github.com/bazelbuild/rules_docker/issues/173
https://blog.bazel.build/2015/07/28/docker_build.html
For an assignment, the marker requires me to create a Dockerfile to build my project's container. However, I have a fairly complex set of tasks that need to work together in the right way for my Dockerfile to be of any use, so I am currently running a build that takes 30 minutes each time just to see whether minor changes affect the outcome in the right way. My question is: is there a better way of doing this?
The Dockerfile best practices, or an earlier question, might help: Creating a Dockerfile - docker starts from scratch on each new build
In my experience, a full build every time means you're working against docker's caching mechanism, usually by having COPY . . early in the Dockerfile.
If the files copied into the image are then used to drive a package manager, or download other sources - try copying just the script or requirements file, then using it, then copying the rest of the sources.
A simplified Python example, restated from the best practices link:
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
With that structure, as long as requirements.txt does not change, the first COPY and following RUN command use cached layers and rebuilds are much faster.
The first tip is to use COPY/ADD for artifacts that need to be downloaded when docker builds.
The second tip: you can create one Dockerfile for each step and reuse the resulting images in later steps.
For example, if you want to install a Postgres DB and WildFly in your image, you can start by creating a Dockerfile for Postgres only and build it into a your-postgres Docker image.
Then create another Dockerfile which reuses the your-postgres image:
FROM your-postgres
.....
and so on...
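For example, a minimal sketch of such a chain (base image and packages are placeholders):
# Dockerfile.postgres, built first with: docker build -t your-postgres -f Dockerfile.postgres .
FROM debian:stable
RUN apt-get update && apt-get install -y postgresql
# Dockerfile.wildfly, building on the image above
FROM your-postgres
# ... install and configure WildFly here ...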