Why does this docker image give back this error: Unable to access jarfile /home/server.jar - docker

FROM "this line works but cant show code"
RUN yum install -y java-1.8.0-openjdk.x86_64 && yum clean all
COPY /resources/accounts.txt /home/resources/accounts.txt
COPY elk_casino_server /home/elk_casino_server
CMD ["jar","cvmf","/home/elk_casino_server/src/META-INF/MANIFEST.MF","/home/server.jar","/home/elk_casino_server/src/Main.class"]
CMD ["java","-jar","/home/server.jar"]

Your Dockerfile uses the COPY instruction to copy two resources into your container image:
/resources/accounts.txt (available within the image at /home/resources/accounts.txt)
/elk_casino_server (available within the image at /home/elk_casino_server)
Unfortunately, your CMD instructions are trying to execute something very different. Only one CMD instruction takes effect; when several are defined, only the last one is used, which here is:
CMD ["java","-jar","/home/server.jar"]
At no point is /home/server.jar created in or copied into your container image, because the first CMD (which would have built the jar) never runs.

The parameter order of the jar command seems to be wrong. The manifest-addition should come after the jar-file, not before it (the order of the file arguments has to match the order of the f and m option letters):
jar cfm jar-file manifest-addition input-file(s)
see: Packaging Programs in JAR Files: Modifying a Manifest File
Also: if there is more than one CMD, the last one overrides the others. Since I think you want to pack the jar at build time, RUN might be a better choice.
Both points combined:
RUN jar cvfm /home/server.jar /home/elk_casino_server/src/META-INF/MANIFEST.MF /home/elk_casino_server/src/Main.class
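In context, a minimal sketch of the whole corrected Dockerfile might look like this (keeping the original FROM line and paths; note that the jar tool ships with the JDK, so the -devel package is likely needed on top of the plain openjdk one):
FROM <your base image>
# jar is part of the JDK, hence the -devel package (assumption: yum-based base image)
RUN yum install -y java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64 && yum clean all
COPY /resources/accounts.txt /home/resources/accounts.txt
COPY elk_casino_server /home/elk_casino_server
# Build the jar at image build time...
RUN jar cvfm /home/server.jar /home/elk_casino_server/src/META-INF/MANIFEST.MF /home/elk_casino_server/src/Main.class
# ...and keep a single CMD to run it when the container starts
CMD ["java","-jar","/home/server.jar"]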

Related

docker-compose ignores ubuntu:latest in Dockerfile [duplicate]

I want to build a docker image for the Linkurious project on github, which requires both the Neo4j database, and Node.js to run.
My first approach was to declare a base image for my image, containing Neo4j. The reference docs do not define "base image" in any helpful manner:
Base image:
An image that has no parent is a base image
from which I read that I may only have a base image if that image has no base image itself.
But what is a base image? Does it mean, if I declare neo4j/neo4j in a FROM directive, that when my image is run the neo database will automatically run and be available within the container on port 7474?
Reading the Docker reference I see:
FROM can appear multiple times within a single Dockerfile in order to create multiple images. Simply make a note of the last image ID output by the commit before each new FROM command.
Do I want to create multiple images? It would seem what I want is to have a single image that contains the contents of other images e.g. neo4j and node.js.
I've found no directive to declare dependencies in the reference manual. Are there no dependencies like in RPM where in order to run my image the calling context must first install the images it needs?
I'm confused...
As of May 2017, multiple FROMs can be used in a single Dockerfile.
See "Builder pattern vs. Multi-stage builds in Docker" (by Alex Ellis) and PR 31257 by Tõnis Tiigi.
The general syntax involves adding FROM additional times within your Dockerfile - whichever is the last FROM statement is the final base image. To copy artifacts and outputs from intermediate images use COPY --from=<base_image_number>.
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
The result would be two images, one for building, one with just the resulting app (much, much smaller)
REPOSITORY TAG IMAGE ID CREATED SIZE
multi latest bcbbf69a9b59 6 minutes ago 10.3MB
golang 1.7.3 ef15416724f6 4 months ago 672MB
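For reference (not part of the original answer), the final image above is produced with an ordinary build command; the builder stage is executed but only the last stage gets tagged:
docker build -t multi .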
what is a base image?
A set of files, plus EXPOSE'd ports, ENTRYPOINT and CMD.
You can add files and build a new image based on that base image, with a new Dockerfile starting with a FROM directive: the image mentioned after FROM is "the base image" for your new image.
does it mean that if I declare neo4j/neo4j in a FROM directive, that when my image is run the neo database will automatically run and be available within the container on port 7474?
Only if you don't overwrite CMD and ENTRYPOINT.
But the image in itself is enough: you would use a FROM neo4j/neo4j if you had to add files related to neo4j for your particular usage of neo4j.
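For example, a minimal sketch of that situation (the file name and destination path here are assumptions; check the image's documentation for the actual configuration location):
FROM neo4j/neo4j
# The inherited ENTRYPOINT/CMD still start the database on port 7474.
# Assumed destination path for a custom configuration file:
COPY custom-neo4j.conf /var/lib/neo4j/conf/neo4j.conf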
Let me summarize my understanding of the question and the answer, hoping that it will be useful to others.
Question: Let’s say I have three images, apple, banana and orange. Can I have a Dockerfile that has FROM apple, FROM banana and FROM orange that will tell docker to magically merge all three applications into a single image (containing the three individual applications) which I could call smoothie?
Answer: No, you can't. If you do that, you will end up with four images, the three fruit images you pulled, plus the new image based on the last FROM image. If, for example, FROM orange was the last statement in the Dockerfile without anything added, the smoothie image would just be a clone of the orange image.
Why Are They Not Merged? I Really Want It
A typical docker image will contain almost everything the application needs to run (leaving out the kernel), which usually means it is built from a base image for its chosen operating system and a particular version or distribution.
Merging images successfully without considering all possible distributions, file systems, libraries and applications, is not something Docker, understandably, wants to do. Instead, developers are expected to embrace the microservices paradigm, running multiple containers that talk to each other as needed.
What’s the Alternative?
One possible use case for image merging would be to mix and match Linux distributions with our desired applications, for example, Ubuntu and Node.js. This is not the solution:
FROM ubuntu
FROM node
If we don’t want to stick with the Linux distribution chosen by our application image, we can start with our chosen distribution and use the package manager to install the applications instead, e.g.
FROM ubuntu
RUN apt-get update && \
    apt-get install -y package1 && \
    apt-get install -y package2
But you probably knew that already. Oftentimes there isn't a snap or package available in the chosen distribution, or it's not the desired version, or it doesn't work well in a docker container out of the box, which was the motivation for wanting to use an image in the first place. I'm just confirming that, as far as I know, the only option is to do it the long way, if you really want to follow a monolithic approach.
In the case of Node.js for example, you might want to manually install the latest version, since apt provides an ancient one, and snap does not come with the Ubuntu image. For neo4j we might have to download the package and manually add it to the image, according to the documentation and the license.
One strategy, if size does not matter, is to start with the base image that would be hardest to install manually, and add the rest on top.
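A hedged sketch of that strategy for the question's neo4j + Node.js case, assuming the official neo4j image is Debian-based (so apt-get is available) and that the build runs as root; package names are illustrative:
FROM neo4j:4.4
# Add Node.js on top of the image that would be hardest to install by hand (neo4j)
RUN apt-get update && \
    apt-get install -y --no-install-recommends nodejs npm && \
    rm -rf /var/lib/apt/lists/*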
When To Use Multiple FROM Directives
There is also the option to use multiple FROM statements and manually copy stuff between build stages or into your final one. In other words, you can manually merge images, if you know what you're doing. As per the documentation:
Optionally a name can be given to a new build stage by adding AS name to the FROM instruction. The name can be used in subsequent FROM and COPY --from=<name> instructions to refer to the image built in this stage.
Personally, I’d only be comfortable using this merge approach with my own images or by following documentation from the application vendor, but it’s there if you need it or you're just feeling lucky.
A better application of this approach though, would be when we actually do want to use a temporary container from a different image, for building or doing something and discard it after copying the desired output.
Example
I wanted a lean image with gpgv only, and based on this Unix & Linux answer, I installed the whole gpg with yum and then copied only the required binaries into the final image:
FROM docker.io/photon:latest AS builder
RUN yum install gnupg -y
FROM docker.io/photon:latest
COPY --from=builder /usr/bin/gpgv /usr/bin/
COPY --from=builder /usr/lib/libgcrypt.so.20 /usr/lib/libgpg-error.so.0 /usr/lib/
The rest of the Dockerfile continues as usual.
The first answer is too complex, historic, and uninformative for my tastes.
It's actually rather simple. Docker provides a feature called multi-stage builds; the basic idea is to:
Free you from having to manually remove what you don't want, by forcing you to allowlist what you do want,
Free resources that would otherwise be taken up because of Docker's implementation.
Let's start with the first. Very often with something like Debian you'll see:
RUN apt-get update \
    && apt-get dist-upgrade -y \
    && apt-get install -y <whatever> \
    && apt-get clean
We can explain all of this in terms of the two points above. The command is chained together, so it represents a single change, with no intermediate images required. If it were written like this:
RUN apt-get update ;
RUN apt-get dist-upgrade;
RUN apt-get install <whatever>;
RUN apt-get clean;
It would result in three more temporary intermediate images. Even with everything reduced to a single layer, one problem remains: apt-get clean doesn't clean up every artifact used during the install. If a Debian maintainer ships a post-install script that modifies the system, that modification will also be present in the final image (see something like pepperflashplugin-nonfree for an example of that).
By using a multi-stage build you get all the benefits of a single chained action, but it requires you to manually allowlist and copy over the files that were introduced in the temporary image, using the COPY --from syntax documented here. Moreover, it's a great solution where there is no cleanup alternative (nothing like apt-get clean exists), and you would otherwise have lots of unneeded files in your final image.
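A minimal sketch of that allowlist pattern (the package and paths are illustrative; a real binary would usually need its shared libraries copied across as well):
FROM debian:bookworm-slim AS installer
RUN apt-get update && apt-get install -y --no-install-recommends curl

FROM debian:bookworm-slim
# Only what is explicitly copied ends up in the final image; nothing apt left behind does.
COPY --from=installer /usr/bin/curl /usr/bin/curl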
See also
Multi-stage builds
COPY syntax
Here is probably one of the most fundamental use cases for multiple FROMs, aka multi-stage builds.
I want one Dockerfile, and I want to change one word in it; depending on what I set that word to, I get a different image depending on whether I want to Run, Dev, or Publish the application!
Run - I just want to run the app
Dev - I want to edit the code and run the app
Publish - Run the app in production
Let's suppose we're working in the dotnet environment. Here's one single Dockerfile. Without multi-stage builds, there would have to be multiple files (the builder pattern):
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["ConsoleApp1/ConsoleApp1.csproj", "ConsoleApp1/"]
RUN dotnet restore "ConsoleApp1/ConsoleApp1.csproj"
COPY . .
WORKDIR "/src/ConsoleApp1"
RUN dotnet build "ConsoleApp1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ConsoleApp1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ConsoleApp1.dll"]
Want to run the app? Leave FROM base AS final as it currently is in the dockerfile above.
Want to dev the source code in the container? Change the same line to FROM build AS final
Want to release into prod? Change the same line to FROM publish AS final
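As a side note (this is the same idea, not what the answer above describes): instead of editing the Dockerfile, docker build can stop at a named stage with the --target flag, so the variant can also be picked on the command line. The image tags below are just made-up examples:
# Stop at the "build" (SDK) stage for development
docker build --target build -t consoleapp1:dev .
# Default: build through to the "final" runtime stage
docker build -t consoleapp1:prod .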
I agree with the OP, that this feature is useful for docker! Here is a different view into the same problem:
If you had multiple FROMs (or a "FROM" and multiple hypothetical "MERGE"s, for example), then you could use the docker registry's versioning system for the base image AND for the other container elements, and that is the win here. I have third-party development tools which do not exist in .deb format, so these tools must be installed by un-tarring a HUGE tarball; caching on the docker host is therefore important, but versioning/change control of the image is equally important. I (think I) can simply use "RUN git ...", and docker will deal with the caching of the new layer for me, which is what I want, because another container will have the same base image but a different set of HUGE third-party tools, so caching of the base image and of the tools image is really important (a third-party tools tarball can be as big as the base image of, say, ubuntu, so caching of these is really important too). The (suggested) feature would just allow all these elements to be managed in a central repository and versioning system.
Said a different way: why do we use FROM at all? If I were to simply git clone an ubuntu image using the RUN command for my "base image/layer", this would create a new layer and docker would cache it anyway... so is there any difference or advantage in using FROM, other than that it uses docker's internal versioning system and syntax?

Creating a dockerfile to compile source code

I am trying to follow the 2 steps mentioned below:
1) Download the source code from
https://sourceforge.net/projects/hunspell/files/Hyphen/2.8/hyphen-2.8.8.tar.gz/download
2) Compile it to get a binary named example:
hyphen-2.8.8$ ./example ~/dev/smc/hyphenation/hi_IN/hyph_hi_IN.dic ~/hi_sample.text
I have downloaded and uncompressed the tar file. My question is how to create a dockerfile to automate this?
There are only 3 commands involved:
./configure
make all-recursive
make install
I can select the official python image as a base container. But how do I write the commands in a docker file?
You can do that with a RUN command:
FROM python:<version number here>
RUN ./configure && make all-recursive && make install
CMD ["<some command here>"]
What you use for <some command here> depends on what the image is meant to do. Remember that docker containers only run as long as that command is executing, so if you put the configure/make/install steps in a script and use that as your entry point, it's going to build your program, and then the container will halt.
Also you need to get the downloaded files into the container. That can be done using a COPY or an ADD directive (before the RUN of course). If you have the tar.gz file saved locally, then ADD will both copy the file into the container and expand it into a directory automatically. COPY will not expand it, so if you do that, you'll need to add a tar -zxvf or similar to the RUN.
If you want to download the file directly into the container, that could be done with ADD <source URL>, but in that case it won't expand it, so you'll have to do that in the RUN. COPY doesn't allow sourcing from a URL. This post explains COPY vs ADD in more detail.
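Putting the pieces together, a rough sketch (the base image tag, paths, and final command are assumptions; the dictionary and sample text files from the question would also need to be present in the build context):
FROM python:3.12
# ADD auto-extracts a local tar.gz into the target directory
ADD hyphen-2.8.8.tar.gz /usr/src/
WORKDIR /usr/src/hyphen-2.8.8
RUN ./configure && make all-recursive && make install
# Assumed files, copied next to the binary for the example run
COPY hyph_hi_IN.dic hi_sample.text ./
CMD ["./example", "hyph_hi_IN.dic", "hi_sample.text"]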
You can have the three commands in a shell script and then use the following docker commands
COPY ./<path to your script>/<script-name>.sh /
ENTRYPOINT ["/<script-name>.sh"]
CMD ["run"]
For reference, you can create your Dockerfile the way it was done for Apache Artemis ActiveMQ, one of the projects I worked on:
https://github.com/apache/activemq-artemis/blob/master/artemis-docker/Dockerfile-ubuntu

Docker and trying to build an image using Azure Pipelines

Hopefully someone can help me see the wood for the trees as they say!
I am no Linux expert and therefore I am probably missing something very obvious.
I have a dockerfile which contains the following:
FROM node:9.8.0-alpine as node-webapi
EXPOSE 3000
LABEL authors="David Sheardown"
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . /home/vsts/work/1/s/
CMD ["node", "index.js"]
I then have an Azure pipeline setup as the following image shows:
My issue seems to be the build process cannot find the dockerfile itself:
##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/s/**/Dockerfile was found.
Again, apologies in advance for my lack of Linux knowledge.. there is something silly I have done or not done ;)
P.S: I forgot to mention in Azure Pipelines I am using "Hosted Linux Preview"
-- UPDATE --
This is the get sources stage:
I would recommend specifying the exact path to where the Dockerfile resides in your repository, for example:
Dockerfile: subpath/Dockerfile
You're misusing this absolute path, both within the dockerfile and in the docker build task:
/home/vsts/work/1/s/
That is a path that exists on the build agent (not within the dockerfile) - but it may or may not exist on any given pipeline run. If the agent happens to use work directory 2, or 3, or any other number, then your path will be invalid. If you want to run this pipeline on a different type of agent, then your path will be invalid.
If you want to use a dockerfile in your checked out code, then you should do so by using a relative path (based on the root of your code repository), for example:
buildinfo/docker/Dockerfile
Note: that was just an example, to show the kind of path you should use; here you should be using the actual relative path in your actual code repo.

building go package using docker

I am trying to dockerize the go package that I found here...
https://github.com/siddontang/go-mysql-elasticsearch
The docker image is much more convenient than installing go on all the servers. But the following dockerfile is not working.
FROM golang:1.6-onbuild
RUN go get github.com/siddontang/go-mysql-elasticsearch
RUN cd $GOPATH/src/github.com/siddontang/go-mysql-elasticsearch
RUN make
RUN ./bin/go-mysql-elasticsearch -config=./etc/river.toml
How do I build a go package directly from github using a concise dockerfile?
Update
https://hub.docker.com/r/eaglechen/go-mysql-elasticsearch/
I found the exact dockerfile that would do this. But the docker command mentioned on that page does not work. It does not start the go package nor does it start the container.
It depends on what you mean by "not working", but RUN ./bin/... means RUN from the current working directory (/go/src/app in golang/1.6/onbuild/Dockerfile).
And go build in Makefile would put the binary in
$GOPATH/src/github.com/siddontang/go-mysql-elasticsearch/bin/...
So you need to add to your Dockerfile:
WORKDIR $GOPATH/src/github.com/siddontang/go-mysql-elasticsearch
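A rough sketch of the adjusted Dockerfile under that assumption, keeping the question's base image (the final line is also switched from RUN to CMD, since the server is presumably meant to start when the container runs, not at build time):
FROM golang:1.6-onbuild
RUN go get github.com/siddontang/go-mysql-elasticsearch
WORKDIR $GOPATH/src/github.com/siddontang/go-mysql-elasticsearch
RUN make
# Start the service at container run time rather than at build time
CMD ["./bin/go-mysql-elasticsearch", "-config=./etc/river.toml"]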
I guess this should do what I am looking for.
https://github.com/EagleChen/docker_go_mysql_elasticsearch
And I hope one day I will learn to use that little search box.

I'm trying to make the perfect docker build file, do i need to build it from scratch each time?

For an assignment, the marker requires me to create a Dockerfile to build my project's container. However, I have a fairly complex set of tasks that need to work together in the right way for the Dockerfile to be of any use to me, so I am currently rebuilding a file that takes 30 minutes each time just to see whether minor changes affect the outcome in the right way. My question is: is there a better way of doing this?
The Dockerfile best practices, or an earlier question, might help: Creating a Dockerfile - docker starts from scratch on each new build
In my experience, a full build every time means you're working against docker's caching mechanism, usually by having COPY . . early in the Dockerfile.
If the files copied into the image are then used to drive a package manager, or download other sources - try copying just the script or requirements file, then using it, then copying the rest of the sources.
A simplified python example, restated from the best practices link:
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
With that structure, as long as requirements.txt does not change, the first COPY and following RUN command use cached layers and rebuilds are much faster.
The first tip is to use COPY/ADD for artifacts that need to be downloaded when docker builds.
The second tip is that you can create one Dockerfile for each step and reuse it in later steps.
For example, if you want to install a Postgres DB and WildFly in your image, you can start by creating a Dockerfile for Postgres only and build it to make a your-postgres docker image.
Then create another Dockerfile which reuses the your-postgres image:
FROM your-postgres
.....
and so on...
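A hypothetical pair of build commands illustrating the reuse (file and tag names are made up): build the base image first, then the image whose Dockerfile starts with FROM your-postgres:
docker build -t your-postgres -f Dockerfile.postgres .
docker build -t your-app -f Dockerfile.app .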

Resources