I was assigned to work on a ten-year-old legacy Java project which generates the following artifacts.
xxx.jar
xxx.jar
xxx.jar
xxx.war
I have been asked to dockerize the application and deploy it to Kubernetes, so I am planning to build the EAR artifact using the structure below.
lib/
META-INF/
    MANIFEST.MF
    application.xml
xxx.jar
xxx.jar
xxx.jar
xxx.war
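For reference, the application.xml deployment descriptor inside META-INF would enumerate those modules; a minimal sketch (the context root is a placeholder, and whether a JAR is a module depends on its type):
<?xml version="1.0" encoding="UTF-8"?>
<application xmlns="http://java.sun.com/xml/ns/javaee" version="6">
  <!-- the WAR is a web module; /app is a placeholder context root -->
  <module>
    <web>
      <web-uri>xxx.war</web-uri>
      <context-root>/app</context-root>
    </web>
  </module>
  <!-- plain library JARs usually go in lib/ instead; declare a JAR as a
       module only if it is an EJB module: <module><ejb>xxx.jar</ejb></module> -->
</application>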
My Dockerfile would be something like this:
FROM tibco/bwce:latest
MAINTAINER Tibco
ADD bwce-rest-bookstore-app.ear /
EXPOSE 8080
docker build -t bwce-rest-bookstore-app .
Am I going in the right direction?
You are going in the right direction; however, a few things are wrong with your Dockerfile:
The MAINTAINER instruction is deprecated. Use LABEL instead.
The ADD instruction requires two arguments, source and destination. Assuming your workdir is /:
ADD bwce-rest-bookstore-app.ear /
The EXPOSE instruction must be on its own line.
I'm not sure how EAR artifacts work, but you probably need to start your application after the container is created. This can be done with the CMD instruction. For example:
CMD ["/path/to/executable","param1","param2"]
Taking all of the above into consideration, your Dockerfile should look more or less like this:
FROM tibco/bwce:latest
# replace MAINTAINER with LABEL
LABEL maintainer="Tibco"
# add the EAR to the root workdir
ADD bwce-rest-bookstore-app.ear /
EXPOSE 8080
CMD ["/path/to/executable","param1","param2"]
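Assuming the base image knows how to launch the EAR, building and running would then look like this (the tag and port mapping echo the question):
docker build -t bwce-rest-bookstore-app .
docker run -p 8080:8080 bwce-rest-bookstore-app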
I strongly recommend going through the Dockerfile reference.
I'm not familiar with EAR artifacts or Java, but per the Docker docs, the ADD command can extract .tar.gz files but not the .ear format, so I think it's better to have a Dockerfile like this (see here for the extraction step):
FROM tibco/bwce:latest
# MAINTAINER is deprecated; you can remove this line
MAINTAINER Tibco
ADD bwce-rest-bookstore-app.ear /
RUN jar xf bwce-rest-bookstore-app.ear
EXPOSE 8080
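If you take this route, it may help to sanity-check the archive first; jar tf lists an archive's contents without extracting it, so you can confirm what RUN jar xf will unpack:
jar tf bwce-rest-bookstore-app.ear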
I'm trying to move my microservices to Docker containers using the docker-compose project type from Visual Studio.
I also have a Service Fabric project, so I have to install the Service Fabric SDK into my Docker containers.
This is what I do to achieve this (my Dockerfile(s)):
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-nanoserver-1809 AS base
WORKDIR /app
EXPOSE 80
...
WORKDIR /temp
ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp #C:\TEMP\vs_buildtools.exe
...
The rest of the code doesn't matter, since it crashes on the line with the ADD command.
The error from Output after I run this via Ctrl+F5:
3>Step 4/11 : ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp
3>Service 'bmt.microservices.snowforecastcenter' failed to build: ADD failed: CreateFile \\?\C:\ProgramData\Docker\tmp\docker-builder567273413\temp: The system cannot find the file specified.
I don't understand what I'm doing wrong, or what 'the system cannot find the file' means, since I simply download the file from the internet and place it into my newly created \temp folder (the link is valid, I checked).
Does anybody know what this might be related to?
OK, I accidentally fixed the problem by moving the comment to the next line.
From this:
ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp #C:\TEMP\vs_buildtools.exe
To this:
ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp
#C:\TEMP\vs_buildtools.exe
Then I read on the official site (https://docs.docker.com/engine/reference/builder/#/from) that you cannot have inline comments, since they are treated as arguments:
Docker treats lines that begin with # as a comment, unless the line is a valid parser directive. A # marker anywhere else in a line is treated as an argument.
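In other words, a comment is only a comment when # starts the line; a small sketch (reusing the URL from above):
# valid: this entire line is a comment
ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp
# invalid: ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp #C:\TEMP\vs_buildtools.exe
#          everything after /temp would be passed to ADD as extra arguments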
I hope this will help other people who are new to Docker.
Hopefully someone can help me see the wood for the trees as they say!
I am no Linux expert and therefore I am probably missing something very obvious.
I have a dockerfile which contains the following:
FROM node:9.8.0-alpine as node-webapi
EXPOSE 3000
LABEL authors="David Sheardown"
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . /home/vsts/work/1/s/
CMD ["node", "index.js"]
I then have an Azure pipeline set up as the following image shows:
My issue seems to be the build process cannot find the dockerfile itself:
##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/s/**/Dockerfile was found.
Again, apologies in advance for my lack of Linux knowledge... there is something silly I have done or not done ;)
P.S.: I forgot to mention that in Azure Pipelines I am using "Hosted Linux Preview".
-- UPDATE --
This is the get sources stage:
I would recommend adding the exact path to where the Dockerfile resides in your repository:
Dockerfile: subpath/Dockerfile
You're misusing this absolute path, both within the Dockerfile and in the Docker build task:
/home/vsts/work/1/s/
That is a path that exists on the build agent (not within the dockerfile) - but it may or may not exist on any given pipeline run. If the agent happens to use work directory 2, or 3, or any other number, then your path will be invalid. If you want to run this pipeline on a different type of agent, then your path will be invalid.
If you want to use a dockerfile in your checked out code, then you should do so by using a relative path (based on the root of your code repository), for example:
buildinfo/docker/Dockerfile
Note: that was just an example, to show the kind of path you should use; here you should be using the actual relative path in your actual code repo.
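For example, with the Docker@2 task in an azure-pipelines.yml, you could pin the path like this; a sketch in which the repository name and tag are placeholders, and the path should be your repo's actual relative path:
steps:
- task: Docker@2
  inputs:
    command: 'build'
    repository: 'node-webapi'
    Dockerfile: 'buildinfo/docker/Dockerfile'  # relative to the repository root
    tags: 'latest'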
When I develop my applications I have lots of things going on. They're usually microservices, and each has its own build tools. For example, I have to run a build script for Hugo, a build script for webpack, and some gulp tasks. I'll also have to generate some files, keys, etc.
It's a huge pain to have to run these manually. When I test in dev and staging, I'm constantly rebuilding the Docker containers and running the same commands. It gets painful.
Are there any tools that can help with this, where I can run one command and have it rebuild everything in my application? A bash script would work, but that's not an option.
I've seen people use build scripts, like in C, but I can't find anything similar for DevOps. Maybe Docker has a tool for this?
You probably want to build your containers rather than use an image.
I'll assume you're using docker-compose or docker stack deploy to start your containers. In both scenarios, you have a .yaml file that describes your services. Let's assume that the following is part of your config right now, to deploy a service in which you'll want to run a build script for webpack, and that you're using a Node.js image as your base (you can adapt this to your actual scenario easily):
# ...
services:
  webpack:
    image: node:8.12.0
    # ...
# ...
Instead of using the image directly, you can specify a build context:
# ...
services:
  webpack:
    build:
      context: ./docker/webpack
    # ...
  # ...
# ...
Create a directory structure accordingly so that there's a docker/webpack folder. Inside that folder, create a build-script.sh shell script with the commands you want to run, and create a Dockerfile. The Dockerfile should look like:
FROM node:8.12.0
COPY build-script.sh /tmp/build-script.sh
RUN npm install --save-dev webpack \
&& /bin/sh /tmp/build-script.sh
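A hypothetical build-script.sh for this layout could be as small as the following; the echoed step and the commented webpack invocation are placeholders for your real build commands:
#!/bin/sh
set -e
echo "running project build..."
# node_modules/.bin/webpack --config webpack.config.js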
Then when you run docker-compose up or docker stack deploy ..., it will build an image already initialized by the build-script.sh script. Obviously there is much more you can do with this Dockerfile, but for your use case, you can start with something pretty simple. You can even avoid creating the script altogether and run all of your commands in a single huge RUN statement (using \ at the end of each line except the last one, and separating commands with &&), as sketched below.
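A hedged sketch of that single-RUN variant, where the chained echo commands stand in for your real build steps:
FROM node:8.12.0
# one layer: \ continues the line, && chains the commands
RUN npm install --save-dev webpack \
    && echo "generate keys here" \
    && echo "run gulp tasks here"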
Later on, you could even build an image yourself by uploading this Dockerfile to GitHub, making an account on hub.docker.com, and linking it to your GitHub. You could call it something like bughunteruk-dev-environment (image names must be lowercase) and use image: bughunteruk-dev-environment:latest in your YAML file.
I'm working on building a website in Go, which is hosted on my home server via docker.
What I'm trying to do:
I make changes to my website/server locally, then push them to GitHub. I'd like to write a Dockerfile that pulls this data from my GitHub repo and builds the image, which my docker-compose file will then use to create the container.
Unfortunately, all of my attempts have been somewhat close but wrong.
FROM golang:1.8-onbuild
MAINTAINER <my info>
RUN go get <my github url>
ENV webserver_path /website/
ENV PATH $PATH: webserver_path
COPY website/ .
RUN go build .
ENTRYPOINT ./website
EXPOSE <ports>
This file is kind of a combination of a few small guides I found through Google searches, but none quite gave me the information I needed, and it never quite worked.
I'm hoping somebody with decent docker experience can just put a Dockerfile together for me to use as a guide so I can find what I'm doing wrong? I think what I'm looking for can be done in only a few lines, and mine is a little more verbose than needed.
ADDITIONAL BUT PROBABLY UNNECESSARY INFORMATION BELOW
Project layout:
Data: where my Go files are. Sidenote: this was throwing errors when trying to build the image, something about not being in the environment path. Not sure if that is helpful.
Static: CSS, JS, Images
TPL: go template files
Main.go: launches server/website
There are several strategies:
Using a pre-built app. Build your app with the go build command according to the target system architecture and OS (using the GOOS and GOARCH environment variables, for example), then use the COPY Docker instruction to move the built file (with assets and templates) into your WORKDIR, and finally run it via CMD or ENTRYPOINT (the latter is preferable). The Dockerfile for this example will look like:
FROM scratch
ENV PORT 8000
EXPOSE $PORT
COPY advent /
CMD ["/advent"]
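Since FROM scratch provides no libc, the binary must be statically linked. A sketch of the host-side build, reusing the advent name from the example above:
# cross-compile a static Linux binary, then build and run the image
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o advent .
docker build -t advent .
docker run -p 8000:8000 advent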
Building from a Dockerfile. A typical Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
# Copy the local package files to the container's workspace.
ADD . /go/src/github.com/golang/example/outyet
# Build the outyet command inside the container.
# (You may fetch or manage dependencies here,
# either manually or with a tool like "godep".)
RUN go install github.com/golang/example/outyet
# Run the outyet command by default when the container starts.
ENTRYPOINT /go/bin/outyet
# Document that the service listens on port 8080.
EXPOSE 8080
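Building and running that image would then be (outyet is an assumed tag name):
docker build -t outyet .
docker run -p 8080:8080 --rm outyet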
Using GitHub. Build your app and push it to Docker Hub as a ready-to-use image.
GitHub supports webhooks, which can be used to do all sorts of things automagically when you push to a git repo. Since you're already running a web server on your home box, why not have GitHub send a POST request to it when it receives a commit on master, and have your home box re-download the git repo and restart web services from that?
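A minimal sketch of such a receiver in Go; the /webhook path, port 9000, and the /opt/redeploy.sh script are all hypothetical placeholders:
package main

import (
	"log"
	"net/http"
	"os/exec"
)

func main() {
	http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "POST only", http.StatusMethodNotAllowed)
			return
		}
		// A real handler should verify GitHub's X-Hub-Signature-256 header
		// before acting on the payload.
		go func() {
			if err := exec.Command("/opt/redeploy.sh").Run(); err != nil {
				log.Printf("redeploy failed: %v", err)
			}
		}()
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}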
I was able to solve my issue by just creating an automated build through Docker Hub, and using this for my Dockerfile:
FROM golang:onbuild
EXPOSE <ports>
It isn't exactly the correct answer to my question, but it is an effective workaround. The automated build connects with my GitHub repo the way I was hoping my Dockerfile would.
For an assignment, the marker requires me to create a Dockerfile to build my project's container. However, I have a fairly complex set of tasks that need to work together in the right way for my Dockerfile to be of any use, so I am currently building a file that takes 30 minutes each time just to see if minor changes affect the outcome in the right way. My question is: is there a better way of doing this?
The Dockerfile best practices, or an earlier question might help: Creating a Dockerfile - docker starts from scratch on each new build
In my experience, a full build every time means you're working against Docker's caching mechanism, usually by having COPY . . early in the Dockerfile.
If the files copied into the image are then used to drive a package manager, or download other sources - try copying just the script or requirements file, then using it, then copying the rest of the sources.
A simplified Python example, restated from the best-practices link:
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
With that structure, as long as requirements.txt does not change, the first COPY and following RUN command use cached layers and rebuilds are much faster.
The first tip is to use COPY/ADD for artifacts that need to be downloaded when Docker builds.
The second tip is that you can create one Dockerfile for each step and reuse it in subsequent steps.
For example, if you want to install a Postgres DB and WildFly in your image, you can start by creating a Dockerfile for Postgres only and build it to make a your-postgres Docker image.
Then create another Dockerfile which reuses the your-postgres image via:
FROM your-postgres
.....
and so on...
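A sketch of that build sequence, with hypothetical -f file names:
# build the base image first, then the image whose FROM references it
docker build -t your-postgres -f Dockerfile.postgres .
docker build -t my-app -f Dockerfile.wildfly .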