Copy all files of sub and nested sub directories - docker

This is my project file structure:
java-project/
├── docker.compose.yml
├── pom.xml
└── services/
    ├── a/
    │   ├── Dockerfile
    │   ├── pom.xml
    │   ├── src/
    │   │   ├── pom.xml
    │   │   ├── xxx
    │   │   └── xxx
    │   └── target/
    │       ├── pom.xml
    │       └── xxxx
    └── b/
        ├── Dockerfile
        ├── pom.xml
        ├── src/
        │   ├── pom.xml
        │   ├── xxx
        │   └── xxx
        └── target/
            ├── pom.xml
            └── xxxx
I want to copy all of the contents of the services folder of the project (including all the subfolders inside services). Basically, I want to replicate the current project structure, every file and folder, in the Docker image as well so that the mvn build executes successfully.
I am doing the following in the Dockerfile, but I don't see all of the contents:
COPY services/**/pom.xml ./services/
What am I doing wrong here? TIA

Let's look at your COPY instruction:
# <src> <dest>
COPY services/**/pom.xml ./services/
Under the hood, Docker reads the <src> using Go's filepath.Match function. This means the instruction doesn't treat the globstar (**) the way shell glob patterns do: each * only matches within a single path segment, so services/**/pom.xml won't recurse into deeper subdirectories. However, your question suggests you want to copy everything inside services, not only the pom.xml files.
You can copy everything inside your local services directory using:
COPY services ./services/
If you want to exclude certain subdirectories or files, you can specify this using a .dockerignore.
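For instance, if you don't want the Maven build output baked into the image, a minimal .dockerignore at the build-context root might look like this (a sketch for the layout shown above; keep the target/ directories if your build actually needs them):

```
# .dockerignore (sketch for the java-project layout above)
services/*/target
```

Everything else under services/ is then picked up by COPY services ./services/.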

Related

Project including multiple Dockerfiles and apps sharing some files. How to construct it?

I have a project including multiple Dockerfiles.
The tree is like this:
.
├── app1
│   ├── Dockerfile
│   ├── app.py
│   └── huge_modules/
├── app2
│   ├── Dockerfile
│   ├── app.py
│   └── huge_modules/
├── common
│   └── my_lib.py
└── deploy.sh
To build my application, common/ is necessary, and we have to COPY it inside the Dockerfile.
However, a Dockerfile cannot COPY files from its parent directory.
To be precise, it is possible if we run docker build with the -f option from the project root.
But I would not like to do this, because the build context would then be unnecessarily large: when building app1, I don't want to include app2/huge_modules/ in the build context (and likewise when building app2).
So, I prepare a build script in each app directory, like this:
cd $(dirname $0)
cp ../common/* ./
docker build -t app1 .
But this solution seems ugly to me.
Is there a good solution for this case?
Build a base image containing your common library, and then build your two app images on top of that. You'll probably end up restructuring things slightly to provide a Dockerfile for your common files:
.
├── app1
│   ├── Dockerfile
│   ├── app.py
│   └── huge_modules/
├── app2
│   ├── Dockerfile
│   ├── app.py
│   └── huge_modules/
├── base
│   ├── Dockerfile
│   └── common
│       └── my_lib.py
└── deploy.sh
You start by building a base image:
docker build -t mybaseimage base/
And then your Dockerfile for app1 and app2 would start with:
FROM mybaseimage
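As a rough sketch, base/Dockerfile could simply bake the shared library into the image (the python:3 base and the /usr/src path are hypothetical choices, not from the original post):

```dockerfile
# base/Dockerfile -- bakes the shared library into a reusable image
# (python:3 and /usr/src/common are illustrative assumptions)
FROM python:3
COPY common/ /usr/src/common/
```

app1/Dockerfile then starts with FROM mybaseimage and copies only app1's own files, so each app's build context stays small.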
One possible solution is to start the build process from the top directory, with the -f flag you mentioned, dynamically generating the .dockerignore file.
That is, let's say that you currently build app1. Then you would first create, in the top directory, a .dockerignore file with the content app2, then run the build process. After the build finishes, remove the .dockerignore file.
Now you want to build app2? No problem! Similarly, first dynamically generate a .dockerignore file with the content app1, then build and remove the file. Voila!
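A minimal sketch of that flow as a shell script, run from the project root (app1 and app2 are the directory names from the tree above; the docker build command is echoed rather than executed here, since this is only an illustration):

```shell
#!/bin/sh
set -e

APP=app1          # the app we are building now
SIBLING=app2      # the sibling to keep out of the build context

# 1. Generate a throwaway .dockerignore excluding the sibling app.
printf '%s\n' "$SIBLING" > .dockerignore

# 2. Build from the project root so common/ is inside the context.
#    (a real script would run this command instead of echoing it)
echo docker build -t "$APP" -f "$APP/Dockerfile" .

# 3. Remove the generated file so the next build starts clean.
rm .dockerignore
```

The same script with APP and SIBLING swapped handles the app2 build.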

Build and Deploy Multiple Docker images to kubernetes

I have an application with the structure below, in which multiple services have their own Dockerfile. I would like to deploy my application via Jenkins, using Helm, to Kubernetes, but I can not decide what the best way to handle this is:
Should I try to use multi-stage builds? If yes, how can I handle this?
Should I create two Helm charts, one for each service, or is there a way to handle this with one Helm chart?
└── app-images-dashboard
    ├── Readme.md
    ├── cors-proxy
    │   ├── Dockerfile
    │   ├── lib
    │   │   ├── cors-anywhere.js
    │   │   ├── help.txt
    │   │   ├── rate-limit.js
    │   │   └── regexp-top-level-domain.js
    │   ├── package.json
    │   └── server.js
    └── app-images-dashboard
        ├── Dockerfile
        ├── components
        │   └── image_item.js
        ├── images
        │   └── beta.png
        ├── index.html
        ├── main.js
        └── stylesheets
            └── style.css
A Helm chart represents a whole application. You have one application with two parts, so you need only one Helm chart.
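For instance, a single chart could template two Deployments and take both image names from values.yaml (the registry and image names below are illustrative, not from the question):

```yaml
# values.yaml (sketch) -- one chart driving two deployments
corsProxy:
  image: myregistry/cors-proxy:latest
dashboard:
  image: myregistry/app-images-dashboard:latest
```

The Jenkins pipeline then builds and pushes both images and runs a single helm upgrade with these values.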

Visual Studio Code remote containers and Go GOPATH

I created a Go container using Visual Studio Code remote containers extensions.
The project structure is like this:
archy-go
├── .devcontainer
│   ├── devcontainer.json
│   └── Dockerfile
└── src
    └── git.mycompany.com
        ├── main.go
        └── mycompany
            └── archy-go
                └── cmd
                    ├── createBackend.go
                    ├── createFull.go
                    ├── create.go
                    ├── root.go
                    ├── say.go
                    └── sayhello.go
But when I try the command go build inside the git.mycompany.com folder I get an error like this:
root@b570cd82e7e5:/workspaces/archy-go/src/git.mycompany.com# go build
main.go:3:8: cannot find package "git.mycompany.com/mycompany/archy-go/cmd" in any of:
/usr/local/go/src/git.mycompany.com/mycompany/archy-go/cmd (from $GOROOT)
/go/src/git.mycompany.com/mycompany/archy-go/cmd (from $GOPATH)
What did I do wrong? Or what is the correct way to set up GOPATH in this situation?

Why docker ADD destroys folder structure?

I have the following folder structure:
> tree -L 3
├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   └── resources
│   └── test
│       ├── groovy
│       └── resources
I tried to build a Docker image containing those folders and files using the following Dockerfile:
FROM jamesdbloom/docker-java8-maven
USER root
RUN mkdir src
ADD ./src/* ./src/
ADD pom.xml ./
However, the structure in the Docker image is different. In particular, I can no longer find the main and test folders.
$ tree -L 3
.
├── pom.xml
├── src
│   ├── groovy
│   │   └── com
│   ├── java
│   │   └── com
│   └── resources
│       ├── ext_sample_input.json
│       ├── hist_sample_input.json
│       └── sample_input.json
Why is it so?
From official documentation:
Note: The directory itself is not copied, just its contents.
Change your ADD statement to:
ADD ./src ./src/
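Side by side, for the same src/ tree as above: with the wildcard, each matched directory has its contents copied, so the directory names themselves vanish:

```dockerfile
ADD ./src/* ./src/   # copies the CONTENTS of main/ and test/ into src/
ADD ./src ./src/     # copies src/'s contents into src/, keeping main/ and test/
```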

How to setup Travis in multilanguage (Nodejs client- Python server) environment?

I have a repo with a client and a server respectively based on NodeJs/Angular2 and Python/Django with approximately this folder structure:
main/
├── angular2-client/
│ ├── build/
│ ├── dist/
│ ├── node_modules/
│ ├── src/
│ └── ...
└── django-server/
├── myapp/
├── server/
├── manage.py
└── ...
So, client and server are both in the same repo and on the same branch. What's the correct way to set up Travis to build/deploy the client and server to two different providers (e.g. Firebase and AWS)?
Any suggestion, documentation reference or tutorial is highly appreciated.
Thanks
