- name: 'gcr.io/cloud-builders/mvn'
  args: ['clean',
         'package',
         '-Ddockerfile.skip',
         '-DskipTests'
         ]
- name: 'gcr.io/cloud-builders/mvn'
  args: ['dockerfile:build',
         '-Ddockerfile.skip',
         '-DskipTests'
         ]
When I run the two commands above locally, I do get the target folder containing a docker folder with the image-name file in it.
It fails on this step:
..
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - -c
    - |
      docker push $(cat /workspace/target/docker/image-name)
cat: /workspace/target/docker/image-name: No such file or directory
I tried target/docker and app/target/docker.
In My Dockerfile:
...
WORKDIR /app
...
ADD target/${JAR_FILE} app.jar
...
Question: how can I see the target folder, and how can I make
docker push $(cat /workspace/target/docker/image-name) work?
Similar to the answer here: Cloud Build fails to build the simple build step with maven
how to see target folder
There isn't currently a way to check the remote workspace for the target folder, but you can debug with cloud-build-local and write the workspace to a local directory:
https://cloud.google.com/cloud-build/docs/build-debug-locally
https://github.com/GoogleCloudPlatform/cloud-build-local
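For example, a local debug run (assuming cloud-build-local from the links above is installed; the workspace path is just an example) can persist the workspace so the target folder can be inspected:

```
# Run the build locally and write the resulting workspace to disk.
cloud-build-local --config=cloudbuild.yaml --write-workspace=/tmp/workspace --dryrun=false .
# Then check whether the dockerfile plugin actually produced the file:
ls /tmp/workspace/target/docker
```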
Make sure that target/ or .jar files are not being ignored by .gcloudignore or .gitignore:
https://cloud.google.com/sdk/gcloud/reference/topic/gcloudignore
https://github.com/GoogleCloudPlatform/cloud-builders/issues/236
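A quick self-contained illustration of why this matters (the paths are hypothetical): an ignore entry like target/ means the jar never reaches the remote /workspace even though it exists locally.

```shell
# Create a throwaway ignore file and show which patterns would filter
# the build output out of the source upload.
tmp=$(mktemp -d)
printf 'target/\n*.jar\n' > "$tmp/.gcloudignore"
# Any pattern printed here is something the source upload would skip.
grep -nE 'target|\.jar' "$tmp/.gcloudignore"
rm -rf "$tmp"
```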
I also wonder whether the docker step isn't picking up what the dockerfile plugin produces; does dockerfile:push work?
steps:
- name: 'gcr.io/cloud-builders/mvn'
  args: ['dockerfile:build']
- name: 'gcr.io/cloud-builders/mvn'
  args: ['dockerfile:push']
Related
I'm trying to create a Docker image from a static web application built with npm, using GitHub Actions. However, when the image is built, the dist folder is not copied into it as expected.
This is the Dockerfile:
FROM nginx:1.21.6-alpine
COPY dist /usr/share/nginx/html
And this is the action:
name: Deploy
on:
  push:
    tags:
      - v*
jobs:
  build-homolog:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup
        uses: actions/setup-node@v3
        with:
          node-version: '16'
      - name: Build
        env:
          NODE_ENV: homolog
        run: npm install; npm run build; docker build -t my-image:1.0.0 .
The result is a working nginx, but without content; it just shows its default page. When I run the npm build and the docker build locally on my machine, it works as expected. I think there is a problem with the directory structure on the GitHub Actions machine, but I can't seem to understand it.
I have a github actions workflow to build a docker image:
name: Backend-Demo Docker Image CI
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to Azure Container Registry
        run: echo ${{ secrets.REGISTRY_PASSWORD }} | docker login ${{ secrets.LOGIN_SERVER_URL }} -u ${{ secrets.REGISTRY_USERNAME }} --password-stdin
      - name: Get the version
        id: vars
        run: echo ::set-output name=tag::$(echo ${GITHUB_REF:10})
      - name: Build the tagged Docker image
        run: docker build . --file backend/Dockerfile --tag backend-demo/spring-boot:v1.0
The Dockerfile is:
FROM openjdk:14-alpine
MAINTAINER example.com
RUN mkdir -p /opt/demo-0.0.1/lib
# Setting application source code working directory
WORKDIR /opt/demo-0.0.1/
COPY target/demo-0.0.1-SNAPSHOT.jar /opt/demo-0.0.1/lib/demo-0.0.1-SNAPSHOT.jar
# ADD target/demo-0.0.1-SNAPSHOT.jar /opt/demo-0.0.1/lib/
RUN sh -c 'touch demo-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java"]
CMD ["-jar", "/opt/demo-0.0.1/lib/demo-0.0.1-SNAPSHOT.jar"]
But when I execute this workflow, I get this error at the COPY instruction:
Step 5/8 : COPY target/demo-0.0.1-SNAPSHOT.jar /opt/demo-0.0.1/lib/demo-0.0.1-SNAPSHOT.jar
COPY failed: stat /var/lib/docker/tmp/docker-builder851513197/target/demo-0.0.1-SNAPSHOT.jar: no such file or directory
##[error]Process completed with exit code 1.
I have been checking, and this looks like a typical error when the Dockerfile lives in a different directory, as in my instruction:
docker build . --file backend/Dockerfile --tag backend-demo/spring-boot:v1.0
I also don't have a .dockerignore file, and my Dockerfile is named exactly Dockerfile.
The target/demo-0.0.1-SNAPSHOT.jar file I am trying to copy is present in my GitHub repository.
I'm not sure what could be happening with the context, but perhaps this answer is a good hint?
When you run
docker build . --file backend/Dockerfile ...
The path argument . becomes the context directory. (Docker actually sends itself a copy of this directory tree, which is where the /var/lib/docker/tmp/... path comes from.) The source arguments of COPY and ADD instructions are relative to the context directory, not relative to the Dockerfile.
If your source tree looks like
.
+-- backend
|   \-- Dockerfile
\-- target
    \-- demo-0.0.1-SNAPSHOT.jar
that matches the Dockerfile you show. But if instead you have
.
\-- backend
    +-- Dockerfile
    \-- target
        \-- demo-0.0.1-SNAPSHOT.jar
you'll get the error you see.
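The resolution rule can be illustrated with plain shell string joining (hypothetical paths, not a real build):

```shell
# The COPY source is joined onto the context directory, never onto the
# directory that holds the Dockerfile.
context="."                                # the path argument to docker build
dockerfile_dir="backend"                   # where --file points; irrelevant to COPY
copy_src="target/demo-0.0.1-SNAPSHOT.jar"  # source path in the COPY instruction
echo "COPY looks for:   ${context}/${copy_src}"
echo "it does NOT look: ${dockerfile_dir}/${copy_src}"
```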
If you don't need to refer to anything outside the context directory, you can just change the directory you pass to docker build:
COPY target/demo-0.0.1-SNAPSHOT.jar /opt/demo-0.0.1/lib/demo-0.0.1-SNAPSHOT.jar
docker build backend ...
Or, if you do have other content you need to copy in, you need to change the COPY paths to be relative to the topmost directory.
COPY backend/target/demo-0.0.1-SNAPSHOT.jar /opt/demo-0.0.1/lib/demo-0.0.1-SNAPSHOT.jar
COPY common/config/demo.yml /opt/demo-0.0.1/etc/demo.yml
docker build . -f backend/Dockerfile ...
WORKDIR just sets the directory from which subsequent commands are executed. An important point: WORKDIR works with respect to the image's filesystem, not your local/git directory. In your example, WORKDIR does not make /opt/demo-0.0.1/ the build context; it just creates an empty /opt/demo-0.0.1/ directory inside the image. To make the Dockerfile work, give the full path in the COPY command, as in COPY /opt/demo-0.0.1/target/demo-0.0.1-SNAPSHOT.jar /opt/demo-0.0.1/lib/demo-0.0.1-SNAPSHOT.jar, and make sure the Dockerfile is at the same level as the /opt directory.
I intend to pass my npm token to gcp cloud build,
so that I can use it in a multistage build, to install private npm packages.
I have the following abridged Dockerfile:
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
RUN echo "NPM_TOKEN:: ${NPM_TOKEN}"
and the following abridged cloudbuild.yaml:
---
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: [ '-c', 'gcloud secrets versions access latest --secret=my-npm-token > npm-token.txt' ]
- name: gcr.io/cloud-builders/docker
  args:
    - build
    - "-t"
    - gcr.io/my-project/my-program
    - "."
    - "--build-arg NPM_TOKEN= < npm-token.txt"
    - "--no-cache"
I based my cloudbuild.yaml on the documentation, but I can't seem to put two and two together: the expression "--build-arg NPM_TOKEN= < npm-token.txt" does not work.
I have tested the Dockerfile by passing in the npm token directly, and it works. I simply have trouble passing a token from gcloud secrets as a build argument to docker.
Help is greatly appreciated!
Your goal is to get the secret file contents into the build argument. Therefore you have to read the file content using either NPM_TOKEN="$(cat npm-token.txt)" or NPM_TOKEN="$(< npm-token.txt)".
- name: gcr.io/cloud-builders/docker
  entrypoint: 'bash'
  args: [ '-c', 'docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN="$(cat npm-token.txt)" --no-cache' ]
Note: gcr.io/cloud-builders/docker uses the exec entrypoint form by default, which is why you set entrypoint to bash.
Also note that you save the secret into the build workspace (/workspace/..). This also allows you to copy the secret into your container as a file:
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
COPY npm-token.txt .
RUN echo "NPM_TOKEN:: $(cat npm-token.txt)"
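Since the goal is a multi-stage build, note that a COPY'd token remains in that stage's image layers. A minimal sketch (the stage layout and the "use the token" step are assumptions, not the asker's real build) keeps it out of the final image:

```dockerfile
# Build stage: the token file exists here only.
FROM ubuntu:14.04 AS build
COPY npm-token.txt .
# Hypothetical usage: read the token, consume it, then remove the file.
RUN NPM_TOKEN="$(cat npm-token.txt)" sh -c 'echo "use $NPM_TOKEN here"' && rm npm-token.txt

# Final stage: copy only the built artifacts from the build stage;
# the token file never enters this image's layers.
FROM ubuntu:14.04
# COPY --from=build <built artifacts> <dest>
```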
I wouldn't write your second step the way you did, but like this:
- name: gcr.io/cloud-builders/docker
  entrypoint: "bash"
  args:
    - "-c"
    - |
      docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN=$(cat npm-token.txt) --no-cache
I am trying to build my first Dockerfile for a Go application and use DroneCI for the build pipeline.
The DroneCI configuration looks as follows:
kind: pipeline
type: docker
name: Build auto git tagger

steps:
- name: test and build
  image: golang
  commands:
    - go mod download
    - go test ./test
    - go build ./cmd/git-tagger
- name: Build docker image
  image: plugins/docker
  pull: if-not-exists
  settings:
    username:
    password:
    repo:
    dockerfile: ./build/ci/Dockerfile
    registry:
    auto_tag: true

trigger:
  branch:
    - master
I have followed the structure convention from https://github.com/golang-standards/project-layout:
The Dockerfile looks as follows so far:
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
The next step is to copy the Go application binary into the container, which raises the question: where should the compiled binary go? At the moment, it ends up in the project folder.
You can specify the output directory and file name with go build's -o flag. For example:
go build -o ./build/package/foo ./cmd/git-tagger
Then edit your Dockerfile:
Load the binary you built with COPY or ADD
Execute it with ENTRYPOINT or CMD
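Putting those two edits together, one possible sketch (the binary name foo and its path follow the -o example above and are only placeholders):

```dockerfile
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Assumes the pipeline ran: go build -o ./build/package/foo ./cmd/git-tagger
COPY build/package/foo .
ENTRYPOINT ["./foo"]
```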
P.S. You specified the Dockerfile path as ./build/ci/Dockerfile in your config, but it's in the package dir on the screenshot. Keep in mind that the repository you linked is just somebody's personal opinion; Go doesn't force any particular structure on you. It depends on your company's style standards or your own preferences, so where you put the binary is not extremely important.
Newbie in Docker & Docker containers over here.
I'm trying to figure out how to run a script that is baked into an image from my bitbucket-pipeline process.
Some context about where I am and what I know:
In a Bitbucket-Pipelines step you can set any image to run in that specific step. What I already tried, and what works without problems, is using an image like alpine/node so I can run npm commands in my pipeline script:
definitions:
  steps:
    - step: &runNodeCommands
        image: alpine/node
        name: "Node commands"
        script:
          - npm --version
pipelines:
  branches:
    master:
      - step: *runNodeCommands
This means that each push to the master branch runs a build in which, using the alpine/node image, we can run npm commands like npm --version and install packages.
What I've done
Now I'm working with a custom container in which I install a few node packages (like eslint) to run commands, e.g. eslint file1.js file2.js
Great!
What I'm trying to do but don't know how
I have a local bash script awesomeScript.sh, which takes some input params, in my repository. So my bitbucket-pipelines.yml file looks like:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - ./awesomeScript.sh -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
I use the same awesomeScript.sh in several repositories, and I want to move that functionality into my Docker container and get rid of the script in each repository.
How can I build my Dockerfile so that I can run that script "anywhere" I use the Docker image?
PS:
I've been thinking of building a node module and installing it in the Docker image, like the eslint module... but I would like to know whether this is possible.
Thanks!
If you copy awesomeScript.sh to the my-container-with-eslint Docker image then you should be able to use it without needing the script in each repository.
Somewhere in the Dockerfile for my-container-with-eslint you can copy the script file into the image:
COPY awesomeScript.sh /usr/local/bin/
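If the execute bit was not set when the script was committed, adding a chmod alongside the COPY avoids a "permission denied" at run time (a sketch, assuming the script sits at the root of the image's build context):

```dockerfile
COPY awesomeScript.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/awesomeScript.sh
```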
Then in Bitbucket-Pipelines:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - awesomeScript.sh -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
As peterevans said, if you copy the script into your Docker image, you should be able to use it without needing the script in each repository.
In your Dockerfile, add the following line (you may use ADD too):
COPY awesomeScript.sh /usr/local/bin/
In Bitbucket-Pipelines:
pipelines:
  branches:
    master:
      - step:
          image: <your user name>/<image name>
          name: "Run script from the image"
          script:
            - awesomeScript.sh -a $PARAM1 -e $PARAM2