Where is Dockerfile running in OpenShift Jenkins Pipeline - docker

I am trying to send a file to the Docker image while building a Jenkins job in OpenShift.
I tried paths, changing directories, and some other methods, but couldn't achieve anything. I am trying to send files from inside my Jenkins pod to the Docker image, but the Dockerfile only copies from its own paths. Where is that? Where does the Jenkins pipeline store that Dockerfile and the stashed war file?
source:
  dockerfile: |-
    FROM wildfly
    COPY ROOT.war /wildfly/standalone/deployments/ROOT.war
    RUN ls -la .
    CMD $STI_SCRIPTS_PATH/run
  binary:
    asFile: ROOT.war
  type: Docker
strategy:
  dockerStrategy:
    from:
      kind: ImageStreamTag
      name: wildfly:latest
    paths:
      - destinationDir: test
        sourcePath: /tmp/myfile/.
  type: Docker
triggers: []
I am expecting my /tmp/myfile to end up in the Docker image, but paths is not working for me. I tried "COPY . .", but only the Dockerfile and ROOT.war exist in that path. What is the exact path that contains the Dockerfile and ROOT.war? I am thinking about copying the file into that path manually. Is that possible, or do you know any other way?
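One approach worth trying, sketched below and not verified against this exact BuildConfig, is to switch the binary input from a single file to a whole directory: with oc start-build --from-dir, everything inside that directory is sent as the build context, so it lands next to the Dockerfile and can be pulled in with a plain COPY. The build config name my-build-config and the staging directory are hypothetical placeholders.

# Hypothetical sketch: stage the war and the extra file together, then send
# the whole directory as the binary build input instead of a single file
mkdir -p /tmp/context
cp ROOT.war /tmp/context/ROOT.war
cp /tmp/myfile /tmp/context/myfile
oc start-build my-build-config --from-dir=/tmp/context --follow

With a directory-based context, the inline Dockerfile could then reference the extra file directly, e.g. COPY myfile /wildfly/standalone/deployments/, assuming that is where it needs to go.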

Related

Running docker from github actions can't find file added during previous step

This will be a decent read so I thank you a lot for trying to help :bow:
I am trying to write a GitHub Actions configuration that does the following two tasks:
Creates an autodeploy.xar file inside the build folder
Uses that folder, along with all the other files inside it, to create a Docker image.
The build process cannot find the folder/files that the previous step created. So I tried three things:
Use the file created in the previous step (within the same job in GitHub Actions), but I couldn't get it to run.
The build process threw an error complaining that the file doesn't exist: Error: buildx failed with: error: failed to solve: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
Split the work into two jobs, one to create the file and another that depends on it to build the Docker image. However, this gave the same error as attempt 1.
Build the Docker image from task 1 itself.
This step just runs a bash script from the GitHub Actions workflow.
I tried to run docker build . from inside the shell script, but GitHub Actions complained with "docker build" requires exactly 1 argument.
I was providing the right argument, because on echoing the command I clearly saw the output docker build . --file Dockerfile --tag ***/***:latest --build-arg ADMIN_PASSWORD=***
This must be something very trivial, but I have no idea what's going wrong. And I think a solution to either one of these approaches should work.
Thanks once again for going through all this. Please find the GitHub Actions workflow, workflow.sh, and the Dockerfile below:
The GitHub actions yml file:
name: ci
on:
  push:
    branches:
      - 'build'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Run script to replace template file
        run: |
          build/workflow.sh
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ${{secrets.DOCKERHUB_USERNAME}}/${{secrets.REPO_NAME}}:latest
          build-args: |
            ADMIN_PASSWORD=${{secrets.ADMIN_PASSWORD}}
The workflow file:
# run the ant
ant <--------- This command just creates autodeploy.xar file and puts it inside the build directory
#### I TESTED WITH AN ECHO COMMAND AND THE FILES ARE ALL THERE:
# echo $(ls build)
The docker file:
# Specify the eXist-db release as a base image
FROM existdb/existdb:6.0.1
COPY build/autodeploy.xar /exist/autodeploy/ <------ THIS LINE FAILS
COPY conf/controller-config.xml /exist/etc/webapp/WEB-INF/
COPY conf/exist-webapp-context.xml /exist/etc/jetty/webapps/
COPY conf/conf.xml /exist/etc
# Ports
EXPOSE 8080 8444
ARG ADMIN_PASSWORD
ENV ADMIN_PASSWORD=$ADMIN_PASSWORD
# Start eXist-db
CMD [ "java", "-jar", "start.jar", "jetty" ]
RUN [ "java", "org.exist.start.Main", "client", "--no-gui", "-l", "-u", "admin", "-P", "", "-x", "sm:passwd('admin','$ADMIN_PASSWORD')" ]
The error saying file was not found:
#5 [2/6] COPY build/autodeploy.xar /exist/autodeploy/
#5 ERROR: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
#4 [1/6] FROM docker.io/existdb/existdb:6.0.1#sha256:fa537fa9fd8e00ae839f17980810abfff6230b0b9873718a766b767a32f54ed6
This is dumb, but the only thing I needed to change was adding context: . to the build step in the GitHub Actions workflow:
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .
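For context, docker/build-push-action defaults to the Git context (a fresh checkout of the repository at the pushed commit) when no context is given, so files generated by earlier steps in the workspace are not visible to the build; pointing context at the checked-out workspace fixes that. A full corrected step might look like this (a sketch reusing the tags and build-args from the question):

- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .   # build from the checked-out workspace, not the default Git context
    push: true
    tags: ${{secrets.DOCKERHUB_USERNAME}}/${{secrets.REPO_NAME}}:latest
    build-args: |
      ADMIN_PASSWORD=${{secrets.ADMIN_PASSWORD}}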

Docker secret not mounting to default location

I'm using docker-compose to produce a docker image which requires access to a secure Azure Artifacts directory via Paket. As I'm sure at least some people are aware, Paket does not have default compatibility with the Azure Artifacts Credential Provider. To gain the access I need, I'm trying to mount the access token produced by the credential provider as a secret, then consume it using cat within a paket config command. cat then returns an error message stating that the file is not found at the default secret location.
I'm running this code within an Azure Pipeline on the Microsoft-provided ubuntu-latest agent.
Here are the relevant code snippets (it's possible I'm going into too much detail):
docker-compose.ci.build.yml:
version: '3.6'
services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential
secrets:
  azure_credential:
    file: ./credential.txt
dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:6.0.102-bullseye-slim-amd64 AS build
LABEL maintainer="<Engineering lead>"
WORKDIR /src
<Various COPY instructions>
RUN dotnet tool restore
RUN dotnet paket restore
RUN --mount=type=secret,id=azure_credential dotnet paket config add-token "<ARTIFACT_FEED_URL>" "$(cat /run/secrets/azure_credential)"
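As a side note, with BuildKit a secret supplied via --mount=type=secret,id=azure_credential is exposed at /run/secrets/azure_credential only for the duration of that single RUN instruction. A quick debugging sketch (a hypothetical extra instruction, not part of the original Dockerfile) to confirm the secret is actually mounted during the build:

# Hypothetical debug instruction: list the secrets directory and check the
# token file size while the secret is mounted for this RUN only
RUN --mount=type=secret,id=azure_credential ls -la /run/secrets/ && wc -c /run/secrets/azure_credential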
Azure pipeline definition YAML:
jobs:
  - job: BuildPublish
    displayName: Build & Publish
    steps:
      - task: PowerShell@2
        displayName: pwsh build.ps1
        inputs:
          filePath: ${{ parameters.workingDirectory }}/.azure-pipelines/build.ps1
          pwsh: true
          workingDirectory: ${{ parameters.workingDirectory }}
        env:
          SYSTEM_ACCESSTOKEN: $(System.AccessToken)
The relevant lines of the powershell script initiating docker-compose:
$projectRoot = Split-Path -Path $PSScriptRoot -Parent
Push-Location -Path $projectRoot
try {
    ...
    Out-File -FilePath ./credential.txt -InputObject $Env:SYSTEM_ACCESSTOKEN
    ...
    & docker-compose -f ./docker-compose.ci.build.yml build
    ...
}
finally {
    ...
    Pop-Location
}
The error message:
0.276 cat: /run/secrets/azure_credential: No such file or directory
If there's other relevant code, let me know.
I tried to verify that the environment variable housing the secret on the agent even existed and that its value was being saved to the ./credential.txt file for mounting into the image, and I confirmed the text file was being created properly. I've tried fiddling with the syntax of all the relevant commands; as a fun fact, the Docker docs show two different versions of the mounting syntax, but the other version just crashed. I also tried Windows-style default pathing in case my source image was a Windows one, but it doesn't appear to be.
Essentially, here's where I've left it: I know that the file ./credential.txt exists and contains some value. I know my mounting syntax is correct, or Docker would crash. The issue appears to be something to do with the default mounting path and/or how docker-compose embeds its secrets.
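One thing worth ruling out at this point is whether BuildKit is enabled for the compose build, since the RUN --mount=type=secret syntax only works under BuildKit. A sketch for forcing it with docker-compose v1 (newer Docker and Compose releases enable BuildKit by default):

# Hypothetical: force BuildKit for the docker-compose build
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose -f ./docker-compose.ci.build.yml build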
I figured it out. For reasons I do not understand, the path to the mounted secret has to be defined as an environment variable in the docker-compose YAML. So, like this:
version: '3.6'
services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential
    environment:
      AZURE_CREDENTIAL_FILE: /run/secrets/azure_credential
secrets:
  azure_credential:
    file: credential.txt
This solved the issue. If anyone knows why this solved the issue, I'd love to hear.

Configure bitbucket-pipeline.yml to use a DockerFile from repository to build image when running a pipeline

I am new to creating Bitbucket pipelines to automate building a specific branch after a merge.
The project is written in C++ and has the following structure:
PROJECT FOLDER
- .devcontainer/
  - devcontainer.json
- bin/
- doc/
- lib/
- src/
  - CMakeLists.txt
  - ...
- CMakeLists.txt
- clean.sh
- compile.sh
- configure.sh
- DockerFile
- bitbucket-pipelines.yml
We created a DockerFile with all the settings required to build the project. Is there any way to have the image referenced in bitbucket-pipelines.yml built from the DockerFile in the repository?
I have been able to upload the Docker image to my Docker Hub account and use it with my credentials by defining:
image:
  name: <dockerhubname>/<dockername>
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
but I am not sure how to make Bitbucket take the DockerFile from the repository and use it to build the image, or whether doing it this way will increase the build time.
Thanks in advance!
If you want to build your image during the pipeline, you need the same steps as if the image were built on your machine:
Build your image: docker build -t $APP_NAME .
Push it to your repo (e.g. Docker Hub): docker push $APP_NAME:$VERSION
You can do something like this:
steps:
  - step: &build
      name: Build Docker Image
      services:
        - docker
      script:
        - docker build -t $APP_NAME .
        - docker push $APP_NAME:$VERSION
Keep in mind that every step in your pipeline runs in a Docker container, which lets you do whatever you want. The docker service gives you an out-of-the-box Docker client. Then, after the image is pushed, you can use it in another step; you just need to specify the image for that step, as sketched below.
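For example, a later step could run directly on the image pushed by the build step (a sketch using the placeholder image name from the question; the step name and script command are illustrative only):

- step:
    name: Run inside the freshly built image
    image: <dockerhubname>/<dockername>:latest   # the image pushed in the previous step
    script:
      - ./compile.sh   # any command available inside that image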

unable to upload non-image artifacts with cloud build

I have a very simple container (effectively the Cloud Build quickstart sample code) that generates a file. I'm trying to extend this container to upload said file to a bucket via the documentation on storing non-image artifacts with Cloud Build.
My Dockerfile builds a trivial container and executes a single script:
FROM alpine
WORKDIR /app
# the only file present is quickstart.sh
COPY . /app
CMD ["./quickstart.sh"]
The script (quickstart.sh) generates a simple timestamp file:
#!/bin/sh
echo "Creating file 'time.txt'"
echo "The time is $(date)" > time.txt
## for debugging:
# pwd
# ls
# cat time.txt
My cloudbuild.yaml file is basically copy-pasted from the aforementioned docs, and is configured to upload the file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
artifacts:
  objects:
    location: 'gs://my-bucket/'
    paths: ['*.txt']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
However, the file fails to upload and the build fails as a result. When I run the build command
gcloud builds submit --config cloudbuild.yaml .
All logs are successful until the end:
Artifacts will be uploaded to gs://my-bucket using gsutil cp
*.txt: Uploading path....
CommandException: No URLs matched: *.txt
CommandException: 1 file/object could not be transferred.
ERROR
ERROR: could not upload *.txt to gs://my-bucket/; err = exit status 1
Here gsutil claims that no matching file can be found. However, if I build manually and generate the file, I can use gsutil cp *.txt gs://my-bucket/ to upload it with no problem. So it's almost as if the file is wiped before Cloud Build reaches the "upload artifacts" step, but that doesn't seem to make sense. I imagine this is a pretty common use case, but I'm not making any progress with the documentation alone. Any ideas? Thanks.
The issue here is that with the current steps you are only building the container image, not running it, so the time.txt file never gets created. And even when you run the container, the file is created inside the container, so you need to copy it out so that gsutil can "see" the file.
I added 2 steps in the cloudbuild.yaml file to do this:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'run', '--name', 'containername', 'gcr.io/$PROJECT_ID/quickstart-image' ]
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'cp', 'containername:/app/time.txt', './time.txt' ]
artifacts:
  objects:
    location: 'gs://mybucket/'
    paths: ['*.txt']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
I hope this works for you.
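The docker cp step drops time.txt into the build's working directory (/workspace), which is shared between steps and is where the artifacts uploader looks for *.txt. If the upload still fails, a hypothetical extra step before the artifacts stage can confirm the file is really there:

# Hypothetical debug step: list the shared workspace so time.txt can be
# confirmed before the artifacts upload runs
- name: 'ubuntu'
  args: ['ls', '-la', '/workspace']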

Docker with BitBucket

I'm trying to make automatic publishing using docker + bitbucket pipelines; unfortunately, I have a problem. I read the pipelines deploy instructions on Docker Hub, and I created the following template:
# This is a sample build configuration for Docker.
# Check our guides at https://confluence.atlassian.com/x/O1toN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        services:
          - docker
        script: # Modify the commands below to build your repository.
          # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
          - export IMAGE_NAME=paweltest/tester:$BITBUCKET_COMMIT
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t paweltest/tester .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push paweltest/tester:tagname
I have completed the data, but after doing the push, I get the following error when the build starts:
unable to prepare context: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such file or directory
What do I want to achieve? After pushing changes to the repository, I'd like an image to be automatically built and sent to Docker Hub, and preferably to the target server where the application runs.
I've looked for a solution and tried different combinations. For now, I have about 200 commits with Failed status and no further ideas.
Bitbucket Pipelines is a CI/CD service: you can build your applications and deploy resources to production or test server instances. You can build and deploy Docker images too; it shouldn't be a problem unless you do something wrong.
All scripts defined in the bitbucket-pipelines.yml file run in a container created from the indicated image (atlassian/default-image:2 in your case).
You should have a Dockerfile in the project; from that file you can build and publish a Docker image.
I created a simple repository without a Dockerfile and started a build:
unable to prepare context: unable to evaluate symlinks in Dockerfile
path: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such
file or directory
I need a Dockerfile in my project to build an image (at the same level as the bitbucket-pipelines.yml file):
FROM node:latest
WORKDIR /src/
EXPOSE 4000
In the next step I created a public Docker Hub repository.
I also changed your bitbucket-pipelines.yml file (you forgot to tag the new image):
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t appngpl/stackoverflow-question-56065689 .
          # add new image tag
          - docker tag appngpl/stackoverflow-question-56065689 appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
Result: everything works fine :)
Bitbucket repository: https://bitbucket.org/krzysztof-raciniewski/stackoverflow-question-56065689
Docker Hub image repository: https://hub.docker.com/r/appngpl/stackoverflow-question-56065689
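Applied to the original paweltest/tester pipeline from the question, the minimal fix would look something like this (a sketch that reuses the IMAGE_NAME variable the question already exported but never used):

script:
  - export IMAGE_NAME=paweltest/tester:$BITBUCKET_COMMIT
  # build and tag the image in one go, then push that exact tag
  - docker build -t $IMAGE_NAME .
  - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
  - docker push $IMAGE_NAME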
