Can I use skaffold build with source code that contains a Dockerfile?

I need to clone some repo (say git clone abc.git), switch the context to abc, and then use the Dockerfile inside abc to build with Skaffold, passing custom ARGs.
Is it possible to achieve this via Skaffold?
For example:
- name: test-build
  build:
    tagPolicy:
      envTemplate:
        template: 1.0.0
    artifacts:
      - image: dockerhub.io/test-build
        requires:
          command: ["git", "clone", "abc.git"]
        context: ./abc

This is one way that worked for me, using Skaffold's custom build:
- name: test-build
  build:
    tagPolicy:
      envTemplate:
        template: 1.0.0
    artifacts:
      - image: dockerhub.io/test-build
        custom:
          buildCommand: docker build github.com/abc.git#v1
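To also pass custom build ARGs, one option is to point buildCommand at a small script that clones the repo and runs docker build itself; a minimal sketch, where the script name build.sh and the MY_ARG build argument are illustrative rather than from the original answer (Skaffold exports IMAGE and PUSH_IMAGE to custom build commands):

- name: test-build
  build:
    tagPolicy:
      envTemplate:
        template: 1.0.0
    artifacts:
      - image: dockerhub.io/test-build
        custom:
          buildCommand: ./build.sh

And build.sh, roughly:

#!/bin/sh
set -e
# Clone the external repo and build its Dockerfile with a custom ARG.
# $IMAGE and $PUSH_IMAGE are provided by Skaffold to custom build commands.
rm -rf abc && git clone abc.git abc
docker build --build-arg MY_ARG=some-value -t "$IMAGE" ./abc
if [ "$PUSH_IMAGE" = "true" ]; then
  docker push "$IMAGE"
fi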

Related

CircleCI: Can we use multiple workflows for multiple types?

I'm new to CircleCI. I want to provision my infrastructure via Terraform, and after that I also want to trigger my build, deploy, and push commands for the AWS side. But CircleCI does not let me use plan_approve_apply and build-and-deploy together in one workflow. I also tried to create multiple workflows (like the example below), one for each, but that didn't work either. How can I call both in a single CircleCI config file?
My CircleCI config.yml file:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@8.1.0
  aws-ecs: circleci/aws-ecs@2.2.1
jobs:
  init-plan:
    working_directory: /tmp/project
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - checkout
      - run:
          name: terraform init & plan
          command: |
            terraform init
            terraform plan
      - persist_to_workspace:
          root: .
          paths:
            - .
  apply:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: terraform
          command: |
            terraform apply
      - persist_to_workspace:
          root: .
          paths:
            - .
  destroy:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: destroy
          command: |
            terraform destroy
      - persist_to_workspace:
          root: .
          paths:
            - .
workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
workflows: # didn't work
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"
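A YAML file cannot contain two top-level workflows: keys, which is why the second block above is rejected. As a sketch of how both sets of jobs could live in one config (reusing the names above), both workflows go under the single workflows: key that CircleCI expects:

workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          # same parameters as in the original config
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"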

Docker: Re-use container image by caching

Here I have two jobs in a workflow. The only goal we want to achieve is to reuse the container images by using a cache or some other means, similar to the way we do it for node_modules.
jobs:
  build:
    name: build
    runs-on: [self-hosted, x64, linux, research]
    container:
      image: <sample docker image>
      env:
        NPM_AUTH_TOKEN: <sample token>
    steps:
      - uses: actions/checkout@v2
      - name: Install
        run: |
          npm install
      - name: Build
        run: |
          npm build
  Test:
    name: Test Lint
    runs-on: [self-hosted, x64, linux, research]
    container:
      image: <sample docker image>
      env:
        NPM_AUTH_TOKEN: <sample token>
    steps:
      - uses: actions/checkout@v2
      - name: Install Dependencies
        run: npm ci
      - name: Lint Check
        run: npm run lint
I would suggest using Docker's build-push-action for this purpose. Through build-push-action, you can cache your container images by using the inline cache, the registry cache, or the experimental cache backend API:
Inline cache
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=registry,ref=user/app:latest
    cache-to: type=inline
Refer to the Buildkit docs.
Registry Cache
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=registry,ref=user/app:buildcache
    cache-to: type=registry,ref=user/app:buildcache,mode=max
Refer to Buildkit docs.
Cache backend API
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
Refer to Buildkit docs.
I personally prefer using the cache backend API, as it's easy to set up and provides a great boost in reducing the overall CI pipeline run duration.
By looking at the comments, it seems you want to share the Docker cache between workflows. In that case, you can share Docker images between jobs in a workflow using this example:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          tags: myimage:latest
          outputs: type=docker,dest=/tmp/myimage.tar
      - name: Upload artifact
        uses: actions/upload-artifact@v2
        with:
          name: myimage
          path: /tmp/myimage.tar
  use:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Download artifact
        uses: actions/download-artifact@v2
        with:
          name: myimage
          path: /tmp
      - name: Load Docker image
        run: |
          docker load --input /tmp/myimage.tar
          docker image ls -a
In general, data is not shared between jobs in GitHub Actions (GHA). Jobs actually run in parallel on distinct ephemeral VMs unless you explicitly create a dependency with needs.
GHA does provide a cache mechanism. For package-manager-style caching, they have simplified it; see here.
For Docker images, you can either use docker buildx cache and cache to a remote registry (including ghcr), or use the GHA cache action, which is probably easier. The syntax for actions/cache is pretty straightforward and clear on its page. For buildx, documentation has always been a bit of an issue (largely, I think, because the people building it are so smart that they do not realize how much we do not understand what is in their hearts), so you would need to configure the cache action, and then configure buildx to cache to it.
Alternatively, you could do docker save imagename > imagename.tar and use that in the cache. There is a decent example of that here. No idea who wrote it, but it does the job.
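A minimal sketch of that docker save approach, assuming actions/cache, an image named myimage, and a cache key based on the Dockerfile (all illustrative, not from the original answer):

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Restore (or later save) a directory holding the image tarball
      - name: Cache Docker image tarball
        id: image-cache
        uses: actions/cache@v3
        with:
          path: /tmp/docker-cache
          key: docker-image-${{ hashFiles('Dockerfile') }}
      - name: Build and save the image only when the cache missed
        if: steps.image-cache.outputs.cache-hit != 'true'
        run: |
          docker build -t myimage:latest .
          mkdir -p /tmp/docker-cache
          docker save myimage:latest > /tmp/docker-cache/myimage.tar
      - name: Load the cached image when the cache hit
        if: steps.image-cache.outputs.cache-hit == 'true'
        run: docker load --input /tmp/docker-cache/myimage.tar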

How to set Bitbucket Pipelines to be manually triggered?

I wrote a pipeline in the Bitbucket environment, but I would like the pipeline to be triggered only when a user runs it, not automatically on push or commit.
Here is the code:
pipelines:
  branches:
    new_ui_apk:
      - step:
          name: Build apk
          size: 2x
          script:
            - JAVA_OPTS="-Xmx2048m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"
            - docker build -t app-release:1.0.0 .
          services:
            - docker
definitions:
  services:
    docker:
      memory: 7128
Actually, I use the [skip ci] tip to avoid it, but if another team member pushes or commits any change, the pipeline will run. How else can I avoid it, please?
If you put the definition under the "custom" property, it stops listening to branches and only acts when a user triggers it.
Use this:
pipelines:
  custom:
    new_ui_apk:
      - step:
          name: Build apk
          size: 2x
          script:
            - JAVA_OPTS="-Xmx2048m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"
            - docker build -t app-release:1.0.0 .
          services:
            - docker
definitions:
  services:
    docker:
      memory: 7128
That answer is not ideal; you only need to add trigger: manual:
- step:
    image: XXX
    name: XXXX
    deployment: XXXX
    trigger: manual
    script:
      - whatever....
And an option to run it will be shown inside the pipeline options.

CircleCI: how to use a container built in a previous job?

I have this simple config:
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: |
          docker-compose -f docker-compose.test.yml build
  test:
    machine: true
    working_directory: ~/app
    steps:
      - checkout
      - run:
          command: make test
          name: Test
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test:
          requires:
            - build
My test job, the second one, fails for some reason. Whenever I check the logs on CircleCI, I can see that the image is always rebuilt in this job. I was expecting that the job would use the image that was built in the build job. So the question is: why are images not shared across jobs?
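Each CircleCI job runs on its own executor with its own Docker daemon, so images built in one job are not visible in the next. One way to carry the image across jobs (a sketch, not from the original thread: the tarball name image.tar and the image name myimage:latest are illustrative and would need to match whatever docker-compose actually tags) is to docker save it in the build job, persist it to the workspace, and docker load it in test:

version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: docker-compose -f docker-compose.test.yml build
      # Export the built image so it can cross the job boundary
      - run: docker save -o image.tar myimage:latest
      - persist_to_workspace:
          root: .
          paths:
            - image.tar
  test:
    machine: true
    working_directory: ~/app
    steps:
      - checkout
      - attach_workspace:
          at: .
      # Restore the image into this job's Docker daemon before running tests
      - run: docker load -i image.tar
      - run:
          command: make test
          name: Test
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test:
          requires:
            - build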

How to use docker image from build step in deploy step in CircleCI 2.0?

I've been struggling for the last few days to migrate from CircleCI 1.0 to 2.0, and while the build process is done, deployment is still a big issue. The CircleCI documentation is not really much help.
Here is a config.yml similar to what I have:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
The issue is in the deploy job. I have to specify the docker: image key, but I want to reuse the environment from the build job where all the required stuff is already installed. Sure, I could just install it in the deploy job too, but having multiple deploy jobs leads to code duplication, which is something I do not want.
You probably want to persist to a workspace and attach it in your deploy job.
You won't need to use '- checkout' after that.
https://circleci.com/docs/2.0/configuration-reference/#persist_to_workspace
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
If you label the image built by the build stage, you can then reference it in the deploy stage: https://docs.docker.com/compose/compose-file/#labels
