docker image for dbt-snowflake - docker

How can I run dbt commands within a Bitbucket pipeline? What is the correct Docker image to use if I want to use dbt-snowflake? I tried both fishtownanalytics/dbt and joevandyk/dbt:
pipelines:
  custom:
    test-dbt:
      - step:
          name: 'Test'
          image: fishtownanalytics/dbt
          script:
            - cd dbt_4flow
            - dbt compile
but I still get this error:
+ dbt compile
bash: dbt: command not found

When I try to pull the image locally with
docker pull fishtownanalytics/dbt
it fails, but once I add the version:
docker pull fishtownanalytics/dbt:1.0.0
it works.
Add the version tag to the image in your Bitbucket pipeline and it should work there as well.

As mentioned above, you need to specify a version tag for the fishtownanalytics image.
image: fishtownanalytics/dbt:1.0.0
Here's a full tutorial for creating a similar pipeline: https://medium.com/geekculture/automate-dbt-runs-with-bitbucket-pipelines-3e7528ff991f
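For reference, the step from the question with the tag pinned might look like the following; the 1.0.0 tag comes from the answers above, so substitute whichever dbt version you actually need:
pipelines:
  custom:
    test-dbt:
      - step:
          name: 'Test'
          image: fishtownanalytics/dbt:1.0.0
          script:
            - cd dbt_4flow
            - dbt compile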

Related

BitBucket Pipeline log has no output when using custom image

I'm attempting to POC BitBucket Pipelines for some terraform work. I've got a self-hosted runner, running locally in my Docker environment, which is registered to my repository. This was set up following the generic instructions in the BitBucket UI.
My bitbucket-pipelines.yml file looks like this:
pipelines:
  branches:
    master:
      - step:
          runs-on: self.hosted
          image: hashicorp/terraform:latest
          name: 'Terraform Version'
          script:
            - terraform -v
Extremely basic, just run a terraform -v command on the hashicorp/terraform image.
The pipeline succeeds, and I can see the image is pulled, however there is absolutely no output in BitBucket from the container. All I see in the step log is:
Runner matching labels:
- linux
- self.hosted
Runner name: my-runner
Runner labels: self.hosted, linux
Runner version:
current: 1.252
latest: 1.252
Images used:
build: hashicorp/terraform#sha256:984ac701744995019b1309b542de03535a63097444e72b8f248d0a0d95520443
Even a simple echo "string" script does not show up in the log output. I find that really strange, and I must be missing something fundamental. I've scoured the docs and can't find anything.
Does anyone know how to get the output from a custom image into the Bitbucket logs?
Do you use Docker Desktop on Windows?
You won't see any logs from containers if you use Docker Desktop (tested on 4.3.2) on Windows with WSL integration. That's because the container logs are stored in a different location, which is not accessible to the Bitbucket runner container.
-- Update --
There's now a feature request to add full WSL compatibility for local runners. Please vote for it if you need it too:
https://jira.atlassian.com/browse/BCLOUD-21611
I had a similar issue where I was getting no logs in my Pipeline output UI, though the ultimate status was reflected correctly (i.e. pass or fail).
I was using the command provided by Bitbucket to create a Linux Docker runner, and I noticed it contains this volume definition:
... -v /var/lib/docker/containers:/var/lib/docker/containers:ro ...
However, I am using a custom data-root for docker (see this blog for details), so the path /var/lib/docker/containers doesn't exist on my host machine. So, I modified this volume to point at my data-root setting, and then the logs showed up as expected.
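As a rough illustration (the custom data-root path below is only a placeholder for whatever is set in your daemon.json), the adjusted volume flag in the runner's docker run command would look something like:
# original mount from the Bitbucket-provided command
-v /var/lib/docker/containers:/var/lib/docker/containers:ro
# adjusted mount when docker uses a custom data-root, e.g. /data/docker
-v /data/docker/containers:/var/lib/docker/containers:ro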

sudo: command not found | gitlab-ci

I'm using gitlab-ci for my simple project.
Everything is OK: my runner is working on my local machine (Ubuntu 18.04) and I tested it with a simple .gitlab-ci.yml.
Now I try to use the following yml:
image: ubuntu:18.04

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - sudo apt-get update
but I get the following error:
/bin/bash: line 110: sudo: command not found
How can I use sudo?
You shouldn't have to worry about updating the Ubuntu image used in a Gitlab CI pipeline job because the docker container is destroyed when the job is finished. Furthermore, the docker images are frequently updated. If you look at ubuntu:18.04's docker hub page, it was just updated 2 days ago: https://hub.docker.com/_/ubuntu?tab=tags&page=1&ordering=last_updated
Since you're doing an update here, I'm going to assume that next you might want to install some packages. It's possible to do so, but not advised, since every pipeline that you run will have to install those packages, which can really slow them down. Instead, you can create a custom docker image based on a parent image and customize it that way. Then you can either upload that docker image to docker hub, Gitlab's registry (if using self-hosted Gitlab, it has to be enabled by an admin), or build it on all of your gitlab-runners.
Here's a dumb example:
# .../custom_ubuntu:18.04/Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y git
Next you can build the image: docker build /path/to/directory/that/has/dockerfile, then tag it so you can reference it in your pipeline config file: docker tag aaaaafffff59 my_org/custom_ubuntu:18.04. Then, if needed, you can upload the tagged image: docker push my_org/custom_ubuntu:18.04.
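Put together, that build/tag/push sequence might look roughly like this (the image ID and names are the placeholders from above):
# build the image from the directory containing the Dockerfile
docker build /path/to/directory/that/has/dockerfile
# tag the resulting image ID so the pipeline can reference it by name
docker tag aaaaafffff59 my_org/custom_ubuntu:18.04
# push the tagged image to a registry if your runners need to pull it
docker push my_org/custom_ubuntu:18.04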
In your .gitlab-ci.yml file, reference this custom Ubuntu image:
image: my_org/custom_ubuntu:18.04

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - git --version # ensures the package you need is available
You can read more about using custom images in Gitlab CI here: https://docs.gitlab.com/charts/advanced/custom-images/

How do I build a docker-compose container from a git-resource in Concourse CI?

I am currently trying to build and deploy a dockerized Go project, pulled from a Git repo, using Concourse.
To give you some background about my current setup:
I have two AWS Lightsail instances set up, both of which run Concourse in a Docker container.
One of those instances is serving the web node, the other one is acting as a worker node, which connects to the web node.
My current pipeline looks like this:
resources:
  - name: zsu-wasserlabor-api-repo
    type: git
    webhook_token: TOP_SECRET
    source:
      uri: git@github.com:lennartschoch/zsu-wasserlabor-api
      branch: master
      private_key: TOP_SECRET

jobs:
  - name: build-api
    plan:
      - get: zsu-wasserlabor-api-repo
        trigger: true
      - task: build
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: alpine}
          inputs:
            - name: zsu-wasserlabor-api-repo
          run:
            path: sh
            args:
              - -c
              - |
                cd zsu-wasserlabor-api-repo
                docker-compose build
The problem is that docker-compose is not installed.
I feel like I am doing something fundamentally wrong. Could anyone give me a hint?
Best,
Lennart
The pipeline described above specifies that it should use the alpine image, which doesn't have docker-compose on it. Thus, you will need to find an image that has docker-compose installed on it, but even then, there are additional steps you will need to take to make it work in Concourse (see this link for more details).
Fortunately, someone has made an image available that takes care of the additional steps, with a sample pipeline that you can find here: https://github.com/meAmidos/dcind
That being said, if you are simply trying to build a Docker image, you can use the docker-image-resource instead and just specify the Dockerfile.
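As a rough sketch of that last suggestion (the registry repository name and credential variables below are placeholders, not part of the original setup), building the image with the docker-image resource could look something like this:
resources:
  - name: zsu-wasserlabor-api-repo
    type: git
    source:
      uri: git@github.com:lennartschoch/zsu-wasserlabor-api
      branch: master
      private_key: TOP_SECRET

  - name: api-image
    type: docker-image
    source:
      repository: my-registry/zsu-wasserlabor-api   # placeholder repository
      username: ((registry-username))               # placeholder credentials
      password: ((registry-password))

jobs:
  - name: build-api
    plan:
      - get: zsu-wasserlabor-api-repo
        trigger: true
      - put: api-image
        params:
          build: zsu-wasserlabor-api-repo   # directory containing the Dockerfile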

Circleci passing a docker image in workflow jobs

Is it possible to pass Docker images built in an earlier job to a later job in CircleCI?
For example:
jobs:
  build:
    steps:
      - checkout
      # build image
  deploy:
    steps:
      # deploy earlier image
I can't see how I can access the image in the deploy job without rebuilding it.
Each job can run on a different host, so to share the image you would need to push it to a registry from the job that builds it.
To reference the image that was pushed, you'll need an identifier that is known ahead of time. A good example is the CIRCLE_SHA1 environment variable, which you can use as the image tag:
jobs:
  build:
    machine: true
    steps:
      ...
      - run: |
          docker build -t repo/app:$CIRCLE_SHA1 .
          docker push repo/app:$CIRCLE_SHA1
  test:
    docker:
      - image: repo/app:$CIRCLE_SHA1
    steps:
      ...
I believe you can achieve this by persisting the image to a workspace and then attaching the workspace when you want to deploy it. See CircleCI's workspace documentation here: https://circleci.com/docs/workspaces
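A minimal sketch of that workspace approach (job names, image name, and the tar filename are placeholders, and the workflow wiring is omitted for brevity) might look like:
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: |
          docker build -t repo/app:latest .
          # serialize the built image so it can travel through the workspace
          docker save repo/app:latest -o app-image.tar
      - persist_to_workspace:
          root: .
          paths:
            - app-image.tar
  deploy:
    machine: true
    steps:
      - attach_workspace:
          at: .
      - run: |
          # restore the image on this job's host before deploying
          docker load -i app-image.tar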

Bitbucket Pipelines - steps - docker - can't find image

I'm building my pipeline to create a docker image, then push it to AWS. I have it broken into steps, and in Bitbucket, you have to tell it what artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image created...
Here's my pipeline script (some of it simplified)
image: atlassian/default-image:latest
options:
  docker: true
pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works, but only if I remove all the steps. For whatever reason, step 4 doesn't have access to the docker image created in step 3. Any help is appreciated!
Your docker images are not stored in the folder where you start the build, so they are not saved as artefacts and are not available in the next step.
Even if they were (you could pack/unpack them through docker save), you would probably run up against the size limits for artefacts, not to mention the time it takes to pack and unpack them.
I guess you'd be better off creating a Dockerfile in your project yourself and combining steps 1 & 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses Docker as a service, and your single step would then consist of building your project's Dockerfile and uploading the image to AWS. This also lowers your dependency on Bitbucket Pipelines, as most of the build logic lives in your own Dockerfile.
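If you did want to hand the image between steps anyway, the docker save route mentioned above might look roughly like this (the tar filename is just an example, and the size/time caveats still apply):
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # serialize the image so it can be passed along as an artifact
            - docker save $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER -o image.tar
          artifacts:
            - image.tar
      - step:
          name: 4. Push Docker Image to AWS
          script:
            - export DOCKER_PROJECT_NAME=projectname
            # restore the image from the artifact before tagging and pushing
            - docker load -i image.tar
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER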
The Docker image is not being passed from step 3 to step 4 as the Docker image is not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step as follows:
image: atlassian/default-image:latest
options:
  docker: true
pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
