I have to run an already-configured sonar-scanner from Bitbucket. The thing is, I am new to all of these (Bitbucket, sonar-scanner, Docker) and I need to integrate them so that from this point on I can run sonar-scanner from Bitbucket, and later use more advanced analysis from sonar-scanner.
I tried to build a Docker image that runs sonar-scanner, but didn't manage to build it. So I got one from GitHub directly, but didn't manage to use it from Bitbucket.
I took a look at this thread; it uses GitLab, but it is similar to what I need:
Launching Sonar Scanner from a gitlab docker runner
bitbucket-pipelines.yml
# This is a sample build configuration for Docker.
# Check our guides at https://confluence.atlassian.com/x/O1toN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        services:
          - docker
        script: # Modify the commands below to build your repository.
          # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
          - export IMAGE_NAME=emeraldsquad/sonar-scanner:$BITBUCKET_COMMIT
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          # RETURNS ERROR: docker build -t $IMAGE_NAME .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push $IMAGE_NAME
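Since the immediate goal is just to run sonar-scanner from Bitbucket, building a custom image may not be necessary at all. Here is a minimal sketch that uses the stock scanner image directly as the step image; it assumes a sonar-project.properties at the repository root and SONAR_HOST_URL / SONAR_TOKEN configured as repository variables (both variable names are my assumption, not something from the config above):
image: sonarsource/sonar-scanner-cli:latest

pipelines:
  default:
    - step:
        name: SonarQube analysis
        script:
          # sonar-scanner reads sonar-project.properties from the clone directory;
          # the server URL and token are passed in from the repository variables
          - sonar-scanner -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN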
I work on a Spring Boot based project and use a local machine as a test environment, deploying it as a Docker container.
I am in the middle of creating a Bitbucket pipeline that automates everything between building and deploying. For this pipeline I use a self-hosted runner (Docker) that runs on the same machine and Docker instance where I plan to deploy my project.
I managed to successfully build the project (mvn and docker) and load the Docker image into my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since the step itself runs in a container, it was not running the script against the top-level Docker daemon.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give access to this file, unless there's a better solution.
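To illustrate what I mean by access to the host Docker, here is a minimal sketch of the general technique (the image is just a placeholder): bind-mount the daemon's Unix socket, and the docker CLI inside the container talks to the host daemon instead of a nested one:
# a container started with the host socket mounted controls the HOST daemon
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps   # lists the containers running on the host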
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18

definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker, stop the old container (if it exists) and run the new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker
I'm trying to create a CI/CD pipeline to push to Docker Hub from my Bitbucket repository. The build is successful up to the point where I get the following error:
Error when pushing to Docker:
"docker push" requires exactly 1 argument.
See 'docker push --help'.
Usage: docker push [OPTIONS] NAME[:TAG]
Push an image or a repository to a registry
Also, here is my bitbucket-pipelines.yml file:
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -f docker/workspace/Dockerfile .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push chatapp/monorepo:version1.1 .
Am I missing something in the .yml file? Does my image name need to change? I'm new to pipelines and can't seem to find the error.
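Judging from the usage message above, a sketch of the corrected script lines might look like this (same names as in the file): docker push takes exactly one argument, so the trailing . has to go, and docker build needs -t so there is a tagged image to push:
# tag the image at build time so there is something to push
- docker build -f docker/workspace/Dockerfile -t chatapp/monorepo:version1.1 .
- docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
# push takes exactly one argument, NAME[:TAG], with no trailing "."
- docker push chatapp/monorepo:version1.1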
We have a prototype-oriented development environment in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI/CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all the documentation we find uses a cloud service or a Kubernetes cluster as the target environment. However, we want to configure our GitLab runner to deploy Docker containers locally. At the same time, we want to avoid giving the runner a privileged user (our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy

dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry

sometest:
  stage: test
  # uses the docker image from the build stage to test the service

production:
  stage: deploy
  # should create a container from the above image on the runner's system, without a privileged user
TL;DR: How can we configure our local GitLab Runner to locally deploy Docker containers from images defined in GitLab CI/CD without the use of privileges?
The build stage is usually the one where people use Docker-in-Docker (dind). To avoid the privileged user, you can use the kaniko executor image in GitLab.
Specifically, you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage, you simply need to reference the created image. You could do something like this:
production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage: this last method would only run the image inside the GitLab runner, whereas usually you would take the image you pushed to the container registry and deploy the container locally from it.
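An actual local deployment could instead be a separate job on a runner that lives on the target host. This is only a sketch under assumptions: a shell-executor runner tagged deploy on that host (the tag and the container name myservice are invented here), whose user may run docker. Note that docker-group membership is itself effectively root, so this only partially meets the no-privileged-user goal:
production:
  stage: deploy
  tags:
    - deploy                  # assumed tag of a shell-executor runner on the target host
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
    # replace a previous instance, if any, with the freshly built image
    - docker rm -f myservice || true
    - docker run -d --name myservice -p 8080:8080 $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG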
I'm trying to set up automatic publishing using Docker + Bitbucket Pipelines; unfortunately, I have a problem. I read the pipelines deployment instructions on Docker Hub and created the following template:
# This is a sample build configuration for Docker.
# Check our guides at https://confluence.atlassian.com/x/O1toN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        services:
          - docker
        script: # Modify the commands below to build your repository.
          # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
          - export IMAGE_NAME=paweltest/tester:$BITBUCKET_COMMIT
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t paweltest/tester .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push paweltest/tester:tagname
I have completed the data, but when the build runs after a push, I get the following error:
unable to prepare context: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such file or directory
What do I want to achieve? After pushing changes to the repository, I'd like an image to be automatically built and sent to Docker Hub, preferably straight to the target server where the application runs.
I've looked for a solution and tried different combinations. For now, I have about 200 commits with Failed status and no further ideas.
Bitbucket Pipelines is a CI/CD service: you can build your applications and deploy resources to production or test server instances. You can build and deploy Docker images too - it shouldn't be a problem unless you do something wrong...
All scripts defined in the bitbucket-pipelines.yml file run in a container created from the indicated image (atlassian/default-image:2 in your case).
You need a Dockerfile in the project; from this file you can build and publish a Docker image.
I created a simple repository without a Dockerfile and started a build:
unable to prepare context: unable to evaluate symlinks in Dockerfile
path: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such
file or directory
I need a Dockerfile in my project to build an image (at the same level as the bitbucket-pipelines.yml file):
FROM node:latest
WORKDIR /src/
EXPOSE 4000
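A quick local sanity check for that file before wiring it into the pipeline (the tag name is arbitrary):
docker build -t dockerfile-smoke-test .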
In the next step I created a public Docker Hub repository.
I also changed your bitbucket-pipelines.yml file (you forgot to mark the new image with a tag):
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t appngpl/stackoverflow-question-56065689 .
          # add new image tag
          - docker tag appngpl/stackoverflow-question-56065689 appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
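One side note beyond the original answer: docker login warns that --password is insecure because the value can end up in shell history and build logs; the --password-stdin form avoids that:
- echo "$DOCKER_HUB_PASSWORD" | docker login --username "$DOCKER_HUB_USERNAME" --password-stdin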
Result:
Everything works fine :)
Bitbucket repository: https://bitbucket.org/krzysztof-raciniewski/stackoverflow-question-56065689
Docker Hub image repository: https://hub.docker.com/r/appngpl/stackoverflow-question-56065689
I've been trying to set up GitLab CI so it can build a Docker image, and came across the fact that DinD was initially enabled only for separate runners, with a blog post suggesting it would soon be enabled for shared runners.
Running DinD requires enabling privileged mode in runners, which is set as a flag while registering the runner, but I couldn't find an equivalent mechanism for shared runners.
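For reference, on self-managed runners the flag in question is --docker-privileged, passed when registering (the URL and token here are placeholders):
gitlab-runner register \
  --url https://gitlab.example.com/ \
  --registration-token $REGISTRATION_TOKEN \
  --executor docker \
  --docker-image docker:stable \
  --docker-privileged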
The shared runners are now capable of building Docker images. Here is the job that you can use:
stages:
  - build
  - test
  - deploy

# ...
# other jobs here
# ...

docker:image:
  stage: deploy
  image: docker:1.11
  services:
    - docker:dind
  script:
    - docker version
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    # push only for tags
    - "[[ -z $CI_BUILD_TAG ]] && exit 0"
    - docker tag $CI_REGISTRY_IMAGE:latest $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
This job assumes that you are using the Container Registry provided by GitLab. It pushes the images only when the build commit is tagged with a version number.
See the documentation for predefined variables.
Note that you will need to cache, or generate as temporary artifacts, any dependencies for your service which are not committed in the repository. This is supposed to be done in other jobs; e.g. node_modules is generally not contained in the repository and must be cached from the build/test stage.
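As a sketch of that caching point, a per-branch cache for node_modules in .gitlab-ci.yml might look like this (the key choice is just one common convention):
cache:
  key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
  paths:
    - node_modules/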