I developed the YAML files for Kubernetes and Skaffold, plus the Dockerfile. My deployment with Skaffold works well on my local machine.
Now I need to implement the same deployment on my Kubernetes cluster in my Google Cloud project, triggered by new tags in a GitHub repository. I found that I have to use Google Cloud Build, but I don't know how to execute Skaffold from the cloudbuild.yaml file.
There is a skaffold image in https://github.com/GoogleCloudPlatform/cloud-builders-community
To use it, follow these steps:
Clone the repository
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community
Go to the skaffold directory
cd cloud-builders-community/skaffold
Build the image:
gcloud builds submit --config cloudbuild.yaml .
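If the build succeeds, the builder ends up in your project's registry as gcr.io/[YOUR_PROJECT_ID]/skaffold. Assuming the gcloud SDK is configured locally, you can verify it with something like:
gcloud container images list-tags gcr.io/[YOUR_PROJECT_ID]/skaffold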
Then, in your cloudbuild.yaml, you can add a step based on this one:
- id: 'Skaffold run'
  name: 'gcr.io/$PROJECT_ID/skaffold:alpha' # https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/skaffold
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
    - 'CLOUDSDK_CONTAINER_CLUSTER=[YOUR_CLUSTER_NAME]'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      gcloud container clusters get-credentials [YOUR_CLUSTER_NAME] --zone us-central1-a --project [YOUR_PROJECT_NAME]
      if [ "$BRANCH_NAME" == "master" ]
      then
        skaffold run
      fi
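Since the goal is to trigger on new tags rather than on a branch, one way to wire the GitHub repository to this cloudbuild.yaml is a Cloud Build GitHub trigger with a tag pattern. A rough sketch, where the trigger name, owner, repository name and tag pattern are placeholders to adjust:
gcloud builds triggers create github \
  --name=skaffold-on-tag \
  --repo-owner=[YOUR_GITHUB_USER] \
  --repo-name=[YOUR_REPO_NAME] \
  --tag-pattern="v.*" \
  --build-config=cloudbuild.yaml
Note that tag-triggered builds populate $TAG_NAME rather than $BRANCH_NAME, so the branch check inside the step above would need to be adapted (or dropped) for that case.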
Related
I work on a Spring Boot based project and use a local machine as a test environment to deploy it as a Docker container.
I am in the middle of creating a Bitbucket pipeline that automates everything between building and deploying. For this pipeline I make use of a self-hosted runner (Docker) that also runs on the same machine and Docker instance where I plan to deploy my project.
I managed to successfully build the project (mvn and docker) and push the Docker image to my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since it runs in a container itself, it was not executed against the top-level Docker daemon.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give access to this file, unless there is a better solution.
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18

definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker, stop old container (if exists) and run new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker
I've got a NodeJS project in a Bitbucket repo, and I am struggling to understand how to use Bitbucket Pipelines to get it from there onto my DigitalOcean server, where it can be served on the web.
So far I've got this
image: node:10.15.3

pipelines:
  default:
    - parallel:
        - step:
            name: Build
            caches:
              - node
            script:
              - npm run build
So now the app is built and should be saved as a single file, server.js, in a theoretical /dist directory.
How do I now dockerize this file and then upload it to my DigitalOcean server?
I can't find any examples for something like this.
I did find a Docker template in the Bitbucket Pipelines editor, but it only somewhat describes creating a Docker image, and not at all how to actually deploy it to a DigitalOcean server (or anywhere):
- step:
    name: Build and Test
    script:
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker build . --file Dockerfile --tag ${IMAGE_NAME}
      - docker save ${IMAGE_NAME} --output "${IMAGE_NAME}.tar"
    services:
      - docker
    caches:
      - docker
    artifacts:
      - "*.tar"
- step:
    name: Deploy to Production
    deployment: Production
    script:
      - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker load --input "${IMAGE_NAME}.tar"
      - VERSION="prod-0.1.${BITBUCKET_BUILD_NUMBER}"
      - IMAGE=${DOCKERHUB_NAMESPACE}/${IMAGE_NAME}
      - docker tag "${IMAGE_NAME}" "${IMAGE}:${VERSION}"
      - docker push "${IMAGE}:${VERSION}"
    services:
      - docker
You would have to SSH into your DigitalOcean VPS and then do some steps there:
Pull the current code
Build the Docker image from the Dockerfile
Run the new Docker container
An example could look like this:
Create some script like "deployment.sh" in your repository root:
cd <path_to_local_repo>
git pull origin master
docker container stop <container_name>
docker container rm <container_name>
docker image build -t <image_name> .
docker container run -itd --name <container_name> <image_name>
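One hedged suggestion: because any of these commands can fail independently (for example the stop/rm lines when the container does not exist yet), it may be worth starting the script with strict shell options so a failed pull or build does not silently fall through to docker container run:
#!/usr/bin/env bash
set -euo pipefail   # abort on the first failing command or on an unset variable
With that in place you would append || true to the stop and rm lines so that the very first deployment, when no container exists yet, still goes through.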
Then add the following to your pipeline:
# ...
- step:
    deployment: staging
    script:
      - cat ./deployment.sh | ssh <ssh_user>@<ssh_host>
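An equivalent way to hand the script to the remote shell, if you prefer not to pipe through cat, would be something like:
      - ssh <ssh_user>@<ssh_host> 'bash -s' < ./deployment.sh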
You have to add your ssh key for your repository on your server, though. Check out the following link, on how to do this: https://confluence.atlassian.com/display/BITTEMP/Use+SSH+keys+in+Bitbucket+Pipelines
Here is a similar question, but using PHP: Using BitBucket Pipelines to Deploy onto VPS via SSH Access
We have a prototype-oriented development environment, in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI/CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all the documentation we find uses a cloud service or a Kubernetes cluster as the target environment. However, we want to configure our GitLab runner so that it deploys Docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy

dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry

sometest:
  stage: test
  # uses the docker image from the build stage to test the service

production:
  stage: deploy
  # should create a container from the above image on the runner's system without a privileged user
TL;DR How can we configure our local Gitlab Runner to locally deploy docker containers from images defined in Gitlab CI / CD without usage of privileges?
The build stage is usually the one where people use Docker-in-Docker (dind). To avoid the privileged user, you can use the kaniko executor image in GitLab.
Specifically, you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage you simply need to reference the created image.
You could do something like this:
production:
stage: deploy
image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage. Usually, you would just use the image you created in the container registry to deploy the container locally; the method shown above would only run the image inside the GitLab runner.
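To sketch what that local deployment could look like (an assumption on my part, not part of the original setup): if you register a second runner on the target host that is allowed to talk to the local Docker daemon, for example a shell-executor runner whose user is in the docker group, tagged with a hypothetical local-docker tag, the deploy job could pull and run the image that kaniko built:
production:
  stage: deploy
  tags:
    - local-docker                     # hypothetical tag for a runner on the deployment host
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
    - docker rm -f myservice || true   # container name is a placeholder
    - docker run -d --name myservice $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
Keep in mind that membership in the docker group is effectively root-equivalent on that host, so this avoids the privileged runner flag but not the underlying trust decision.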
I have a GitHub repository, a Docker repository and an Amazon EC2 instance. I am trying to create a CI/CD pipeline with these tools. The idea is to deploy a Docker container to the EC2 instance when a push happens to the GitHub repository's master branch. I have used GitHub Actions to build the code, build the Docker image and push the Docker image to Docker Hub. Now I want to pull the latest image from Docker Hub to the remote EC2 instance and run it. For this I am trying to execute an Ansible command from GitHub Actions. But I need to specify a .pem file as an argument to the Ansible command. I tried to keep the .pem file in GitHub secrets, but it didn't work. I am really confused about how to proceed with this.
Here is my github workflow file
name: helloworld_cicd

on:
  push:
    branches:
      - master

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v1
      - name: Go Build
        run: go build
      - name: Docker build
        run: docker build -t helloworld .
      - name: Docker login
        run: docker login --username=${{ secrets.docker_username }} --password=${{ secrets.docker_password }}
      - name: Docker tag
        run: docker tag helloworld vijinvv/helloworld:latest
      - name: Docker push
        run: docker push vijinvv/helloworld:latest
I tried to run something like
ansible all -i '3.15.152.219,' --private-key ${{ secrets.ssh_key }} -m rest of the command
but that didn't work. What would be the best way to solve this issue?
I'm guessing what you meant by "it didn't work" is that ansible expects the private key to be a file, whereas you are supplying a string.
This page on github actions shows how to use secret files on github actions. The equivalent for your case would be to do the following steps:
gpg --symmetric --cipher-algo AES256 my_private_key.pem
Choose a strong passphrase and save this passphrase as a secret in github secrets. Call it LARGE_SECRET_PASSPHRASE
Commit your encrypted my_private_key.pem.gpg in git
Create a step in your actions that decrypts this file. It could look something like:
- name: Decrypt Pem
  run: |
    mkdir -p $HOME/secrets
    gpg --quiet --batch --yes --decrypt --passphrase="$LARGE_SECRET_PASSPHRASE" --output $HOME/secrets/my_private_key.pem my_private_key.pem.gpg
  env:
    LARGE_SECRET_PASSPHRASE: ${{ secrets.LARGE_SECRET_PASSPHRASE }}
Finally you can run your ansible command with ansible all -i '3.15.152.219,' --private-key $HOME/secrets/my_private_key.pem
You can easily use webfactory/ssh-agent to add your SSH private key. See its documentation, and add the following step before running the ansible command.
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - uses: actions/checkout@v2
      # Make sure the @v0.5.2 matches the current version of the action
      - uses: webfactory/ssh-agent@v0.5.2
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps
SSH_PRIVATE_KEY must be the key that is registered in repository secrets. After that, run your ansible command without passing the private key file.
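A quick way to confirm the loaded key actually reaches the host (reusing the IP from the question; add your usual -u <remote_user> flag if you are not relying on defaults) is an ad-hoc ping before the real deployment command:
ansible all -i '3.15.152.219,' -m ping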
This question is more advice-related, so I hope it's not flagged for anything. I just really need help :(
Trying to implement CI/CD using GitHub/Jenkins/Kubernetes.
On a high level, this is what should happen:
Build on Jenkins
Push to container registry
Deploy built image on Kubernetes development cluster
Once testing is finished on the development cluster, deploy it on a client testing cluster and finally the production cluster
So far I have created a job on Jenkins which will be triggered using a GitHub hook.
This job is responsible for the following things:
Checkout from GitHub
Run unit tests / call REST API and send unit test results
Build artifacts using Maven / call REST API and report whether the build succeeded or failed
Build Docker image
Push Docker image to container registry (the Docker image gets an incremented version that matches the BUILD_NUMBER environment variable)
The above tasks are more or less complete and I don't need much assistance with them (unless anyone thinks the aforementioned steps are not best practice).
I do need help with the part where I deploy to the Kubernetes cluster.
For local testing, I have set up a local cluster using Vagrant boxes and it works. In order to deploy the built image on the development cluster, I am thinking about doing it like this:
Point Jenkins build server to Kubernetes development cluster
Deploy using deployment.yml and service.yml (available in my repo)
This part I need help with...
Is this bad practice? Is there a better/easier way to do it?
Also, is there a way to migrate between clusters? E.g. development cluster to client testing cluster, client testing cluster to production cluster, etc.
When searching the internet, the name Helm comes up a lot, but I am not sure if it is applicable to my use case. I would test it and see, but I am a bit hard pressed for time, which is why I can't.
Would appreciate any help y'all could provide.
Thanks a lot
There are countless ways of doing this. Leave Helm out for now, as you are just starting.
If you are already using GitHub and Docker, then I would recommend pushing your code/changes/config/Dockerfile to GitHub and letting that auto-trigger a Docker build on Docker Hub (or Jenkins in your case, if you don't want to use Docker Hub for builds). It can be a multi-stage Docker build where you build the code, run tests, throw away the dev environment, and finally produce a production Docker image. Once the image is produced, it triggers a webhook to your Kubernetes deployment job/manifests to deploy to the test environment, followed by a manual trigger to deploy to production.
The Docker images can be tagged based on the SHA of the commits in GitHub/Git, so that you can deploy and roll back based on commits.
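For example, tagging and pushing with the short commit SHA (a sketch; the registry and image name are placeholders) could look like:
docker build -t registry.example.com/myservice:$(git rev-parse --short HEAD) .
docker push registry.example.com/myservice:$(git rev-parse --short HEAD)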
Reference: https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build
Here is my GitLab implementation of the GitOps workflow:
# Author: IjazAhmad
image: docker:latest

stages:
  - build
  - test
  - deploy

services:
  - docker:dind

variables:
  CI_REGISTRY: dockerhub.example.com
  CI_REGISTRY_IMAGE: $CI_REGISTRY/$CI_PROJECT_PATH
  DOCKER_DRIVER: overlay2

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY

docker-build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .

docker-push:
  stage: build
  script:
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

unit-tests:
  stage: test
  script:
    - echo "running unit tests on the image"
    - echo "running security testing on the image"
    - echo "pushing the results to build/test pipeline dashboard"

sast:
  stage: test
  script:
    - echo "running security testing on the image"
    - echo "pushing the results to build/test pipeline dashboard"

dast:
  stage: test
  script:
    - echo "running security testing on the image"
    - echo "pushing the results to build/test pipeline dashboard"

testing:
  stage: deploy
  script:
    - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml
    - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml
    - kubectl apply --namespace webproduction-test -f k8s-configs/
  environment:
    name: testing
    url: https://testing.example.com
  only:
    - branches

staging:
  stage: deploy
  script:
    - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml
    - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml
    - kubectl apply --namespace webproduction-stage -f k8s-configs/
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master

production:
  stage: deploy
  script:
    - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml
    - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml
    - kubectl apply --namespace webproduction-prod -f k8s-configs/
  environment:
    name: production
    url: https://production.example.com
  when: manual
  only:
    - master
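For the sed substitutions in the deploy jobs to work, k8s-configs/deployment.yaml presumably contains literal CI_IMAGE and TAG placeholders. A minimal sketch of such a manifest (names are illustrative only) might look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: CI_IMAGE:TAG   # rewritten by sed to $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA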
Links:
Trigger Jenkins builds by pushing to Github
Triggering a Jenkins build from a push to Github
Jenkins: Kick off a CI Build with GitHub Push Notifications
Look at Spinnaker for continuous delivery. After the image is built and pushed to the registry, have a webhook in Spinnaker trigger a deployment to the required Kubernetes cluster. Spinnaker works well with Kubernetes and you should definitely try it out.
I understand that you are trying to implement GitOps. My advice is to review this article, where you can start to figure out a little bit more about the components you need:
https://www.weave.works/blog/managing-helm-releases-the-gitops-way
Basically, you need to implement your own Helm charts for your custom services and manage them using Flux. I recommend using a different repository per environment and letting Flux manage the deployment to each environment based on the state of the master branch of that repo.
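As a rough illustration only (the exact resource layout depends on the Flux / Helm Operator version the article describes, and the chart location, image and values here are placeholders), a Flux-managed HelmRelease might look something like:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-service
  namespace: default
spec:
  releaseName: my-service
  chart:
    git: git@github.com:example/charts
    path: charts/my-service
    ref: master
  values:
    image:
      repository: eu.gcr.io/example/my-service
      tag: "1.0.0"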