How to update a running task in an AWS ECS cluster using CircleCI deployment to AWS ECS

The aim is to set up CI/CD with CircleCI, pushing images to AWS ECR and deploying to AWS ECS.
The CircleCI deploy to AWS ECS updates the service but not the running task.
This is the CircleCI config file that builds the image, pushes it to AWS ECR, and deploys the updated service to the AWS ECS cluster.
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@8.1.2
  aws-ecs: circleci/aws-ecs@3.1.0
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          dockerfile: Dockerfile
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-cli-version: latest
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          path: .
          profile-name: default
          repo: ${repo}
          region: ${AWS_REGION}
          registry-id: AWS_ECR_REGISTRY_ID
          tag: ${CIRCLE_SHA1}
          push-image: true
          platform: linux/amd64
      - aws-ecs/deploy-service-update:
          cluster: ${MY_APP_PREFIX}
          container-image-name-updates: 'container=${MY_APP_PREFIX},tag=${CIRCLE_SHA1}'
          family: ${MY_APP_PREFIX}
          service-name: ${MY_APP_PREFIX}
          force-new-deployment: true
          requires:
            - aws-ecr/build-and-push-image
Both jobs complete successfully, and in AWS ECR the image is pushed successfully.
In the cluster service, it shows that the task definition is at revision 95.
[Image: cluster service]
But when we go inside the service, the running task is not updated.
[Image: running tasks of the cluster service]
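One way to confirm what the service is actually running is to compare the service's active deployments with the task definitions of its live tasks. A minimal sketch with the AWS CLI, assuming the cluster and service are both named ${MY_APP_PREFIX} as in the config above:

# Which task definition revisions does the service have deployments for?
aws ecs describe-services \
  --cluster "${MY_APP_PREFIX}" \
  --services "${MY_APP_PREFIX}" \
  --query 'services[0].deployments[*].[status,taskDefinition,desiredCount,runningCount]' \
  --output table

# Which task definition were the currently running tasks started from?
aws ecs describe-tasks \
  --cluster "${MY_APP_PREFIX}" \
  --tasks $(aws ecs list-tasks --cluster "${MY_APP_PREFIX}" \
      --service-name "${MY_APP_PREFIX}" --query 'taskArns[]' --output text) \
  --query 'tasks[*].taskDefinitionArn'

If the PRIMARY deployment points at revision 95 but the running tasks still reference an older revision, the rollout has not finished replacing the old tasks yet (for example because health checks fail or the cluster lacks capacity for the new tasks).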

Related

Execute Skaffold deployment using Google Cloud Build?

I developed the YAML files for Kubernetes and Skaffold, and the Dockerfile. My deployment with Skaffold works well on my local machine.
Now I need to run the same deployment against the k8s cluster in my Google Cloud project, triggered by new tags in a GitHub repository. I found that I have to use Google Cloud Build, but I don't know how to execute Skaffold from the cloudbuild.yaml file.
There is a skaffold image in https://github.com/GoogleCloudPlatform/cloud-builders-community
To use it, follow these steps:
Clone the repository
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community
Go to the skaffold directory
cd cloud-builders-community/skaffold
Build the image:
gcloud builds submit --config cloudbuild.yaml .
Then, in your cloudbuild.yaml, you can add a step based on this one:
- id: 'Skaffold run'
  name: 'gcr.io/$PROJECT_ID/skaffold:alpha' # https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/skaffold
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
    - 'CLOUDSDK_CONTAINER_CLUSTER=[YOUR_CLUSTER_NAME]'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      gcloud container clusters get-credentials [YOUR_CLUSTER_NAME] --region us-central1-a --project [YOUR_PROJECT_NAME]
      if [ "$BRANCH_NAME" == "master" ]
      then
        skaffold run
      fi
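Since the goal in the question is to trigger on new tags rather than branches, note that tag-triggered Cloud Build runs expose the $TAG_NAME substitution instead of $BRANCH_NAME. A hedged variant of the guard above:

  args:
    - '-c'
    - |
      gcloud container clusters get-credentials [YOUR_CLUSTER_NAME] --region us-central1-a --project [YOUR_PROJECT_NAME]
      # run skaffold only for tag-triggered builds
      if [ -n "$TAG_NAME" ]; then
        skaffold run
      fi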

Automate local deployment of docker containers with gitlab runner and gitlab-ci without privileged user

We have a prototype-oriented develop environment, in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI / CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all documentation we find uses a cloud service or kubernetes cluster as target environment. However, we want to configure our GitLab runner in a way to deploy docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy

dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry

sometest:
  stage: test
  # uses the docker image from the build stage to test the service

production:
  stage: deploy
  # should create a container from the above image on the runner's system, without a privileged user
TL;DR: How can we configure our local GitLab Runner to locally deploy docker containers from images defined in GitLab CI/CD, without the use of privileges?
The build stage is usually the one where people use Docker-in-Docker (dind). To avoid the privileged user, you can use the kaniko executor image in GitLab.
Specifically you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage you simply need to reference the created image.
You could do something like this:
production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method you do not need a privileged user. But I assume this is not what you want in your deployment stage: usually you would use the image you pushed to the container registry to run a container on the target host, whereas the method above only runs the image inside the GitLab runner.
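For an actual local deployment, here is a minimal sketch of a deploy job, assuming a shell-executor runner registered on the target host whose gitlab-runner user may run docker (for example via the docker group; note that docker group membership is still effectively root-equivalent, so weigh that against the no-privileges requirement):

production:
  stage: deploy
  tags:
    - local-shell-runner   # hypothetical tag selecting the on-premise runner
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
    - docker rm -f myservice || true   # replace any previous container (name is illustrative)
    - docker run -d --name myservice "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  rules:
    - if: $CI_COMMIT_TAG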

How to continuously deploy from Jenkins to Kubernetes

I have a Maven project on my local machine and a docker image in my repo, and I'm using GitLab and Jenkins to automate builds. With the current setup I now want to continuously deploy to Kubernetes. I have no idea how this is done. Any input will be appreciated.
My YAML file looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: <image>
      ports:
        - containerPort: 3000
The easiest way is to set the new image on the deployment. See here:
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
You will need access to your cluster from your GitLab/Jenkins instance.
Another option is to use a Kubernetes deployment tool such as Helm. This helps in more complicated scenarios where you also want to update your configuration files (k8s YAMLs), not just the image.
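A hedged sketch of the Helm route, assuming the chart exposes the conventional image.repository and image.tag values (most charts do, but check yours):

# upgrade the release in place, or install it on the first run
helm upgrade --install my-app ./chart \
  --set image.repository=repo-name/whatever-app \
  --set image.tag=<version>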
Once the image is built and pushed to the container registry, you just have to set the new image:
>>> docker build -t repo-name/whatever-app:<version> .
>>> docker push repo-name/whatever-app:<version>
>>> kubectl set image deployment/my-deployment mycontainer=repo-name/whatever-app:<version>
You can use this example Jenkins pipeline to build and deploy your dockerized Maven app to Kubernetes with Helm. It consists of the following steps:
Git clone and setup
Build and local tests
Publish Docker and Helm
Deploy to dev and test
Deploy to staging and test
Optionally deploy to production and test
I think it's a nice starting point to realize CI/CD with Jenkins & Kubernetes.

Usage of Kubectl command and deployment of pods using Kubernetes and Jenkins

I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. Here I have some sample microservices. While exploring Kubernetes, I came across pods, services, replica sets/controllers, statefulsets, etc., and I understand those Kubernetes terminologies properly. I am planning to use Docker Hub as my image registry.
My Requirement
When a commit is made to my SVN code repository, Jenkins needs to pull the code from the Subversion repository, build the project, create a docker image, and push it to Docker Hub, as mentioned earlier. After that, it needs to deploy to my test environment by pulling from Docker Hub via Jenkins.
My Confusion
When I am creating services and pods, how can I define the docker image path within the pod/service/statefulset, since it is pulled from Docker Hub for deployment?
Can I directly add kubectl commands within a scheduled Jenkins pipeline job? How can I use kubectl commands for the Kubernetes deployment?
Jenkins can do anything you can do, given that the tools are installed and accessible. So an easy solution is to install docker and kubectl on the Jenkins host and provide it with the correct kube config so it can access the cluster. If your host can use kubectl, have a look at the $HOME/.kube/config file.
So in your job you can use kubectl just like you do from your host.
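For illustration, a minimal sketch of the shell commands such a Jenkins job could run (the deployment and container names here are hypothetical; $BUILD_NUMBER is Jenkins' built-in build counter):

# sanity check: the jenkins user can reach the cluster
kubectl get pods
# roll the deployment to the image this build just pushed
kubectl set image deployment/my-deployment my-container=my-dockerhub-user/my-service:$BUILD_NUMBER
# wait until the rollout has finished (fails the build if it doesn't)
kubectl rollout status deployment/my-deployment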
Regarding the images from Docker Hub:
Docker Hub is the default Docker registry anyway, so normally there is no need to change anything in your cluster, unless you want to use your own privately hosted registry. If you are running your cluster at a cloud provider, I would use their Docker registry instead, because they are better integrated.
So this part of a deployment will pull nginx from Docker Hub, with no need to specify anything special for it:
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
So ensure Jenkins can do the following things from the command line:
build Docker images
push Docker images (make sure you have called docker login on the Jenkins host)
access your cluster via kubectl get pods
An easy pipeline then simply needs to do these steps:
trigger on SVN change
check out the code
create a unique version (which could be the build number, SVN revision, or date)
build / test
build the Docker image
tag the Docker image with the unique version
push the Docker image
change the image line in the Kubernetes deployment.yaml to the newly built version (if you are using a Jenkins Pipeline you can use readYaml and writeYaml to achieve this; see the sketch after this list)
call kubectl apply -f deployment.yaml
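A shell equivalent of the last two steps, assuming the image line in deployment.yaml has the form "image: repo/app:<tag>" (sed is a common stand-in when the readYaml/writeYaml pipeline steps are not available; $SVN_REVISION is set by Jenkins' Subversion SCM):

# compose a unique version from the build number and the SVN revision
VERSION="${BUILD_NUMBER}-${SVN_REVISION}"
# point the manifest at the freshly pushed image (pattern is illustrative)
sed -i "s|image: .*|image: my-dockerhub-user/my-service:${VERSION}|" deployment.yaml
kubectl apply -f deployment.yaml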
Depending on your build system and languages, there are useful tools that can help with building and pushing the Docker image and ensuring a unique tag. For example, for Java and Maven you can use Maven CI Friendly Versions with any Maven docker plugin, or Jib.
To create a deployment you need to create a YAML file. In that file, the line
image: oronboni/serviceb
points to the container image, which in this case is on Docker Hub:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceb
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceb
  template:
    metadata:
      labels:
        app: serviceb
    spec:
      containers:
        - name: serviceb
          image: oronboni/serviceb
          ports:
            - containerPort: 5002
I strongly suggest you watch the Kubernetes deployment webinar linked below:
https://m.youtube.com/watch?v=_vHTaIJm9uY
Good luck.

Codeship: Deploying to EC2 Container Service from Docker Image

I have a project which uses Codeship Pro, and I have successfully pushed it to Docker Hub. Now I want to push the project to AWS EC2 Container Service (ECS) as well.
I followed this documentation:
https://documentation.codeship.com/pro/continuous-deployment/aws/
- service: awsdeployment
  command: aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
- service: awsdeployment
  command: aws ecs update-service --service my-backend-service --task-definition backend
The problem is that the documentation doesn't explain what deploy/tasks/backend.json contains, so I tried removing that step from codeship-steps.yml:
- service: awsdeployment
  command: aws ecs update-service --service my-backend-service --task-definition backend
But the result is: An error occurred (ClientException) when calling the UpdateService operation: TaskDefinition not found.
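For reference, the file passed to register-task-definition is an ECS task definition in JSON form. A minimal sketch of what deploy/tasks/backend.json might contain (the names, image, and sizes here are illustrative, not from the Codeship docs):

{
  "family": "backend",
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "my-dockerhub-user/my-backend:latest",
      "memory": 256,
      "cpu": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 8080 }
      ]
    }
  ]
}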
Currently, I use
ecs-cli compose up
which pushes my project to my EC2 Container Service using my docker-compose.yml.
I spent a whole day trying to figure this out, but I still have no idea how to push to AWS ECS successfully; I can't use the ecs-cli command when pushing with Codeship.
What should I do?
