My application consists of two projects: a client and a server. The client is a React application and the server is a Spring Boot based Java backend. Each has its own Dockerfile, and in the root folder I have combined them with a docker-compose.yaml file. It works fine on my local machine, and now I want to deploy the whole application to AWS. I am trying to push the images to AWS using a GitHub Actions workflow, which is as follows:
name: Deploy to AWS ECR

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: snake_ecr
          IMAGE_TAG: latest
        run: |
          docker-compose up --build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
My project structure is as follows:
|
|-client\Dockerfile
|-server\Dockerfile
|-docker-compose.yml
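For reference, the compose file for this layout might look something like the following sketch (the service names and port mappings here are illustrative assumptions, not taken from the actual file):

# docker-compose.yml -- a minimal sketch; service names and ports are assumptions
version: "3.8"
services:
  client:
    build: ./client
    ports:
      - "3000:80"
  server:
    build: ./server
    ports:
      - "8080:8080"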
When the 'Build, tag, and push image to Amazon ECR' step runs, the script gives the following error:
Run docker-compose up --build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
  docker-compose up --build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
  docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
  shell: /usr/bin/bash -e {0}
  env:
    AWS_DEFAULT_REGION: us-east-1
    AWS_REGION: us-east-1
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
    ECR_REGISTRY: ***.dkr.ecr.us-east-1.amazonaws.com
    ECR_REPOSITORY: snake_ecr
    IMAGE_TAG: latest
[1644] Failed to execute script docker-compose
Traceback (most recent call last):
  File "docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 81, in main
  File "compose/cli/main.py", line 203, in perform_command
  File "compose/metrics/decorator.py", line 18, in wrapper
  File "compose/cli/main.py", line 1140, in up
  File "compose/cli/main.py", line 1300, in timeout_from_opts
ValueError: invalid literal for int() with base 10: '***.dkr.ecr.us-east-1.amazonaws.com/snake_ecr:latest'
Can anybody help me to solve this issue?
Are you sure that docker-compose is installed on the ubuntu-latest image you are using to run the actions?
I usually use an extra action for that, like the one specified here:
https://github.com/marketplace/actions/build-and-push-docker-images#git-context

- name: Build and push
  uses: docker/build-push-action@v3
  with:
    push: true
    tags: user/app:latest

Then you might need to specify your Docker images individually, because that action does not work with docker-compose (see the sketch below).
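For example, the push step from the question could build and push each image separately. Note the traceback above: docker-compose up parsed the -t flag as its --timeout option and then failed to convert the image name to an integer, which is where the int() error came from. A sketch, with hypothetical repository names snake_ecr/client and snake_ecr/server:

- name: Build, tag, and push images to Amazon ECR
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    IMAGE_TAG: latest
  run: |
    # Build each Dockerfile directly instead of going through docker-compose;
    # the two repository names are assumptions, not from the question.
    docker build -t $ECR_REGISTRY/snake_ecr/client:$IMAGE_TAG ./client
    docker build -t $ECR_REGISTRY/snake_ecr/server:$IMAGE_TAG ./server
    docker push $ECR_REGISTRY/snake_ecr/client:$IMAGE_TAG
    docker push $ECR_REGISTRY/snake_ecr/server:$IMAGE_TAG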
I'm building a simple CI/CD workflow with GitHub Actions. The workflow starts by running all unit tests. When the unit tests are successful, a Docker image gets built and uploaded to Docker Hub. I have a few environment variables that need to be set in order to run the tests and also to run the Docker container.
This is currently the only way I can get it to work:
name: Deploy to Linode k8s Cluster Workflow

env:
  REGISTRY: "abc"
  IMAGE_NAME: "defg"
  DO_POSTGRESQL_URL: ${{ secrets.DO_POSTGRESQL_URL }}
  DO_POSTGRESQL_USER: ${{ secrets.DO_POSTGRESQL_USER }}
  DO_POSTGRESQL_PASS: ${{ secrets.DO_POSTGRESQL_PASS }}
  # ... and more

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    environment: test
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      - name: Setup Java
        uses: actions/setup-java@v3
        with:
          java-version: 18
          distribution: temurin
          cache: gradle
      - name: Setup Gradle
        uses: gradle/gradle-build-action@v2
      - name: Execute Gradle build
        run: ./gradlew build --scan
      - name: Login to Docker Hub
        uses: docker/login-action@v2.1.0
        with:
          # Username used to log against the Docker registry
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          # Password or personal access token used to log against the Docker registry
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Build and push Docker images
        uses: docker/build-push-action@v4.0.0
        with:
          context: .
          # because two jar files get generated (...-SNAPSHOT-plain.jar and ...-SNAPSHOT.jar) only ...-SNAPSHOT.jar is needed
          build-args: JAR_FILE=build/libs/*SNAPSHOT.jar, DO_POSTGRESQL_URL=$DO_POSTGRESQL_URL #,... and so on
          push: true
          tags: abc/defg:latest
      # Linode deployment here
But imagine I have multiple workflow files; then I have to add the environment variables in multiple different places. Is there a clever way to solve this problem, so that I only have to define the env variables once?
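One way to avoid the duplication (a suggestion, not something from the post above) is a reusable workflow: define the env variables and the shared job once under a workflow_call trigger, then call that workflow from the others with secrets: inherit. A minimal sketch, with hypothetical file names:

# .github/workflows/reusable-build.yml -- hypothetical file name
name: Reusable build
on:
  workflow_call:
env:
  REGISTRY: "abc"
  IMAGE_NAME: "defg"
  DO_POSTGRESQL_URL: ${{ secrets.DO_POSTGRESQL_URL }}
  # ... and more, defined in one place only
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... the test/build/push steps from the workflow above

# .github/workflows/caller.yml -- hypothetical file name
name: Caller
on:
  push:
    branches: [ "master" ]
jobs:
  call-build:
    uses: ./.github/workflows/reusable-build.yml
    secrets: inherit   # forwards the caller's secrets to the reusable workflow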
This will be a decent read, so thank you a lot for trying to help :bow:
I am trying to write a GitHub Actions configuration that does the following two tasks:
1. Creates an autodeploy.xar file inside the build folder
2. Uses that folder, along with all the other files inside it, to create a Docker image
The build process cannot find the folder/files that the previous step has created, so I tried three things:
1. Using the file created in the previous step (within the same job in GitHub Actions), but I couldn't get it to run. The build process threw an error complaining that the file doesn't exist: Error: buildx failed with: error: failed to solve: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
2. Building two jobs, one to generate the file and another that depends on the first to build the Docker image. However, this gave the same error as approach 1.
3. Building the Docker image as part of task 1, by running a bash script from GitHub Actions. I tried to run docker build . from inside the shell script, but GitHub Actions complained with "docker build" requires exactly 1 argument. I was providing the right argument, because on echoing the command I clearly saw the output docker build . --file Dockerfile --tag ***/***:latest --build-arg ADMIN_PASSWORD=***
This must be something very trivial, but I have no idea what's going wrong, and I think a solution to any one of these approaches should work.
Thanks once again for going through all this. Please find the GitHub Actions workflow, workflow.sh, and the Dockerfile below:
The GitHub Actions YAML file:
name: ci
on:
  push:
    branches:
      - 'build'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Run script to replace template file
        run: |
          build/workflow.sh
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/${{ secrets.REPO_NAME }}:latest
          build-args: |
            ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}
The workflow file:
# run ant
ant <--------- This command just creates the autodeploy.xar file and puts it inside the build directory
#### I TESTED WITH AN ECHO COMMAND AND THE FILES ARE ALL THERE:
# echo $(ls build)
The Dockerfile:
# Specify the eXist-db release as a base image
FROM existdb/existdb:6.0.1
COPY build/autodeploy.xar /exist/autodeploy/ <------ THIS LINE FAILS
COPY conf/controller-config.xml /exist/etc/webapp/WEB-INF/
COPY conf/exist-webapp-context.xml /exist/etc/jetty/webapps/
COPY conf/conf.xml /exist/etc
# Ports
EXPOSE 8080 8444
ARG ADMIN_PASSWORD
ENV ADMIN_PASSWORD=$ADMIN_PASSWORD
# Start eXist-db
CMD [ "java", "-jar", "start.jar", "jetty" ]
RUN [ "java", "org.exist.start.Main", "client", "--no-gui", "-l", "-u", "admin", "-P", "", "-x", "sm:passwd('admin','$ADMIN_PASSWORD')" ]
The error saying the file was not found:
#5 [2/6] COPY build/autodeploy.xar /exist/autodeploy/
#5 ERROR: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
#4 [1/6] FROM docker.io/existdb/existdb:6.0.1@sha256:fa537fa9fd8e00ae839f17980810abfff6230b0b9873718a766b767a32f54ed6
This is dumb, but the only thing I needed to change was the context: . in the GitHub Actions config:
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .
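For context: when no context input is given, docker/build-push-action builds from the Git context, which contains only committed files, not files generated by earlier steps such as build/autodeploy.xar. Combined with the original step, the fixed version might look like this sketch:

- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .   # build from the checked-out workspace so build/autodeploy.xar is visible
    push: true
    tags: ${{ secrets.DOCKERHUB_USERNAME }}/${{ secrets.REPO_NAME }}:latest
    build-args: |
      ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}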
I am using the below config.yml file (.circleci/config.yml) to run the CircleCI job for GitHub, and to build and push a Docker image to the repo:
orbs:
  docker: circleci/docker@1.5.0
version: 2.1
executors:
  docker-publisher:
    environment:
      IMAGE_NAME: johndocker/docker-node-app
    docker: # Each job requires specifying an executor
            # (either docker, macos, or machine), see
      — image: circleci/golang:1.15.1
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
jobs:
  publishLatestToHub:
    executor: docker-publisher
    steps:
      — checkout
      — setup_remote_docker
      — run
          name: Publish Docker Image to Docker Hub
          command: |
            echo “$DOCKERHUB_PASSWORD” | docker login -u “$DOCKERHUB_USERNAME” — password-stdin
            docker build -t $IMAGE_NAME .
            docker push $IMAGE_NAME:latest
workflows:
  version: 2
  build-master:
    jobs:
      — publishLatestToHub
The config.yml is the magic that tells CircleCI what to do with our app; for this demo we want it to build a Docker image.
In CircleCI, *workflows* are simply orchestrators: they order how things should be done; *executors* define or group up tasks; *jobs* define the basic steps and commands to run.
But it shows the below error in the CircleCI dashboard:
Unable to parse YAML, while scanning a simple key in 'string', line 21,
I also checked with a YAML formatter, but couldn't resolve the issue. Please help.
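A likely cause, judging by the config as pasted: it uses em-dashes (—) and curly quotes (“ ”) where YAML expects plain hyphens and straight quotes, and the run key is missing its colon. A corrected excerpt of the job might look like this sketch:

jobs:
  publishLatestToHub:
    executor: docker-publisher
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Publish Docker Image to Docker Hub
          command: |
            echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
            docker build -t $IMAGE_NAME .
            docker push $IMAGE_NAME:latest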
Guys!
I need your help to run docker-compose build on GitHub Actions. I have a docker-compose file and I can't figure out how to build and deploy it the correct way, besides just copying the docker-compose file over SSH and running scripts there.
There's docker/build-push-action@v2, but it doesn't work with docker-compose.yml.
This strongly depends on where you want to push your images. But for instance, if you use Azure ACR, you can use this action:
on: [push]
name: AzureCLISample
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Azure CLI script
        uses: azure/CLI@v1
        with:
          azcliversion: 2.0.72
          inlineScript: |
            az acr login --name <acrName>
            docker-compose build
            docker-compose push
And then just build and push your images. But this is only an example; if you use ECR it would be similar, I guess.
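For ECR, a sketch following the same pattern (mirroring the AWS credential and login actions shown in the first question above) might be:

steps:
  - uses: actions/checkout@v2
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: us-east-1
  - name: Login to Amazon ECR
    uses: aws-actions/amazon-ecr-login@v1
  - name: Build and push with docker-compose
    run: |
      # assumes the compose file tags each image with its full ECR repository URL
      docker-compose build
      docker-compose push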
For DigitalOcean it would be like this:
steps:
  - uses: actions/checkout@v2
  - name: Build image
    run: docker-compose build
  - name: Install doctl # install doctl on the runner
    uses: digitalocean/action-doctl@v2
    with:
      token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
  - name: Push image to DigitalOcean
    run: |
      doctl registry login
      docker-compose push
You can find more details about this here
I have a GitHub repository, a Docker repository, and an Amazon EC2 instance. I am trying to create a CI/CD pipeline with these tools. The idea is to deploy a Docker container to the EC2 instance when a push happens to the GitHub repository's master branch. I have used GitHub Actions to build the code, build the Docker image, and push the image to Docker Hub. Now I want to pull the latest image from Docker Hub to the remote EC2 instance and run it. For this I am trying to execute an ansible command from GitHub Actions, but I need to specify a .pem file as an argument to the ansible command. I tried to keep the .pem file in GitHub secrets, but it didn't work. I am really confused about how to proceed with this.
Here is my GitHub workflow file:
name: helloworld_cicd
on:
  push:
    branches:
      - master
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v1
      - name: Go Build
        run: go build
      - name: Docker build
        run: docker build -t helloworld .
      - name: Docker login
        run: docker login --username=${{ secrets.docker_username }} --password=${{ secrets.docker_password }}
      - name: Docker tag
        run: docker tag helloworld vijinvv/helloworld:latest
      - name: Docker push
        run: docker push vijinvv/helloworld:latest
I tried to run something like:
ansible all -i '3.15.152.219,' --private-key ${{ secrets.ssh_key }} -m rest of the command
but that didn't work. What would be the best way to solve this issue?
I'm guessing what you meant by "it didn't work" is that ansible expects the private key to be a file, whereas you are supplying a string.
This page on GitHub Actions shows how to use secret files in GitHub Actions. The equivalent for your case would be the following steps:
1. Encrypt the key: gpg --symmetric --cipher-algo AES256 my_private_key.pem
2. Choose a strong passphrase and save it as a secret in GitHub secrets; call it LARGE_SECRET_PASSPHRASE.
3. Commit your encrypted my_private_key.pem.gpg to git.
4. Create a step in your actions that decrypts this file. It could look something like:
- name: Decrypt Pem
  run: |
    mkdir -p $HOME/secrets   # ensure the output directory exists before decrypting
    gpg --quiet --batch --yes --decrypt --passphrase="$LARGE_SECRET_PASSPHRASE" --output $HOME/secrets/my_private_key.pem my_private_key.pem.gpg
  env:
    LARGE_SECRET_PASSPHRASE: ${{ secrets.LARGE_SECRET_PASSPHRASE }}
Finally, you can run your ansible command with ansible all -i '3.15.152.219,' --private-key $HOME/secrets/my_private_key.pem
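Wired into the workflow, that final command might become a step like this sketch (-m ping is a hypothetical placeholder for the module arguments elided in the question):

- name: Run Ansible against the EC2 host
  run: |
    # '-m ping' is a placeholder; substitute the real module and arguments
    ansible all -i '3.15.152.219,' --private-key $HOME/secrets/my_private_key.pem -m ping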
You can easily use webfactory/ssh-agent to add your SSH private key. See its documentation, and add the following stage before running the ansible command:
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - uses: actions/checkout@v2
      # Make sure the @v0.5.2 matches the current version of the action
      - uses: webfactory/ssh-agent@v0.5.2
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps
SSH_PRIVATE_KEY must be the key that is registered in repository secrets. After that, run your ansible command without passing the private key file.