circleci config.yml: 'Invalid template path template.yml' - devops

I am working on a CI/CD project (using a CircleCI pipeline) and am currently stuck on getting my "create_infrastructure" job to work. Below is the job:
# AWS infrastructure
create_infrastructure:
  docker:
    - image: amazon/aws-cli
  steps:
    - checkout
    - run:
        name: Ensure backend infrastructure exists
        command: |
          aws cloudformation deploy \
            --template-file template.yml \
            --stack-name my-stack
When I run the job above, it returns Invalid template path template.yml.
Where am I supposed to keep the template.yml file?
I placed it in the same location as the config.yml in the project's GitHub repository (is this right?).
Could the problem be on the line --template-file template.yml in my job? (I am a beginner here.)
Please, I need help.

I had actually misspelled the name of the template file in my GitHub repository. Everything worked after I corrected it.
That said, I don't think this error message is explicit at all; I was expecting something like 'template not found in the path specified' instead of 'Invalid template path template.yml'.
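Since aws cloudformation deploy resolves --template-file relative to the job's working directory, a quick diagnostic step can rule out path and spelling problems like this one before the deploy runs. A minimal sketch, assuming the template.yml filename from the question:

```shell
# List the checked-out files so the actual filename and spelling are
# visible in the job log, then verify the template exists before deploying.
ls -la
if [ -f template.yml ]; then
  echo "template found: template.yml"
else
  echo "template missing or misspelled" >&2
fi
```

Running this as a step just before the deploy would have surfaced the misspelling immediately in the job output.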

Related

Bitbucket auto deploy to Linux server (DigitalOcean Droplet)

I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using Bitbucket Pipelines.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks for any help in advance.
To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME, which is just that literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket. See: Load key "/root/.ssh/pipelines_id": invalid format
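Put together, a full deployment step in bitbucket-pipelines.yml could look like the sketch below. The branch name is an assumption, and SSH_KEY is omitted so the pipe falls back to the default Bitbucket-managed key mentioned above:

```yaml
# Sketch of a bitbucket-pipelines.yml deployment step.
# Branch name is an assumption; SSH_USER and SSH_HOST are repository
# variables referenced with a $ prefix.
pipelines:
  branches:
    main:
      - step:
          name: Deploy to Droplet
          script:
            - pipe: atlassian/ssh-run:latest
              variables:
                SSH_USER: $SSH_USER
                SERVER: $SSH_HOST
                COMMAND: "/app/deploy_dev01.sh"
```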

Deploying Cloud Run via YAML gives error spec.template.spec.containers should contain exactly 1 container

When deploying a Cloud Run service via a YAML file from the command line, it fails with this error.
ERROR: (gcloud.run.services.replace) spec.template.spec.containers should contain exactly 1 container
This is because the documentation for adding an environment variable is wrong, or confusing at best.
The env list belongs inside the container entry (alongside image), not directly under the containers node as it says here.
https://cloud.google.com/run/docs/configuring/environment-variables#yaml
This is correct:
- image: us-east1-docker.pkg.dev/proj/repo/image:r1
  env:
    - name: SOMETHING
      value: Xyz
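In context, a minimal complete service manifest would look like the sketch below. The service name is a made-up placeholder; the image and variable come from the answer's example. Note that env is indented inside the single container entry, so containers still holds exactly one container:

```yaml
# Minimal Cloud Run service YAML (sketch; service name is a placeholder).
# Deploy with: gcloud run services replace service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - image: us-east1-docker.pkg.dev/proj/repo/image:r1
          env:
            - name: SOMETHING
              value: Xyz
```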

CircleCI insert environment variable

I created my first pipeline yesterday and wanted to replace a placeholder in my build.gradle file with the CIRCLE_BUILD_NUM environment variable. The only method I found was writing my own sed command and executing the regex in a run statement. This worked fine to get up and running, since there was only one variable to replace; however, this method obviously won't scale down the road. Is there a CircleCI feature/orb or other method to do a more comprehensive placeholder/env-var swap throughout my project?
- run:
    name: Increment build id
    command: sed "s/_buildNum/${CIRCLE_BUILD_NUM}/g" -i build.gradle
EDIT
Looking for a utility/tool/orb/CircleCI best practice similar to what they have in Azure DevOps (Jenkins offers a similar feature as well): simply replace all placeholders in specified files with the environment variables matching the same names.
https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens
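For reference, the sed substitution from the question can be exercised outside CI like this. The file contents are a made-up minimal example, and the export stands in for the value CircleCI would provide:

```shell
# Simulate the CircleCI-provided variable locally.
export CIRCLE_BUILD_NUM=42
# A build.gradle containing the placeholder token (made-up example).
printf 'def buildNum = "_buildNum"\n' > build.gradle
# Replace every occurrence of the placeholder in place (GNU sed).
sed -i "s/_buildNum/${CIRCLE_BUILD_NUM}/g" build.gradle
cat build.gradle   # -> def buildNum = "42"
```

This makes the scaling problem concrete: each new placeholder needs its own sed expression, which is what the template-based answers below avoid.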
There is the envtpl tool, with a myriad of implementations in various languages.
It allows interpolating variables in templates with values set in environment variables.
The following command definition installs an implementation written in Rust.
commands:
  replace-vars-from-env:
    description: Replace variables in file from environment variables.
    parameters:
      filename:
        type: string
    steps:
      - run:
          name: Replace variables in build.gradle file
          command: |
            if ! [ -x /usr/local/bin/envtpl ]; then
              curl -L https://github.com/niquola/envtpl/releases/download/0.0.3/envtpl.linux > /usr/local/bin/envtpl
              chmod +x /usr/local/bin/envtpl
            fi
            mv <<parameters.filename>> <<parameters.filename>>.tpl
            cat <<parameters.filename>>.tpl | envtpl > <<parameters.filename>>
            rm <<parameters.filename>>.tpl
and use that in other commands or as a part of your jobs. For example,
executors:
  linux:
    machine:
      image: ubuntu-1604:201903-01

jobs:
  build:
    executor: linux
    steps:
      - replace-vars-from-env:
          filename: build.gradle
You could use envsubst, which provides that basically out of the box.
Depending on your primary container, you can install envsubst on top of Alpine/your distro, or use an image that already includes it, like datasailors/envsubst.
In that case, you would just need a run step like:
- run:
    name: Increment build id
    command: envsubst < build.gradle.template > build.gradle
And in your template file you can have ${CIRCLE_BUILD_NUM}, as well as many other variables, directly.
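For reference, the matching build.gradle.template would carry the literal ${...} references that envsubst expands. A made-up minimal fragment:

```groovy
// build.gradle.template (sketch): envsubst replaces ${CIRCLE_BUILD_NUM}
// with the value from the environment when producing build.gradle.
version = "1.0.${CIRCLE_BUILD_NUM}"
```

One caveat: plain envsubst replaces every ${...} reference it finds, so if the template contains Gradle's own string interpolations you would need to restrict substitution to an explicit list, e.g. envsubst '${CIRCLE_BUILD_NUM}'.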

How to both run an AWS SAM template locally and deploy it to AWS successfully (with Java Lambdas)?

I'm trying to build an AWS application using SAM (Serverless Application Model) with the Lambdas written in Java.
I was able to get it running locally by using a resource definition like this in the template:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java8
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
But to get the sam package phase to upload only the actual code (and not the whole project directory) to S3 I had to change it to this:
...
Properties:
CodeUri: HelloWorldFunction/target/HelloWorld-1.0.jar
...
as documented in the AWS SAM example project README.
However, this breaks the ability to run the application locally with sam build followed by sam local start-api.
I tried to get around this by giving the CodeUri value as a parameter (with --parameter-overrides) and this works locally but breaks the packaging phase because of a known issue with the SAM translator.
Is there a way to make both the local build and the real AWS deployment work, preferably with the same template file?
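For context, the --parameter-overrides variant mentioned above (which works with sam local but trips the translator issue during packaging) looks roughly like the sketch below. The parameter name is a made-up example:

```yaml
# Sketch of the parameterized-CodeUri approach from the question.
# sam package only rewrites literal local paths, so a Ref here is
# what breaks the packaging phase.
Parameters:
  CodeUriParameter:
    Type: String
    Default: HelloWorldFunction/target/HelloWorld-1.0.jar
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: !Ref CodeUriParameter
      Handler: helloworld.App::handleRequest
      Runtime: java8
```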
The only workaround I've come up with myself so far is to use different template files for local development and for actual packaging and deployment.
To avoid maintaining two almost identical template files, I wrote a script for running the service locally:
#!/bin/bash
echo "Copying template..."
sed 's/CodeUri: .*/CodeUri: HelloWorldFunction/' template.yaml > template-local.yaml
echo "Building..."
if sam build -t template-local.yaml
then
echo "Serving local API..."
sam local start-api
else
echo "Build failed, not running service."
fi
This feels less than optimal but does the trick. Would love to hear better alternatives, still.
Another idea that came to mind was extending a shared base template with separate CodeUri values for each case, but I don't think SAM templates support anything like that.

How to copy a file or jar file that has built from jenkins to a diff host server

I have a Jenkins job that builds a jar file. After the build is done, I need to copy that jar file to a different server and deploy it there.
I am trying the yml file below to achieve this, but it looks for the file on the remote server rather than on the Jenkins server.
---
# ansible_ssh_private_key_file: "{{ inventory_dir }}/private_key"
- hosts: host
  remote_user: xuser
  tasks:
    - service: name=nginx state=started
      become: yes
      become_method: sudo
    - name: test a shell script
      command: sh /home/user/test.sh
    - name: copy files
      synchronize:
        src: /var/jenkins_home/hadoop_id_rsa
        dest: /home/user/
Could you please suggest another way, or what the approach should be, to copy a build artifact to the server and deploy it using Jenkins?
Thanks.
As far as I know, you can use the Publish Over SSH plugin in Jenkins. I am not entirely clear on your problem, but hopefully this helps. Plugin details: https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin. If it doesn't help, please comment and be more specific (a screenshot if possible).
Use a remote ssh script in the build step; no plugin is required:
scp -P 22 Desktop/url.txt user@192.168.1.50:~/Desktop/url.txt
Set up passwordless authentication; use the link below for help:
https://www.howtogeek.com/66776/how-to-remotely-copy-files-over-ssh-without-entering-your-password/
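If you stay with Ansible, the key point is that the playbook runs on the Jenkins server, so src in a copy task is a path local to Jenkins and the file is pushed to the remote host. A sketch, where the job name, jar path, and destination are all hypothetical placeholders:

```yaml
# Sketch: executed from the Jenkins server (the Ansible control node),
# so src is a local Jenkins path and dest is on the remote host.
# Workspace path, jar name, and dest are made-up examples.
- hosts: host
  remote_user: xuser
  tasks:
    - name: Copy the built jar to the deployment server
      copy:
        src: /var/jenkins_home/workspace/myjob/target/app.jar
        dest: /home/user/app.jar
```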
