Pass cut string into environment variable? - CircleCI

I'm trying to cut my git tag and pass the result as an environment variable, so other parts of my CircleCI build know which directory to look at. How do I go about doing this?
run-terraform:
  docker:
    - image: cimg/python:3.8.6-node
  environment:
    DEPLOY_STAGE: <<pipeline.git.tag>> | cut -f1 -d'/'
some-step:
  steps:
    - run:
        name: "Do thing."
        command: |
          do thing --arg $DEPLOY_STAGE-rest-of-string
Is this possible?

I'm pretty sure this is not possible: the environment map is static configuration, and shell commands such as cut are not evaluated there.
What you could do instead is implement a "setup workflow" that sets the value of a pipeline parameter declared in the "continuation" configuration.
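Within a single job, a common workaround is to compute the value at run time and append an export to $BASH_ENV, which CircleCI sources before each subsequent run step. A minimal sketch of the cut itself, using a hypothetical tag value (in a real job, $CIRCLE_TAG would be provided by CircleCI):

```shell
# Hard-coded stand-in for the tag CircleCI would supply as $CIRCLE_TAG.
CIRCLE_TAG="staging/v1.2.3"
DEPLOY_STAGE=$(echo "$CIRCLE_TAG" | cut -f1 -d'/')
echo "$DEPLOY_STAGE"   # staging
```

In a run step you would write `echo "export DEPLOY_STAGE=$DEPLOY_STAGE" >> "$BASH_ENV"` instead of echoing it, so later steps in the same job see the variable.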


Cloud Build failure, unable to find logs to see what is going on

I am kicking off a Dataflow Flex Template job using Cloud Build. In my Cloud Build file I am attempting to do three things:
build an image
publish it
run a flex template job using that image
This is my YAML file:
substitutions:
  _IMAGE: my_logic:latest4
  _JOB_NAME: 'pipelinerunner'
  _TEMP_LOCATION: ''
  _REGION: us-central1
  _FMPKEY: ''
  _PYTHON_VERSION: '3.8'
# checkout this link https://github.com/davidcavazos/python-docs-samples/blob/master/dataflow/gpu-workers/cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      [ 'build'
      , '--build-arg=python_version=$_PYTHON_VERSION'
      , '--tag=gcr.io/$PROJECT_ID/$_IMAGE'
      , '.'
      ]
  # Push the image to Container Registry.
  - name: gcr.io/cloud-builders/docker2
    args: [ 'push', 'gcr.io/$PROJECT_ID/$_IMAGE' ]
  - name: gcr.io/$PROJECT_ID/$_IMAGE
    entrypoint: python
    args:
      - /dataflow/template/main.py
      - --runner=DataflowRunner
      - --project=$PROJECT_ID
      - --region=$_REGION
      - --job_name=$_JOB_NAME
      - --temp_location=$_TEMP_LOCATION
      - --sdk_container_image=gcr.io/$PROJECT_ID/$_IMAGE
      - --disk_size_gb=50
      - --year=2018
      - --quarter=QTR1
      - --fmpkey=$_FMPKEY
      - --setup_file=/dataflow/template/setup.py
options:
  logging: CLOUD_LOGGING_ONLY
# Use the Compute Engine default service account to launch the job.
serviceAccount: projects/$PROJECT_ID/serviceAccounts/$PROJECT_NUMBER-compute@developer.gserviceaccount.com
And this is the command I am launching:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
The error message I am getting is this:
Logs are available at [https://console.cloud.google.com/cloud-build/builds/0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7?project=111111111].
ERROR: (gcloud.beta.builds.submit) build 0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7 completed with status "FAILURE"
but I cannot access the logs from the URL mentioned above.
I cannot see the logs, so I am unable to tell what is wrong, but I strongly suspect something in my run.yaml is not quite right.
Note: before this, I was building the image myself by launching this command:
gcloud builds submit --project=$PROJECT_ID --tag $TEMPLATE_IMAGE .
and my run.yaml contained just one step, the last one, and everything worked fine.
But I am trying to see if I can do everything in the YAML file.
Could anyone advise on what might be incorrect? I don't have much experience with YAML files for Cloud Build.
Thanks and regards,
Marco
I guess the pipeline does not work because, in the second step, the container gcr.io/cloud-builders/docker2 does not exist (check https://gcr.io/cloud-builders/ - there is a docker builder, but not a docker2 one).
This second step pushes the final container to the registry, and it is a dependency of the third step, which will therefore fail too.
You can build the container and push it to the container registry in just one step:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE_NAME', '<path_to_docker-file>']
images: ['gcr.io/$PROJECT_ID/$IMAGE_NAME']
OK, sorted: the problem was the way I was launching the build command.
This is the original:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
Apparently, when I removed --no-source everything worked fine.
I think I copied and pasted the command without really understanding it.
Regards
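For reference, a sketch of the submit command with --no-source dropped (so the build has a source context) and both substitutions passed in one comma-separated flag, which is gcloud's documented form for multiple substitutions; the flag names and variables are the ones from the question:

```shell
gcloud beta builds submit . \
  --config run.yaml \
  --substitutions _REGION=$REGION,_FMPKEY=$FMPKEY
```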

CircleCI: Skip entire workflow

Basically, I'm trying to skip the build if it's not a pull request or a certain branch. However, I don't seem to be able to skip a job or part of the workflow when that check fails. So far the problem is that circleci step halt does nothing in my pipelines. Example config here:
version: 2.1
orbs:
  hello: circleci/hello-build@0.0.5
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          command: |
            if [[ $(echo "$CIRCLE_PULL_REQUEST $CIRCLE_PULL_REQUESTS" | grep -c "pull") -gt 0 ]]; then
              echo "Do stuff if it's a PR"
            else
              echo "Not a PR, Skipping."
              circleci step halt # does nothing
              circleci-agent step halt # does nothing
              exit 0
            fi
workflows:
  "Hello Workflow":
    jobs:
      - hello/hello-build:
          requires:
            - build
          filters:
            branches:
              only:
                - testing
                - /^(?!pull\/).*$/
            tags:
              only:
                - /^pull\/.*$/
      - build:
          filters:
            branches:
              only:
                - testing
                - /^(?!pull\/).*$/
            tags:
              only:
                - /^pull\/.*$/
This does not fail, and it works on pull requests, but hello/hello-build is executed anyway despite the circleci step halt commands.
Any help would be appreciated, thanks!
After creating a thread in their forums, this is what worked: https://discuss.circleci.com/t/does-circleci-step-halt-works-with-version-2-1/36674/4
Go to Account Settings -> Personal API Tokens -> New Token. Once you have the token, go to the project, create a new environment variable named something like CIRCLE_TOKEN, and save the token there.
Then in config.yml you can run something like this to cancel the current workflow:
curl -X POST "https://circleci.com/api/v2/workflow/${CIRCLE_WORKFLOW_ID}/cancel" -H "Accept: application/json" -u "${CIRCLE_TOKEN}:"
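As a run step in config.yml, that call might be sketched like this (assuming CIRCLE_TOKEN is set as a project environment variable; the PR check shown is just one possible condition):

```yaml
- run:
    name: Cancel workflow unless this is a PR
    command: |
      if [ -z "$CIRCLE_PULL_REQUEST" ]; then
        curl -X POST "https://circleci.com/api/v2/workflow/${CIRCLE_WORKFLOW_ID}/cancel" \
          -H "Accept: application/json" \
          -u "${CIRCLE_TOKEN}:"
      fi
```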

How to set the variable values in ansible based on user entry on jenkins

Here is my exact requirement.
In jenkins:
User will choose the env parameters: TEST OR DEV
In ansible playbook:
I need to run the sed command based on the ENV type being selected.
- vars:
    environ1: "TEST"
    environ2: "PROD"
- command: sed -i "s/test.abc.com/"{{SITE_URL}}"/g" /home/ubuntu/mysql.sql
  when: '{{backup_from}}' == '{{environ1}}'
- command: sed -i "s/abc.com/"{{SITE_URL}}"/g" /home/ubuntu/mysql.sql
  when: '{{backup_from}}' == '{{environ2}}'
ERROR:
 when: '{{backup_from}}' == '{{environ}}'
      ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"

Connection to 18.221.160.190 closed.
Build step 'Execute shell' marked build as failure
Finished: FAILURE
You have multiple problems with quotes and indentation:
- if you start a YAML value with a quote, you must quote the whole value (this is the reason for the error)
- when should be at the task indentation level, not inside the command
- the argument of when is already a Jinja2 expression, so there is no need for braces inside (it works, but with a warning)
- you cannot nest double quotes inside the double-quoted argument of command
- there is no need for quotes around Jinja2 expressions {{ ... }} except when the whole YAML value starts with a brace
tasks:
  - command: sed -i "s/test.abc.com/{{SITE_URL}}/g" /home/ubuntu/mysql.sql
    when: backup_from == environ1
  - command: sed -i "s/abc.com/{{SITE_URL}}/g" /home/ubuntu/mysql.sql
    when: backup_from == environ2
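To see what the winning task actually executes, here is the substitution run outside Ansible, with SITE_URL assumed to render to dev.abc.com (the -i flag is dropped so the result goes to stdout instead of modifying a file):

```shell
# The rendered command for backup_from == "TEST":
echo "host: test.abc.com" | sed "s/test.abc.com/dev.abc.com/g"   # host: dev.abc.com
```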
You might also consider writing this as a single task (if there are only two environment values possible):
tasks:
  - command: sed -i "{{ expression }}" /home/ubuntu/mysql.sql
    vars:
      expression: "{{ (backup_from == 'TEST') | ternary('s/test.abc.com/' ~ SITE_URL ~ '/g', 's/abc.com/' ~ SITE_URL ~ '/g') }}"
I think the nested double quotes around the Jinja2 expression are the problem; try without them:
- vars:
    environ1: "TEST"
    environ2: "PROD"
- command: sed -i "s/test.abc.com/{{SITE_URL}}/g" /home/ubuntu/mysql.sql
  when: backup_from == environ1
- command: sed -i "s/abc.com/{{SITE_URL}}/g" /home/ubuntu/mysql.sql
  when: backup_from == environ2

Declare env variable whose value includes a space for docker/docker-compose

I have an environment variable defined in a file passed in via --env-file, like this:
TEST_VAR=The value
Does anybody know whether this is legal? Should I place " around the value for it to be interpreted as needed in Docker?
Thanks
EDIT: Quotation marks are not a good solution, as they would become part of the value; see the reference here.
Let's see the result of running the following compose file:
version: "3"
services:
  service:
    image: alpine
    command: env
    env_file: env.conf
env.conf:
TEST_VAR1=The value
TEST_VAR2="The value2"
Result of docker-compose up:
service_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
service_1 | TEST_VAR2="The value2"
service_1 | TEST_VAR1=The value
service_1 | HOME=/root
Therefore, it is legal to have spaces in the env value.
You can escape the space with a \:
TEST_VAR=The\ value
Edit: This is how I pass them when starting the container (i.e. docker run -e TEST_VAR=The\ value hello-world). If you're using docker-compose or an env file, see the answer by @yamenk.
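Both spellings hand docker a single argument with the space intact; the shell resolves them identically before docker ever sees the value, as this sketch shows (printf stands in for docker run -e):

```shell
# One argument each time, space included:
printf '%s\n' TEST_VAR=The\ value
printf '%s\n' "TEST_VAR=The value"
```

Both lines print TEST_VAR=The value, so escaping and quoting are interchangeable here.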
In a Dockerfile, use double quotes; do not use single quotes, because they do not expand variables inside. Excerpt from passing build args/envs to a Dockerfile and on into a Python script below:
ARG HOST="welfare-dev testapi"
ENV HOST "${HOST}"
ARG SITENAME="Institusjon"
ENV SITENAME "${SITENAME}"
RUN cd ${TESTDIR}/sensiotools/sensiotools && cd test && \
./testapi-events.py --activate --sitename="${SITENAME}" --host="${HOST}" --dbcheck --debug --wait=0.5 && \
./testapi-events.py --deactivate --sitename="${SITENAME}" --host="${HOST}" --dbcheck --debug
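The double- versus single-quote behaviour the excerpt relies on can be checked in any POSIX shell (HOST here is just an illustrative value):

```shell
HOST="welfare-dev testapi"
echo "host=$HOST"    # expands: host=welfare-dev testapi
echo 'host=$HOST'    # stays literal: host=$HOST
```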
Here is my case with docker-compose, in case it helps; I couldn't make use of the suggestions in the other answers.
For a variable in volumes, I could use the .env file:
# .env
LOCAL_DIR=/local/path
while for a variable with spaces (for https://github.com/wolfcw/libfaketime in my case) I had to set it on the command line: FAKETIME_ARG="@2021-02-11 13:23:02" docker-compose up.
The resulting docker-compose file (note the ${} used only for LOCAL_DIR):
# docker-compose.yml
services:
  myservice:
    build:
      context: ./path/to/dir/of/Dockerfile
      args:
        - FAKETIME_ARG
    volumes:
      - ${LOCAL_DIR}:/path/in/container

How to use `echo` in a command in docker-compose.yml to handle a colon (":") sign?

Here is my docker-compose.yml:
elasticsearch:
  ports:
    - 9200:9200/tcp
  image: elasticsearch:2.4
  volumes:
    - /data/elasticsearch/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
  command: /bin/bash -c “echo 'http.cors.enabled: true' > /usr/share/elasticsearch/config/elasticsearch.yml"
It throws the error:
Activating (yaml: [] mapping values are not allowed in this context at line 7, column 49
It looks as if I cannot use the colon sign : in command; is this true?
The colon is how YAML introduces a mapping. If you have a colon in a value, you just need to quote the value, for example like this:
image: "elasticsearch:2.4"
Or by using one of the block scalar operators, like this:
command: >
  /bin/bash -c "echo 'http.cors.enabled: true' > /usr/share/elasticsearch/config/elasticsearch.yml"
For more information, take a look at the YAML page on Wikipedia. You can always use something like this online YAML parser to test out your YAML syntax.
Properly formatted, your first document should look something like:
elasticsearch:
  ports:
    - 9200:9200/tcp
  image: "elasticsearch:2.4"
  volumes:
    - /data/elasticsearch/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
  command: >
    /bin/bash -c "echo 'http.cors.enabled: true' > /usr/share/elasticsearch/config/elasticsearch.yml"
(The indentation of the list markers (-) from the key isn't strictly necessary, but I find that it helps make things easier to read)
A docker container can only run a single command. If you want to run multiple commands, put them in a shell script and copy that into the image.
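That single-command constraint can be sketched outside Docker: several steps chained through one sh -c invocation, with echo standing in for the real work:

```shell
# One command from the container's point of view, two steps inside it:
sh -c "echo 'step one' && echo 'step two'"
```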
