How to put Databricks credentials in environment.yml

I am looking for a way to put CLIENT_ID and CLIENT_SECRET in an environment.yml file.
Something like this:
environment.yml
name: env-name
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.7
  - codecov
variables:
  CLIENT_ID: dbutils.secrets.get(scope="a", key="SERVICE-PRINICIPAL-CLIENT-ID")
  CLIENT_SECRET: dbutils.secrets.get(scope="a", key="SERVICE-PRINICIPAL-CLIENT-SECRET")
  TENANT_ID: "abc"
But this is not working and my code is not able to use these variables.
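Note that conda parses the variables: section as plain YAML, so only literal strings are supported there; a dbutils.secrets.get(...) call is stored as text and never evaluated. Below is a minimal sketch of what the file itself can hold, assuming the secret values are injected outside the YAML (for example via Databricks secret references of the form {{secrets/<scope>/<key>}} in the cluster's environment-variable settings, if that feature is available in your workspace):

name: env-name
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.7
  - codecov
variables:
  # only literal values work here
  TENANT_ID: "abc"
  # CLIENT_ID / CLIENT_SECRET must be supplied at runtime, e.g. as cluster
  # environment variables:
  #   CLIENT_ID={{secrets/a/SERVICE-PRINICIPAL-CLIENT-ID}}
  #   CLIENT_SECRET={{secrets/a/SERVICE-PRINICIPAL-CLIENT-SECRET}}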

Related

Unable to give correct URI for postgresql database in yml file for GitHub Actions

I am implementing GitHub Actions for the first time and I am unable to pass the tests for my Ruby on Rails project. The reason is that I am unable to provide the correct URI for the PostgreSQL server.
When I push the code to GitHub via git push, a test run begins as expected, but it fails with this error:
URI::InvalidURIError: bad URI(is not URI?): postgres:***:***#localhost:5432/github_actions_test
I have a username and password set in my GitHub secrets.
I have provided a constant to hold the value of the URI in my yml file as:
DATABASE_URL = "postgres://username:password#localhost:5432/github_actions_test"
Now replacing username & password with mine, since my DB credentials are already available in secrets:
DATABASE_URL = "postgres://${{secrets.DATABASE_USERNAME}}:${{secrets.DATABASE_PASSWORD}}#localhost:5432/github_actions_test"
This doesn't work and gives the same error as above. Then I tried this:
DATABASE_URL = "postgres://<%=secrets.DATABASE_USERNAME%>:<%=secrets.DATABASE_PASSWORD%>#localhost:5432/github_actions_test"
This also resulted in the same error.
Then, to test whether the DATABASE_URL really works, I explicitly put the DB credentials in the URI. I know it's not recommended, but I had to in order to test things out, and that also resulted in the same error.
I also tried saving the DB URL in my secrets and then referencing it in my yml file, as I did with my DB credentials, but that didn't work either.
So my question is: how do I make the PostgreSQL URI work in GitHub Actions? Is there some other way to set it up, or maybe my string interpolation is wrong? Or maybe I am doing something else wrong?
Here's the code inside of my yml file:
# This workflow uses actions that are not certified by GitHub. They are
# provided by a third-party and are governed by separate terms of service,
# privacy policy, and support documentation.
#
# This workflow will install a prebuilt Ruby version, install dependencies, and
# run tests and linters.
name: "Ruby on Rails CI"
on:
  push:
    branches: ["rspec"]
  pull_request:
    branches: ["rspec"]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:11-alpine
        ports:
          - "5432:5432"
        env:
          POSTGRES_DB: github_actions_test
          POSTGRES_USER: ${{ secrets.DATABASE_USERNAME }}
          POSTGRES_PASSWORD: ${{ secrets.DATABASE_PASSWORD }}
    env:
      RAILS_ENV: test
      DATABASE_URL: "postgres://username:password#localhost:5432/github_actions_test"
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      # Add or replace dependency steps here
      - name: Install Ruby and gems
        uses: ruby/setup-ruby@0a29871fe2b0200a17a4497bae54fe5df0d973aa # v1.115.3
        with:
          bundler-cache: true
          ruby-version: 3.1.2
      # Add or replace database setup steps here
      - name: Setup Ruby
        run: sudo apt-get -yqq install libpq-dev
      - name: Setup bundler and gems
        run: |
          gem install bundler
          bundle install --jobs 4 --retry 3
      - name: Set up database schema
        run: bin/rails db:prepare
      # Add or replace test runners here
      - name: Run tests
        run: bundle/exec rspec
  # lint:
  #   runs-on: ubuntu-latest
  #   steps:
  #     - name: Checkout code
  #       uses: actions/checkout@v3
  #     - name: Install Ruby and gems
  #       uses: ruby/setup-ruby@0a29871fe2b0200a17a4497bae54fe5df0d973aa # v1.115.3
  #       with:
  #         bundler-cache: true
  #     # Add or replace any other lints here
  #     - name: Security audit dependencies
  #       run: bin/bundler-audit --update
  #     - name: Security audit application code
  #       run: bin/brakeman -q -w2
  #     - name: Lint Ruby files
  #       run: bin/rubocop --parallel
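For reference, below is a minimal sketch of the job-level env block with the secrets interpolated directly, assuming the same secret names used elsewhere in the workflow. ${{ secrets.* }} expressions are expanded inside workflow env values, and in a URI the credentials are separated from the host by @. Whether this resolves the error also depends on the secret values themselves (characters such as @, : or / in a password would still need URL-encoding), so treat it as a sketch rather than a confirmed fix.

    env:
      RAILS_ENV: test
      # assumes DATABASE_USERNAME / DATABASE_PASSWORD hold plain, URL-safe values
      DATABASE_URL: "postgres://${{ secrets.DATABASE_USERNAME }}:${{ secrets.DATABASE_PASSWORD }}@localhost:5432/github_actions_test"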

AWS EKS Bitbucket pipeline permission error

image: atlassian/default-image:3
pipelines:
  tags:
    ecr-release-*:
      - step:
          services:
            - docker
          script:
            - apt update -y
            - apt install python3-pip -y
            - pip3 --version
            - pip3 install awscli
            - aws configure set aws_access_key_id "AKIA6J47DSdaUIAZH46DKDDID6UH"
            - aws configure set aws_secret_access_key "2dWgDxx5i7Jre0aZJ+tQ3oDve5biYk0ZMDKKASA7554QoJSJSJS"
            - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
            - chmod +x ./kubectl
            - mv ./kubectl /usr/local/bin/kubectl
            - aws eks update-kubeconfig --name build_web --region us-west-2
            - kubectl apply -f eks/aws-auth.yaml
            - kubectl apply -f eks/deployment.yaml
            - kubectl apply -f eks/service.yaml
definitions:
  services:
    docker:
      memory: 3072
Here is my bitbucket-pipelines.yml.
When I run the Bitbucket pipeline I get the error shown in the screenshot below.
I think I have already added the AWS access credentials.
Please take a look.
You need to create a service account and give it permissions; you also need a certificate to connect to the Kubernetes API server.
Here is a nice explanation with all the details, which might be helpful: https://medium.com/codeops/continuous-deployment-with-bitbucket-pipelines-ecr-and-aws-eks-791a30b7c84b
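As a rough sketch, the eks/aws-auth.yaml applied by the pipeline would need to map the pipeline's IAM identity into the cluster along these lines (the account ID, user name and group below are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    # hypothetical IAM user whose access keys the pipeline configures
    - userarn: arn:aws:iam::<account-id>:user/<pipeline-user>
      username: <pipeline-user>
      groups:
        - system:masters

Note that applying aws-auth.yaml itself already requires access to the cluster, so the first apply typically has to be done with the IAM identity that created the cluster.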
The problem was resolved by changing the kubeconfig file. You need to specify the profile to use. By default, update-kubeconfig creates the authentication credentials and puts something like this inside the file:
- name: arn:aws:eks:{region}:{account-id}:cluster/{cluster-name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - --region
        - {region}
        - eks
        - get-token
        - --cluster-name
        - {cluster-name}
      command: aws
      env:
        - name: AWS_PROFILE
          value: {profile}
      interactiveMode: IfAvailable
      provideClusterInfo: false
For some reason the AWS CLI is not picking up the AWS_PROFILE environment variable value, so in this case I solved it by manually updating the kubeconfig and specifying --profile in the aws command section:
- name: arn:aws:eks:{region}:{account-id}:cluster/{cluster-name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - --region
        - {region}
        - eks
        - get-token
        - --profile
        - {profile}
        - --cluster-name
        - {cluster-name}
      command: aws
      # env:
      #   - name: AWS_PROFILE
      #     value: {profile}
      interactiveMode: IfAvailable
      provideClusterInfo: false

Difference between Kubernetes Service Account Tokens from secret and projected volume

When I do kubectl get secret my-sa-token-lr928 -o yaml, there is a base64 string (JWT A) as the value of data.token. There are other fields too, like ca.crt, in the returned secret.
When I use a projected volume with source serviceAccountToken and read the file, there is another, non-base64-encoded string (JWT B).
cat /var/run/secrets/some.directory/serviceaccount/token
Why are the JWT A and JWT B strings different? The most notable difference is the iss claim: in JWT B it is my issuer URL (--service-account-issuer), while in JWT A it is kubernetes/serviceaccount.
Aren't they both JWT service account tokens? If not, what Kubernetes API objects do they actually represent?
The following is my Kubernetes Pod spec (edited for brevity):
apiVersion: v1
kind: Pod
metadata:
  annotations:
  labels:
    app: sample-app
  name: sample-pod-gwrcf
spec:
  containers:
    - image: someImage
      name: sample-app-container
      resources: {}
      volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: my-sa-token-lr928
          readOnly: true
        - mountPath: /var/run/secrets/some.directory/serviceaccount
          name: good-token
          readOnly: true
  serviceAccount: my-sa
  serviceAccountName: my-sa
  terminationGracePeriodSeconds: 30
  volumes:
    - name: good-token
      projected:
        defaultMode: 420
        sources:
          - serviceAccountToken:
              audience: my.audience.com
              expirationSeconds: 86400
              path: token
    - name: my-sa-token-lr928
      secret:
        defaultMode: 420
        secretName: my-sa-token-lr928
Aren't they both JWT service account tokens?
Yes, they are both JWT tokens.
The one you mentioned as JWT A in my-sa-token-lr928 is base64 encoded, as is all data in every Kubernetes secret.
When k8s mounts secret data into a pod, the data is decoded and stored, e.g. as a token file in this case.
The JWT B token, obtained via Service Account Token Volume Projection, is issued by the kubelet and allows more flexibility, for example setting an expiration time, in contrast to regular service account tokens, which once issued stay the same (unless recreated) and do not expire.
If you exec into your pod and look at the content of these tokens, what you will see are actual JWT tokens. You can decode them using any JWT decoder, e.g. jwt.io.
Why are the JWT A and JWT B strings different?
Because they contain different data.
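For illustration only, with placeholder values, the decoded payloads differ roughly like this: the legacy secret-based token identifies the secret and never expires, while the projected token carries the configured issuer, an audience and an expiry.

# JWT A -- legacy secret-based token (claims sketch, placeholder values)
iss: kubernetes/serviceaccount
sub: system:serviceaccount:default:my-sa
kubernetes.io/serviceaccount/namespace: default
kubernetes.io/serviceaccount/secret.name: my-sa-token-lr928
kubernetes.io/serviceaccount/service-account.name: my-sa
# no exp claim -- the token does not expire

# JWT B -- projected token (claims sketch, placeholder values)
iss: https://my.issuer.example.com   # value of --service-account-issuer
sub: system:serviceaccount:default:my-sa
aud: ["my.audience.com"]
exp: 1700086400                      # iat + expirationSeconds
iat: 1700000000
kubernetes.io:
  namespace: default
  serviceaccount:
    name: my-sa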

aws serverless.yml file "A valid option to satisfy the declaration 'opt:stage' could not be found" error

I am getting the warning below when trying to run Serverless.
Serverless Warning --------------------------------------------
A valid option to satisfy the declaration 'opt:stage' could not be found.
Below is my serverless.yml file
# Serverless Config
service: api-service
# Provider
provider:
  name: aws
  runtime: nodejs8.10
  region: ${opt:region, 'ap-east-1'}
  stage: ${opt:stage, 'dev'}
  # Enviroment Varibles
  environment:
    STAGE: ${self:custom.myStage}
    MONGO_DB_URI: ${file(./serverless.env.yml):${opt:stage}.MONGO_DB_URI}
    LAMBDA_ONLINE: ${file(./serverless.env.yml):${opt:stage}.LAMBDA_ONLINE}
# Constants Varibles
custom:
  # environments Variables used for convert string in upper case format
  environments:
    myStage: ${opt:stage, self:provider.stage}
    stages:
      - dev
      - qa
      - staging
      - production
    region:
      dev: 'ap-east-1'
      stage: 'ap-east-1'
      production: 'ap-east-1'
# Function
functions:
  testFunc:
    handler: index.handler
    description: ${opt:stage} API's
    events:
      - http:
          method: any
          path: /{proxy+}
          cors:
            origin: '*'
#package
package:
  exclude:
    - .env
    - node_modules/aws-sdk/**
    - node_modules/**
In the description of testFunc you're using ${opt:stage}. If you use that directly, you need to pass the --stage flag when you run the deploy command.
What you should do there is use ${self:provider.stage}, because there the stage has already been resolved.
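For example, the function description could reference the resolved stage instead (same function as above, only the description changed):

functions:
  testFunc:
    handler: index.handler
    # ${self:provider.stage} falls back to the provider default when --stage is omitted
    description: ${self:provider.stage} API's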
I would suggest the implementation below:
provider:
  name: aws
  runtime: nodejs8.10
  region: ${opt:region, self:custom.environments.region.${self:custom.environments.myStage}}
  stage: ${opt:stage, self:custom.environments.myStage}
  # Enviroment Varibles
  environment:
    STAGE: ${self:custom.myStage}
    MONGO_DB_URI: ${file(./serverless.env.yml):${self:provider.stage}.MONGO_DB_URI}
    LAMBDA_ONLINE: ${file(./serverless.env.yml):${self:provider.stage}.LAMBDA_ONLINE}
# Constants Varibles
custom:
  # environments Variables used for convert string in upper case format
  environments:
    # set the default stage if not specified
    myStage: dev
    stages:
      - dev
      - qa
      - staging
      - production
    region:
      dev: 'ap-east-1'
      stage: 'ap-east-1'
      production: 'ap-east-1'
Basically, if the stage and region are not specified on the command line, the defaults are used; otherwise the command-line values take precedence.
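For example, serverless deploy --stage qa --region ap-east-1 overrides both values, while a plain serverless deploy falls back to the dev stage and its ap-east-1 region.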

Fixing 'invalid reference format' error in docker-image-resource put

I am currently trying to build and push Docker images; the issue is that I'm receiving this message from Concourse during the wordpress-release put step:
waiting for docker to come up...
invalid reference format
Here's the important bit of the Concourse Pipeline.
- name: wordpress-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/wordpress-release
    aws_access_key_id: #############
    aws_secret_access_key: #############
- name: mysql-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/mysql-release
    aws_access_key_id: #############
    aws_secret_access_key: #############
jobs:
  - name: job-hello-world
    plan:
      - get: wordpress-website
      - task: write-release-tag
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: alpine/git }
          inputs:
            - name: wordpress-website
          outputs:
            - name: tags
          run:
            dir: wordpress-website
            path: sh
            args:
              - -exc
              - |
                printf $(basename $(git remote get-url origin) | sed 's/\.[^.]*$/-/')$(git tag --points-at HEAD) > ../tags/release-tag
      - put: wordpress-release
        params:
          build: ./wordpress-website/.
          dockerfile: wordpress-website/shared-wordpress-images/wordpress/wordpress-release/Dockerfile
          tag_file: tags/release-tag
      - put: mysql-release
        params:
          build: ./wordpress-website/
          dockerfile: wordpress-website/shared-wordpress-images/db/mysql-release/Dockerfile
          tag_file: tags/release-tag
Those images contain FROM #############.dkr.ecr.eu-west-1.amazonaws.com/shared-mysql (and shared-wordpress); could this be an issue?
The tag_file: tags/release-tag doesn't seem to be the issue, as this still happens even without it.
This is Concourse 5.0 running on top of Docker in Windows 10.
Any thoughts?
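A minimal debugging sketch, assuming the failure comes from the tag that ends up in the image reference: Docker reports "invalid reference format" when the repository:tag string is malformed, so echoing the computed tag inside the write-release-tag task shows whether it is empty or contains stray whitespace before the put runs:

      - task: write-release-tag
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: alpine/git }
          inputs:
            - name: wordpress-website
          outputs:
            - name: tags
          run:
            dir: wordpress-website
            path: sh
            args:
              - -exc
              - |
                # build the tag in a variable so the build log shows exactly what is produced
                tag="$(basename "$(git remote get-url origin)" | sed 's/\.[^.]*$/-/')$(git tag --points-at HEAD)"
                echo "computed release tag: '${tag}'"
                printf '%s' "${tag}" > ../tags/release-tag

If the echoed value is empty or spans multiple tags, the resulting repository:tag reference would be invalid on its own.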
