I am not able to pass ${environment} in the vault secret path for reading the values.
Maybe the secret is getting initialized before the variables are set.
Kindly help, as I'm not able to read environment-specific values from the same Vault repo.
It worked pretty nicely for me using a choice parameter in a parameterized build. I think your issue is in the Vault path you used (vault/secret/$environment); the correct path in your case is probably just "secret/$environment". Does your secret engine really start with "vault"?
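A minimal sketch of that setup, assuming the HashiCorp Vault plugin's withVault step and a KV engine mounted at "secret/" (the choice values, environment variable names and secret keys below are just placeholders):

pipeline {
    agent any
    parameters {
        choice(name: 'environment', choices: ['dev', 'qa', 'prod'], description: 'Target environment')
    }
    stages {
        stage('Read Vault secrets') {
            steps {
                script {
                    // The path is built at run time from the chosen parameter,
                    // so it is resolved after the parameter has been set.
                    def secrets = [
                        [path: "secret/${params.environment}", secretValues: [
                            [envVar: 'DB_USER', vaultKey: 'user'],
                            [envVar: 'DB_PASSWORD', vaultKey: 'password']]]
                    ]
                    withVault([vaultSecrets: secrets]) {
                        sh 'echo "connecting as $DB_USER"'
                    }
                }
            }
        }
    }
}

If the secret path is defined somewhere that is evaluated before the build parameters are injected (for example outside the pipeline steps), that could explain the behaviour you describe.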
Just FYI, if you define the variable in "Jenkins > Manage Jenkins > Configure System > Environment variables" it'll work too.
I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using BitBucket Pipeline.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks for any help in advance.
To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME, as the latter is just that literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket, see Load key "/root/.ssh/pipelines_id": invalid format.
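If you go with the default key, a minimal step could look roughly like this (the branch name, step name, deployment environment and script path are placeholders; the key itself is configured under Repository settings > SSH keys instead of being passed as a variable):

pipelines:
  branches:
    develop:
      - step:
          name: Deploy to Droplet
          deployment: test
          script:
            # No SSH_KEY variable: the repository's default pipelines key is used.
            - pipe: atlassian/ssh-run:latest
              variables:
                SSH_USER: $SSH_USER
                SERVER: $SSH_HOST
                COMMAND: "/app/deploy_dev01.sh"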
I would like to run a Nextflow pipeline through a Docker container. As part of the pipeline I would like to push to and pull from AWS. To this end, I need to pass AWS credentials to the container, but I do not want to write them into the image.
Nextflow has an option to pass environment variables into the Docker scope via the envWhitelist option, however I have not been able to find an example of the correct syntax for doing this.
I have tried the following syntax and get an access denied error, suggesting that I am not passing in the variables properly.
docker {
    enabled = true
    envWhitelist = "AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID"
}
I explicitly passed these variables into my environment and I can see them using printenv.
Does this syntax seem correct? Thanks for any help!
Usually you can just keep your AWS security credentials in a file called ~/.aws/credentials:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not defined in the
environment, Nextflow will attempt to retrieve credentials from your
~/.aws/credentials or ~/.aws/config files.
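For reference, that file uses the standard AWS CLI INI format, for example:

[default]
aws_access_key_id = <YOUR S3 ACCESS KEY>
aws_secret_access_key = <YOUR S3 SECRET KEY>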
Alternatively, you can declare your AWS credentials in your nextflow.config (or in a separate config profile) using the aws scope:
aws {
    accessKey = '<YOUR S3 ACCESS KEY>'
    secretKey = '<YOUR S3 SECRET KEY>'
    region = '<REGION IDENTIFIER>'
}
You could also use an IAM Instance Role to provide your credentials.
Problem
Some library I use requires the case sensitive environment variable QXToken.
When I create a codespaces secret the environment variable is only available in uppercase (QXTOKEN), as the secrets are case insensitive. Therefore I want to copy the secret stored in QXTOKEN to the environment variable QXToken.
I tried to do that in the devcontainer.json:
{
    ...
    "remoteEnv": {
        "QXAuthURL": "https://auth.quantum-computing.ibm.com/api",
        "QXToken": "${secrets.QXTOKEN}"
    },
    "updateContentCommand": "env; export QXToken=$QXTOKEN; env",
    "postCreateCommand": "env; export QXToken=$QXTOKEN; env",
    "postStartCommand": "env; export QXToken=$QXTOKEN; env",
    "postAttachCommand": "env; export QXToken=$QXTOKEN; env"
}
But remoteEnv cannot access the Codespaces secrets via ${secrets.QXTOKEN} the way GitHub Actions can, and none of updateContentCommand, postCreateCommand, postStartCommand and postAttachCommand saved the environment variable persistently for the user.
Using the env command I can see from the logs that the environment variables have been set, but they are gone again by the next command.
Even though postCreateCommand is able to access the Codespaces secrets according to the documentation, I was not able to set environment variables for later usage.
For now I only see the following environment variables, but I am missing QXToken:
$ env | grep QX
QXAuthURL=https://auth.quantum-computing.ibm.com/api
QXTOKEN=***
Question
Is there a best practice to reuse codespaces secrets inside devcontainer.json and make them available as environment variables in the codespace?
The GitHub Codespaces secrets are available via localEnv, a special variable in devcontainer.json that provides access to environment variables on the host machine. Therefore, you can set the environment variable QXToken with ${localEnv:QXTOKEN} inside devcontainer.json.
Furthermore, if you want to set an environment variable pointing to a path inside your repo you can use ${containerWorkspaceFolder}/path/inside/your/repo.
"remoteEnv": {
// Use a GitHub Codespaces secret:
"QXToken": "${localEnv:QXTOKEN}",
// Point to a path inside your repo:
"QISKIT_SETTINGS": "${containerWorkspaceFolder}/.qiskit/settings.conf"
}
For more details on the available variables in devcontainer.json have a look at the documentation.
I'm using Jenkins X for microservice build / deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging where the namespaces are well known, but since preview generates a new namespace each time, these secrets will not exist there. Is there a way to copy secrets from another, known, namespace, or is there a better approach?
You can create another namespace called jx-preview to store preview specific secrets, and add this line after the jx preview command in your Jenkinsfile
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another, such as linking services from staging to your preview environment, via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml, create whatever Secrets you need in that Job however you want, and then annotate it so that it's triggered as a post-install hook of your Preview chart.
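A rough sketch of such a hook Job (the image, secret name and literal value are placeholders, and the Job's service account needs RBAC permission to create Secrets in the preview namespace):

# charts/preview/templates/myjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-preview-secrets
  annotations:
    # Run this Job after the Preview chart has been installed, then clean it up.
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-secrets
          image: bitnami/kubectl:latest
          command: ["/bin/sh", "-c"]
          args:
            - >
              kubectl create secret generic my-shared-secret
              --from-literal=client-key=REPLACE_ME
              --dry-run=client -o yaml | kubectl apply -f -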
What would be the best way to use environment variables declared for different users in a cluster (all nodes), so that when a call is made to an Oozie workflow (Cloudera), the YARN container picks up the environment variable according to the user?
The YARN configuration in Cloudera Manager seems to have references of this kind, something like ENVVAR_USER=$ENVVAR_USER.
The goal is to get a different properties file depending on the user making the call.
You could define one set of env. variables for every user, then resolve the actual values based on the actual user name:
### per-user config
Sex_Mary=female
Sex_Mario=male
### resolving config for current user
User=Mario
eval Sex=\$Sex_$User
echo $Sex
But it's an old Unix trick, nothing to do with Hadoop or Cloudera. And maintaining the whole config would be a chore.
Any chance you can store the values in LDAP, and retrieve them dynamically with ldapsearch plus sed or awk?
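For example, something along these lines (the LDAP host, base DN and the attribute holding the per-user value are all placeholders):

# Fetch a per-user value from LDAP and expose it as an environment variable.
USER_ENV=$(ldapsearch -x -LLL -H ldap://ldap.example.com \
    -b "ou=people,dc=example,dc=com" "(uid=$USER)" employeeType \
    | awk '/^employeeType:/ {print $2}')
export ENVVAR_USER="$USER_ENV"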