Golang os.Getenv(key) returning entire env file instead of just the key's value - docker

I'm a developer who's recently moved from macOS to WSL2 on Windows 10. I've finally managed to get everything working, but when I call a lambda locally through Docker, the os.Getenv() function returns my whole env file, instead of just the one key.
Things I've tried:
Set "files.eol" = "\n" in vscode settings.json
Set core.autocrlf = input in git config --global
Set eol = lf in git config --global
I've been super stumped for ages, and haven't been able to find any solutions online. Any help would be super appreciated!
Edit: Apologies, I should have had the foresight to post the code and env file necessary to replicate the problem
////////////
// Code: //
////////////
func init() {
    if err := config.Load(); err != nil {
        api.ReportError(err)
    }
    dbo = db.Instance{
        DSN: os.Getenv("DBReadDataSourceName"),
    }
    log.Println("DSN: ", dbo.DSN)
}
// Load ...
func Load() error {
    stage := os.Getenv("Stage")
    log.Println("stage: ", stage)
    if len(data) <= 0 && stage != "local" {
        log.Println("stage != local")
        log.Println("do production config and ssm stuff")
    }
    return nil
}
//////////
// Env: //
//////////
Stage=local
ServerPort=:1234
DBDriverName=mysql
DBReadDataSourceName=MySQLReadDataSourceCredentials
DBWriteDataSourceName=MySQLWriteDataSourceCredentials
RiakAddress=RiakAddress
RedisAddress=RedisAddress
ElasticSearchUrl=ElasticSearchUrl
ElasticSearchPrefix=ElasticSearchPrefix
ThidPartyBaseURL=https://api-sandbox.ThirdParty.com
ThidPartyCountryCode=SG
ThidPartyClientID=asdf12345qwertyuiadsf
ThidPartyClientSecret=asdf12345qwertyuiadsf
/////////////
// Output: //
/////////////
2020/07/13 17:04:03 stage: local
ServerPort=:1234
DBDriverName=mysql
DBReadDataSourceName=MySQLReadDataSourceCredentials
DBWriteDataSourceName=MySQLWriteDataSourceCredentials
RiakAddress=RiakAddress
RedisAddress=RedisAddress
ElasticSearchUrl=ElasticSearchUrl
ElasticSearchPrefix=ElasticSearchPrefix
ThidPartyBaseURL=https://api-sandbox.ThirdParty.com
ThidPartyCountryCode=SG
ThidPartyClientID=asdf12345qwertyuiadsf
ThidPartyClientSecret=asdf12345qwertyuiadsf
2020/07/13 17:04:03 stage != local
2020/07/13 17:04:03 do production config and ssm stuff
2020/07/13 17:04:03 DSN: DBReadDataSourceName
When I run this lambda locally on a Mac, os.Getenv() functions as intended, returning local for Stage and MySQLReadDataSourceCredentials for DBReadDataSourceName. However, running this lambda locally through WSL2 on a Windows machine results in the output above: Stage returns the entire file, and DBReadDataSourceName returns DBReadDataSourceName.
I'm super stumped and have tried everything I could think of, including manually writing \r\n at the end of each env value. Any help would be extremely appreciated! Thank you very much for your time.
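Since line endings are one of the suspects, a quick sanity check is to count carriage returns in the env file and strip them if any are found. This is a minimal sketch (the filename env.local matches the one used below; env.unix is just an illustrative output name):

```shell
# Count lines containing a carriage return (0 means the file is already LF-only)
grep -c "$(printf '\r')" env.local

# Strip CRs into a new file, in case any were found
tr -d '\r' < env.local > env.unix
```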
Edit 2: From the comments
The command I used to load the env. file is env -S "`cat env.local`" sam local start-api --template template.yaml --profile company_local, with env.local being the file name.
The command env -S $'a=5\nb=6' sh -c 'echo "$a"' prints
5
b=6
The command env -S $'a=5\r\nb=6' sh -c 'echo "$a"' prints the exact same output as above,
5
b=6
The command env -S 'a=5;b=6' sh -c 'echo "$a"' prints
5;b=6
And env -S 'a=5:b=6' sh -c 'echo "$a"' prints
5:b=6
And the command xxd env.local | head -n 3 prints
00000000: 5374 6167 653d 6c6f 6361 6c0a 5365 7276 Stage=local.Serv
00000010: 6572 506f 7274 3d3a 3132 3334 0a44 4244 erPort=:1234.DBD
00000020: 7269 7665 724e 616d 653d 6d79 7371 6c0a riverName=mysql.
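The xxd output shows the file already has plain LF (0a) line endings, which points away from line endings and toward quoting: wrapping `cat env.local` in double quotes hands env a single KEY=VALUE token, so everything after the first = (newlines included) becomes the value of Stage. A minimal sketch of the difference, using plain env without -S:

```shell
printf 'Stage=local\nServerPort=:1234\n' > env.local

# Quoted: env receives ONE argument, so Stage's value is the rest of the file
env "$(cat env.local)" sh -c 'echo "$Stage"'

# Unquoted: word splitting turns each line into its own KEY=VALUE argument
env $(cat env.local) sh -c 'echo "$Stage"'
```

Note the unquoted form relies on word splitting, so it would break on env values containing spaces; it is only meant to illustrate where the "whole file in one variable" behavior comes from.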

Related

error when pulling a docker container using singularity in nextflow

I am making a very short workflow in which I use a tool called salmon for my analysis.
In the HPC that I am working on, I cannot install this tool, so I decided to pull the container from biocontainers.
In the HPC we do not have Docker installed (I also do not have permission to install it), but we have Singularity instead.
So I have to pull the Docker container (from: quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0) using Singularity.
The workflow management system that I am working with is Nextflow.
This is the short workflow I made (index.nf):
#!/usr/bin/env nextflow
nextflow.preview.dsl=2

container = 'quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0'
shell = ['/bin/bash', '-euo', 'pipefail']

process INDEX {
    script:
    """
    salmon index \
        -t /hpc/genome/gencode.v39.transcripts.fa \
        -i index \
    """
}

workflow {
    INDEX()
}
I run it using this command:
nextflow run index.nf -resume
But got this error:
salmon: command not found
Do you know how I can fix the issue?
You are so close! All you need to do is move these directives into your nextflow.config or declare them at the top of your process body:
container = 'quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0'
shell = ['/bin/bash', '-euo', 'pipefail']
My preference is to use a process selector to assign the container directive. So for example, your nextflow.config might look like:
process {
    shell = ['/bin/bash', '-euo', 'pipefail']

    withName: INDEX {
        container = 'quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0'
    }
}

singularity {
    enabled = true
    // not strictly necessary, but highly recommended
    cacheDir = '/path/to/singularity/cache'
}
And your index.nf might then look like:
nextflow.enable.dsl=2

params.transcripts = '/hpc/genome/gencode.v39.transcripts.fa'

process INDEX {
    input:
    path fasta

    output:
    path 'index'

    """
    salmon index \\
        -t "${fasta}" \\
        -i index \\
    """
}

workflow {
    transcripts = file( params.transcripts )
    INDEX( transcripts )
}
If run using:
nextflow run -ansi-log false index.nf
You should see the following results:
N E X T F L O W ~ version 21.04.3
Launching `index.nf` [clever_bassi] - revision: d235de22c4
Pulling Singularity image docker://quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0 [cache /path/to/singularity/cache/quay.io-biocontainers-salmon-1.2.1--hf69c8f4_0.img]
[8a/279df4] Submitted process > INDEX

Retrieving RSA key from AWS Secrets Manager in CodeBuild corrupts key "invalid format"

During a CodeBuild run I am retrieving an RSA key from Secrets Manager, which is the private key used to access private repositories in Bitbucket. To do this I have copied the private key into a secret, and in my buildspec file I have the following snippet:
"env": {
    "secrets-manager": {
        "LOCAL_RSA_VAR": "name-of-secret"
    }
},
In the install portion of the buildspec:
"install": {
    "commands": [
        "echo $LOCAL_RSA_VAR > ~/.ssh/id_rsa",
        "chmod 600 ~/.ssh/id_rsa",
        "yarn install"
    ]
},
HOWEVER, this always ends up with an error:
Load key "/root/.ssh/id_rsa": invalid format
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
To determine if the key was wrong, I tried uploading the id_rsa file to S3, then downloading it from there and using it that way with these commands instead:
"install": {
    "commands": [
        "aws s3 cp s3://the-bucket-name/id_rsa ~/.ssh/id_rsa",
        "chmod 600 ~/.ssh/id_rsa",
        "yarn install"
    ]
},
This works fine.
So I guess the question is... Has anyone tried this and had better success? Is there something that I am not doing correctly that you can think of?
I have encountered the same issue.
Copying the id_rsa generated by the command echo $LOCAL_RSA_VAR > ~/.ssh/id_rsa to S3, I noticed that the newlines had not been preserved.
I resolved it by putting the env var between double quotes "":
echo "$LOCAL_RSA_VAR" > ~/.ssh/id_rsa
I was able to get an answer by diffing the output of the env var vs. the file contents from the S3 file. (cat will not print out the content of a Secrets Manager env variable.) It turns out the content of the env var was altered by the echo command.
The solution that ended up working for me was:
printenv LOCAL_RSA_VAR > ~/.ssh/id_rsa
This command didn't alter the content of the RSA key, and I was able to successfully use the certificate.
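The echo corruption described above is easy to reproduce in any POSIX shell: an unquoted $VAR goes through word splitting, which collapses embedded newlines into spaces, while quoting or printenv preserves them. A small sketch (the key content is a placeholder, not a real key):

```shell
export KEY='-----BEGIN KEY-----
base64payload
-----END KEY-----'

echo $KEY       # unquoted: newlines collapse to spaces -> corrupt key file
echo "$KEY"     # quoted: newlines preserved
printenv KEY    # no shell expansion at all: newlines preserved
```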
As a recap, this is what I was successful doing:
Generated the new key
Used the command pbcopy < id_rsa to get the local key into the clipboard
Pasted that into a new secret in Secrets Manager
Used the first set of code above to have the buildspec file retrieve the content into an env variable, then used the printenv command above, in the install commands portion of the buildspec file, to save it to the default SSH location.
Hope this helps anyone that runs into the same issue.
UPDATE: I found that this works if the RSA key is stored as its own secret, as one big block of text. If you try to add it as part of a JSON object, i.e.:
{
"some": "thing",
"rsa_id": "<the rsa key here>"
}
this does not seem to work. I found that the content is altered, with spaces in place of the newlines. This is what I found when running od -ax on each and comparing them:
own secret:
R I V A T E sp K E Y - - - - - nl
json secret:
R I V A T E sp K E Y - - - - - sp
I had the same issue. I fixed it by NOT copy-pasting my private key into Secrets Manager, but using the AWS CLI to upload it instead:
aws secretsmanager put-secret-value --secret-id AWS_CODECOMMIT_SSH_PRIVATE --secret-string file://myprivatekey.pem
And then CodeBuild worked fine:
version: 0.2
env:
  secrets-manager:
    AWS_CODECOMMIT_SSH_ID: AWS_CODECOMMIT_SSH_ID
    AWS_CODECOMMIT_SSH_PRIVATE: AWS_CODECOMMIT_SSH_PRIVATE
phases:
  install:
    commands:
      - echo "Setup CodeCommit SSH Key"
      - mkdir ~/.ssh/
      - echo "$AWS_CODECOMMIT_SSH_PRIVATE" > ~/.ssh/id_rsa
      - echo "Host git-codecommit.*.amazonaws.com" > ~/.ssh/config
      - echo "  User $AWS_CODECOMMIT_SSH_ID" >> ~/.ssh/config
      - echo "  IdentityFile ~/.ssh/id_rsa" >> ~/.ssh/config
      - echo "  StrictHostKeyChecking no" >> ~/.ssh/config
      - chmod 600 ~/.ssh/id_rsa
      - chmod 600 ~/.ssh/config

Powershell: Issue redirecting output from error stream when using docker

I am working on a set of build scripts which are called from an Ubuntu-hosted CI environment. The PowerShell build script calls Jest via react-scripts via npm. Unfortunately, Jest doesn't use stderr correctly and writes non-errors to the stream.
I have redirected the error stream using 3>&1 2>&1 and this works fine from plain PowerShell Core ($LASTEXITCODE is 0 after running, and no content from stderr is written in red).
However, when I introduce Docker via docker run, the build script misbehaves and prints the line that should have been redirected from the error stream in red (and crashes), i.e. something like: docker : PASS src/App.test.js. Error: Process completed with exit code 1.
Can anyone suggest what I am doing wrong? Because I'm a bit stumped. I include the sample PowerShell call below:
function Invoke-ShellExecutable
{
    param (
        [ScriptBlock]
        $Command
    )
    $Output = Invoke-Command $Command -NoNewScope | Out-String
    if ($LASTEXITCODE -ne 0) {
        $CmdString = $Command.ToString().Trim()
        throw "Process [$($CmdString)] returned a failure status code [$($LASTEXITCODE)]. The process may have outputted details about the error."
    }
    return $Output
}

Invoke-ShellExecutable {
    ($env:CI = "true") -and (npm run test:ci)
} 3>&1 2>&1

How to edit "Version: xxx" from a script to automate a debian package build?

The Debian control file has a line like this (among many others):
Version: 1.1.0
We are using Jenkins to build our application as a .deb package. In Jenkins we are doing something like this:
cp -r $WORKSPACE/p1.1/ourap/scripts/ourapp_debian $TARGET/
cd $TARGET
fakeroot dpkg-deb --build ourapp_debian
We would like to do something like this in our control file:
Packages: ourapp
Version: 1.1.$BUILD_NUMBER
but obviously this is not possible.
So we need something like a sed script to find the line starting with Version: and replace everything after it with a constant plus the BUILD_NUMBER env var which Jenkins creates.
We have tried things like this:
$ sed -i 's/xxx/$BUILD_NUMBER/g' control
then put "Version: xxx" in our file, but this doesn't work, and there must be a better way?
Any ideas?
We don't use the changelog, as this package will be installed on servers which no one has access to. The changelogs are Word docs given to the customer.
We don't use or need any of the Debian helper tools.
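(For what it's worth, the sed attempt above fails because of shell quoting, not sed: inside single quotes the shell never expands $BUILD_NUMBER, so the literal string is written into the file. A minimal sketch of the difference, using a throwaway control file:)

```shell
printf 'Packages: ourapp\nVersion: xxx\n' > control
BUILD_NUMBER=42

sed 's/xxx/$BUILD_NUMBER/' control       # single quotes -> "Version: $BUILD_NUMBER"
sed "s/xxx/1.1.$BUILD_NUMBER/" control   # double quotes -> "Version: 1.1.42"
```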
Create two files:
f.awk
function vp(s) { # return 1 for a string with version info
    return s ~ /[ \t]*Version:/
}

function upd() { # an example of a version number update function
    v[3] = ENVIRON["BUILD_NUMBER"]
}

vp($0) {
    gsub("[^.0-9]", "") # get rid of everything but `.' and digits
    split($0, v, "[.]") # split version info into array `v' elements
    upd()
    printf "Version: %s.%s.%s\n", v[1], v[2], v[3]
    next # done with this line
}

{ # print the rest without modifications
    print
}
f.example
rest1
Version: 1.1.0
rest2
Run the command
BUILD_NUMBER=42 awk -f f.awk f.example
Expected output is
rest1
Version: 1.1.42
rest2
Or with a single sed command:
sed -ri "s/(Version.*\.)[0-9]*/\1$BUILD_NUMBER/g" <control file>
OR
sed -ni "/Version/{s/[0-9]*$/$BUILD_NUMBER/};p" <control file>

error while executing lua script for redis server

I was following this simple tutorial to try out a simple lua script
http://www.redisgreen.net/blog/2013/03/18/intro-to-lua-for-redis-programmers/
I created a simple hello.lua file with these lines
local msg = "Hello, world!"
return msg
And I tried running this simple command:
EVAL "$(cat /Users/rsingh/Downloads/hello.lua)" 0
And I am getting this error:
(error) ERR Error compiling script (new function): user_script:1: unexpected symbol near '$'
I can't find what is wrong here, and I haven't been able to find anyone who has come across this.
Any help would be deeply appreciated.
Your problem comes from the fact you are executing this command from an interactive Redis session:
$ redis-cli
127.0.0.1:6379> EVAL "$(cat /path/to/hello.lua)" 0
(error) ERR Error compiling script (new function): user_script:1: unexpected symbol near '$'
Within such a session you cannot use common command-line tools like cat et al. (here cat is used as a convenient way to get the content of your script in-place). In other words: you send "$(cat /path/to/hello.lua)" as a plain string to Redis, which is not Lua code (of course), and Redis complains.
To execute this sample you must stay in the shell:
$ redis-cli EVAL "$(cat /path/to/hello.lua)" 0
"Hello, world!"
If you are coming from Windows and trying to run a Lua script, you should use this format:
redis-cli --eval script.lua
Run this from the folder where your script is located and it will load a multi-line file and execute it.
On the off chance that anyone's come to this from Windows instead, I found I had to do a lot of juggling to achieve the same effect. I had to do this:
echo "local msg = 'Hello, world!'; return msg" > hello.lua
for /F "delims=" %i in ('type hello.lua') do @set cmd=%i
redis-cli eval "%cmd%" 0
.. if you want it saved as a file, although you'll have to have all the content on one line. If you don't want a file, just roll the content into a set command:
set cmd="local msg = 'Hello, world!'; return msg"
redis-cli eval "%cmd%" 0
