TLDR
How can I configure my provider in Terraform so that it uses Docker to mount the code and the correct function directory when executing Lambda functions?
I am trying to run a simple Lambda function that listens for DynamoDB stream events. My code itself works properly, but when using Terraform the function executor does not find the function to run. To debug, I set DEBUG=true in my LocalStack container. I tested my code first with the Serverless Framework, which works as expected.
The successful function execution logs from Serverless show:
localstack | 2021-03-17T13:14:53:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i -v "/Users/myuser/functions":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" --rm "lambci/lambda:go1.x" "bin/dbchanges"
localstack | 2021-03-17T13:14:54:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:myService-local-dbchanges result / log output:
localstack | null
Terraform: issue
But when running from Terraform, it looks like the function cannot be found and the invocation fails with the following logs:
localstack | 2021-03-17T13:30:32:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i -v "/tmp//zipfile.717163a0":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" --rm "lambci/lambda:go1.x" "dbchanges"
localstack | 2021-03-17T13:30:33:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:dbchanges result / log output:
localstack | {"errorType":"exitError","errorMessage":"RequestId: 4f3cfd0a-7905-12e2-7d4e-049bd2c1a1ac Error: fork/exec /var/task/dbchanges: no such file or directory"}
After inspecting the two log sets, I noticed that the path mounted by the Terraform + LocalStack Docker executor is different. Serverless points the volume mount at the correct folder, i.e. /Users/myuser/functions, while Terraform mounts /tmp//zipfile.somevalue, which seems to be the root of the issue.
In my Serverless config file, the lambda mountCode setting is set to true, which leads me to believe that is why it mounts and executes correctly.
lambda:
mountCode: True
So my question is: what can I do in Terraform so that the uploaded function actually gets executed by the Docker container, or how do I tell Terraform to mount the correct directory so that it can find the function? My Terraform Lambda function definition is:
data "archive_file" "dbchangeszip" {
type = "zip"
source_file = "../bin/dbchanges"
output_path = "./zips/dbchanges.zip"
}
resource "aws_lambda_function" "dbchanges" {
description = "Function to capture dynamodb change"
runtime = var.runtime
timeout = var.timeout
memory_size = var.memory
role = aws_iam_role.lambda_role.arn
handler = "dbchanges"
filename = "./zips/dbchanges.zip"
function_name = "dbchanges"
source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}
P.S. Some other things I tried:
setting the handler in Terraform to bin/handler to mimic Serverless
Figured out the issue. When using Terraform, the S3 bucket where the function code is stored isn't defined, so the s3_bucket and s3_key values have to be set in the resource definition in Terraform.
Example:
resource "aws_lambda_function" "dbchanges" {
s3_bucket = "__local__"
s3_key = "/Users/myuser/functions/"
role = aws_iam_role.lambda_role.arn
handler = "bin/dbchanges"
# filename = "./zips/dbchanges.zip"
function_name = "dbchanges"
source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}
The two important values are:
s3_bucket = "__local__"
s3_key = "/Users/myuser/functions/"
Where s3_key is the absolute path to the functions.
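For anyone wiring this up end to end, here is a minimal sketch of how the setup can be exercised, assuming LocalStack's edge port is 4566 and the (older) Docker lambda executor is in use. The LAMBDA_EXECUTOR/LAMBDA_REMOTE_DOCKER variables and the invoke call below are assumptions about this particular LocalStack generation, not something taken from the question:

# Run LocalStack with the Docker lambda executor; mounting the host's Docker
# socket lets it spawn the lambci/lambda containers seen in the logs above.
docker run --rm -d --name localstack \
  -p 4566:4566 \
  -e DEBUG=true \
  -e LAMBDA_EXECUTOR=docker \
  -e LAMBDA_REMOTE_DOCKER=false \
  -v /var/run/docker.sock:/var/run/docker.sock \
  localstack/localstack

# After `terraform apply`, invoke the function through the edge port and
# inspect the result payload.
aws --endpoint-url=http://localhost:4566 lambda invoke \
  --function-name dbchanges /tmp/dbchanges-out.json
cat /tmp/dbchanges-out.json

Note that newer LocalStack releases have replaced this mechanism (the magic bucket name became hot-reload), so the exact spelling depends on the LocalStack version you run.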
Related
How to load a Redis library (Lua) using the 'docker exec' command?
I'm trying to load a library into Redis using 'docker exec', but I get the following error:
ERR Missing library metadata
Command used:
docker exec redis-db_1 redis-cli -a [password] FUNCTION LOAD "$(cat [path]\ZRANDSCORE.lua)"
Code [ZRANDSCORE.lua]
#!lua name=ZRANDSCORE
local function zrandscore(KEYS, ARGV)
local set = redis.call('ZRANGEBYSCORE', KEYS[1], ARGV[1], ARGV[2])
local langth = table.getn(set)
local member = set[math.random(1, langth)]
redis.call('ZINCRBY', KEYS[1], ARGV[3], member)
return {member, langth-1}
end
redis.register_function('ZRANDSCORE', zrandscore)
On the first line, notice that the metadata is declared as instructed in the documentation:
#!<engine name> name=<library name>
so much so that when I run this same code directly in Redis, it loads successfully:
[execution output]
This worked for me (Redis 7.0.5): call docker exec with the -i option to bind the terminal's stdin to docker exec, and redis-cli with the -x option to read its last argument from stdin:
cat <script> | docker exec -i <container> redis-cli -x FUNCTION LOAD REPLACE
note: the script must begin with #!lua name=<libname>
In all other cases redis-cli gives: "ERR Missing library metadata"
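Applied to the container from the question, that pattern would look roughly like this ($REDIS_PASSWORD is a stand-in for the real password, and the script is assumed to be in the current directory):

# Stream the library file into redis-cli (-x) running inside the container (-i).
cat ZRANDSCORE.lua | docker exec -i redis-db_1 \
  redis-cli -a "$REDIS_PASSWORD" -x FUNCTION LOAD REPLACE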
Have you tried:
docker exec redis-db_1 redis-cli -a [password] FUNCTION LOAD < [path]\ZRANDSCORE.lua
Based on this example:
sudo gitlab-rails runner "token = User.find_by_username('automation-bot').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save!"
I've created an equivalent working command for docker that creates the personalised access token from the CLI:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username('root').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save! \"")"
However, when trying to parameterize that command, I am experiencing slight difficulties with the single quote. For example, when I try:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username($gitlab_username).personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation-token'); token.set_token('token-string-here123'); token.save! \"")"
It returns:
undefined local variable or method `root' for main:Object
Hence, I would like to ask, how can I substitute 'root' with a variable $gitlab_username that has value root?
I believe the error was, contrary to what I had incorrectly assumed, not necessarily in the command itself, but mostly in the variables that I passed into the command. The username contained a trailing newline character, which broke up the command. Hence, I included a trim step that strips those characters from the incoming variables. The following function successfully creates a personal access token in GitLab:
create_gitlab_personal_access_token() {
  docker_container_id=$(get_docker_container_id_of_gitlab_server)

  # Strip carriage returns/newlines from the incoming variables.
  personal_access_token=$(echo "$GITLAB_PERSONAL_ACCESS_TOKEN" | tr -d '\r')
  gitlab_username=$(echo "$gitlab_server_account" | tr -d '\r')
  token_name=$(echo "$GITLAB_PERSONAL_ACCESS_TOKEN_NAME" | tr -d '\r')

  # Create a personal access token.
  output="$(sudo docker exec -i $docker_container_id bash -c "gitlab-rails runner \"token = User.find_by_username('$gitlab_username').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: '$token_name'); token.set_token('$personal_access_token'); token.save! \"")"
}
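A quick way to confirm the token was really created is to call the GitLab API with it afterwards; this assumes the GitLab instance is reachable on localhost and that the variables above are still set in the calling shell:

# Fetch the authenticated user with the new token; a JSON user object means
# the token works, a 401 means it does not.
curl --silent --header "PRIVATE-TOKEN: $personal_access_token" \
  "http://localhost/api/v4/user"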
I'm trying to get a Blazor WebAssembly app up and running with authentication for deployment on DigitalOcean, but I can't for the life of me seem to load a .pfx cert to pass to Identity Server when running within a Docker container.
I load and pass the certificate in Startup like so:
string password = "password";
string certificate = "IdSrv.pfx";
var cert = new X509Certificate2(
    certificate,
    password,
    X509KeyStorageFlags.MachineKeySet |
    X509KeyStorageFlags.PersistKeySet |
    X509KeyStorageFlags.Exportable);

services.AddIdentityServer()
    .AddApiAuthorization<JccUser, JccContext>()
    .AddSigningCredential(cert);
But when trying to run the container and mount the path to the certificate, I just can't seem to locate the file.
docker run --rm -it -p 8000:80 -p 8001:443 \
  -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 \
  -e ASPNETCORE_Kestrel__Certificates__Default__Password="password" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/cert.pfx \
  -v C:\Users\user\.aspnet\https:/https/ \
  -v C:\Users\user\.cert\IdSrv.pfx:/IdSrv.pfx \
  docker-name
(the first -v mount is for SSL; the second is for Identity Server; I originally tried using the same mount for both)
I get: Unhandled exception. Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
In AppSettings.json I just have the boilerplate:
"IdentityServer": {
  "Clients": {
    "Jcc.Client": {
      "Profile": "IdentityServerSPA"
    }
  }
}
But as I understand it, I don't need to configure the key in there if I am doing it with AddSigningCredential().
I must be missing something obvious or have got the wrong end of the stick somewhere?
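One way to narrow this down from the Docker side: the relative path "IdSrv.pfx" in Startup is resolved against the container's current working directory (typically /app in the default ASP.NET Core images), while the -v flag above mounts the file at the container root. Here is a hedged sketch for checking where the file actually lands and for mounting it where the app looks; the /app working directory and the docker-name image name are assumptions:

# List the container root and /app to see where the mounted cert ends up,
# bypassing the image's normal entrypoint.
docker run --rm -v C:\Users\user\.cert\IdSrv.pfx:/IdSrv.pfx \
  --entrypoint ls docker-name -l / /app

# Mount the cert into the app's working directory so the relative
# "IdSrv.pfx" path used by X509Certificate2 resolves to it.
docker run --rm -it -p 8000:80 -p 8001:443 \
  -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 \
  -e ASPNETCORE_Kestrel__Certificates__Default__Password="password" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/cert.pfx \
  -v C:\Users\user\.aspnet\https:/https/ \
  -v C:\Users\user\.cert\IdSrv.pfx:/app/IdSrv.pfx \
  docker-name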
I am trying to do TensorFlow Serving with a REST API using Docker. I was following the examples from https://www.tensorflow.org/tfx/serving/docker and https://towardsdatascience.com/serving-keras-models-locally-using-tensorflow-serving-tf-2-x-8bb8474c304e. I've created a simple digit MNIST classifier. My model's export path:
import os
import tensorflow as tf

MODEL_DIR = 'digit_mnist/model_serving/'
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
then saved the model with this command:
tf.keras.models.save_model(model,
                           export_path,
                           overwrite=True,
                           include_optimizer=True,
                           save_format=None,
                           signatures=None,
                           options=None)
when I run:
sudo docker run -p 8501:8501 --mount type=bind,source=/artificial-neural-network/tensorflow_nn/digit_mnist/model_serving/1/,target=/models/model_serving -e MODEL_NAME=dmc -t tensorflow/serving
I get this error:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /artificial-neural-networks/tensorflow_nn/digit_mnist/model_serving/1/.
My file structure goes like this:
(venv) artificial_neural_networks/
__init__.py
pytorch_nn/
tensorflow_nn/
__init__.py
digit_mnist/
model_serving/
1/
assets
variables/
variables.data-00000-of-00002
variables.data-00001-of-00002
variables.index
saved_model.pb
__init__.py
mnist.py
Where am I doing the wrong thing? I'm on my second day of solving this problem so any help would be appreciated.
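Two things usually matter with this kind of bind-mount error: the source must be an absolute path that actually exists on the host (the daemon error shows Docker looking for /artificial-neural-networks/... at the filesystem root), and TensorFlow Serving expects the mount target to be the model's base directory, /models/<MODEL_NAME>, containing the numeric version folders. A sketch under those assumptions, run from inside the artificial_neural_networks directory:

# Bind-mount the model base directory (the parent of 1/) at /models/dmc,
# building an absolute source path with $(pwd).
sudo docker run -p 8501:8501 \
  --mount type=bind,source="$(pwd)/tensorflow_nn/digit_mnist/model_serving",target=/models/dmc \
  -e MODEL_NAME=dmc -t tensorflow/serving

# Check that the model loaded and the REST endpoint responds.
curl http://localhost:8501/v1/models/dmc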
Having a problem where running a GNU Parallel job in distributed mode (i.e. across multiple machines via --sshloginfile): even though the job runs on each machine as the same user (or at least that is what is dictated in the file given to --sshloginfile, e.g. myuser@myhostname00x), I get a "Permission denied" error when the job tries to access a file. This happens despite being able to ssh (passwordless) into the remote nodes in question and ls the files that the Parallel job claims it has no permissions for (the specified path is on a filesystem that is shared and NFS-mounted on all the nodes).
Have a list file of nodes like
me#host001
me#host005
me#host006
and the actual Parallel job looks like
bcpexport() {
<do stuff to arg $1 to BCP copy to a MSSQL DB>
}
export -f bcpexport
parallel -q -j 10 --sshloginfile $basedir/src/parallel-nodes.txt --env $bcpexport \
bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER \
::: $DATAFILES/$TARGET_GLOB
where the $DATAFILES/$TARGET_GLOB glob pattern returns files from a directory. Running this job in single-node mode works fine, but running it across all the nodes in the parallel-nodes.txt file throws:
/bin/bash: line 27: /path/to/file001: Permission denied
/bin/bash: line 27: /path/to/file002: Permission denied
...and so on for all the files...
If anyone knows what could be going on here, advice or debugging suggestions would be appreciated.
I think the problem is the additional $:
parallel [...] --env $bcpexport bcpexport {} [...]
Unless you have set the shell variable $bcpexport to something, you probably meant bcpexport (no $) instead.
If $bcpexport is undefined, it will be replaced with nothing by the shell. Thus --env will eat the next argument, so you will really be running:
parallel [...] --env bcpexport {} [...]
which will execute {} as a command, which is exactly what you experience.
So try this instead:
parallel [...] --env bcpexport bcpexport {} [...]
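Plugged back into the full command from the question, the corrected call would look roughly like this (only the $ in front of bcpexport changes; all other variables are assumed to be defined as in the question):

# --env takes the name of the exported function (no $), and the remote
# command then starts with the function itself.
parallel -q -j 10 --sshloginfile $basedir/src/parallel-nodes.txt --env bcpexport \
  bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER \
  ::: $DATAFILES/$TARGET_GLOB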