How to load a redis (lua) lib using the 'docker exec' command?

How do I load a redis (lua) lib using the 'docker exec' command? I'm trying to load a library into Redis using 'docker exec', but I get the following error:
ERR Missing library metadata
Command used:
docker exec redis-db_1 redis-cli -a [password] FUNCTION LOAD "$(cat [path]\ZRANDSCORE.lua)"
Code [ZRANDSCORE.lua]
#!lua name=ZRANDSCORE
local function zrandscore(KEYS, ARGV)
  local set = redis.call('ZRANGEBYSCORE', KEYS[1], ARGV[1], ARGV[2])
  local length = table.getn(set)
  local member = set[math.random(1, length)]
  redis.call('ZINCRBY', KEYS[1], ARGV[3], member)
  return {member, length - 1}
end
redis.register_function('ZRANDSCORE', zrandscore)
On the first line, notice that the metadata is declared exactly as instructed in the documentation:
#!<engine name> name=<library name>
so much so that when I run this same code directly in Redis, it succeeds:
(screenshot: successful execution output)

This worked for me (Redis 7.0.5):
call docker exec with the "-i" option to bind the terminal's stdin to docker exec, and
redis-cli with the "-x" option to read its last argument from stdin:
cat <script> | docker exec -i <container> redis-cli -x FUNCTION LOAD REPLACE
note: the script must begin with #!lua name=<libname>
In all other cases redis-cli gives: "ERR Missing library metadata"
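Applied to the command from the question, that would look something like this (a rough sketch only, reusing the question's placeholders):
cat [path]\ZRANDSCORE.lua | docker exec -i redis-db_1 redis-cli -a [password] -x FUNCTION LOAD REPLACE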

Have you tried:
docker exec redis-db_1 redis-cli -a [password] FUNCTION LOAD < [path]\ZRANDSCORE.lua

Related

docker system df -v output as json

I am doing some work with Docker container management and need to get container status.
My current approach is to use an SSH client and run shell commands to grab it.
For example, I can get container stats by running:
docker stats --no-stream --format '{"container":"{{ .Name }}","memory":{"raw":"{{ .MemUsage }}","percent":"{{ .MemPerc }}"},"cpu":"{{ .CPUPerc }}","networkIO":"{{.NetIO}}","BlockIO":"{{.BlockIO}}"}'
output:
{"container":"postgresql","memory":{"raw":"255.4MiB / 31.21GiB","percent":"0.80%"},"cpu":"0.00%","networkIO":"1.03GB / 476MB,"BlockIO":"545MB / 7.67GB"}
{"container":"pgadmin","memory":{"raw":"146.1MiB / 31.21GiB","percent":"0.46%"},"cpu":"0.03%","networkIO":"26.2kB / 0B,"BlockIO":"200MB / 8.19kB"}
{"container":"pis_middle_layer_flask","memory":{"raw":"849.9MiB / 31.21GiB","percent":"2.66%"},"cpu":"13.48%","networkIO":"26.4kB / 0B,"BlockIO":"65.9MB / 0B"}
So how can I get similar output with docker system df -v?
I want to get each container's size and its volume size.
I tried the same approach:
docker system df -v --format '{"container":"{{ .Name }}","memory":{"raw":"{{ .MemUsage }}","percent":"{{ .MemPerc }}"},"cpu":"{{ .CPUPerc }}","networkIO":"{{.NetIO}}","BlockIO":"{{.BlockIO}}"}'
but it failed with this error:
{"container":"template: :1:17: executing "" at <.Name>: can't evaluate field Name in type *formatter.diskUsageContext
I know I'm using the wrong Go template keywords, but I really can't find any documentation about them.
All right... just got it:
docker system df -v --format "{{ json . }}"
This command returns JSON, so I can parse it with json.loads.
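As a quick sketch of consuming that output (assuming jq is available on the host; the exact field names depend on your Docker version, so inspect the raw JSON first):
docker system df -v --format '{{ json . }}' | jq .
# or pipe the same output into any JSON parser, e.g. json.loads in Python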

Creating parameterized GitLab personal access token from CLI

Based on this example:
sudo gitlab-rails runner "token = User.find_by_username('automation-bot').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save!"
I've created an equivalent working command for docker that creates the personal access token from the CLI:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username('root').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save! \"")"
However, when trying to parameterize that command, I am experiencing slight difficulties with the single quote. For example, when I try:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username($gitlab_username).personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation-token'); token.set_token('token-string-here123'); token.save! \"")"
It returns:
undefined local variable or method `root' for main:Object
Hence, I would like to ask: how can I substitute 'root' with a variable $gitlab_username whose value is root?
It turns out the error was not in the command itself, as I had incorrectly assumed, but mostly in the variables that I passed into it. The username contained a trailing carriage return/newline, which broke up the command. Hence, I added a trim step that removes those characters from the incoming variables. The following function successfully creates a personal access token in GitLab:
create_gitlab_personal_access_token() {
  docker_container_id=$(get_docker_container_id_of_gitlab_server)

  # Strip carriage returns/newlines from the incoming variables
  personal_access_token=$(echo $GITLAB_PERSONAL_ACCESS_TOKEN | tr -d '\r')
  gitlab_username=$(echo $gitlab_server_account | tr -d '\r')
  token_name=$(echo $GITLAB_PERSONAL_ACCESS_TOKEN_NAME | tr -d '\r')

  # Create a personal access token
  output="$(sudo docker exec -i $docker_container_id bash -c "gitlab-rails runner \"token = User.find_by_username('$gitlab_username').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: '$token_name'); token.set_token('$personal_access_token'); token.save! \"")"
}
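A hypothetical invocation, assuming the get_docker_container_id_of_gitlab_server helper and the variables below are defined elsewhere in your script:
GITLAB_PERSONAL_ACCESS_TOKEN="token-string-here123"
GITLAB_PERSONAL_ACCESS_TOKEN_NAME="Automation token"
gitlab_server_account="root"
create_gitlab_personal_access_token
echo "$output"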

How can I mount lambda code when using terraform with localstack?

TLDR
How can I configure my provider in terraform so that it uses docker to mount the code and the correct function directory when executing lambda functions?
I am trying to run a simple lambda function that listens for dynamodb stream events. My code itself works properly, but the issue I am having is that when using terraform, the function executor does not find the function to run. In order to debug, I set DEBUG=true in my localstack container. I first tested my code with the Serverless Framework, which works as expected.
The successful function execution logs from serverless shows:
localstack | 2021-03-17T13:14:53:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i -v "/Users/myuser/functions":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" --rm "lambci/lambda:go1.x" "bin/dbchanges"
localstack | 2021-03-17T13:14:54:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:myService-local-dbchanges result / log output:
localstack | null
Terraform: issue
But when running from terraform, it looks like the function cannot be found, and it fails with the following logs:
localstack | 2021-03-17T13:30:32:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i -v "/tmp//zipfile.717163a0":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" --rm "lambci/lambda:go1.x" "dbchanges"
localstack | 2021-03-17T13:30:33:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:dbchanges result / log output:
localstack | {"errorType":"exitError","errorMessage":"RequestId: 4f3cfd0a-7905-12e2-7d4e-049bd2c1a1ac Error: fork/exec /var/task/dbchanges: no such file or directory"}
After inspecting the two log sets, I noticed that the path mounted by the terraform + localstack docker executor is different. With serverless it points to the correct folder for volume mounting, i.e. /Users/myuser/functions, while with terraform it mounts /tmp//zipfile.somevalue, which seems to be the root of the issue.
In my serverless config file, mountCode is set to true, which leads me to believe that is why it is mounted and executed correctly:
lambda:
  mountCode: True
So my question is: what can I do in terraform so that the uploaded function actually gets executed by the docker container, or how can I tell terraform to mount the correct directory so that the executor can find the function? My terraform lambda function definition is:
data "archive_file" "dbchangeszip" {
type = "zip"
source_file = "../bin/dbchanges"
output_path = "./zips/dbchanges.zip"
}
resource "aws_lambda_function" "dbchanges" {
description = "Function to capture dynamodb change"
runtime = var.runtime
timeout = var.timeout
memory_size = var.memory
role = aws_iam_role.lambda_role.arn
handler = "dbchanges"
filename = "./zips/dbchanges.zip"
function_name = "dbchanges"
source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}
P.S. Some other things I tried:
setting the handler in terraform to bin/handler to mimic serverless
Figured out the issue. When using terraform, the S3 bucket where the functions are stored isn't defined, so those two values have to be set in the resource definition in terraform.
Example:
resource "aws_lambda_function" "dbchanges" {
s3_bucket = "__local__"
s3_key = "/Users/myuser/functions/"
role = aws_iam_role.lambda_role.arn
handler = "bin/dbchanges"
# filename = "./zips/dbchanges.zip"
function_name = "dbchanges"
source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}
The two important values are:
s3_bucket = "__local__"
s3_key = "/Users/myuser/functions/"
Where s3_key is the absolute path to the functions.
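Note that this kind of host-path mounting also depends on how the localstack container itself is started. As a rough sketch (using the legacy localstack lambda settings of that era, LAMBDA_EXECUTOR and LAMBDA_REMOTE_DOCKER; adjust to your localstack version):
docker run -d --name localstack \
  -p 4566:4566 \
  -e SERVICES=lambda,dynamodb \
  -e DEBUG=true \
  -e LAMBDA_EXECUTOR=docker \
  -e LAMBDA_REMOTE_DOCKER=false \
  -v /var/run/docker.sock:/var/run/docker.sock \
  localstack/localstack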

how to load/install .sql for postgis?

I am following the instructions at this link to make an MVT file:
https://blog.jawg.io/how-to-make-mvt-with-postgis/
When I did the setup and ran generate-tiles.sh, I got an error that the bbox function does not exist:
ERROR: function bbox(integer, integer, integer) does not exist
LINE 8: BBox(16596, 11273, 15)
If you scroll down that page, he mentions that he is using a helper function and also gives a link to it:
https://raw.githubusercontent.com/jawg/blog-resources/master/how-to-make-mvt-with-postgis/bbox.sql
I do not have experience with postgis and docker, hence I am stuck.
So my question is: how do I load/install/mount the .sql file so that the script can call the bbox function without producing the error?
Thanks for the comments.
Adding this line to the script file helped.
psql -h /var/run/postgresql -p 5432 -U postgres -d [dbName] -f [scriptsDir]/bbox.sql;
Replace [dbName] with your actual database name and [scriptsDir] with the directory where bbox.sql is located.
To verify the port, run this command:
sudo lsof -U -a -c postgres
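To double-check that the helper was actually installed, one option (a sketch, assuming you can connect as the postgres user) is to list the function from psql:
psql -h /var/run/postgresql -p 5432 -U postgres -d [dbName] -c '\df bbox'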

docker exec command doesn't return after completing execution

I started a docker container based on an image which has a file "run.sh" in it. Within a shell script, I use docker exec as shown below:
docker exec <container-id> sh /test.sh
test.sh completes execution, but docker exec does not return until I press Ctrl+C. As a result, my shell script never ends. Any pointers to what might be causing this?
I could get it working by adding the -it parameters:
docker exec -it <container-id> sh /test.sh
Mine works like a charm with this command. Maybe you only forgot the path to the binary (/bin/sh)?
docker exec 7bd877d15c9b /bin/bash /test.sh
File location:
/test.sh
File content:
#!/bin/bash
echo "Hi"
echo
echo "This works fine"
sleep 5
echo "5"
Output:
ArgonQQ#Terminal ~ docker exec 7bd877d15c9b /bin/bash /test.sh
Hi
This works fine
5
ArgonQQ#Terminal ~
My case was a script a.sh with content like:
php test.php &
If I executed it like:
docker exec container1 a.sh
it also never returned.
After half a day of googling and trying, I changed a.sh to:
php test.php >/tmp/test.log 2>&1 &
It works!
So it seems related to stdin/stdout/stderr. Please try adding:
>/tmp/test.log 2>&1
And please note that my test.php is an endless-loop script that monitors a specified process; if the process goes down, it restarts it. So test.php never exits.
As described here, this "hanging" behavior occurs when you have processes that keep stdout or stderr open.
To prevent this from happening, each long-running process should:
be executed in the background, and
close both stdout and stderr or redirect them to files or /dev/null.
I would therefore make sure that any processes already running in the container, as well as the script passed to docker exec, conform to the above.
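As a sketch of that pattern (the command name below is just a placeholder), a long-running process launched from the exec'd script would be started like this:
# detach the long-running process from the exec session's stdin/stdout/stderr
some_long_running_command </dev/null >/tmp/out.log 2>&1 &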
OK, I got it.
docker stop a590382c2943
docker start a590382c2943
Then it is OK again:
docker exec -ti a590382c2943 echo "5"
returns immediately, whether or not -it is added.
Actually, in my program the daemon kept stdin, stdout and stderr open, so I changed my Python daemon as follows, and now things work like a charm:
if __name__ == '__main__':
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # exit first parent
            os._exit(0)
    except OSError, e:
        print "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
        os._exit(0)

    # decouple from parent environment
    #os.chdir("/")
    os.setsid()
    os.umask(0)

    # std in out err, redirect
    si = file('/dev/null', 'r')
    so = file('/dev/null', 'a+')
    se = file('/dev/null', 'a+', 0)
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    # do second fork
    while True:
        try:
            pid = os.fork()
            if pid == 0:
                serve()
            if pid > 0:
                print "Server PID %d, Daemon PID: %d" % (pid, os.getpid())
                os.wait()
                time.sleep(3)
        except OSError, e:
            #print "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
            os._exit(0)
