Based on this example:
sudo gitlab-rails runner "token = User.find_by_username('automation-bot').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save!"
I've created an equivalent working command for Docker that creates the personal access token from the CLI:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username('root').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save! \"")"
However, when trying to parameterize that command, I am running into difficulties with the single quotes. For example, when I try:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username($gitlab_username).personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation-token'); token.set_token('token-string-here123'); token.save! \"")"
It returns:
undefined local variable or method `root' for main:Object
Hence, I would like to ask: how can I substitute 'root' with a variable $gitlab_username that has the value root?
As it turned out, the error was not necessarily in the command itself, contrary to what I had initially assumed, but mostly in the variables that I passed into it. The username contained a trailing carriage-return character, which broke up the command. Hence, I included a trim step that strips those characters from the incoming variables. Note also that the variable expansion must stay wrapped in single quotes inside the Ruby string, i.e. find_by_username('$gitlab_username'); otherwise Ruby sees a bare root and raises the undefined-variable error above. The following function successfully creates a personal access token in GitLab:
create_gitlab_personal_access_token() {
  docker_container_id=$(get_docker_container_id_of_gitlab_server)

  # Strip carriage-return characters from the incoming variables.
  personal_access_token=$(echo "$GITLAB_PERSONAL_ACCESS_TOKEN" | tr -d '\r')
  gitlab_username=$(echo "$gitlab_server_account" | tr -d '\r')
  token_name=$(echo "$GITLAB_PERSONAL_ACCESS_TOKEN_NAME" | tr -d '\r')

  # Create a personal access token; each substituted value is wrapped in
  # single quotes so that Ruby receives it as a string literal.
  output="$(sudo docker exec -i "$docker_container_id" bash -c "gitlab-rails runner \"token = User.find_by_username('$gitlab_username').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: '$token_name'); token.set_token('$personal_access_token'); token.save!\"")"
}
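A minimal usage sketch (hedged: it assumes get_docker_container_id_of_gitlab_server is defined elsewhere in your scripts and that the GitLab container is up; the token value is the placeholder from the examples above):

export GITLAB_PERSONAL_ACCESS_TOKEN="token-string-here123"
export GITLAB_PERSONAL_ACCESS_TOKEN_NAME="Automation-token"
export gitlab_server_account="root"
create_gitlab_personal_access_token
echo "$output"  # whatever gitlab-rails runner printed, if anything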
I am trying to pass an environment variable into my devcontainer that is the output of a command run on my dev machine. I have tried the following in my devcontainer.json with no luck:
"initializeCommand": "export DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"",
"containerEnv": {
"DOCKER_HOST_IP1": "${localEnv:DOCKER_HOST_IP}",
"DOCKER_HOST_IP2": "${containerEnv:DOCKER_HOST_IP}"
},
and
"runArgs": [
"-e DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"
],
(the point of the ifconfig/grep pipeline is to give me the IP of my Docker host, which is running via Docker Desktop for Mac)
Some more context
Within my devcontainer I am running some kubectl deployments (to a cluster running on Docker for Desktop) where I would like to configure a hostAlias for a pod (docs) so that the pod will resolve requests to https://api.cancourier.local to the IP of the Docker host (which would then hit an ingress I have configured for that CNAME).
I could just pass the output of the ifconfig command to my kubectl command when running from within the devcontainer. The problem is that I get two different results from this depending on whether I am running it on my host (10.0.0.89) or from within the devcontainer (10.1.0.1). 10.0.0.89 in this case is the "correct" IP: if I curl it from within my devcontainer, or from my deployed pod, I get the response I'd expect from my ingress.
I'm also aware that I could just use the name of my k8s service (in this case api) to communicate between pods, but this isn't ideal. As for why, I'm running a Next.js application in a pod. The Next.js app on this pod has two "contexts":
my browser - the app serves up static HTML/JS to my browser where communicating with https://api.cancourier.local works fine
on the pod itself - running some things (i.e. _middleware) on the pod itself, where the pod does not currently know what https://api.cancourier.local resolves to
What I was doing to temporarily get around this was to have a separate config on the pod, one for the "browser context" and the other for things running on the pod itself. This is less than ideal, since when I go to deploy this Next.js app (to Vercel) this won't be an issue (my API will be deployed on some publicly accessible CNAME). If I can accomplish what I was trying to do above, I'd be able to avoid this split config.
So I didn't end up finding a way to pass the output of a command run on the host machine as an env var into my devcontainer. However, I did find a way to get the "correct" Docker host IP and pass it along to my pod.
In my devcontainer.json I have this:
"runArgs": [
// https://stackoverflow.com/a/43541732/3902555
"--add-host=api.cancourier.local:host-gateway",
"--add-host=cancourier.local:host-gateway"
],
which augments the devcontainer's /etc/hosts with:
192.168.65.2 api.cancourier.local
192.168.65.2 cancourier.local
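A quick sanity check from inside the devcontainer (this just re-reads the file, so it should print the two lines above):

grep 'cancourier.local' /etc/hosts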
then in my Makefile where I store my kubectl commands I am simply running:
# Evaluated via make's shell function; a plain assignment on a recipe line
# would run in its own sub-shell and not survive into the helm command below.
DOCKER_HOST_IP = $(shell cat /etc/hosts | grep 'api.cancourier.local' | awk '{print $$1}')

deploy-the-things:
	helm upgrade $(helm_release_name) $(charts_location) \
		--install \
		--namespace=$(local_namespace) \
		--create-namespace \
		-f $(charts_location)/values.yaml \
		-f $(charts_location)/local.yaml \
		--set cwd=$(HOST_PROJECT_PATH) \
		--set dockerHostIp=$(DOCKER_HOST_IP) \
		--debug \
		--wait
then within my helm chart I can use the following for the pod running my Next.js app:
hostAliases:
  - ip: {{ .Values.dockerHostIp }}
    hostnames:
      - "api.cancourier.local"
Highly recommend following this tutorial: Container environment variables
In this tutorial, two methods are mentioned:
Adding individual variables
Using an env file
Choose whichever is more comfortable for you. Good luck!
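For reference, a hedged sketch of the same two methods using the plain docker CLI (the image and variable names are made up for illustration):

# 1) individual variables
docker run -e MY_VAR=value my-image

# 2) an env file with one VAR=value pair per line
printf 'MY_VAR=value\nOTHER_VAR=other\n' > .env
docker run --env-file .env my-image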
TLDR
How can I configure my provider in Terraform so that it uses Docker to mount the code and the correct function directory to execute Lambda functions?
I am trying to run a simple Lambda function that listens for DynamoDB stream events. My code itself is working properly, but the issue I am having is that when using Terraform, the function executor does not find the function to run. In order to debug, I set the env var DEBUG=true in my localstack container. I tested my code first with the Serverless Framework, which works as expected.
The successful function execution logs from serverless shows:
localstack | 2021-03-17T13:14:53:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i -v "/Users/myuser/functions":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" --rm "lambci/lambda:go1.x" "bin/dbchanges"
localstack | 2021-03-17T13:14:54:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:myService-local-dbchanges result / log output:
localstack | null
Terraform: issue
But when running from Terraform, it looks like the function cannot be found, and it fails with the following logs:
localstack | 2021-03-17T13:30:32:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i -v "/tmp//zipfile.717163a0":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" --rm "lambci/lambda:go1.x" "dbchanges"
localstack | 2021-03-17T13:30:33:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:dbchanges result / log output:
localstack | {"errorType":"exitError","errorMessage":"RequestId: 4f3cfd0a-7905-12e2-7d4e-049bd2c1a1ac Error: fork/exec /var/task/dbchanges: no such file or directory"}
After inspecting the two log sets, I noticed that the path being mounted by the terraform + localstack docker executor is different. In the case of serverless, it points to the correct folder for volume mounting, i.e. /Users/myuser/functions, while in terraform it mounts /tmp//zipfile.somevalue, which seems to be the root of the issue.
In my serverless config file, lambda mountCode is set to true, which leads me to believe that this is why it mounts and executes correctly.
lambda:
  mountCode: True
So my question is, what can I do in terraform so that the uploaded function actually gets executed by the docker container, or tell terraform to mount the correct directory so that it can find the function? My terraform lambda function definition is:
data "archive_file" "dbchangeszip" {
type = "zip"
source_file = "../bin/dbchanges"
output_path = "./zips/dbchanges.zip"
}
resource "aws_lambda_function" "dbchanges" {
description = "Function to capture dynamodb change"
runtime = var.runtime
timeout = var.timeout
memory_size = var.memory
role = aws_iam_role.lambda_role.arn
handler = "dbchanges"
filename = "./zips/dbchanges.zip"
function_name = "dbchanges"
source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}
P.S. Some other things I tried are:
setting the handler in terraform to bin/handler to mimic serverless
Figured out the issue. When using Terraform, the S3 bucket where the functions are stored isn't defined, so those two values have to be set in the resource definition in Terraform.
Example:
resource "aws_lambda_function" "dbchanges" {
s3_bucket = "__local__"
s3_key = "/Users/myuser/functions/"
role = aws_iam_role.lambda_role.arn
handler = "bin/dbchanges"
# filename = "./zips/dbchanges.zip"
function_name = "dbchanges"
source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}
The two important values are:
s3_bucket = "__local__"
s3_key = "/Users/myuser/functions/"
Where s3_key is the absolute path to the functions.
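As a hedged smoke test (assuming localstack's default edge port 4566 and the AWS CLI installed), invoking the function directly should now return a result instead of the fork/exec error:

aws --endpoint-url=http://localhost:4566 lambda invoke \
  --function-name dbchanges /tmp/out.json && cat /tmp/out.json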
I am following the instructions in this link to make an MVT file:
https://blog.jawg.io/how-to-make-mvt-with-postgis/
When I did the setup and ran generate-tiles.sh, I got an error saying the BBox function does not exist:
ERROR: function bbox(integer, integer, integer) does not exist
LINE 8: BBox(16596, 11273, 15)
If you scroll down that page, he mentions that he is using a helper function and also gives the link:
https://raw.githubusercontent.com/jawg/blog-resources/master/how-to-make-mvt-with-postgis/bbox.sql
I do not have experience with PostGIS and Docker, hence I am stuck.
So my question is: how do I load/install/mount the .sql file so that the script does not produce the error and can call the BBox function?
Thanks for the comments.
Adding this line to the script file helped.
psql -h /var/run/postgresql -p 5432 -U postgres -d [dbName] -f [scriptsDir]/bbox.sql;
Replace [dbName] with your actual database name and [scriptsDir] with the directory where bbox.sql is present.
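If your PostgreSQL runs inside a Docker container (as in the tutorial's setup), a hedged equivalent from the host is to pipe the file in over docker exec; the container name postgis-container is hypothetical:

docker exec -i postgis-container psql -U postgres -d [dbName] < [scriptsDir]/bbox.sql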
To verify the port, run this command
sudo lsof -U -a -c postgres
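And a quick check (same placeholder convention as above) that the helper is now defined, using the call from the error message:

psql -h /var/run/postgresql -p 5432 -U postgres -d [dbName] -c "SELECT BBox(16596, 11273, 15);"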
In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because, even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, and it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
await(){
  local url=${1}
  local seconds=${2:-30}
  # Note: on curl >= 7.52 you may also want --retry-connrefused, so that
  # "connection refused" (common while a service is still booting) is retried too.
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time "${seconds}" "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), each service can emit a "started" event. Then you can subscribe to those events and run whatever you need once everything has started.
Docker 1.12 did add the HEALTHCHECK build command. Maybe this is available via Docker Events?
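If your images do define a HEALTHCHECK, a hedged polling sketch against its reported status (the container name ms1 is made up):

until [ "$(docker inspect --format '{{.State.Health.Status}}' ms1)" = "healthy" ]; do
  sleep 1
done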
If you have control over the docker engine in your CI setup, you could execute docker logs [Container_Name] and look for a line that your application emits once it is up.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parsing a specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
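A hedged extension that retries until the expected line shows up, or gives up after 60 seconds, since the application may not have logged it yet on the first attempt:

for i in $(seq 1 60); do
  RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep "Enter") && break
  sleep 1
done
echo "$RESULT"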
I was reading a blog post on Percona Monitoring Plugins and how you can monitor a Galera cluster using the pmp-check-mysql-status plugin. Below is the link to the blog post demonstrating that:
https://www.percona.com/blog/2013/10/31/percona-xtradb-cluster-galera-with-percona-monitoring-plugins/
The commands in this tutorial are run on the command line. I wish to try these commands in a Nagios .cfg file, e.g. monitor.cfg. How do I write the services for the commands used in this tutorial?
This was my attempt, and I cannot figure out the best parameters to use for check_command on the service. I suspect that is where the problem is.
So inside my /etc/nagios3/conf.d/monitor.cfg file, I have the following:
define host{
    use                 generic-host
    host_name           percona-server
    alias               percona
    address             127.0.0.1
}

## Check for a Primary Cluster
define command{
    command_name        check_mysql_status
    command_line        /usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
}

define service{
    use                 generic-service
    hostgroup_name      mysql-servers
    service_description Cluster
    check_command       pmp-check-mysql-status!wsrep_cluster_status!==!str!non-Primary
}
When I run Nagios and go to monitor it, I get this message in the Nagios dashboard:
status: UNKNOWN; /usr/lib/nagios/plugins/pmp-check-mysql-status: 31: shift: can't shift that many
Have you verified that:
/usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
works fine on the command line on the target host? I suspect there's a shell-escaping issue with the ==.
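If it does, a hedged next step is to quote the == so that nothing in the shell or in Nagios's macro expansion can mangle it:

/usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C '==' -T str -c non-Primary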
Does this work well for you?
/usr/lib64/nagios/plugins/pmp-check-mysql-status -x wsrep_flow_control_paused -w 0.1 -c 0.9