Hello, I am interested in retrieving the Task ID from within a running container that lives on an EC2 host machine.
The AWS ECS documentation states there is an environment variable ECS_CONTAINER_METADATA_FILE with the location of this data, but it will only be set/available if the ECS_ENABLE_CONTAINER_METADATA variable is set to true upon cluster/EC2 instance creation. I don't see where this can be done in the AWS console.
The docs also state that this can be done by setting the variable to true inside the host machine, but that would require restarting the Docker agent.
Is there any other way to do this without having to go inside the EC2 instance to set this and restart the Docker agent?
This doesn't work for newer Amazon ECS container agent versions anymore, and in fact it's now much simpler and also enabled by default. Please refer to the documentation, but here's a TL;DR:
If you're using Amazon ECS container agent version 1.39.0 or higher, you can just do this inside the Docker container:
curl -s "$ECS_CONTAINER_METADATA_URI_V4/task" \
| jq -r ".TaskARN" \
| cut -d "/" -f 3
Here's a list of container agent releases, but if you're using :latest, you're definitely fine.
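The same endpoint also serves container-level metadata at its root path, in case you want the requesting container's details rather than the task's; a minimal sketch along the same lines:
# Without the /task suffix, the V4 endpoint describes the requesting container.
curl -s "$ECS_CONTAINER_METADATA_URI_V4" | jq -r '.DockerId'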
The technique I'd use is to set the environment variable in the container definition.
If you're managing your tasks via CloudFormation, the relevant YAML looks like this:
Taskdef:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ...
    ContainerDefinitions:
      - Name: some-name
        ...
        Environment:
          - Name: AWS_DEFAULT_REGION
            Value: !Ref AWS::Region
          - Name: ECS_ENABLE_CONTAINER_METADATA
            Value: 'true'
This technique helps you keep everything straightforward and reproducible.
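Once the variable takes effect, the container can read the metadata file that ECS_CONTAINER_METADATA_FILE points to; a minimal sketch, assuming jq is available in the image:
# The metadata file is JSON and includes a TaskARN field; the task ID is the segment
# after the last slash (use -f 2 for the older, non-cluster-qualified ARN format).
jq -r '.TaskARN' "$ECS_CONTAINER_METADATA_FILE" | cut -d '/' -f 3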
If you need metadata programmatically and don't have access to the metadata file, you can query the agent's metadata endpoint:
curl http://localhost:51678/v1/metadata
Note that if you're fetching this information from within a running task, you may not be able to reach the loopback device, but you can connect to the EC2 instance's own IP address.
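For example, from inside a task you could look up the instance's private IP via the EC2 instance metadata service and query the agent through that; a sketch, assuming the agent is reachable on the instance address as described above:
# 169.254.169.254 is the EC2 instance metadata service (IMDSv1 form for brevity).
HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
curl -s "http://$HOST_IP:51678/v1/tasks"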
We set it via the so-called user data, which is executed when the machine starts. There are multiple ways to set it, for example: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-console
It could look like this:
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=ecs-staging
ECS_ENABLE_CONTAINER_METADATA=true
EOF
Important: Adjust the ECS_CLUSTER above to match your cluster name, otherwise the instance will not connect to that cluster.
The previous answers are correct; here is another way of doing this:
From the EC2 instance where the container is running, run this command:
curl http://localhost:51678/v1/tasks | python -m json.tool | less
From the AWS ECS CLI documentation:
Command:
aws ecs list-tasks --cluster default
Output:
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:<aws_account_id>:task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84",
        "arn:aws:ecs:us-east-1:<aws_account_id>:task/6b809ef6-c67e-4467-921f-ee261c15a0a1"
    ]
}
To list the tasks on a particular container instance
This example command lists the tasks of a specified container instance, using the container instance UUID as a filter.
Command:
aws ecs list-tasks --cluster default --container-instance f6bbb147-5370-4ace-8c73-c7181ded911f
Output:
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:<aws_account_id>:task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84"
    ]
}
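If you only need the bare task ID rather than the full ARN, you can trim it client-side; a sketch using the CLI's --query flag (taking the first ARN just as an example):
# The task ID is the segment after the last slash, in either ARN format.
aws ecs list-tasks --cluster default --query 'taskArns[0]' --output text | awk -F/ '{print $NF}'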
My ECS solution, as bash and Python snippets. Logging calls print to sys.stderr for debugging, while print() is used to pass the extracted task ID back to the shell script via stdout.
#!/bin/bash
TASK_ID=$(python3.8 get_ecs_task_id.py)
echo "TASK_ID: ${TASK_ID}"
Python script - get_ecs_task_id.py
import json
import logging
import os
import sys

import requests

# logging configuration
# file_handler = logging.FileHandler(filename='tmp.log')
# redirecting to stderr so I can pass back the extracted task id in STDOUT
stdout_handler = logging.StreamHandler(stream=sys.stderr)
# handlers = [file_handler, stdout_handler]
handlers = [stdout_handler]

logging.basicConfig(
    level=logging.DEBUG,  # DEBUG so the logger.debug() calls below actually print
    format="[%(asctime)s] {%(filename)s:%(lineno)d} %(levelname)s - %(message)s",
    handlers=handlers,
    datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger(__name__)


def get_ecs_task_id(host):
    path = "/task"
    url = host + path
    headers = {"Content-Type": "application/json"}
    r = requests.get(url, headers=headers)
    logger.debug(f"r: {r}")
    d_r = json.loads(r.text)
    logger.debug(d_r)
    ecs_task_arn = d_r["TaskARN"]
    # The task ID is the last segment of the ARN.
    ecs_task_id = ecs_task_arn.split("/")[2]
    return ecs_task_id


def main():
    logger.debug("Extracting task ID from $ECS_CONTAINER_METADATA_URI_V4")
    logger.debug("Inside get_ecs_task_id.py, redirecting logs to stderr")
    logger.debug("so that I can pass the task id back in STDOUT")
    host = os.environ["ECS_CONTAINER_METADATA_URI_V4"]
    ecs_task_id = get_ecs_task_id(host)
    # This print statement passes the string back to the bash wrapper, don't remove
    logger.debug(ecs_task_id)
    print(ecs_task_id)


if __name__ == "__main__":
    main()
I installed the Falco drivers on the host.
I am able to capture alerts for specific conditions, like when a process is spawned or when any script is executed inside a container. But the requirement is to trigger an alert whenever any manual command is executed inside the container.
Is there any custom condition we can use to generate an alert whenever any command is executed inside a container?
I expected the condition below to capture an alert whenever the command line contains a newline character (i.e. Enter was pressed inside a container) or the executed command ends with .sh, but this didn't work.
- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    container.id != host and
    proc.cmdline contains "\n" or
    proc.cmdline endswith ".sh"
  output: >
    shell in a container
    (user=%user.name container_id=%container.id container_name=%container.name
    shell=%proc.name parent=%proc.pname source_ip=%fd.rip cmdline=%proc.cmdline)
  priority: WARNING
Your question made me go and read about Falco (I learned something new today). After installing Falco and reading its documentation, I found a solution that seems to work.
- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    container.id != host and
    proc.cmdline != ""
  output: >
    shell in a container
    (user=%user.name container_id=%container.id container_name=%container.name
    shell=%proc.name parent=%proc.pname source_ip=%fd.rip cmdline=%proc.cmdline)
  priority: WARNING
The rule below generates alerts whenever a manual command is executed inside a container (exec with bash or sh), with all the required fields in the output.
Support for the pod IP is planned for Falco version 0.35; work is in progress (https://github.com/falcosecurity/libs/pull/708). The field will be called container.ip (but effectively it is the pod IP, since all containers share the network stack of the pod), with container.cni.json for a complete view in case you have dual-stack and multiple interfaces.
- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    container.id != host and
    evt.type = execve and
    (proc.pname = bash or
    proc.pname = sh) and
    proc.cmdline != bash
  output: >
    (user=%user.name command=%proc.cmdline timestamp=%evt.datetime.s container_id=%container.id container_name=%container.name pod_name=%k8s.pod.name proc_name=%proc.name proc_pname=%proc.pname res=%evt.res)
  priority: INFORMATIONAL
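If you want to sanity-check that a rule like this fires, a quick sketch (the container name and command are placeholders) is to exec into any running container while Falco is watching:
# This spawns a command with bash as the parent, which should match the rule above.
docker exec -it some-container bash -c 'ls /tmp'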
I have a supervisord file like this:
[program:decrypt]
command=export KEYTOKEN=$(aws kms decrypt --ciphertext-blob fileb://<(echo %(ENV_TOKENENC)s | base64 -d) --output text --query Plaintext --region %(ENV_REGION)s | base64 -d )
I am passing the environment variables ENV_TOKENENC and ENV_REGION to the container, and I can echo those variables and confirm that the Docker container is getting them; the KMS decrypt command also works on its own. But when I put the decrypt command in the supervisord file, it throws an error saying ('ENV_REGION') & ('ENV_CONSULTOKENENC') cannot be expanded.
Am I putting the right value in the supervisord file?
Setting an environment variable is easy if you're setting it to a constant value:
[program:decrypt]
command=/usr/bin/env foo=bar baz=qux /path/to/something ...
or, with less overhead:
environment=foo="bar",baz="qux"
command=/path/to/something ...
However, dynamically generating that variable's value requires a shell:
[program:decrypt]
command=/bin/sh -c 'foo=$(generate-bar) /path/to/something'
Note that export is not actually needed here; var=value something as a single command exports var with the value value for the duration of something.
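Applied to your decrypt command, that would look roughly like this; a sketch, using /bin/bash rather than /bin/sh because the <( ) process substitution is a bashism, and with /path/to/your-program standing in for whatever actually consumes KEYTOKEN:
[program:decrypt]
command=/bin/bash -c 'KEYTOKEN=$(aws kms decrypt --ciphertext-blob fileb://<(echo %(ENV_TOKENENC)s | base64 -d) --output text --query Plaintext --region %(ENV_REGION)s | base64 -d) /path/to/your-program'
supervisord expands the %(ENV_...)s placeholders before the shell ever runs, so the shell only sees the literal values.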
I'm using Nagios Core 4.3.4. Is there any way to monitor the number of users connected via RDP on a Windows server, similar to the NRPE check_users? Please tell me if you know one.
You would have to write your own check for this.
In your check you could call a PowerShell script on the server (but it depends on your Windows version):
ipmo RemoteDesktop # 1. import the remotedesktop module
$(Get-RDUserSession).count # 2. print the count of the session
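For example, you could wrap that snippet in a small script that follows the Nagios plugin exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL); a sketch with hypothetical threshold parameters that mirror the 18/20 values used in the service definition further down:
# check_rdp_sessions.ps1 - hypothetical wrapper; exit codes follow the Nagios plugin convention.
param([int]$Warn = 18, [int]$Crit = 20)
Import-Module RemoteDesktop
$count = (Get-RDUserSession).Count
# Emit status text plus standard perfdata (label=value;warn;crit).
Write-Output "RDP sessions: $count | sessions=$count;$Warn;$Crit"
if ($count -ge $Crit) { exit 2 } elseif ($count -ge $Warn) { exit 1 } else { exit 0 }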
But there is another approach mentioned on the monitoring-portal.org site. It's in German, so I'll translate:
1.) Read the Windows performance counters with NSClient++:
c:\program files\nsclient\nsclient++.exe -noboot CheckSystem listpdh >counters_list.txt
2.) Define the command (where -s $USER7$ is the passphrase to establish the connection):
define command{
    command_name    check_nt_Counter_User
    command_line    $USER1$/check_nt -H $HOSTADDRESS$ -s $USER7$ -p 12489 -v COUNTER -l $ARG1$ -w $ARG2$ -c $ARG3$ -d SHOWALL
}
3.) Define the service:
define service{
    service_description    RDP-Sessions
    host_name              TerminalSrv
    use                    sometemplate
    check_command          check_nt_Counter_User!"\\Terminalservices\\active sessions","RDP-User active","users"!18!20
    notes                  get count of active sessions
    process_perf_data      1
    notifications_enabled  0
}
I'm trying to run a perf test in my CI environment, using the k6 Docker image, and a simple single script file works fine. However, I want to break my tests down into multiple JS files. In order to do this, I need to mount a volume in Docker so I can import local modules.
The volume seems to be mounting correctly with my command
docker run --env-file ./test/performance/env/perf.list -v \
`pwd`/test/performance:/perf -i loadimpact/k6 run - /perf/index.js
k6 seems to start, but immediately errors with
time="2018-01-17T13:04:17Z" level=error msg="accepts 1 arg(s), received 2"
Locally, my file system looks something like
/toychicken
  /test
    /performance
      /env
        - perf.list
      - index.js
      - something.js
And index.js looks like this:
import { check, sleep } from 'k6'
import http from 'k6/http'
import something from '/perf/something'

export default () => {
  const r = http.get(`https://${__ENV.DOMAIN}`)
  check(r, {
    'status is 200': r => r.status === 200
  })
  sleep(2)
  something()
}
You need to remove the "-" after run in the Docker command. The "-" instructs k6 to read from stdin, but in this case you want to load the main JS file from the file system. That's why it complains that it receives two args, one being the "-" and the second being the path to index.js (the error message could definitely be more descriptive).
You'll also need to add .js to the '/perf/something' import.
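With both fixes applied, the invocation and the import would look roughly like this (a sketch based on your snippets):
docker run --env-file ./test/performance/env/perf.list -v \
  `pwd`/test/performance:/perf -i loadimpact/k6 run /perf/index.js
and in index.js:
import something from '/perf/something.js'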
I am able to create a new node via the Jenkins web GUI and then have the node running in a container connect back to the Jenkins master via the name and -secret value.
ex.
docker run jenkinsci/jnlp-slave -url http://jenkins-server:port <secret> <slave name>
Is there a way to programmatically create a Jenkins node and get the secret and slave name so I don't have to do it via the GUI?
Creating an agent programmatically
You can use the create-node CLI command to create new agents with a given configuration.
For example, given this minimal JNLP agent configuration in a file config.xml:
<slave>
  <remoteFS>/opt/jenkins</remoteFS>
  <numExecutors>2</numExecutors>
  <launcher class="hudson.slaves.JNLPLauncher" />
</slave>
you can run the create-node command via the CLI client, or the SSH interface:
cat config.xml | java -jar jenkins-cli.jar -s https://jenkins/ create-node my-agent
Viewing agent configuration
To see what the XML configuration looks like for an existing agent, you can append config.xml to an agent URL, e.g. https://jenkins/computer/some-agent-name/config.xml, or you can use the get-node CLI command.
Fetching the per-agent secret programmatically
To fetch the secret hex value without using the Jenkins web UI, you can run a script via the groovy CLI command:
echo 'println jenkins.model.Jenkins.instance.nodesObject.getNode("my-agent")?.computer?.jnlpMac' \
| java -jar ~/Downloads/jenkins-cli.jar -s https://jenkins/ groovy =
This will return the secret value directly. Note that in order to use the groovy command via the SSH interface, you need Jenkins 2.46 or newer. In earlier versions, it only works via the CLI client.
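Putting the two pieces together, a sketch of registering and starting an agent container end to end (the image and -url form follow the question above; my-agent is the example node name):
# Fetch the secret for the node created above, then launch the JNLP agent with it.
SECRET=$(echo 'println jenkins.model.Jenkins.instance.nodesObject.getNode("my-agent")?.computer?.jnlpMac' \
  | java -jar jenkins-cli.jar -s https://jenkins/ groovy =)
docker run jenkinsci/jnlp-slave -url https://jenkins/ "$SECRET" my-agent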
You can also create an agent using the REST API. This is especially useful when you have an Apache proxy in front (see issue JENKINS-47279) and otherwise no direct access to Jenkins (e.g. in a corporate network), where the CLI will not work.
I recommend creating an API token for this purpose. Then you can do something like this:
Linux (Bash)
export JENKINS_URL=https://jenkins.intra
export JENKINS_USER=papanito
export JENKINS_API_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxx
export NODE_NAME=testnode
export JSON_OBJECT="{ 'name':+'${NODE_NAME}',+'nodeDescription':+'Linux+slave',+'numExecutors':+'5',+'remoteFS':+'/home/jenkins/agent',+'labelString':+'SLAVE-DOCKER+linux',+'mode':+'EXCLUSIVE',+'':+['hudson.slaves.JNLPLauncher',+'hudson.slaves.RetentionStrategy\$Always'],+'launcher':+{'stapler-class':+'hudson.slaves.JNLPLauncher',+'\$class':+'hudson.slaves.JNLPLauncher',+'workDirSettings':+{'disabled':+true,+'workDirPath':+'',+'internalDir':+'remoting',+'failIfWorkDirIsMissing':+false},+'tunnel':+'',+'vmargs':+'-Xmx1024m'},+'retentionStrategy':+{'stapler-class':+'hudson.slaves.RetentionStrategy\$Always',+'\$class':+'hudson.slaves.RetentionStrategy\$Always'},+'nodeProperties':+{'stapler-class-bag':+'true',+'hudson-slaves-EnvironmentVariablesNodeProperty':+{'env':+[{'key':+'JAVA_HOME',+'value':+'/docker-java-home'},+{'key':+'JENKINS_HOME',+'value':+'/home/jenkins'}]},+'hudson-tools-ToolLocationNodeProperty':+{'locations':+[{'key':+'hudson.plugins.git.GitTool\$DescriptorImpl#Default',+'home':+'/usr/bin/git'},+{'key':+'hudson.model.JDK\$DescriptorImpl#JAVA-8',+'home':+'/usr/bin/java'},+{'key':+'hudson.tasks.Maven\$MavenInstallation\$DescriptorImpl#MAVEN-3.5.2',+'home':+'/usr/bin/mvn'}]}}}"
curl -L -s -o /dev/null -v -k -w "%{http_code}" -u "${JENKINS_USER}:${JENKINS_API_TOKEN}" -H "Content-Type:application/x-www-form-urlencoded" -X POST -d "json=${JSON_OBJECT}" "${JENKINS_URL}/computer/doCreateItem?name=${NODE_NAME}&type=hudson.slaves.DumbSlave"
In order to get the agent secret via the REST API, check out this, which would look something like this:
curl -L -s -u ${JENKINS_USER}:${JENKINS_API_TOKEN} -X GET ${JENKINS_URL}/computer/${NODE_NAME}/slave-agent.jnlp | sed "s/.*<application-desc main-class=\"hudson.remoting.jnlp.Main\"><argument>\([a-z0-9]*\).*/\1/"
Windows (PS)
And here is my solution for Windows using PowerShell:
$JENKINS_URL="https://jenkins.intra"
$JENKINS_USER="papanito"
$JENKINS_API_TOKEN="xxxxxxxxxxxxxxxxxxxxxxxx"
$NODE_NAME="testnode-ps"
# https://stackoverflow.com/questions/27951561/use-invoke-webrequest-with-a-username-and-password-for-basic-authentication-on-t
$bytes = [System.Text.Encoding]::ASCII.GetBytes("${JENKINS_USER}:${JENKINS_API_TOKEN}")
$base64 = [System.Convert]::ToBase64String($bytes)
$basicAuthValue = "Basic $base64"
$headers = @{ Authorization = $basicAuthValue }
$hash = @{
    name = "${NODE_NAME}";
    nodeDescription = "Linux slave";
    numExecutors = "5";
    remoteFS = "/home/jenkins/agent";
    labelString = "SLAVE-DOCKER linux";
    mode = "EXCLUSIVE";
    "" = @(
        "hudson.slaves.JNLPLauncher";
        'hudson.slaves.RetentionStrategy$Always'
    );
    launcher = @{
        "stapler-class" = "hudson.slaves.JNLPLauncher";
        '$class' = "hudson.slaves.JNLPLauncher";
        "workDirSettings" = @{
            "disabled" = "true";
            "workDirPath" = "";
            "internalDir" = "remoting";
            "failIfWorkDirIsMissing" = "false"
        };
        "tunnel" = "";
        "vmargs" = "-Xmx1024m"
    };
    "retentionStrategy" = @{
        "stapler-class" = 'hudson.slaves.RetentionStrategy$Always';
        '$class' = 'hudson.slaves.RetentionStrategy$Always'
    };
    "nodeProperties" = @{
        "stapler-class-bag" = "true";
        "hudson-slaves-EnvironmentVariablesNodeProperty" = @{
            "env" = @(
                @{
                    "key" = "JAVA_HOME";
                    "value" = "/docker-java-home"
                };
                @{
                    "key" = "JENKINS_HOME";
                    "value" = "/home/jenkins"
                }
            )
        };
        "hudson-tools-ToolLocationNodeProperty" = @{
            "locations" = @(
                @{
                    "key" = 'hudson.plugins.git.GitTool$DescriptorImpl#Default';
                    "home" = "/usr/bin/git"
                };
                @{
                    "key" = 'hudson.model.JDK$DescriptorImpl#JAVA-8';
                    "home" = "/usr/bin/java"
                };
                @{
                    "key" = 'hudson.tasks.Maven$MavenInstallation$DescriptorImpl#MAVEN-3.5.2';
                    "home" = "/usr/bin/mvn"
                }
            )
        }
    }
}
#https://stackoverflow.com/questions/17929494/powershell-convertto-json-with-embedded-hashtable
$JSON_OBJECT = $hash | convertto-json -Depth 5
$JSON_OBJECT
Invoke-WebRequest -Headers $headers -ContentType "application/x-www-form-urlencoded" -Method POST -Body "json=${JSON_OBJECT}" -Uri "${JENKINS_URL}/computer/doCreateItem?name=${NODE_NAME}&type=hudson.slaves.DumbSlave"
Just chiming in a bit late to the party here, but I would highly recommend looking at the Jenkins Client plugin instead. Once the plugin is installed, you only need to start the client JAR from the build node and give it the IP address of the master.
As for the master, you don't need to configure anything. Nodes that register with the master automatically become available to start executing jobs. This is much easier than any of the slave.jar-based approaches.