I use docker-compose. The problem: docker-compose logs returns logs in this format:
service_1 | {"timestamp":"2017-11-28T15:31:47.065Z","correlationId":"NO_CORRELATION_ID","tags":....
which is hard to query with simple scripts. Is there a way to change the format? Ideally to something like:
{"name": "service_1", "data": "{"timestamp":"2017-11-28T15:31:47.065Z","correlationId":"NO_CORRELATION_ID","tags":...." }
The goal is to filter a chain of log messages from different microservices which have the same request id.
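For illustration, the kind of post-processing I have in mind would look roughly like this (a sketch, not an existing docker-compose feature; the script name and the correlation-ID argument are placeholders):

# filter_logs.py (illustrative): docker-compose logs --no-color | python filter_logs.py <correlationId>
import json
import sys

wanted_id = sys.argv[1] if len(sys.argv) > 1 else None

for line in sys.stdin:
    # docker-compose prefixes every line with "<service> | "
    name, sep, payload = line.partition("|")
    if not sep:
        continue  # not a prefixed log line
    try:
        data = json.loads(payload.strip())
    except ValueError:
        continue  # payload is not JSON, skip it
    if wanted_id and data.get("correlationId") != wanted_id:
        continue
    print(json.dumps({"name": name.strip(), "data": data}))

This prints one JSON object per matching line in roughly the shape described above, so the chain of messages for a single request id can be collected with a single pipe.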
I'm using the ECSOperator in Airflow and I need to pass flags to the docker run. I searched the internet but I couldn't find a way to give an ECSOperator flags such as -D, --cpus, and more.
Is there a way to pass these flags to a docker run (if a certain condition is true) using the ECSOperator (the same way we can pass tags and network configuration), or can they only be defined in the ECS container running the Docker image?
I'm not familiar with the ECSOperator, but if I understand correctly it is a Python library, and you can create a new task using Python.
As I can see in this example, it is possible to set task_definition and overrides:
...
ecs_operator_task = ECSOperator(
    task_id="ecs_operator_task",
    dag=dag,
    cluster=CLUSTER_NAME,
    task_definition=service['services'][0]['taskDefinition'],
    launch_type=LAUNCH_TYPE,
    overrides={
        "containerOverrides": [
            {
                "name": CONTAINER_NAME,
                "command": ["ls", "-l", "/"],
            },
        ],
    },
    network_configuration=service['services'][0]['networkConfiguration'],
    awslogs_group="mwaa-ecs-zero",
    awslogs_stream_prefix=f"ecs/{CONTAINER_NAME}",
...
So if you want to set CPU and memory specs for the whole task, you have to update the task_definition dictionary parameters (something like service['services'][0]['taskDefinition']['cpu'] = 2048).
If you want to specify parameters for an exact container, overrides should be the proper way:
overrides={
    "containerOverrides": [
        {
            "cpu": 2048,
            ...
        },
    ],
},
Or the edited containerDefinitions may be set directly inside task_definition, in theory...
Anyway, most of the Docker parameters should be passed inside the containerDefinitions section.
So, about your question:
Is there a way to pass these flags to a docker run
If I understand correctly, you have a JSON TaskDefinition file and want to run it locally using Docker?
Then try checking these tools: they convert a docker-compose.yml into an ECS definition, which is the opposite of what you are looking for, but maybe some of them can convert in the other direction too?
Otherwise you have to parse the TaskDefinition's JSON manually and convert it to docker command arguments, roughly as sketched below.
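A rough sketch of that manual conversion, assuming a TaskDefinition JSON file with a containerDefinitions list; the field mapping below (cpu, memory, environment) is illustrative and far from complete:

# task_def_to_docker.py (illustrative): print a docker run command per container definition
import json
import shlex
import sys

with open(sys.argv[1]) as f:  # path to the TaskDefinition JSON file
    task_def = json.load(f)

for container in task_def.get("containerDefinitions", []):
    args = ["docker", "run", "--name", container["name"]]
    if "cpu" in container:
        # ECS cpu is expressed in units of 1/1024 vCPU; --cpus takes (fractional) CPUs
        args += ["--cpus", str(container["cpu"] / 1024)]
    if "memory" in container:
        args += ["--memory", str(container["memory"]) + "m"]  # ECS memory is in MiB
    for env in container.get("environment", []):
        args += ["-e", env["name"] + "=" + env["value"]]
    args.append(container["image"])
    args += container.get("command", [])
    print(" ".join(shlex.quote(a) for a in args))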
I cannot find an example of outputting Kong's logs as JSON to standard out. I am currently using Fluentd to ingest logs from my Kubernetes cluster, but I have no idea how to send those logs to Fluentd as structured JSON.
For anyone who is struggling with this, I made the following updates to the Kong Helm chart values:
env:
  admin_access_log: '/dev/stdout structured_logs'
  proxy_access_log: '/dev/stdout structured_logs'
  nginx_http_log_format: |
    structured_logs escape=json '{"remote_addr": "$remote_addr", "remote_user": "$remote_user", "host": "$host"...}'
Have you looked at the file-log plugin? https://docs.konghq.com/hub/kong-inc/file-log/
It lets you log to /dev/stdout and use Lua to remove or add fields if necessary.
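A minimal sketch of enabling it through the Admin API, assuming the Admin API is reachable on localhost:8001 (adjust host and port for your deployment):

# Enable the file-log plugin globally and point it at stdout
curl -X POST http://localhost:8001/plugins \
  --data "name=file-log" \
  --data "config.path=/dev/stdout"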
Is there a way to provide custom variables via Docker-Compose that can be referenced within a Kafka Connector config?
I have the following setup in my docker-compose.yml:
- "sql_server=1.2.3.4"
- "sql_database=db_name"
- "sql_username=some_user"
- "sql_password=nahman"
- "sql_applicationname=kafka_connect"
Here is my .json configuration file:
{
  "name": "vwInv_Tran_Amounts",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "src.consumer.interceptor.classes": "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor",
    "tasks.max": 2,
    "connection.url": "jdbc:sqlserver://${sql_server};database=${sql_database};user=${sql_username};password={sql_password};applicationname={sql_applicationname}",
    "query": "SELECT * FROM vwInv_Tran_Amounts",
    "mode": "timestamp",
    "topic.prefix": "inv_tran_amounts",
    "timestamp.column.name": "timestamp",
    "incrementing.column.name": "Inv_Tran_ID"
  }
}
I was able to reference the environment variables using this method with Elastic Logstash, but it doesn't appear to work here.
Whenever loading it via curl I receive:
The connection string contains a badly formed name or value. for configuration Couldn't open connection to jdbc:sqlserver://${sql_server};database=${sql_database};user=${sql_username};password={sql_password};applicationname={sql_applicationname}\nInvalid value com.microsoft.sqlserver.jdbc.SQLServerException: The connection string contains a badly formed name or value.
EDIT:
I tried prefixing environment variables like CONNECT_SQL_SERVER and that didn't work.
I feel like you are looking for Externalizing Kafka Connect secrets, but that would require mounting a file, not using env vars.
JSON Connector config files aren't loaded on Docker container startup. I made this issue to see if this would be possible.
You would have to template out the JSON file externally, then HTTP POST it to the port exposed by the container.
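A rough sketch of that approach in Python, assuming the Connect REST API is exposed on localhost:8083 and the variables are present in the shell environment; the connector.json file name is illustrative:

# post_connector.py (illustrative): substitute ${vars} from the environment, then POST the config
import json
import os
import string
import urllib.request

with open("connector.json") as f:
    # Replaces ${sql_server}, ${sql_database}, ... with values from the environment
    rendered = string.Template(f.read()).substitute(os.environ)

json.loads(rendered)  # fail fast if the substitution produced invalid JSON

req = urllib.request.Request(
    "http://localhost:8083/connectors",
    data=rendered.encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())

The same substitution could also be done with envsubst and curl; the key point is that the rendering happens outside the container before the POST.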
Tried prefixing environment variables like CONNECT_SQL_SERVER
Those values would go into the Kafka Connect Worker properties, not the properties that need to be loaded by a specific connector task.
We are collecting the logs of our applications. Since we containerize our applications, the way we collect logs needs to change a little.
We log via the Docker Logging Driver:
The application outputs its logs to the container's stdout and stderr.
Using the json-file logging driver, Docker writes the logs to a JSON file on the host machine.
A service on the host machine forwards the log files.
But the logs from Docker contain additional information which is unnecessary and makes the forwarding step complicated, because we need to remove that additional information before forwarding.
For example, a log entry from Docker looks like the one below, but all we want is the value of the log field. Is there a way to customize the log format and output only the wanted information by overriding some of Docker's configuration?
{
  "log": "{\"level\": \"info\",\"message\": \"data is correct\",\"timestamp\": \"2017-08-01T11:35:30.375Z\"}\r\n",
  "stream": "stdout",
  "time": "2017-08-03T07:58:02.387253289Z"
}
I don't know of any way to customize the output of the json-file Docker logging driver. However, Docker supports the gelf driver, which allows you to send logs to Logstash. Using Logstash you can output logs in many different ways (by using output plugins) and at the same time customize the format.
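For example, switching a service to the gelf driver in docker-compose might look like this (a sketch; the logstash hostname and port 12201 are assumptions about where your Logstash GELF input listens):

services:
  service_1:
    image: my-service        # placeholder image name
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash:12201"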
For instance to output logs to a file (without any other metadata) you can use something like the following:
output {
  file {
    path => "/path/to/logfile"
    codec => line { format => "%{message}" }
  }
}
If you don't want to add complexity to your logging logic, you can keep using the json-file driver and use a utility such as jq to parse the file and extract only the relevant information. For instance, with jq you can do: jq -r .log </path/to/logfile>
This will read each line of the specified file as a JSON object and output only the log field.
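Since the log field itself contains JSON in the example above, jq can also unwrap that nested payload; for instance (assuming the same file path):

# Parse the nested JSON inside the log field and pull out one of its fields
jq -r '.log | fromjson | .message' </path/to/logfile>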
My Fluent Bit Docker container is adding a timestamp with the local time to the logs it receives via STDIN; in contrast, all the logs received via rsyslog or journald seem to have a UTC time format.
I have a basic EFK stack where I am running Fluent Bit containers as remote collectors which are forwarding all the logs to a FluentD central collector, which is pushing everything into Elasticsearch.
I've added a filter to the Fluent Bit config file where I have experimented with many ways to modify the timestamp, to no avail. It seems like I am overthinking it; it should be much easier to modify the timestamp.
These are all the ways I've tried to modify the timestamp with the fluent-bit.conf filter:
[FILTER]
    Name         record_modifier
    Match_Regex  ^(?!log.*).*$          ## only match the input received via stdin
    Tag          log.stdout             ## tag to mark input received via stdin
    Add          sourcetype timestamp   ## tried to add timestamp from lua script
    Parser       docker                 ## tried to use docker parser for timestamp
    Time_Key     utc                    ## tried to add timestamp as a key
    script       test.lua               ## sample lua script from fluentbit docs
    call         cb_print               ## call a function from within lua script
What is the de facto method to make all the timestamps uniform to UTC? Any help or suggestion is appreciated.
The way it works is that the docker parser extracts the content of 'log' and respects the timestamp defined by Docker.
One quick workaround would be to modify your parsers.conf and make sure the docker parser does not resolve the timestamp; that way Fluent Bit will assign the current time in UTC for you.
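For reference, the change could look roughly like this in parsers.conf; the entries below mirror the docker parser shipped with Fluent Bit, with the time-related keys commented out so Fluent Bit assigns its own (UTC) timestamp. Verify against the parsers.conf bundled with your Fluent Bit version:

[PARSER]
    Name        docker
    Format      json
    # Time_Key    time
    # Time_Format %Y-%m-%dT%H:%M:%S.%L
    # Time_Keep   On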