My custom rsyslog template:
template(name="outfmt" type="list" option.jsonf="on") {
    property(outname="#timestamp" name="timereported" dateFormat="rfc3339" format="jsonf")
    property(outname="host" name="hostname" format="jsonf")
    property(outname="severity" name="syslogseverity-text" caseConversion="upper" format="jsonf")
    property(outname="facility" name="syslogfacility-text" format="jsonf")
    property(outname="syslog-tag" name="syslogtag" format="jsonf")
    property(outname="source" name="app-name" format="jsonf")
    property(outname="message" name="msg" format="jsonf")
}
My rsyslog example output:
{
  "#timestamp": "2018-03-01T01:00:00+00:00",
  "host": "172.20.245.8",
  "severity": "DEBUG",
  "facility": "local4",
  "syslog-tag": "app[1666]",
  "source": "app",
  "message": " this is my syslog message"
}
How can I parse these logs with Fluentd and send them to Elasticsearch?
You can get syslog messages into Elasticsearch (without even having to format them as JSON yourself) through a syslog input plugin. This is probably the most straightforward solution to your problem.
If for some reason you need some kind of log aggregator, I personally would not recommend Fluentd, as it can bring unnecessary complexity with it.
You could instead use Logstash, which is supported by Elastic, and you can find plenty of documentation about it.
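For example, a minimal Logstash pipeline that listens for syslog and ships to Elasticsearch could look like this (a sketch; the port, Elasticsearch address, and index name are assumptions you would adapt to your setup):
input {
  syslog {
    port => 5514
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
You would then forward from rsyslog to the Logstash host instead of building the JSON template yourself.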
I'm trying to create a Swagger UI configuration to show several of my APIs. They are not hosted publicly; the definition files are in my local file system. I'm using Swagger UI with Docker and run it with the following command:
docker run -p 8080:8080 -v $(pwd)/_build:/spec swaggerapi/swagger-ui
The _build directory is where I keep my YAML spec files. This is the swagger-config.yaml config file:
urls:
  - /spec/openapi2.yaml
  - /spec/openapi.yaml
plugins:
  - topbar
I have also tried:
urls:
  - url: /spec/openapi2.yaml
    name: API1
  - url: /spec/openapi.yaml
    name: API2
plugins:
  - topbar
After running it, all I see is the default API of Swagger UI, so I suppose there's an error in my configuration. I have tried several things, but they have not worked, and I can't seem to find any good documentation about the swagger-config.yaml configuration file.
Any idea how to make it work with several APIs?
According to the comments in the Swagger UI issue tracker, the Docker version needs the config file in the JSON format rather than YAML.
Try using this swagger-config.json:
{
  "urls": [
    {
      "url": "/spec/openapi2.yaml",
      "name": "API1"
    },
    {
      "url": "/spec/openapi.yaml",
      "name": "API2"
    }
  ],
  "plugins": [
    "topbar"
  ]
}
Also add -e CONFIG_URL=/path/to/swagger-config.json to the docker run command.
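Putting it together with the volume mount from your question (assuming swagger-config.json is placed in the _build directory so it is served from /spec inside the container):
docker run -p 8080:8080 \
  -v $(pwd)/_build:/spec \
  -e CONFIG_URL=/spec/swagger-config.json \
  swaggerapi/swagger-ui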
So I have defined two connections, aws_default and google_cloud_default, in a JSON file like so:
{
  "aws_default": {
    "conn_type": "s3",
    "host": null,
    "login": "sample_login",
    "password": "sample_secret_key",
    "schema": null,
    "port": null,
    "extra": null
  },
  "google_cloud_default": {
    "conn_type": "google_cloud_platform",
    "project_id": "sample-proj-id123",
    "keyfile_path": null,
    "keyfile_json": {sample_json},
    "scopes": "sample_scope",
    "number_of_retries": 5
  }
}
I have a local Airflow server containerized in Docker. What I am trying to do is import the connections from this file, so that I don't need to define the connections in the Airflow UI.
I have an entrypoint.sh file which runs every time the Airflow image is built, and I have included the line airflow connections import connections.json in that shell file.
In my docker-compose.yaml file, I have added a bind volume like so:
- type: bind
  source: ${HOME}/connections.json
  target: /usr/local/airflow/connections.json
However, when I run my DAG locally, which includes hooks that use these connections, I receive errors such as:
The conn_id `google_cloud_default` isn't defined
So I'm not too sure how to proceed. I was reading about Airflow's local filesystem secrets backend here, and it mentions this code chunk to establish the file paths:
[secrets]
backend = airflow.secrets.local_filesystem.LocalFilesystemBackend
backend_kwargs = {"variables_file_path": "/files/var.json", "connections_file_path": "/files/conn.json"}
But as I check my airflow.cfg, I can't find this chunk. Am I supposed to add it to airflow.cfg myself?
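For what it's worth, here is my guess at what that section would look like adapted to my bind-mount target above (assuming the [secrets] section just gets added manually, since it isn't present by default):
[secrets]
backend = airflow.secrets.local_filesystem.LocalFilesystemBackend
backend_kwargs = {"connections_file_path": "/usr/local/airflow/connections.json"}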
I could use some guidance here. I know the solution is probably simple, but I'm new to setting up connections like this. Thanks!
I have an Elasticsearch deployment on Kubernetes (AKS). I'm using Elastic's official Docker images for the deployments. Logs are stored on a persistent Azure Disk. How can I migrate some of these logs to another cluster with a similar setup? Only the logs that match a filter condition based on the datetime of the logs need to be migrated.
You can use the Reindex API to achieve this:
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
Note:
Run the aforementioned command on your target instance.
Make sure that the source instance is whitelisted in elasticsearch.yml
reindex.remote.whitelist: oldhost:9200
Run the process asynchronously using the query param below:
POST _reindex?wait_for_completion=false
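Since you only want the logs matching a datetime filter, the query can be a range on your timestamp field instead of a match. A sketch, assuming the field is named @timestamp and using an arbitrary window (adjust both to your mapping and date range):
POST _reindex?wait_for_completion=false
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "range": {
        "@timestamp": {
          "gte": "2021-01-01T00:00:00Z",
          "lt": "2021-02-01T00:00:00Z"
        }
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}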
Just looking for some guidance on how to properly invoke a command when a container starts, when creating it via the azure-arm-containerinstance package. There is very little documentation on this specific part, and I wasn't able to find any examples on the internet.
return client.containerGroups
  .beginCreateOrUpdate(process.env.AZURE_RESOURCE_GROUP, containerInstanceName, {
    tags: ['server'],
    location: process.env.AZURE_INSTANCE_LOCATION,
    containers: [
      {
        image: process.env.CONTAINER_IMAGE,
        name: containerInstanceName,
        command: ["./some-executable", "?Type=Fall?"],
        ports: [
          {
            port: 1111,
            protocol: 'UDP',
          },
        ],
        resources: {
          requests: {
            cpu: Number(process.env.INSTANCE_CPU),
            memoryInGB: Number(process.env.INSTANCE_MEMORY),
          },
        },
      },
    ],
    imageRegistryCredentials: [
      {
        server: process.env.CONTAINER_REGISTRY_SERVER,
        username: process.env.CONTAINER_REGISTRY_USERNAME,
        password: process.env.CONTAINER_REGISTRY_PASSWORD,
      },
    ],
  });
Specifically, is the part below correct? Is it just an array of strings? Are there any good examples anywhere? (I tried both Google and Bing.) Is this the equivalent of Docker's CMD ["command","argument"]?
command: ["./some-executable", "?Type=Fall?"],
Most of what you have done is right, but there are a couple of points to pay attention to.
One is that the command property overrides the CMD setting in the Dockerfile. So if the command does not keep running, the container will end up in a terminated state when the command finishes executing.
Second, the command property is an array of strings, and its members are executed like a shell script. So I suggest setting it like this:
command: ['/bin/bash', '-c', 'echo $PATH'],
It's best to keep the first two strings unchanged and only change what comes after them.
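Applied to your executable, that would look something like this (a sketch; the inner quotes keep the shell from trying to expand the ? characters):
command: ['/bin/bash', '-c', './some-executable "?Type=Fall?"'],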
If you have any more questions, please let me know. And if this was helpful, you can accept it :-)
I'm trying to build a new Debian image with Packer, but the build halts at ==> openstack: Waiting for server to become ready..., while Packer's build instance is stuck in the Spawning state.
(Edit: my last test build was stuck for ~45 minutes and exited with this error message: Build 'openstack' errored: Error waiting for server ({uuid}) to become ready: unexpected state 'ERROR', wanted target '[ACTIVE]')
The source image is a cloud image of Debian, and my template file looks like this:
{
  "variables": {
    "os_auth_url": " ( Keystone URL ) ",
    "os_domain_name": " ( Domain Name ) ",
    "os_tenant_name": " ( Project Name ) ",
    "os_region_name": " ( Region Name ) "
  },
  "builders": [
    {
      "type": "openstack",
      "flavor": "b.tiny",
      "image_name": "packer-openstack-{{timestamp}}",
      "source_image": "cd8da3bf-66cd-4847-8970-447533b86b30",
      "ssh_username": "debian",
      "username": "{{user `username`}}",
      "password": "{{user `password`}}",
      "identity_endpoint": "{{user `os_auth_url`}}",
      "domain_name": "{{user `os_domain_name`}}",
      "tenant_name": "{{user `os_tenant_name`}}",
      "region": "{{user `os_region_name`}}",
      "floating_ip_pool": "internet",
      "security_groups": [
        "deb_test_uni"
      ],
      "networks": [
        "a4151f4e-fd88-4df8-97e1-2b113f149ef8",
        "71b10496-2617-47ae-abbc-36239f0863bb"
      ]
    }
  ]
}
The username and password fields are added from a separate file located on the (Jenkins) build server.
The build managed to get past this point once, but then exited with an SSH timeout error. I have no idea why that happened, or why only that once.
Is there anything blindingly obvious that I'm missing? Or has anyone else suffered the same problem but managed to find a solution?
Thanks in advance!
It turns out that, in my case, there was nothing I (personally) could do. It was neither the Packer template nor the environment variables (as I had suspected), but a fault in the server-side configuration.
I'm sorry that I don't know what the bug or the fix was, as I wasn't the one who found or fixed the problem, but knowing that it could be a good idea to double-check the server setup might help someone in the future.
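For anyone who hits the same ERROR state: assuming you have the OpenStack CLI and credentials for the project, the fault message can usually be read off the errored server before it is cleaned up, e.g. (the <server-uuid> placeholder is the instance ID from the Packer error):
openstack server show <server-uuid> -c status -c fault
That at least tells you whether the failure happened on your side or on the operator's.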