splunkuniversalforwarder:
  image: splunk/universalforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license
    - SPLUNK_FORWARD_SERVER=ops-splunkhead02.dop.sfdc.net:9997
    - SPLUNK_USER=root
    - SPLUNK_PASSWORD=xxxx
  ports:
    - 9997:9997
I store the log file in /var/logs/serviceLog.log (on the local machine, not in the container).
I don't see a parameter for passing the file path. It seems the Splunk forwarder is running in the background, and I just realized I never pass the log source variable to the container!
Does anyone perhaps have an idea?
You will need to add the SPLUNK_ADD directive to your compose file to specify which files to monitor.
You can use the documentation for the Docker image to see multiple examples.
I wrote a ready-to-use Splunk Docker bootstrap project that uses SPLUNK_ADD to collect logs.
A short extract:
SPLUNK_ADD_2: 'monitor /var/log/app2/ -index docker_file -sourcetype _json'
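A minimal sketch of how that could look for your file, assuming the image honours SPLUNK_ADD the same way as the SPLUNK_ADD_2 line above, and assuming you mount the host log file into the container first (the index and sourcetype names are placeholders):

splunkuniversalforwarder:
  image: splunk/universalforwarder
  volumes:
    # make the host log file visible inside the container
    - /var/logs/serviceLog.log:/var/logs/serviceLog.log:ro
  environment:
    - SPLUNK_START_ARGS=--accept-license
    - SPLUNK_FORWARD_SERVER=ops-splunkhead02.dop.sfdc.net:9997
    - SPLUNK_USER=root
    - SPLUNK_PASSWORD=xxxx
    # monitor the mounted file; index and sourcetype are placeholder names
    - SPLUNK_ADD=monitor /var/logs/serviceLog.log -index main -sourcetype serviceLog

The key points are the volume mount (the forwarder can only read paths that exist inside its own container) and the SPLUNK_ADD entry pointing at that in-container path.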
I have a local setup for SCDF with docker-compose.
I'm trying to understand how I can pass environment variables to a triggered task.
I tried via the 'Deployer properties' but it doesn't seem to work.
Ideally I would have liked to be able to set LOG4J2_APPENDER=PatternAppender in the UI...
If it's not possible via the UI, what other options do I have?
I tried to add it under 'environment' in the docker-compose.yml (where there are other vars), but it also didn't work:
dataflow-server:
  user: root
  image: springcloud/spring-cloud-dataflow-server:${DATAFLOW_VERSION:-2.9.1}${BP_JVM_VERSION:-}
  container_name: dataflow-server
  ports:
    - "9393:9393"
    - "1142:1142"
  environment:
    # Set CLOSECONTEXTENABLED=true to ensure that the CRT launcher is closed.
    - SPRING_CLOUD_DATAFLOW_APPLICATIONPROPERTIES_TASK_SPRING_CLOUD_TASK_CLOSECONTEXTENABLED=true
    - LOG4J2_APPENDER=PatternAppender
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/dataflow
    - SPRING_DATASOURCE_USERNAME=root
    - SPRING_DATASOURCE_PASSWORD=rootpw
    - SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.mariadb.jdbc.Driver
    # (Optionally) authenticate the default Docker Hub access for the App Metadata access.
    #- SPRING_CLOUD_DATAFLOW_CONTAINER_REGISTRY_CONFIGURATIONS_DEFAULT_USER=${METADATA_DEFAULT_DOCKERHUB_USER}
    #- SPRING_CLOUD_DATAFLOW_CONTAINER_REGISTRY_CONFIGURATIONS_DEFAULT_SECRET=${MET
In your snippet, you're passing the environment variable SPRING_CLOUD_TASK_CLOSECONTEXTENABLED with the value true to all tasks.
The same should work for LOG4J2_APPENDER, i.e. add the environment variable
SPRING_CLOUD_DATAFLOW_APPLICATIONPROPERTIES_TASK_LOG4J2_APPENDER=PatternAppender
to the environment variables of the Dataflow server.
Setting the environment variable as an application property (not a deployer property) in the UI should have the same effect.
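In compose terms that is just one more entry in the environment list of the dataflow-server service from your snippet; a minimal sketch showing only the relevant part:

dataflow-server:
  environment:
    # passed through to every launched task as LOG4J2_APPENDER=PatternAppender
    - SPRING_CLOUD_DATAFLOW_APPLICATIONPROPERTIES_TASK_LOG4J2_APPENDER=PatternAppender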
I am using K6 for Load Testing.
I have cloned the K6, Grafana, InfluxDB docker-compose set up from here:
https://github.com/loadimpact/k6
Each time I start Grafana, I have to manually import the dashboard I want to use ('Import' - ID2587 - Load).
I am new to Docker (and Grafana!)... is there any way to have this dashboard preloaded in the container so I don't have to manually add it each time?
Mount your dashboards and datasources into the Grafana container when running docker-compose up -d influxdb grafana.
Refer to the docker-compose file and grafana folder here.
And make sure the datasource in your dashboard.json is updated with the name of the datasource mentioned in datasource.yml.
I have created a small tutorial in the k6 community. Hope this solves your case.
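For illustration, a Grafana datasource provisioning file (datasource.yml) for the InfluxDB that k6 writes to might look like the sketch below; the datasource name, URL, and database are assumptions and must match both your compose setup and the datasource referenced in dashboard.json:

apiVersion: 1
datasources:
  - name: influxdb              # must match the datasource name used in dashboard.json
    type: influxdb
    access: proxy
    url: http://influxdb:8086   # compose service name and default InfluxDB port
    database: k6                # database that k6 writes its metrics to
    isDefault: true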
A few small improvements which I think can help the docker-compose setup be awesome to use:
Use the awesome 'k6 Load Testing Results - by dcadwallader' dashboard:
https://grafana.com/grafana/dashboards/2587
Map a local dashboards directory, as well as the provisioning settings for the dashboard with all of the org IDs and settings pre-configured (a sketch of that provisioning file follows after the reference links below), e.g.:
volumes:
  - ./dashboards:/var/lib/grafana/dashboards
  - ./grafana-dashboard.yaml:/etc/grafana/provisioning/dashboards/dashboard.yaml
  - ./grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml
https://github.com/luketn/docker-k6-grafana-influxdb/blob/master/docker-compose.yml#L32-L35
Set the uid in the dashboard JSON file for consistent links, e.g.:
{
  "uid": "k6",
https://github.com/luketn/docker-k6-grafana-influxdb/blob/master/dashboards/k6-load-testing-results_rev3.json#L53
Ref: https://medium.com/swlh/beautiful-load-testing-with-k6-and-docker-compose-4454edb3a2e3
And: https://github.com/luketn/docker-k6-grafana-influxdb
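The grafana-dashboard.yaml referenced in the volumes above is a standard Grafana dashboard provisioning file; a minimal sketch (the provider name and folder are arbitrary choices) could be:

apiVersion: 1
providers:
  - name: 'k6'                  # arbitrary provider name
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    options:
      # Grafana scans this directory (the mounted ./dashboards) for dashboard JSON files
      path: /var/lib/grafana/dashboards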
I recently installed Manjaro on my Computer and I'm doing a few tests.
I tried to build and launch a Docker setup which works perfectly on Windows, macOS, Ubuntu, etc.
But when I run sudo docker-compose up I get an error.
Everything seems to work fine except at the end:
Successfully built d72aa4c69ad6
Successfully tagged code_interface:latest
WARNING: Image for service interface was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating code_db_1 ... done
Creating code_web_1 ... done
Creating code_interface_1 ... done
Attaching to code_db_1, code_web_1, code_interface_1
code_db_1 exited with code 139
Here's what my docker-compose.yml looks like:
db:
  image: mongo:3.0.2
  ports:
    - "27017:27017"
web:
  build: X
  ports:
    - "5000:5000"
  links:
    - db
interface:
  build: Y
  ports:
    - "8080:8080"
  links:
    - web
Any idea why I get this error or how to fix it?
This may be related to the kernel version of your computer, as reported in this issue:
This is probably related to the changes in vsyscall linking in the 4.11 kernel. Try booting the kernel with vsyscall=emulate and see if it helps. This does run ok under the linuxkit 4.11 kernel config without issues, so it is to do with the config.
Try to implement the solution from this comment:
Hi, specifying this command in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="vsyscall=emulate"
Let us know if it solves the issue for you.
As far as I know, exit code 139 is a segmentation fault raised by hardware with memory protection. It tells you that your program is trying to access a restricted area of memory.
Maybe you tried to access read-only memory, dereferenced a null pointer somewhere in your code, or produced a stack overflow.
Finally made it work.
I had to update my kernel to the latest version (from 4.19.16-1 to 4.20.3-1).
Don't really know why though.
I am looking at the code in eShopOnContainers, in docker-compose.override.yml. I can see this line in the volumes section of the webshoppingapigw service:
volumes:
  - ./src/ApiGateways/Web.Bff.Shopping/apigw:${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}

Here it is in the context of the full service definition:
webshoppingapigw:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - IdentityUrl=http://identity.api # Local: You need to open your local dev-machine firewall at range 5100-5110.
  ports:
    - "5202:80"
  volumes:
    - ./src/ApiGateways/Web.Bff.Shopping/apigw:${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}
What does the ${ESHOP_OCELOT_VOLUME_SPEC ...} part of the volumes line do? I would think it creates a volume of some kind, but I can't see where ESHOP_OCELOT_VOLUME_SPEC is defined in the project, not even inside the .env file.
When I looked inside docker-compose.override.prod, the ${ESHOP_OCELOT_VOLUME_SPEC} line isn't even there.
Currently I get an exception running the sample code, so I am trying to follow the eShopOnContainers code but write a simpler version that I can follow more easily. I am starting with the ApiGateway and building up from there.
I don't know whether this question is eligible to be asked; people here can be very particular about questions.
volumes:
  - ./src/ApiGateways/Web.Bff.Shopping/apigw:${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}
That means:
Mount ./src/ApiGateways/Web.Bff.Shopping/apigw to the path given by $ESHOP_OCELOT_VOLUME_SPEC.
If $ESHOP_OCELOT_VOLUME_SPEC is empty (not defined), then use /app/configuration as the mount path.
That gives a user the opportunity to override the default path with a path of their choosing:
docker run -e ESHOP_OCELOT_VOLUME_SPEC=/my/path ...
ESHOP_OCELOT_VOLUME_SPEC is an environment variable. Its value may be exported or set somewhere in the code or on the instance. ESHOP_OCELOT_VOLUME_SPEC is replaced with its value when the compose file is processed, which is why you were not able to see ESHOP_OCELOT_VOLUME_SPEC in Docker, only the value it expands to.
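To see the pattern in isolation, here is a hypothetical minimal compose sketch using the same ${VAR:-default} substitution; MY_CONFIG_PATH and the image are made-up names for illustration only:

apigw:
  image: nginx                  # placeholder image
  volumes:
    # if MY_CONFIG_PATH is unset or empty, Compose substitutes /app/configuration
    - ./config:${MY_CONFIG_PATH:-/app/configuration}

Setting MY_CONFIG_PATH=/some/other/path in an .env file next to the compose file (or exporting it in the shell before docker-compose up) changes the container-side mount path without editing the compose file.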
We are collecting the logs of our applications. Since we containerize our applications, the way we collect logs needs to change a little.
We log via the Docker Logging Driver:
The application outputs its logs to the container's stdout and stderr.
Using the json-file logging driver, Docker writes the logs to a JSON file on the host machine.
A service on the host machine forwards the log files.
But the logs from Docker contain additional information that is unnecessary and makes the forwarding step complicated, because we need to remove that additional information before forwarding.
For example, a log entry from Docker looks like the one below, but all we want is the value of the log field. Is there a way to customize the log format and output only the information we want by overriding some of Docker's configuration?
{
  "log": "{\"level\": \"info\", \"message\": \"data is correct\", \"timestamp\": \"2017-08-01T11:35:30.375Z\"}\r\n",
  "stream": "stdout",
  "time": "2017-08-03T07:58:02.387253289Z"
}
I don't know of any way to customize the output of the json-file Docker logging driver. However, Docker supports the gelf driver, which allows you to send logs to Logstash. Using Logstash you can output logs in many different ways (by using output plugins) and at the same time customize the format.
For instance, to output logs to a file (without any other metadata), you can use something like the following:
output {
  file {
    path => "/path/to/logfile"
    codec => line { format => "%{message}" }
  }
}
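For reference, switching a service to the gelf driver in docker-compose (compose file format v2+) could look like the sketch below; the service name, image, and the Logstash address are assumptions for illustration:

app:
  image: my-app:latest          # hypothetical application image
  logging:
    driver: gelf
    options:
      # assumes a Logstash (or other GELF-compatible) input listening on UDP port 12201
      gelf-address: "udp://logstash:12201"
      tag: "my-app"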
If you don't want to add complexity to your logging logic, you can keep using the json-file driver and use a utility such as jq to parse the file and extract only the relevant information. For instance, with jq you can do: jq -r .log </path/to/logfile>
This will read each line of the specified file as a JSON object and output only the log field.