I am using K6 for Load Testing.
I have cloned the K6, Grafana, InfluxDB docker-compose setup from here:
https://github.com/loadimpact/k6
Each time I start Grafana, I have to manually import the dashboard I want to use (Import → ID 2587 → Load).
I am new to Docker (and Grafana!)... is there any way to have this dashboard preloaded in the container so I don't have to manually add it each time?
Mount your dashboard and datasource into the Grafana container
when running docker-compose up -d influxdb grafana.
Refer to the docker-compose file and grafana folder here.
Also make sure the datasource referenced in your dashboard.json matches the datasource name defined in datasource.yml.
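For reference, a minimal sketch of such a datasource.yml provisioning file; the datasource name, URL, and database below are assumptions matching a typical k6-to-InfluxDB compose setup:
apiVersion: 1
datasources:
  # "influxdb" must match the datasource name your dashboard.json refers to
  - name: influxdb
    type: influxdb
    access: proxy
    url: http://influxdb:8086   # assumed compose service name and default port
    database: k6                # assumed database that k6 writes metrics into
    isDefault: true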
I have created a small tutorial in the k6 community. Hope this solves your case.
A few small improvements which I think can help the docker-compose setup be awesome to use:
Use the awesome 'k6 Load Testing Results - by dcadwallader' dashboard:
https://grafana.com/grafana/dashboards/2587
Map a local dashboards directory, as well as the provisioning files for the dashboards and datasources with all of the org IDs and settings pre-configured (see the sketch after this list), e.g.:
volumes:
- ./dashboards:/var/lib/grafana/dashboards
- ./grafana-dashboard.yaml:/etc/grafana/provisioning/dashboards/dashboard.yaml
- ./grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml
https://github.com/luketn/docker-k6-grafana-influxdb/blob/master/docker-compose.yml#L32-L35
Set the uid in the dashboard JSON file for consistent links, e.g.:
{
  "uid": "k6",
https://github.com/luketn/docker-k6-grafana-influxdb/blob/master/dashboards/k6-load-testing-results_rev3.json#L53
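As referenced above, here is a sketch of what the mounted grafana-dashboard.yaml provider file typically looks like; it points Grafana at the mounted dashboards directory, and the provider name and orgId are assumptions:
apiVersion: 1
providers:
  - name: 'default'      # arbitrary provider name, assumed
    orgId: 1             # default Grafana org, assumed
    folder: ''
    type: file
    options:
      path: /var/lib/grafana/dashboards   # matches the volume mount above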
Ref: https://medium.com/swlh/beautiful-load-testing-with-k6-and-docker-compose-4454edb3a2e3
And: https://github.com/luketn/docker-k6-grafana-influxdb
I have a docker-compose file which sets up InfluxDB.
These are the environment variables used:
DOCKER_INFLUXDB_INIT_BUCKET: test
DOCKER_INFLUXDB_INIT_MODE: setup
DOCKER_INFLUXDB_INIT_ORG: test_org
DOCKER_INFLUXDB_INIT_PASSWORD: test_pass
DOCKER_INFLUXDB_INIT_RETENTION: 1w
DOCKER_INFLUXDB_INIT_USERNAME: test_user
and volumes:
/mount/influxdb:/var/lib/influxdb2
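Put together, the relevant part of the compose file would look roughly like this (the service name and image tag are assumptions; the DOCKER_INFLUXDB_INIT_* variables only take effect on a 2.x image starting with an empty data volume):
services:
  influxdb:
    image: influxdb:2.6    # assumed 2.x tag
    environment:
      DOCKER_INFLUXDB_INIT_MODE: setup
      DOCKER_INFLUXDB_INIT_USERNAME: test_user
      DOCKER_INFLUXDB_INIT_PASSWORD: test_pass
      DOCKER_INFLUXDB_INIT_ORG: test_org
      DOCKER_INFLUXDB_INIT_BUCKET: test
      DOCKER_INFLUXDB_INIT_RETENTION: 1w
    volumes:
      - /mount/influxdb:/var/lib/influxdb2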
I am able to login using DOCKER_INFLUXDB_INIT_USERNAME and DOCKER_INFLUXDB_INIT_PASSWORD.
The issue is that when I execute influx user list, it returns an empty table. The same goes for influx org list. Because of this, I'm not able to add any users to test_org or change the default user's password.
Is there any solution to this?
I am facing an issue while trying to run RabbitMQ docker container.
It says the user does not exist (https://i.stack.imgur.com/SOeqq.png).
I am passing the user ID and password to the RabbitMQ docker-compose file as environment variables at runtime (the user and password won't be fixed).
docker-compose file
I have created rabbitmq-conf.json as I need to use some predefined queues.
rabbitmq-conf.json file
It was working fine with the rabbitmq:3.8.14-management-alpine image but is not working with rabbitmq:3.11.3-management-alpine.
The rabbitmq.conf file contains:
management.load_definitions = /etc/rabbitmq/rabbitmq-conf.json
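One version difference that may be worth checking (an assumption on my part, not a confirmed fix): newer RabbitMQ releases support importing definitions through the core load_definitions key rather than the management-plugin one, and when definitions are imported at boot the broker skips creating the default user from the environment variables, which could explain the "user does not exist" error. The newer-style key would be:
load_definitions = /etc/rabbitmq/rabbitmq-conf.json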
While setting up and configuring some Docker containers, I asked myself how I could automatically edit some config files inside the container after the containerized service has finished installing (since the config files are created during installation).
I have tried doing that with a shell script added as the entrypoint in the Dockerfile. However, as I have said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Bind-mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found was running a script, which edits the config, manually after the installation has finished with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (only a very limited number of environment variables are available to specify them). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
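For illustration, a hedged sketch of the manual workaround mentioned above, as an editconfig.sh that can be piped in via docker exec; the config path is the usual location in the official Nextcloud image, and the sed edit is a made-up example:
#!/bin/sh
# Wait until the Nextcloud installer has generated its config file.
until [ -f /var/www/html/config/config.php ]; do
  sleep 2
done
# Example edit (illustrative only): change an installation-dependent option.
sed -i "s/'overwrite.cli.url' => '[^']*'/'overwrite.cli.url' => 'https:\/\/example.com'/" \
  /var/www/html/config/config.php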
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the twelve-factor app methodology; see here.
As I understand your case, you need your container to start by creating its config from some template.
I see a number of options to do it:
Use a script that generates the config from a template and arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint renders the template and after that starts the application. To change the config, you will be forced to restart the service (container); see the sketch after this list.
Run a service that stores the configuration and renders your configuration at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed on the fly. In your container you will have two processes: the application and the consul-template daemon. Obviously, you will need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
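A minimal sketch of the first option as a plain shell entrypoint (no template engine), where the app name, config path, and variables are all hypothetical:
#!/bin/sh
set -e
# Render the config from environment variables before the app starts.
cat > /etc/myapp/myapp.conf <<EOF
db_host = ${MYAPP_DB_HOST:-localhost}
db_port = ${MYAPP_DB_PORT:-5432}
EOF
# exec hands PID 1 to the service so it receives container signals.
exec myapp --config /etc/myapp/myapp.conf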
I'm using DreamFactory REST API in a Docker container and I need to disable wrapper "resource" in payload. How can I achieve this?
I have replaced the following in all of these four files:
opt/bitnami/dreamfactory/.env-dist
opt/bitnami/dreamfactory/vendor/dreamfactory/df-core/config/df.php
opt/bitnami/dreamfactory/installer.sh
bitnami/dreamfactory/.env
DF_ALWAYS_WRAP_RESOURCES=true
with:
DF_ALWAYS_WRAP_RESOURCES=false
but this doesn't fix my problem.
The change you describe is indeed the correct one as found in the DreamFactory wiki. Therefore I suspect the configuration has been cached. Navigate to your DreamFactory project's root directory and run this command:
$ php artisan config:clear
This will wipe out any cached configuration settings and force DreamFactory to read the .env file anew. Also, keep in mind you only need to change the .env file (or manage your configuration variables in your server environment). Those other files won't play any role in configuration changes.
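Since DreamFactory is running inside a container here, the command needs to run inside it too. A hedged example, assuming the container is named dreamfactory and using the application path from the question:
docker exec -it dreamfactory sh -c "cd /opt/bitnami/dreamfactory && php artisan config:clear"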
I am new to prometheus/alertmanager.
I have created a cron job which executes a shell script every minute. This shell script generates a test.prom file (with a gauge metric in it) in the directory assigned to node-exporter's --textfile.collector.directory argument. I verified (using curl http://localhost:9100/metrics) that node-exporter exposes that custom metric correctly.
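For reference, a minimal sketch of what such a test.prom file could contain; the metric name is from the question, while the HELP/TYPE lines and the value are assumptions:
# HELP test_metric A custom metric written by the cron job
# TYPE test_metric gauge
test_metric 1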
When I try to run a query against that custom metric in the Prometheus dashboard, it does not show any results (it says no data found).
I could not figure out why the query against the metric exposed via the node-exporter textfile collector fails. Any clues what I missed? Also, please let me know how to check and ensure that Prometheus scraped my custom metric test_metric.
My query in the Prometheus dashboard is test_metric != 0, which did not give any results, even though I exposed test_metric via the node-exporter textfile collector.
Any help is appreciated!
BTW, node-exporter is running as a Docker container in a Kubernetes environment.
I had a similar situation, but it was not a configuration problem.
Instead, my data included timestamps:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 53.87 1541426242
network_connectivity_rtt{host="hop_1"} 58.8 1541426242
network_connectivity_rtt{host="hop_2"} 21.93 1541426242
network_connectivity_rtt{host="hop_3"} 71.69 1541426242
The Prometheus node exporter was picking them up without any problem once I reloaded it. As Prometheus is running under systemd, I had to check its logs like this:
journalctl --system -u prometheus.service --follow
There I read this line:
msg="Error on ingesting samples that are too old or are too far into the future"
Once I removed the timestamps, the values started appearing. This led me to read about the timestamps in more detail, and I found out they have to be in milliseconds. So this format is now OK:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 50.47 1541429581376
network_connectivity_rtt{host="hop_1"} 3.38 1541429581376
network_connectivity_rtt{host="hop_2"} 11.2 1541429581376
network_connectivity_rtt{host="hop_3"} 20.72 1541429581376
I hope it helps someone else.
It's my bad. I had not included scrape instructions for node-exporter in the prometheus.yaml file. It worked after including them.
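For reference, a minimal sketch of the kind of scrape_configs entry that was missing; the job name and target are assumptions, and in a Kubernetes setup the target would normally come from service discovery instead:
scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets: ['localhost:9100']   # node-exporter's default port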
This issue can also happen because of stale metrics.
Let's say you have written your metric to the file at 13:00.
By default, after 5 minutes Prometheus will consider your metric stale, and it might have disappeared by the time you are making your query.