The database, username, and password combination definitely works. The following Grafana configuration doesn't, though.
datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://XXX.XXX.XXX.XX:8086/db/dbname",
    username: 'username',
    password: 'password',
    default: true
  },
},
I've tried removing the default parameter, changing influxdb to influx, and appending /series to the URL, all to no avail. Has anyone gotten this to work?
InfluxDB v0.7.3 (git: 216a3eb)
Grafana 1.6.0 (2014-06-16)
I'm using the configuration below and it works. Try creating the grafana database in your InfluxDB instance and adding the grafana datasource configuration.
...
datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://localhost:8086/db/test",
    username: 'root',
    password: 'XXXX'
  },
  grafana: {
    type: 'influxdb',
    url: "http://localhost:8086/db/grafana",
    username: 'root',
    password: 'XXXX',
    grafanaDB: true
  }
},
...
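If the grafana database doesn't exist yet, on InfluxDB 0.x-era versions it can be created over the HTTP API; something like the following (host, port and the root credentials are placeholders for whatever your setup uses):

    # illustrative 0.x-style API call; adjust host and credentials to your install
    curl -X POST 'http://localhost:8086/db?u=root&p=root' -d '{"name": "grafana"}'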
I had the same issue using the config shown by annelorayne above. It turned out that Grafana was not able to connect to localhost:8086, but it could connect to the actual IP address of the server (i.e. 10.0.1.100:8086).
This was true even though 'telnet localhost 8086' worked.
I changed the Grafana config to this, and it worked:
datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://10.0.1.100:8086/db/collectd",
    username: 'root',
    password: 'root',
    grafanaDB: true
  },
  grafana: {
    type: 'influxdb',
    url: "http://10.0.1.100:8086/db/grafana",
    username: 'root',
    password: 'root'
  },
},
I'm sorry I can't explain why this happens. Since telnet works, I have to assume it's a Grafana issue.
This question has been asked multiple times on the mailing list; see these threads for more info: thread1, thread2, thread3. There's also a blog post on how to get Grafana and InfluxDB working together: here's a link.
The browser sometimes caches config.js and therefore picks up old configuration.
Please try clearing the cache or use incognito/private mode to load the Grafana dashboard.
I faced the same issue and using incognito worked for me.
Verify the config.js contents that Grafana is actually serving (host:port/config.js).
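For example, you can fetch it directly and check which datasource URLs are actually being served (host and port below are placeholders for your Grafana address):

    # host/port are placeholders for your Grafana instance
    curl -s http://grafana-host:port/config.js | grep -n 'url'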
Related
I am trying to create a user and password for Jenkins using JCasC. I can set up Jenkins, however when I go to the GUI on my localhost I do not see any users. Here is my code:
jenkins:
  systemMessage: "Jenkins configured automatically by Jenkins Configuration as Code plugin\n\n"
  disabledAdministrativeMonitors:
    - "jenkins.diagnostics.ControllerExecutorsNoAgents"
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              id: "admin-cred"
              username: "jenkins-admin"
              password: "butler"
              scope: GLOBAL
I believe I have all the necessary plugins installed, but clearly something is missing. Any help would be appreciated.
The way I got users to pop up is by setting up the (local) security realm, rather than credentials, like so:
jenkins:
  securityRealm:
    local:
      users:
        - id: jenkins-admin
          password: butler
I'm finding this a great resource to get ideas from: https://github.com/oleg-nenashev/demo-jenkins-config-as-code
I've used a local security realm with signup disabled to add the user "jenkins-admin".
jenkins:
  . . .
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: jenkins-admin
          password: butler
You can refer to the links below to learn more about JCasC:
https://www.jenkins.io/projects/jcasc/
https://github.com/jenkinsci/configuration-as-code-plugin
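As a side note, JCasC only applies the YAML it can actually locate. If Jenkins runs in Docker, one common approach is to point the CASC_JENKINS_CONFIG environment variable at the file; the paths and image tag below are only illustrative:

    # paths and image tag are illustrative; adapt to your setup
    docker run -d -p 8080:8080 \
      -e CASC_JENKINS_CONFIG=/var/jenkins_home/casc/jenkins.yaml \
      -v "$PWD/jenkins.yaml:/var/jenkins_home/casc/jenkins.yaml:ro" \
      jenkins/jenkins:lts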
I'm having a hard time using custom Elasticsearch ingest pipelines with Filebeat's Docker autodiscovery. I've started out with custom processors in my filebeat.yml file, however I would prefer to shift this to custom ingest pipelines I've created.
Firstly, here is my configuration using custom processors, which works to provide custom grok-like processing for my Servarr app Docker containers (identified by applying a label to them in my docker-compose.yml file). The processors copy the 'message' field to 'log.original', then use dissect to extract 'log.level' and 'log.logger' and to overwrite 'message'. The final processor is a JavaScript function used to convert the log.level to lowercase (overkill perhaps, but humour me).
Filebeat configuration:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true

processors:
  - if:
      equals:
        docker.container.labels.co_elastic_logs/custom_processor: servarr
    then:
      - copy_fields:
          fields:
            - from: message
              to: log.original
          fail_on_error: false
          ignore_missing: true
      - dissect:
          tokenizer: "[%{log.level}] %{log.logger}: %{message}"
          field: message
          target_prefix: ""
          overwrite_keys: true
          ignore_failure: true
      - script:
          lang: javascript
          id: lowercase
          source: >
            function process(event) {
              var level = event.Get("log.level");
              if(level != null) {
                event.Put("log.level", level.toString().toLowerCase());
              }
            }

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
Excerpt from docker-compose.yml file...
lidarr:
  image: ghcr.io/linuxserver/lidarr:latest
  container_name: lidarr
  labels:
    co.elastic.logs/custom_processor: "servarr"
And an example log line (in JSON):
{"log":"[Info] DownloadDecisionMaker: Processing 100 releases \n","stream":"stdout","time":"2021-08-07T10:10:49.125702754Z"}
This works well, and achieves my aims of extracting fields, but ideally I'd like to use Elasticsearch's (more powerful) ingest pipelines instead, and live with a cleaner filebeat.yml, so I created a working ingest pipeline "filebeat-7.13.4-servarr-stdout-pipeline" like so (ignore the fact that for now, this only does the grokking):
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
      ],
      "trace_match": true,
      "ignore_missing": true
    }
  }
]
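For reference, a pipeline like this can be created or updated through the ingest API (e.g. from the Kibana Dev Tools console); the description text below is just illustrative:

    PUT _ingest/pipeline/filebeat-7.13.4-servarr-stdout-pipeline
    {
      "description": "Grok Servarr stdout log lines (description is illustrative)",
      "processors": [
        {
          "grok": {
            "field": "message",
            "patterns": [
              "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
            ],
            "trace_match": true,
            "ignore_missing": true
          }
        }
      ]
    }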
I tested the pipeline against existing documents (not ones that have had my custom processing applied, I should note). The pipeline worked against all the documents I tested it against in the Kibana interface.
So now I come to shift my Filebeat config to use this pipeline for containers with my custom_processor label. This is the filebeat.yml I came up with, which is apparently valid and works for the most part, but doesn't apply the grokking:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition.equals:
            docker.container.labels.co_elastic_logs/custom_processor: servarr
          config:
            pipeline: filebeat-7.13.4-servarr-stdout-pipeline

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
If I use Filebeat's inbuilt modules for my other containers, such as nginx, by using labels as in the example below, the inbuilt module pipelines are used:
nginx-repo:
  image: nginx:latest
  container_name: nginx-repo
  mem_limit: 2048m
  environment:
    - VIRTUAL_HOST=repo.***.***.***,repo
    - VIRTUAL_PORT=80
    - HTTPS_METHOD=noredirect
  networks:
    - default
    - proxy
  labels:
    co.elastic.logs/module: "nginx"
    co.elastic.logs/fileset.stdout: "access"
    co.elastic.logs/fileset.stderr: "error"
What am I doing wrong here? The logs still end up in Elasticsearch and Kibana, and are processed, but my grok isn't applied, new fields aren't created, and the 'message' field is unchanged.
EDIT: In response to one of the comments linking to a post on the elastic forums, which suggested both the path(s) and the pipeline need to be made explicit, I tried the following filebeat.yml autodiscovery excerpt, which also fails to work (but is apparently valid config):
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition:
            equals:
              docker.container.labels.co_elastic_logs/custom_processor: "servarr"
          config:
            - type: docker
              containers:
                ids:
                  - "${data.docker.container.id}"
              stream: all
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/${data.docker.container.id}-json.log
              pipeline: filebeat-7.13.4-servarr-stdout-pipeline
I tried with the docker.container.labels.co_elastic_logs/custom_processor value both quoted and unquoted. I have the same behaviour where the logs end up in Elasticsearch / Kibana, but they are processed as if they skipped my ingest pipeline.
We're using Kubernetes instead of Docker with Filebeat but maybe our config might still help you out.
We have autodiscover enabled and have all pod logs sent to a common ingest pipeline, except for logs from any Redis pod; those use the Redis module and send their logs to Elasticsearch via one of two custom ingest pipelines, depending on whether they're normal Redis logs or slowlog Redis logs. This is configured in the following block:
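Roughly, it's something along these lines — note this is only an illustrative sketch, not our exact config: the pod label (kubernetes.labels.app: redis), the pipeline names, the hosts value and the log paths are placeholders you'd adapt:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.labels.app: redis          # assumed label identifying Redis pods
              config:
                - module: redis
                  log:
                    input:
                      type: container
                      paths:
                        - /var/log/containers/*${data.kubernetes.container.id}.log

    output.elasticsearch:
      hosts: ['elasticsearch:9200']                     # placeholder
      pipelines:
        - pipeline: redis-log-pipeline                  # assumed name of the normal Redis log pipeline
          when.equals:
            event.dataset: redis.log
        - pipeline: redis-slowlog-pipeline              # assumed name of the slowlog pipeline
          when.equals:
            event.dataset: redis.slowlog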
All other detected pod logs get sent to a common ingest pipeline using the following catch-all configuration in the "output" section:
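Again only as a sketch (the pipeline name is an assumption): a final entry in that same pipelines list with no when condition acts as the catch-all, so anything that didn't match a Redis rule goes to the common pipeline:

    output.elasticsearch:
      pipelines:
        # ...Redis-specific entries as above...
        - pipeline: common-ingest-pipeline   # assumed name; no condition, so it matches everything else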
Something else that we do is add the name of the ingest pipeline to ingested documents using the "set" processor:
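For example, with the field name and the hard-coded value being illustrative choices (each pipeline sets its own name):

    {
      "set": {
        "description": "Record which pipeline handled this document (field/value are illustrative)",
        "field": "event.pipeline",
        "value": "common-ingest-pipeline"
      }
    }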
This has proven to be really helpful when diagnosing whether or not a pipeline was actually executed when viewing an event document in Kibana.
I'm using Cypress 7.5.0 and I run my E2E tests in a Docker container based on cypress/browsers:node12.16.1-chrome80-ff73.
The tests have been running on Chrome for a while now.
When trying to execute them on Firefox, I got the following error:
CypressError: `cy.setCookie()` had an unexpected error setting the requested cookie in Firefox.
When I run the tests locally (outside the Docker container) and use the version of Firefox installed on my computer (Ubuntu 18.04), the same code works fine.
In order to authenticate in my application, I retrieve the following cookies:
[
  {
    name: 'XSRF-TOKEN',
    value: '7a8b8c79-796a-401a-a45e-1dec4b8bc3c3',
    domain: 'frontend',
    path: '/',
    expires: -1,
    size: 46,
    httpOnly: false,
    secure: false,
    session: true
  },
  {
    name: 'JSESSIONID',
    value: 'B99C6DD2D423680393046B5775A60B1C',
    domain: 'frontend',
    path: '/',
    expires: 1627566358.621716,
    size: 42,
    httpOnly: true,
    secure: false,
    session: false
  }
]
and then I set them using:
cy.setCookie(cookie.name, cookie.value);
I've tried overriding the cookie details using different combinations, like:
cy.setCookie(cookie.name, cookie.value, {
  domain: cookie.domain,
  expiry: cookie.expires,
  httpOnly: cookie.httpOnly,
  path: cookie.path,
  secure: true,
  sameSite: 'Lax',
});
but nothing works.
I can't get my head around why it works when run locally and fails when run in a Docker container. Any ideas?
Thank you.
I have a separate dataSourceConfig.yml database config file:
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:oracle:thin:xxxxxx
      driverClassName: oracle.jdbc.OracleDriver
      dialect: org.hibernate.dialect.Oracle10gDialect
      username: xxxx
      password: xxxx
  test:
    dataSource:
      dbCreate: none
      url: jdbc:oracle:thin:xxxxx
      driverClassName: oracle.jdbc.OracleDriver
      dialect: org.hibernate.dialect.Oracle10gDialect
      username: xxxxx
      password: xxxxx
I connect it to the project in Application.java:
import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean
import org.springframework.context.EnvironmentAware
import org.springframework.core.env.Environment
import org.springframework.core.env.PropertiesPropertySource
import org.springframework.core.io.FileSystemResource
import org.springframework.core.io.Resource

class Application extends GrailsAutoConfiguration implements EnvironmentAware {
    static void main(String[] args) {
        GrailsApp.run(Application, args)
    }

    @Override
    void setEnvironment(Environment environment) {
        // Load the external YAML file pointed at by the local.config.location property
        // and register its properties with the highest precedence.
        String configPath = environment.getProperty("local.config.location")
        Resource resourceConfig = new FileSystemResource(configPath)
        YamlPropertiesFactoryBean ypfb = new YamlPropertiesFactoryBean()
        ypfb.setResources([resourceConfig] as Resource[])
        ypfb.afterPropertiesSet()
        Properties properties = ypfb.getObject()
        environment.propertySources.addFirst(new PropertiesPropertySource("local.config.location", properties))
    }
}
When I run integration tests via IntelliJ IDEA 15, they run against the development environment, even though the YAML config file has a test section.
Does anyone know how to fix this?
The command below doesn't help.
grails test test-app -integration
If you are going to run tests from the IDE you need to modify the run config to include -Dgrails.env=test. You will want to do that for the default JUnit run config so you don't have to edit every single test run config. Be aware that editing the default JUnit run config will affect all configs that are created in the future but will not update any existing configs. You may want to remove all of the existing run configs so they will be recreated with the new settings the next time you run those tests.
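For example, the VM options of the default JUnit run configuration would contain something like the line below (the local.config.location path is only a placeholder, and only needed if you pass your external YAML that way):

    -Dgrails.env=test -Dlocal.config.location=/path/to/dataSourceConfig.yml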
I'm trying to deploy the grunt output folder (dist) to server space using grunt-deploy in Jenkins. It returns a success message after grunt deploy, but it does not actually deploy to the given target. Also, there is an option for the server's username and password, which I don't think is a secure method; if so, please suggest a correct approach. There is also no option for a source path. This is my deploy config:
deploy: {
  liveservers: {
    options: {
      servers: [{
        host: 'host',
        port: 'port',
        username: 'user',
        password: 'pass'
      }],
      cmds_before_deploy: [],
      cmds_after_deploy: [],
      deploy_path: '/home/testdeploy'
    }
  }
}
please help me :(
Use the mkdir command to create a releases subfolder:
cd /home/testdeploy
mkdir releases
Then retry. The existence of the releases folder is a hardcoded assumption in the grunt-deploy source.
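You can then verify on the server that deployments land there, e.g. (user and host are placeholders):

    ssh user@host 'ls -la /home/testdeploy/releases'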
References
grunt-deploy: deploy.js source