How to resolve influxdb.conf parse config error in Docker

While running influxdb:1.0 in Docker I get the following error:
[run] 2019/01/13 09:04:14 InfluxDB starting, version 1.0.2, branch master, commit ff307047057b7797418998a4ed709b0c0f346324
[run] 2019/01/13 09:04:14 Go version go1.6.2, GOMAXPROCS set to 1
[run] 2019/01/13 09:04:14 Using configuration at: /etc/influxdb/influxdb.conf
run: parse config: Near line 1 (last key parsed 'Merging'): Expected key separator '=', but got 'w' instead.
These are the first 3 lines of my .conf file:
Merging with configuration at: /etc/influxdb/influxdb.conf
reporting-disabled = false
bind-address = ":8088"
Does anybody know how to resolve this?
Thanks

According to the configuration file that you posted, you have the following issue on the first line:
Merging with configuration at: /etc/influxdb/influxdb.conf
That is a log line, but it is being treated as a normal config line, so the parser expects it in a form like this:
Merging = with configuration at: /etc/influxdb/influxdb.conf
What you need to do is comment out the first line to avoid this issue.
A second note on your config file (not related to this issue specifically, but you may get other errors): the third line needs to be under [admin], so the final config should look like this:
# Merging with configuration at: /etc/influxdb/influxdb.conf
reporting-disabled = false
[admin]
bind-address = ":8088"
It would be better if you got the original config file and then started modifying it, so that you follow the same standards without introducing new issues related to the file format.
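If you don't have a pristine copy, a minimal sketch of how to regenerate one (assuming the influxdb:1.0 image; in the 1.x series influxd config prints the full default configuration) is:
docker run --rm influxdb:1.0 influxd config > influxdb.conf
You can then re-apply your reporting-disabled and bind-address changes in the appropriate sections of that file.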


Error when running near indexer localnet, fail to generate config.json

So I'm trying to run the indexer on localnet following the official tutorial: https://docs.near.org/docs/tutorials/near-indexer
However, when I run cargo run -- init to generate the localnet JSON config, I get this error:
Finished dev [unoptimized + debuginfo] target(s) in 17.62s
Running `target/debug/example-indexer init`
thread 'main' panicked at 'Failed to deserialize config: Error("expected value", line: 1, column: 1)', /home/francois/.cargo/git/checkouts/nearcore-5bf7818cf2261fd0/a44be20/nearcore/src/config.rs:499:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
At some point it seems the JSON is not created, or not created properly. The function crashing at config.rs line 499 is:
impl From<&str> for Config {
    fn from(content: &str) -> Self {
        serde_json::from_str(content).expect("Failed to deserialize config")
    }
}
It's quite difficult to debug, since cargo run -- init uses some inner NEAR functions (also, I'm new to Rust).
The config.json file is created, but it seems the permissions are not set properly by the script; the content of config.json is
"<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message> ... "
If anyone from the community has encountered this problem or has a hint, it would be great! Thanks a lot!
The tutorial you referenced mentions a similar error and suggests the following:
Open your config.json located in the .near folder in the root of your home directory. ( ~/.near/config.json )
In this file, locate: "tracked_shards": [] and change the value to [0].
Save the file and try running your indexer again.
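If you prefer to script that change, here is a minimal Python sketch (an illustration, not from the tutorial; it assumes ~/.near/config.json is valid JSON, which also makes it a quick check that the AccessDenied XML shown above is gone):
import json
from pathlib import Path

# Patch tracked_shards in the generated localnet config.
config_path = Path.home() / ".near" / "config.json"
config = json.loads(config_path.read_text())  # raises if the file is still the AccessDenied XML, not JSON
config["tracked_shards"] = [0]
config_path.write_text(json.dumps(config, indent=2))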
Also, I had the wrong config, with download_config: true; the AccessDenied XML above is the result of that failed download. It should be download_config: false for localnet use.

Starting Zabbix Server within Docker replaces strings in the config file with nothing, or totally ignores strings like the name of a new DB for testing purposes

First I tried to add about 250 hosts to the ~250 already added, and the Zabbix server shut down. I restarted it, and inside the docker logs I saw this:
6:20191014:091840.201 using configuration file: /etc/zabbix/zabbix_server.conf
6:20191014:091840.223 current database version (mandatory/optional): 04020000/04020001
6:20191014:091840.223 required mandatory version: 04020000
6:20191014:091840.484 __mem_malloc: skipped 7 asked 108424 skip_min 304 skip_max 12192
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): out of memory (requested 108424 bytes)
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): please increase CacheSize configuration parameter
6:20191014:091840.484 === memory statistics for configuration cache ===
The solution for that problem was to increase CacheSize in zabbix_server.conf. Okay, that's not a problem, so I pushed a new config to the Zabbix server and restarted it... and the server stopped again right after starting, with the logs reporting the same problem. After reading the config inside the container, I saw that the strings I had corrected to match my wishes were missing. The strings had been deleted.
My config:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
StartPollers=5
StartIPMIPollers=5
StartPollersUnreachable=5
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
CacheSize=512M
HistoryCacheSize=512M
HistoryIndexCacheSize=512M
TrendCacheSize=512m
ValueCacheSize=256M
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
And here is what I get after starting the Zabbix server:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
Any suggestions on how to rule the world and not be captured by the doctors?
With Docker you need to pass configuration parameters as environment variables in the docker-compose.yml file, or in your docker run command using the -e flag.
For example, from my docker-compose file:
zabbix-server:
  image: zabbix/zabbix-server-pgsql:ubuntu-4.2.6
  environment:
    ZBX_MAXHOUSEKEEPERDELETE: 5000
    ZBX_STARTPOLLERS: 15
    ZBX_CACHESIZE: 8M
    ZBX_STARTDBSYNCERS: 4
    ZBX_HISTORYCACHESIZE: 16M
    ZBX_TRENDCACHESIZE: 4M
    ZBX_VALUECACHESIZE: 8M
    ZBX_LOGSLOWQUERIES: 3000
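The same parameters can be passed to docker run directly. A sketch, assuming the same image and a reachable postgres-server container (the ZBX_*, DB_* and POSTGRES_* variable names follow the official Zabbix Docker image conventions):
docker run -d --name zabbix-server \
  -e DB_SERVER_HOST=postgres-server \
  -e POSTGRES_USER=zabbix \
  -e POSTGRES_PASSWORD=zabbix \
  -e ZBX_CACHESIZE=512M \
  zabbix/zabbix-server-pgsql:ubuntu-4.2.6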
Another way to work with Zabbix:
https://hub.docker.com/r/monitoringartist/zabbix-3.0-xxl/

pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null

I am trying to run Hadoop using the Docker setup provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the services, and I am able to access them and browse the file system at localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
hdfs_client = HdfsClient(hosts = 'localhost:9870')
# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'
# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
    hdfs_client.mkdirs(output_hdfs_path)
hdfs_client.create(output_hdfs_path + 'data.json', data = 'This is test.', overwrite = True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the GitHub repo. The only change I've made is in the hadoop.env file, where I changed:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen another post about this on Stack Overflow and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting.
It works fine for me after adding a forward slash to the paths:
import pyhdfs

fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'
if not fs.exists(output_hdfs_path):
    fs.mkdirs(output_hdfs_path)
fs.create(output_hdfs_path + '/data.json', data = 'This is test.')
# check that it's present
list(fs.walk(output_hdfs_path))
[('/path/to/test/dir', [], ['data.json'])]
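If you do need to run the client from the Docker host rather than from inside the compose network, one common workaround (my assumption, not part of the original answer) is to map the compose service hostnames to localhost in /etc/hosts, so that the datanode address the namenode hands back actually resolves:
127.0.0.1  namenode datanode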

symfony/yaml backed symfony/config not parsing environment variables

I have recreated a simple example in this tiny GitHub repo. I am attempting to use symfony/dependency-injection to configure monolog/monolog to write logs to php://stderr. I am using a YAML file called services.yml to configure dependency injection.
This all works fine if my yml file looks like this:
parameters:
    log.file: 'php://stderr'
    log.level: 'DEBUG'

services:
    stream_handler:
        class: \Monolog\Handler\StreamHandler
        arguments:
            - '%log.file%'
            - '%log.level%'
    log:
        class: \Monolog\Logger
        arguments: [ 'default', ['@stream_handler'] ]
However, my goal is to read the path of the log file and the log level from the environment variables APP_LOG and LOG_LEVEL respectively. According to the Symfony documentation on external parameters, the correct way to do that in the services.yml file is like this:
parameters:
    log.file: '%env(APP_LOG)%'
    log.level: '%env(LOGGING_LEVEL)%'
In my sample app I verified PHP can read these environment variables with the following:
echo "Hello World!\n\n";
echo 'APP_LOG=' . (getenv('APP_LOG') ?? '__NULL__') . "\n";
echo 'LOG_LEVEL=' . (getenv('LOG_LEVEL') ?? '__NULL__') . "\n";
This writes the following to the browser when I use my original services.yml with hard-coded values:
Hello World!
APP_LOG=php://stderr
LOG_LEVEL=debug
However, if I use the %env(VAR_NAME)% syntax in services.yml, I get the following error:
Fatal error: Uncaught UnexpectedValueException: The stream or file "env_PATH_a61e1e48db268605210ee2286597d6fb" could not be opened: failed to open stream: Permission denied in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php:107 Stack trace: #0 /var/www/vendor/monolog/monolog/src/Monolog/Handler/AbstractProcessingHandler.php(37): Monolog\Handler\StreamHandler->write(Array) #1 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(337): Monolog\Handler\AbstractProcessingHandler->handle(Array) #2 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(532): Monolog\Logger->addRecord(100, 'Initialized dep...', Array) #3 /var/www/html/index.php(17): Monolog\Logger->debug('Initialized dep...') #4 {main} thrown in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php on line 107
What am I doing wrong?
OK, you need a few things here. First of all, you need version 3.3 of Symfony, which is still in beta (3.2 was the released version when I encountered this). Second, you need to "compile" the environment variables.
Edit your composer.json with the following values and run composer update. You might need to update other dependencies. You can substitute ^3.3 with dev-master.
"symfony/config": "^3.3",
"symfony/console": "^3.3",
"symfony/dependency-injection": "^3.3",
"symfony/yaml": "^3.3",
You will likely have to do this for symfony/__WHATEVER__ if you have other symfony components.
Now, in your code, after you load your YAML configuration into your dependency container, you compile it.
So after your lines here (perhaps in bin/console):
$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__ . DIRECTORY_SEPARATOR . '..'));
$loader->load('services.yml');
Do this:
$container->compile(true);
Your IDE's IntelliSense might tell you compile() takes no parameters. That's OK: compile() grabs its args indirectly via func_get_arg().
public function compile(/*$resolveEnvPlaceholders = false*/)
{
    if (1 <= func_num_args()) {
        $resolveEnvPlaceholders = func_get_arg(0);
    } else {
        . . .
    }
References
GitHub issue where this was discussed
Pull request to add compile(true)
Using this command after loading your services.yml file should help:
$containerBuilder->compile(true);
Your file also gets validated by the proper-configuration checks that this method performs. The parameter is $resolveEnvPlaceholders, which makes environment variables accessible to the YAML services configuration.
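For local testing, a hedged example of supplying the environment variables via PHP's built-in web server (assuming index.php is the front controller, as in the question):
APP_LOG=php://stderr LOG_LEVEL=debug php -S localhost:8080 index.php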

How do I configure SaltStack to transfer a file (or install a package) for the first time?

I am running two instances of RedHat. I have SaltMaster installed on one machine and SaltMinion installed on another. I am using a free version of Salt. I want to test SaltStack to do a basic configuration management task. If it can transfer a file from SaltMaster to SaltMinion, that would be great. If it can install Apache web server on SaltMinion, that would be great. Either task will help me learn. My learning goal is semi-flexible.
I can run salt '*' test.ping and the response is True. I then tried this command: salt '*' state.apply
I got this error:
> hostname.fqdn:
> Data failed to compile:
> ----------
> No matching salt environment for environment 'qa' found
> ----------
> No matching sls found for 'qa1' in env 'qa'
> ----------
> No matching sls found for 'base1' in env 'base'
> ----------
> No matching salt environment for environment 'dev' found
> ----------
> Specified SLS base1 in saltenv dev is not available on the salt master or through a configured fileserver
I modified the /etc/salt/master file. I uncommented these lines:
fileserver_backend:
  - git
  - roots
I tried this command again: salt '*' state.apply
I received this error:
> [ERROR ] Error parsing configuration file: /etc/salt/master -
> expected '<document start>', but found '<block mapping start>' in
> "<string>", line 547, column 1:
> fileserver_backend:
> ^ [ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in
> "<string>", line 547, column 1:
> fileserver_backend:
> ^
I have been following these directions here:
https://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
I created a webserver.sls file.
I inserted these lines as the content:
apache:            # ID declaration
  pkg:             # state declaration
    - installed    # function declaration
I do not see how the three lines in the directions above would be enough to configure SaltStack to work. Where would the Apache installation media need to be? Where would the transfer happen from? Am I supposed to download the media to the SaltMaster? I would assume so, but where would I put it? I have a satellite server for yum commands to work.
Alternatively, how do I get SaltStack to transfer a file from SaltMaster to SaltMinion?
The first error ([...]No matching sls found for 'qa1' in env 'qa'[...]) indicates that you have configured a lot of different environments (file_roots) that are not present on your master's filesystem. Your approach to solving this goes in the correct direction, but leads to this error:
[ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1: fileserver_backend: ^ [ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1: fileserver_backend: ^
You should no longer be able to test.ping your minion, as the salt master should not be running anymore. To solve it, just read the error message: it tells you which part of your salt master configuration file salt is unhappy with.
The fileserver_backend setting configures which types of backend should be available. You should check the file_roots configuration to actually define which roots are available; roots refer to salt state folders on your filesystem.
A very simple config might look like this:
file_roots:
  base:
    - /srv/salt
It assumes that /srv/salt is the root of your state tree, which effectively means that your webserver.sls should be located in this folder.
Your webserver.sls looks promising: it should install apache2 on a minion when you apply it.
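As a usage sketch (assuming webserver.sls sits directly in /srv/salt and you use the default base environment), you would apply just that state with:
salt '*' state.apply webserver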
Managing configuration files on the master and transferring them to the minions is something salt can easily achieve. A simple state might look like:
/etc/myawesomeconfigurationfile.conf:
  file.managed:
    - source: salt://myawesomefile    # refers to /srv/salt/myawesomefile
    - user: root
    - group: root
    - mode: 640
You also asked about media files that you want to manage. If you are talking about application-related data, it is not a good idea to use salt to move it around. IMO other approaches, like NFS, GlusterFS, or anything else that decouples user content from your application, would be a better fit.
