Telegraf unable to connect to InfluxDB - docker

I am new to Docker, InfluxDB, Grafana, etc. I have Grafana and InfluxDB running, but I seem unable to connect Telegraf to InfluxDB. I have followed many guides, but I am missing something.
I created a Telegraf conf file at E:\docker\containers\telegraf and tried to use it with:
docker run -v e:/docker/containers/telegraf/:/etc/telegraf/telegraf:ro telegraf
But I keep getting the following error:
2017/05/13 20:32:39 I! Using config file: /etc/telegraf/telegraf.conf
2017-05-13T20:32:39Z E! Database creation failed: Post http://localhost:8086/query?db=&q=CREATE+DATABASE+%22telegraf%22: dial tcp [::1]:8086: getsockopt: connection refused
I have this in the influxdb output part of the conf file:
[[outputs.influxdb]]
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://10.0.75.1:8086"] # required
database = "telegraf" # required
retention_policy = ""
write_consistency = "any"
timeout = "5s"
#username = "telegraf"
#password = "telegraf"
If you look at the urls, it does not seem to read the conf file. It just keeps trying to connect to localhost. (localhost:8083 and 10.0.75.1:8083 both open the InfluxDB web page.)

This sounds like a problem with the mapping and/or the E drive not being allowed to be mapped in Docker for Windows.
First, your mapping doesn't appear correct. If you have a telegraf.conf file at e:/docker/containers/telegraf/, then your current mapping puts it at /etc/telegraf/telegraf/telegraf.conf, one extra telegraf directory too deep. The error states Telegraf is looking for /etc/telegraf/telegraf.conf, so it is likely using a default telegraf.conf.
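Assuming the file at e:/docker/containers/telegraf/ is named telegraf.conf, a mount along these lines should land it where Telegraf expects it:
docker run -v e:/docker/containers/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro telegraf
Mounting the single file (or mapping the directory to /etc/telegraf/ instead of /etc/telegraf/telegraf) avoids the extra directory level.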
Next, I believe Docker on Windows doesn't allow mapping drives other than C by default. Check the Shared Drives settings to make sure E is allowed to be mapped (an article showing this: https://rominirani.com/docker-on-windows-mounting-host-directories-d96f3f056a2c).
After fixing both of these errors, if the problem persists, I would get into the container with docker exec and confirm that /etc/telegraf/telegraf.conf really has the contents it should.
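For example (the container id is a placeholder; use docker ps to find yours):
docker ps
docker exec -it <container_id> cat /etc/telegraf/telegraf.conf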


How to add a Minio connection to Airflow connections?

I am trying to add a running instance of MinIO to Airflow connections. I thought it should be as easy as this setup in the GUI (never mind the exposed credentials; this is a blocked-off environment and will be changed afterwards):
Airflow as well as MinIO are running in Docker containers, which both use the same Docker network. Pressing the test button results in the following error:
'ClientError' error occurred while testing connection: An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.
I am curious what I am missing. The idea was to set up this connection and then use a bucket for data-aware scheduling (i.e. I want to trigger a DAG as soon as someone uploads a file to the bucket).
I was also facing the problem that the endpoint URL refused the connection. What I did: MinIO is actually running in a Docker container, so we should give the Docker host URL:
{
  "aws_access_key_id": "your_minio_access_key",
  "aws_secret_access_key": "your_minio_secret_key",
  "host": "http://host.docker.internal:9000"
}
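Note: depending on your version of the Amazon provider, the connection's Extra field may expect endpoint_url rather than host; if the above does not take effect, this variant (same placeholder credentials) is worth trying:
{
  "aws_access_key_id": "your_minio_access_key",
  "aws_secret_access_key": "your_minio_secret_key",
  "endpoint_url": "http://host.docker.internal:9000"
}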
I was also facing this error in Airflow 2.5.0.
I found a workaround using the boto3 library, which is already built in.
First I created a connection with these parameters:
Connection Id: any label (Minio in my case)
Connection Type: Generic
Host: minio server ip and port
Login: Minio access key
Password: Minio secret key
And here's my code:
import boto3
from airflow.hooks.base import BaseHook

conn = BaseHook.get_connection('Minio')
s3 = boto3.resource(
    's3',
    endpoint_url=conn.host,
    aws_access_key_id=conn.login,
    aws_secret_access_key=conn.password
)
s3client = s3.meta.client

# You can then use boto3 methods for manipulating buckets and files.
# For example:
bucket = s3.Bucket('test-bucket')

# Iterates through all the objects, doing the pagination for you. Each obj
# is an ObjectSummary, so it doesn't contain the body. You'll need to call
# obj.get() to get the whole body.
for obj in bucket.objects.all():
    key = obj.key
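If you'd rather stay within Airflow's own abstractions, the Amazon provider's S3Hook can do the same work. A minimal sketch, reusing the 'Minio' connection id from above; note that the hook takes its credentials from the connection's login/password, and depending on the provider version it may read the endpoint from the Extra field's endpoint_url rather than the Host field:
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

# Reuse the 'Minio' connection defined in the Airflow UI
hook = S3Hook(aws_conn_id='Minio')
# List the object keys in the bucket (returns a list of strings)
keys = hook.list_keys(bucket_name='test-bucket')
print(keys)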

Error: endorsement failure during invoke. response: status:500 message:"error in simulation: failed to execute transaction [duplicate]

I just reinstalled Fabric Samples v2.2.0 from the Hyperledger Fabric repository according to the documentation.
But when I try to run the asset-transfer-basic application located in the fabric-samples/asset-transfer-basic/application-javascript directory by running node app.js, the wallet is created and an admin and a user are registered. But then it tries to invoke the function as given in app.js and shows this error:
error: [Transaction]: Error: No valid responses from any peers. Errors:
peer=peer0.org1.example.com:7051, status=500, message=error in simulation: failed to execute transaction aa705c10403cb65cecbd360c13337d03aac97a8f233a466975773586fe1086f6: could not launch chaincode basic_1.0:b359a077730d7f44d6a437ad49d1da951f6a01c6d1eed4f85b8b1f5a08617fe7: error starting container: error starting container: API error (404): network _test not found
This error never occurred before. But somehow after reinstalling Docker and the Hyperledger Fabric fabric-samples it never seems to find the network _test.
N.B.: Before reinstalling, the name of the network was net_test. But now when I run docker network ls it shows a network called docker_test. I am using Windows Subsystem for Linux (WSL) version 1.
NETWORK ID     NAME          DRIVER    SCOPE
b7ac05456f46   bridge        bridge    local
acaa5856b871   docker_test   bridge    local
866f58b9078d   host          host      local
4812f94efb15   none          null      local
How can I fix the issue occurring when I try to run the application?
In my opinion, the CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE setting seems to be wrong.
You can check docker-compose.yaml or core.yaml.
1. docker-compose.yaml
I will explain using fabric-samples/test-network as the target, matching your current situation.
You can check CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE in docker-compose.yaml.
Perhaps in your case (fabric-samples/test-network), the value of ${COMPOSE_PROJECT_NAME} was not set properly, so the network name ended up as _test.
Make sure the value is set correctly and change it to your network name.
# hyperledger/fabric-samples/test-network/docker/docker-compose-test-net.yaml
# based on v2.2
...
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:2.2
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_test
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=docker_test
...
2. core.yaml
If you have not set the value in the docker-compose.yaml peer definition, you need to check the core.yaml referenced by the peer.
You can find the networkMode parameter in core.yaml:
# core.yaml
...
vm:
  docker:
    hostConfig:
      # NetworkMode: host
      NetworkMode: docker_test
...
If neither is set, the default value is used. However, since you see _test being logged, the wrong value has been set in one of the two sections, and you need to correct it to the value you intended.
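To see which network the running peer is actually attached to, something like this should work:
docker inspect peer0.org1.example.com --format '{{json .NetworkSettings.Networks}}'
Compare the network name it prints against your CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE value.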
This issue is related to Docker networking, complementing the answer above.
Create a file and name it ".env" in the same directory where your docker-compose file exists.
Add the following line in it:
COMPOSE_PROJECT_NAME=net
Use docker-compose up to update the container with the new configurations.
Or bring the HL network down (./network.sh down) and up (./network.sh up), restarting the test-network.
Otherwise you'll still get the same error even after creating the ".env" file.
More explanation about docker networking
Run ./network.sh down
then
export COMPOSE_PROJECT_NAME=net
afterwards
./network.sh up
I copied this from someone, and this one worked for me!
Please create a file named ".env" in the same directory where your docker-compose file exists. Add the following line to the ".env" file:
COMPOSE_PROJECT_NAME=net
This worked for me
export COMPOSE_PROJECT_NAME=net

fail2ban won't start using nextcloud.log with jail

I have Nextcloud installed and working fine in a Docker container, but I want fail2ban to monitor the log files for brute-force attempts. I know Nextcloud has its own protection baked in, but it just throttles the login attempts, and I would like to ban offenders outright (I also have this problem with other containers). The docker-compose is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log. I followed this guide to create the jail:
https://www.c-rieger.de/nextcloud-installation-guide-ubuntu/#c06
Fail2ban is running on the host machine; however, it fails to start with:
[447]: ERROR Failed during configuration: Have not found any log file for nextcloud jail
[447]: ERROR Async configuration of server failed
Thinking it was simply a permission issue, I chowned everything to root and tried to start it again, but the service still won't start. What am I doing wrong?
Thanks for the help!
The docker-compose is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log
Be sure this file really exists and that your jail.local has the correct logpath entry:
[nextcloud]
...
logpath = /mnt/nextcloud/log/nextcloud.log
You can also check resulting config using dump:
fail2ban-client -d | grep 'nextcloud.*logpath'
But I'm still not sure the error message you provided was thrown by fail2ban, because its error messages look different; see https://github.com/fail2ban/fail2ban/commit/27947407bc7910f0f50972113218ebc73c4a22c7
It should be something like:
-have not found a log file for nextcloud log
+Have not found any log file for nextcloud jail
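If the dump shows the expected logpath, it is also worth confirming the file really exists from the host's point of view before starting the service, for example:
ls -l /mnt/nextcloud/log/nextcloud.log
sudo systemctl restart fail2ban
sudo fail2ban-client status nextcloud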

Connecting to a Progress Openedge database from ABL

This code works fine if I run it in the Progress Editor. But if I save it as a .p file and right-click "RUN", it gives me an error that the database doesn't exist. I understand that I probably need to add some code to connect to the database.
Does anybody know what statement I should use?
DEF STREAM st1.
OUTPUT STREAM st1 TO c:\temp\teste.csv.

FOR EACH bdName.table NO-LOCK:
    PUT STREAM st1 UNFORMATTED bdName.Table.attr ";" SKIP.
END.

OUTPUT STREAM st1 CLOSE.
Exactly as you say, you need to connect to your database. This can be done in a couple of different ways.
Connect by CONNECT statement
You can connect a database using the CONNECT statement. Basically:
CONNECT <database name> [options]
Here's a simple statement that is connecting to a local database named "database" running on port 43210.
CONNECT database.db -H localhost -S 43210.
-H specifies the host running the database; this can be a name or an IP address. -S specifies the port (or service) that the database uses for connections; this can be a number or a service name (in that case it must be specified in /etc/services or similar).
However, you cannot connect to a database and work with its tables in the same program. Instead you will need to connect in one program and then run the logic in a second program:
/* runProgram.p */
CONNECT database -H dbserver -S 29000.
RUN program.p.
DISCONNECT database.

/* program.p */
FOR EACH exampletable NO-LOCK:
    DISPLAY exampletable.
END.
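As a small refinement, you can guard against connecting twice with the CONNECTED function; a sketch using the same example names:
/* runProgram.p, guarded variant */
IF NOT CONNECTED("database") THEN
    CONNECT database -H dbserver -S 29000.
RUN program.p.
IF CONNECTED("database") THEN
    DISCONNECT database.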
Connect by command line parameters
You can simply add parameters to your startup command so that the new session connects to one or more databases from the start.
Windows:
prowin32.exe -db mydatabase -H localhost -S 7777
Look at the option below (parameter file) before doing this
Connect by command line parameter (using a parameter file)
Another option is to use a parameter file, normally with the extension .pf.
Then you will have to modify how you start your session: instead of just running prowin32.exe (if you're on Windows), you add the -pf parameter:
prowin32.exe -pf myparameterfile.pf
The parameter file will then contain all your connection parameters:
# myparameterfile.pf
-db database -S localhost -P 12345
The hash sign (#) is used for comments in parameter files.
On Linux/Unix you would run:
pro -pf myparameterfile.pf
You can also mix the different ways for different databases used in the same session.
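For example, a single parameter file can connect several databases at once (the database names and ports below are made up for illustration):
# multi.pf
-db sales -H dbserver -S 20000
-db finance -H dbserver -S 20010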

How to fix the [unixODBC][Driver Manager]Data source name not found, and no default driver specified (ODBC::Error)

/local/rvm/gems/ruby-1.9.2-p320/gems/activerecord-sqlserver-adapter-3.2.12/lib/active_record/connection_adapters/sqlserver_adapter.rb:455:in `initialize': IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified (ODBC::Error)
I had a working copy of my app, but I left my system overnight and suddenly this error started surfacing. Can anyone tell me how to fix it?
There is no definitive answer to your question since you gave us nothing to work on.
However, the possible reasons for this are:
the DSN you specified could not be found in your user or system odbc.ini files
Run odbcinst -j to find where those files are
Has someone changed/removed them?
You set the ODBCINI or ODBCSYSINI env vars to point unixODBC at the location of your odbc.ini and odbcinst.ini files, and now they are not set (or have changed); see the example after this list.
Someone has removed or moved your ODBC driver
You normally run your code as user A but are now running it as user B, and you are using user data sources or the ODBCINI env var.
... probably others, but if you'd given us better information we wouldn't have to guess.
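On the env var point above, a quick way to check and reset them, assuming your ini files live in /etc (adjust the paths to your layout):
odbcinst -j
export ODBCINI=/etc/odbc.ini
export ODBCSYSINI=/etc
Note that ODBCINI points at the odbc.ini file itself, while ODBCSYSINI points at the directory containing odbcinst.ini.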
You should start by setting up and configuring FreeTDS. Here is a sample configuration from my files, but I'm sure other variants will work as well. One difference is that I'm using Django, but the setup below still worked eventually; note that it works much better with SQL authentication than with Windows authentication.
From /etc/freetds/freetds.conf (use the IP of the server if DNS is not active for the server name).
# A typical Microsoft server
[MyServer]
host = 10.0.0.10\path
port = 1433
tds version = 7.0
From /etc/odbcinst.ini
[FreeTDS]
Description = FreeTDS
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
From /etc/odbc.ini
[ServerDSN]
Description = "Some Description"
Driver = FreeTDS
ServerName = MyServer
Server = ip_address
Port = 1433
Database = DBNAME
Then this command connects me to the database.
tsql -S MyServer -U username#servername -P password
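tsql exercises the FreeTDS layer directly; to test the unixODBC DSN end to end (which is the path the Ruby adapter uses), isql from unixODBC is handy:
isql -v ServerDSN username password
If isql connects but the app does not, the problem is in the app's connection settings rather than the ODBC stack.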
Please verify the following:
The driver configuration file is named odbcinst.ini and is provided in the same path / directory / folder as odbc.ini
The ODBC Initialization Path is a path / directory and not an actual path to the file (i.e. /root/odbc.ini). Please provide a directory path to where both odbcinst.ini and odbc.ini files exist.
The Driver name defined in odbcinst.ini is the same as the Driver attribute defined in the datasource of odbc.ini.
Note: If odbcinst.ini has the driver defined as “[ODBC Driver 13 for SQL Server]” then verify the odbc.ini references “Driver=ODBC Driver 13 for SQL Server”
This solved my problem.
Source: https://support.microfocus.com/kb/doc.php?id=7017884
Just a tip: in my case it did not work with Driver = FreeTDS and both the "ServerName" and "Server" variables set in odbc.ini. I kept only "Server = ip" and "Driver = /usr/lib/i386-linux-gnu/odbc/libtdsodbc.so", and it worked fine.
