Cannot enable basic auth on Windows-Exporter to secure node between Windows and Prometheus - windows-services

As a test environment for monitoring the status of Windows servers (CPU, disk usage, memory, network, etc.) I have set up two test nodes with windows_exporter configured on custom port :15000.
Next, I created a job for each separate Windows instance and built a dashboard in Grafana.
The problem is that I want to secure the nodes so that only the Prometheus server can access the exporter output, while all other computers on the same network are denied access to the node's web endpoint.
I have tried installing windows_exporter with this setting:
msiexec /i windows_exporter-0.19.0-amd64.msi LISTEN_PORT="15000" EXTRA_FLAGS="--web.config.file=C:\Configuration\web.yml"
I have also tried different combinations of " and ' quoting on the command line for the EXTRA_FLAGS parameter, yet the flags seem to be ignored. The only parameter that works correctly is the listen port change.
I have followed instructions provided at https://prometheus.io/docs/guides/basic-auth/ to set up basic auth.
web.yml looks like this:
basic_auth:
  username: 'scrapper'
  password: '$2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC'
However, after installation with msiexec, the Windows services entry for windows_exporter does not include the web.config.file flag:
"C:\Program Files\windows_exporter\windows_exporter.exe" --log.format logger:eventlog?name=windows_exporter --telemetry.addr 0.0.0.0:15000
I have tried to edit the service entry with the sc command, but it broke the node completely, making me roll back to unprotected access to the node.
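For reference, I believe a correct sc edit would have to re-apply the whole command line in one go, roughly like this (assuming the default service name windows_exporter; note the mandatory space after binPath= and the escaped quotes around the exe path):
sc qc windows_exporter
sc config windows_exporter binPath= "\"C:\Program Files\windows_exporter\windows_exporter.exe\" --log.format logger:eventlog?name=windows_exporter --telemetry.addr 0.0.0.0:15000 --web.config.file=C:\Configuration\web.yml"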
Does basic auth work on windows_exporter the same way as on node_exporter for Linux?
Or is there another way to secure access to the exposed node data without having to install IIS?

I have never worked with the exporter on Windows, but on Linux your web.yml configuration file should be as follows:
basic_auth_users:
  <string>: <secret>
like this:
basic_auth_users:
  scrapper: $2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC
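For completeness, the hash in web.yml is a bcrypt hash of the plaintext password. Following the linked guide, you could generate one with a short Python snippet like this (assuming the bcrypt package is installed):

import getpass
import bcrypt

# prompt for the plaintext password and print the bcrypt hash to paste into web.yml
password = getpass.getpass("password: ")
print(bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode())

On the Prometheus side, the scrape job then has to send the same credentials, roughly like this (job name, target and the plaintext password are placeholders):

scrape_configs:
  - job_name: 'windows'
    static_configs:
      - targets: ['<windows-host>:15000']
    basic_auth:
      username: 'scrapper'
      password: '<plaintext password>'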

Related

Neo4j setup in OpenShift

I am having difficulties deploying the official Neo4j Docker image https://hub.docker.com/_/neo4j to an OpenShift environment and accessing it from outside (from my local machine).
I have performed the following steps:
oc new-app neo4j
Created a route for port 7474
Set up the environment variable NEO4J_dbms_connector_bolt_listen__address to 0.0.0.0:7687, which is the equivalent of setting dbms.connector.bolt.listen_address=0.0.0.0:7687 in the neo4j.conf file.
Accessed the route URL from my local machine, which opens the Neo4j browser and requires authentication. At this point I am blocked, because every combination of URLs I try is unsuccessful.
As a workaround I have managed to forward port 7687 to my local machine, install Neo4j Desktop, and connect via bolt://localhost:7687, but this is not an ideal solution.
Therefore there are two questions:
1. How can I connect from the Neo4j browser to its own database?
2. How can I connect from an external environment (through the OpenShift route) to the Neo4j DB?
I have no experience with OpenShift, but try adding the following config:
dbms.default_listen_address=0.0.0.0
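Since the official image reads its settings from environment variables, the same option can presumably be set with the naming convention already used for the bolt listener above; with the deployment created by oc new-app neo4j (the dc/neo4j name is an assumption) that would be:
oc set env dc/neo4j NEO4J_dbms_default__listen__address=0.0.0.0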
Is there any other way for you to connect to Neo4j, so that you could further inspect the issue?
Short answer:
Connecting to the DB is most likely a configuration issue; maybe Tomaž Bratanič's answer is the solution. As for accessing the DB from outside, you will most likely need a NodePort.
Long answer:
Note that OpenShift Routes are for HTTP/HTTPS traffic and not for any other kind of traffic. Typically, the "Routers" of an OpenShift cluster listen only on ports 80 and 443, so connecting to your database on any other port will most likely not work (although this heavily depends on your cluster configuration).
The solution for non-HTTP(S) traffic is to use NodePorts as described in the OpenShift documentation: https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html
Note that even with NodePorts, you might need your cluster administrator to add additional ports to the load balancer, or you might need to connect to the OpenShift nodes directly. Refer to the documentation on how to use NodePorts.
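For illustration, a NodePort service for the bolt port could look roughly like this (the selector label and the nodePort value are assumptions and depend on how the neo4j app was created):

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
      nodePort: 30687

The database would then be reachable on <node-ip>:30687, subject to the firewall rules in front of the nodes.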

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which loads and stores files from S3. For development and local testing we are using a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3), and the other containing the Python-based REST API solution that uses those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have a test suite which uses the scality/s3 server from Python running on the host (Windows 10), going over the ports forwarded through Vagrant to the scality/s3 server's Docker container inside the docker-compose group. There we use localhost as the endpoint_url and it works perfectly.
In the failing case (when the front-end web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling the scality/s3 server with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that your frontend domain name should be specified in there, matching a default region.
Do note that that default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
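As a sketch, the relevant part of config.json would contain an entry that maps the hostname used in endpoint_url (here s3server) to a region, for example (the region name is just an assumed default, adjust to your setup):

"restEndpoints": {
    "localhost": "us-east-1",
    "s3server": "us-east-1"
},

After changing config.json, the s3server container has to be restarted to pick up the new endpoint.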
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure

Possible to Change Jenkins URL? : http://localhost:8080

Currently I have Jenkins set up on a virtual machine.
Is it possible to set up Jenkins on a URL which is more accessible for other users?
For example, I don't want other users to access test results by connecting to the virtual machine; instead I want them to access a URL from their own device, enabling them to log in and see test results via Jenkins.
Thanks for your help.
Let's say the IP of your virtual machine is 192.168.x.x.
Open port 8080 in the firewall, then change the Jenkins URL under
"Manage Jenkins >> Configure System >> Jenkins Location" to "http://192.168.x.x:8080".
Now you can access it from other machines on the same network; just hit the URL http://192.168.x.x:8080.
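If the firewall step is the blocker, a rule like the following opens port 8080 on the Jenkins machine (run in an elevated command prompt; the rule name is arbitrary):
netsh advfirewall firewall add rule name="Jenkins 8080" dir=in action=allow protocol=TCP localport=8080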
You can now create different users with different privileges for it.
You can find it under Manage Jenkins >> Manage Users >> Create Users.
It is related to networking. The machines should be on the same network so they can talk to each other (unless you have a public IP).
The ONLY rule for giving access is that the machines can talk to each other (of course, in their language of 0s and 1s).
I suggest the following steps to share the URL:
Ask the users to connect to the same network that your machine is in.
Verify whether they are able to ping your machine's IP (get it from the ipconfig command on Windows; the router assigns your machine an IP that starts with 192.168 or 10.10). Command example: ping 10.10.1.10
If the ping fails, Windows Firewall or an anti-virus might be blocking it, so allow the IPs in your firewall so they can access your machine.
Then ask them to access Jenkins using the following URL: http://[IP of your machine]:8080
We want the Jenkins web interface to be accessible from anywhere (not just on the local machine), so we're going to open up the config file:
sudo nano /usr/local/opt/jenkins-lts/homebrew.mxcl.jenkins-lts.plist
Find this line:
<string>--httpListenAddress=127.0.0.1</string>
And change it to:
<string>--httpListenAddress=0.0.0.0</string>
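After saving the change, restart Jenkins so the new listen address takes effect; with a Homebrew LTS install that would be:
brew services restart jenkins-lts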
Reference: Installing Jenkins on macOS

Running an Ant script to prepare a Database in Bluemix

I have an Ant script that I use to populate/prepare a database. All I need is to set the host, port, and credentials for the database. It works fine for MySQL and DB2; the DB just needs to be reachable from where the script is executed.
The DB service in Bluemix gives me a DB with an IP (75.x.x.x) that is only reachable from the internal network of Bluemix; it is not accessible externally.
My understanding is that my ant script needs to be executed from inside the Bluemix network/servers.
How can I do that?
What would be the alternatives?
I'm considering creating a Node.js script to trigger that Ant script internally, but I'm not sure if it will work properly.
dashDB always had the ability for local clients (outside of Bluemix) to connect to the cloud database, and SQL Database later added the feature as well. So you should be able to populate a database as long as you have the correct driver client installed on your local machine.
Can you provide more details on how you tested that the IP is not reachable? Is there a firewall put in place between your local machine and Bluemix? Note that ping is not a good test because the port is blocked for security reasons. You may try the JDBC port indicated on the connection page from the console.
See link for instructions on how to make a connection:
https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#connecting-to-sqldb
You might be able to use a simple custom buildpack. You can start with a sample like this one:
https://github.com/dmikusa-pivotal/cf-test-buildpack
Fork it and modify the bin/compile script to run your Ant task instead. Then put your Ant script (and probably the ant executable, as I expect it is not installed in the Bluemix environment) in a directory and run
cf push <appname> -b <your forked git url>
to push it to Bluemix and run it. If you're only using it once, you can probably get away with hard-coding the address and credentials; otherwise you can bind to the same service instance and get the info from VCAP_SERVICES.
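For illustration, VCAP_SERVICES is a JSON document exposed as an environment variable; reading the bound credentials from it could look roughly like this (the "sqldb" service label and the credential field names are assumptions and depend on the service you bind):

import json
import os

# VCAP_SERVICES maps service labels to lists of bound service instances
services = json.loads(os.environ["VCAP_SERVICES"])
creds = services["sqldb"][0]["credentials"]  # "sqldb" is an assumed label
print(creds.get("hostname"), creds.get("port"), creds.get("username"))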

Jenkins - remote access denied

I'm using the ArtifactDeployer plugin to deploy the build job artifacts to a remote location (Windows share SMB).
However, Jenkins never manages to succeed, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ...
[ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ... Build step
[ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
Local deployment works fine.
The Jenkins machine OS is Windows 7 32-bit Prof.
Jenkins is running as a service using a local system account.
I tried using another account (my user account), but the service failed to start (Windows error 1069: the service did not start due to a logon failure).
The network service account did run, but then Jenkins throws errors that it can't access the .NET framework.
When manually trying the remote copy, this works fine. I can create directories and write to it. On the same machine of course.
I tried two different remote references in Jenkins:
1) \\targetdirectory
2) I:\ - by mapping a drive letter to the remote dir in windows
No success...
Any tips or suggestions? Thanks!
Update 15/02/2012:
Still no solution or workaround for this issue.
It's not only the plugin; I also hit this issue using "Execute Windows batch command".
I found a bug report that I want to share.
Solution
I found a solution. You have to grant access permission to the computer account in the domain instead of to the user of that machine. Seems very logical if you look back at it.
A second solution is to run the service using a domain user account. Above, I made the mistake of using the local user .\user instead of DOMAIN\user.
If you don't have a domain, the following will work for sure. This should work even if you have a domain.
Background Info:
You need your mapped drive to be mapped for the same account that the service is using AND be available at the right time. Normally mapped drives are mapped only for the logged in user, at the time that they log in. Service user contexts don't get "logged in" per se -- for example, if I map a drive as MyUser and the service runs as MyUser, the drive won't be available until I actually log in by typing in my password. However, we can use a script to map the drive at startup (instead of login) for a particular user. Jenkins normally runs as Local System Account, so if you don't want to change that, you'll need to run the script below as the SYSTEM user. You can instead create a specific user for Jenkins to run as, if you don't want to grant this mapped drive to all services/processes that run as SYSTEM, and run both the service and the script below as that user (this is probably more secure).
Solution Steps:
In ArtifactDeployer you want to deploy to a mapped network drive. In my case this is S:.
There is no special setup for permissions on the remote share. (In my case, a Windows Server 2008 share with a username and password that is used for mapping the drive.)
Write a batch file MapDrives.bat in a place that your chosen user (default: SYSTEM) has access to, with the following in it:
net use S: "\\server_name\share_name" /persistent:yes password_here /USER:username_here
Note that I am mapping to S: in that line.
Via Task Scheduler, create a task that runs as the same user as the service (default: SYSTEM), triggers At Startup, and as its action runs the batch file MapDrives.bat (a command-line alternative is sketched after these steps).
Reboot and it should work!
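If you prefer creating the scheduled task from the command line instead of the Task Scheduler GUI, something like this should be equivalent (the path to the batch file is an example):
schtasks /Create /TN "MapDrives" /SC ONSTART /RU SYSTEM /TR "C:\Scripts\MapDrives.bat"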
Citations:
After diving through many pages and many tests, ultimately, the best suggestions were found here, and led me to the above solution.
https://stackoverflow.com/a/4763324/150794
Make sure your 'local system account' has access rights to the remote directory (including write access). Then use the notation
\\targetdirectory
Mapping drive letters to remote directories only applies to the user account you are currently working with. The drive letter mapping will not be available to any other account.
