Running an Ant script to prepare a Database in Bluemix

I have an Ant script that I use to populate/prepare a database. All I need is to set the host, port, and credentials for the database. It works fine for MySQL and DB2; the DB just needs to be reachable from where the script is executed.
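For illustration, the relevant part of such a script typically looks something like the target below; the driver, classpath, and property names are assumptions, not the actual script:
<target name="populate-db">
  <sql driver="com.mysql.jdbc.Driver"
       url="jdbc:mysql://${db.host}:${db.port}/${db.name}"
       userid="${db.user}"
       password="${db.password}"
       classpath="lib/mysql-connector-java.jar"
       src="populate.sql"/>
</target>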
The DB service in Bluemix gives me a DB with an IP (75.x.x.x) that is only reachable from the internal network of Bluemix, it is not accessible externally.
My understanding is that my Ant script needs to be executed from inside the Bluemix network/servers.
How can I do that?
What would be the alternatives?
I'm considering creating a NodeJS script to trigger that Ant build internally, but I'm not sure whether it will work properly.
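The idea would be something like this minimal sketch (the build file and target names are placeholders):
const { exec } = require('child_process');

// Run the Ant build once when the app starts inside Bluemix.
exec('ant -f build.xml populate-db', (err, stdout, stderr) => {
  if (err) {
    console.error('Ant build failed:', stderr);
    process.exit(1);
  }
  console.log(stdout);
});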

dashDB has always had the ability for local clients (outside of Bluemix) to connect to the cloud database, and SQL Database later added the feature as well. So you should be able to populate the database as long as you have the correct client driver installed on your local machine.
Can you provide more details on how you tested that the IP is not reachable? Is there a firewall in place between your local machine and Bluemix? Note that ping is not a good test because ICMP is blocked for security reasons. You may try the JDBC port indicated on the connection page from the console.
See link for instructions on how to make a connection:
https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#connecting-to-sqldb
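A quick way to probe the JDBC port itself from your local machine (50000 below is just the common DB2 default and an assumption; substitute the port shown on your connection page):
nc -vz 75.x.x.x 50000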

You might be able to use a simple custom buildpack. You can start with a sample like this one:
https://github.com/dmikusa-pivotal/cf-test-buildpack
fork it and modify the bin/compile script to run your Ant task instead. Then put your Ant script (and probably the Ant executable, as I expect it is not installed in the Bluemix environment) in a directory and run
cf push <appname> -b <your forked git url>
to push it to Bluemix and run it. If you're just using it once you can probably get away with hard-coding the address and credentials; otherwise you can bind to the same service instance and get the info from VCAP_SERVICES.
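A rough sketch of pulling the connection info out of VCAP_SERVICES inside the forked bin/compile script (the service key "sqldb", the credential field names, and the availability of jq are all assumptions; check cf env <appname> for the real layout):
# bin/compile runs during cf push staging, where $VCAP_SERVICES is already set
DB_HOST=$(echo "$VCAP_SERVICES" | jq -r '.sqldb[0].credentials.hostname')
DB_PORT=$(echo "$VCAP_SERVICES" | jq -r '.sqldb[0].credentials.port')
DB_USER=$(echo "$VCAP_SERVICES" | jq -r '.sqldb[0].credentials.username')
DB_PASS=$(echo "$VCAP_SERVICES" | jq -r '.sqldb[0].credentials.password')
ant -f build.xml -Ddb.host="$DB_HOST" -Ddb.port="$DB_PORT" -Ddb.user="$DB_USER" -Ddb.password="$DB_PASS"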

Related

Local IMAP server on docker

I want to set up a local IMAP server within my home network for archiving emails. The server does not need to be accessible via the internet, so I can do without secured access via SSL (if that makes it easier). I want to integrate the server into my current Docker setup, so the server has to run within a Docker container.
I already tried the following containers:
https://hub.docker.com/r/blackflysolutions/dovecot
https://hub.docker.com/r/dovecot/dovecot
https://hub.docker.com/r/mailu/dovecot
https://hub.docker.com/r/mailcow/dovecot
https://hub.docker.com/r/eilandert/dovecot
But I could not get any of them to run, and none of them has a forum or anywhere else where I could ask a question. Two of them (mailu/dovecot and mailcow/dovecot) are part of a bigger mail server package, which I do not need; I only want an IMAP server to store some email locally. But I tried them anyway.
Does anyone know how to get any of those to run? Or can you suggest another stable Docker container solution?
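For reference, such a container is typically started along these lines; the tag, ports, and volume paths below are assumptions to be adapted from the image's own documentation:
docker run -d --name dovecot \
  -p 143:143 -p 993:993 \
  -v /srv/dovecot/config:/etc/dovecot \
  -v /srv/dovecot/mail:/srv/mail \
  dovecot/dovecot:latest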

Cannot enable basic auth on Windows-Exporter to secure node between Windows and Prometheus

As a test environment to monitor the status of Windows servers (CPU, disk usage, memory, network, etc.), I have set up two test nodes with windows_exporter configured on the custom port 15000.
Next, I created the proper jobs for each separate Windows instance and built a dashboard in Grafana.
The problem is that I want to secure the nodes so that only the Prometheus server can access the exporter output, while all other computers in the same network are denied access to the exporter's web endpoint.
I have tried to install windows_exporter with this setting:
msiexec /i windows_exporter-0.19.0-amd64.msi LISTEN_PORT="15000" EXTRA_FLAGS="--web.config.file=C:\Configuration\web.yml"
I also tried different combinations of " and ' in the command line for the EXTRA_FLAGS parameter, yet they seem to be ignored. The only parameter that works fine is the change of the listen port.
I have followed instructions provided at https://prometheus.io/docs/guides/basic-auth/ to set up basic auth.
web.yml looks like this:
basic_auth:
  username: 'scrapper'
  password: '$2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC'
However, after installation with msiexec, the Windows service entry for windows_exporter is missing the web.config.file flag:
"C:\Program Files\windows_exporter\windows_exporter.exe" --log.format logger:eventlog?name=windows_exporter --telemetry.addr 0.0.0.0:15000
I tried to edit the service entry with the sc command, but it broke the exporter completely, forcing me to roll back to unprotected access.
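(For reference, sc config is extremely sensitive to quoting: there must be a space after binPath=, and the inner quotes around the exe path must be escaped. A sketch of the shape it would take, not a verified fix:)
sc config windows_exporter binPath= "\"C:\Program Files\windows_exporter\windows_exporter.exe\" --log.format logger:eventlog?name=windows_exporter --telemetry.addr 0.0.0.0:15000 --web.config.file=C:\Configuration\web.yml"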
Does basic auth work on windows_exporter the same way as on node_exporter for Linux?
Or is there another way to secure access to the exposed node data without needing to install IIS?
I have never worked with node exporter on Windows, but on Linux your web.yml configuration file should be as follows:
basic_auth_users:
  <string>: <secret>
like this:
basic_auth_users:
  scrapper: $2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC
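Once windows_exporter actually picks up the web config file, you can verify the protection from another machine with something like this (the hostname and the plaintext password are placeholders):
curl -u scrapper:plaintext_password_here http://windows-host:15000/metrics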

Attaching a Vagrant-hosted Gremlin-Server as a Data Source in PhpStorm

I am running the Gremlin-Server from this Tinkerpop Docker Image within a Vagrant box. I am trying to link this server as a data source so that I can utilize the "Graph Database Console" plugin in PhpStorm. I am attempting to do this through the driver wizard workflow.
However, in the class dropdown it won't give me any configuration options other than java.sql.Driver. It does give me the option of connecting custom driver files, but I am not sure which file I would need to attach from the Gremlin-Server docker image.
What steps would it take to connect a Gremlin-Server as a data-source in PhpStorm?
As it turns out, the TinkerPop3 stack does not offer a JDBC driver, as it is not necessarily a database in and of itself. There is a SQL port of Gremlin that is purportedly working towards a JDBC driver for Gremlin-Server, but there is currently no option to reference a locally hosted Gremlin-Server.
However, in this instance there is a plugin for PhpStorm called Graph Database Support which allows you to configure a local graph database by pointing it at your environment's host and port. In my case, a Vagrant IP address and a forwarded Docker port met that need.
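For anyone reproducing this setup, the Vagrant side only needs a forwarded port; a minimal sketch, assuming Gremlin-Server listens on its default port 8182 inside the box:
# Vagrantfile
Vagrant.configure("2") do |config|
  # forward the Gremlin-Server port published by the Docker container
  config.vm.network "forwarded_port", guest: 8182, host: 8182
end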

call jmx operation on a local running process

I have a java process on a linux server, which runs with this option: -Dcom.sun.management.jmxremote
So I cannot simply connect to this process via jconsole running on my local PC (because neither the port nor the -Dcom.sun.management.jmxremote.ssl=false option is set up).
But still, how can I connect to the application and run some operations on some of its MBeans? Is this possible? I have SSH access to the server and would be able to run things "locally" on the server (but unfortunately cannot change the options).
According to the JMX documentation, the -Dcom.sun.management.jmxremote option:
Enables the JMX remote agent and local monitoring via JMX connector published on a private interface used by jconsole. The jconsole tool can use this connector if it is executed by the same user ID as the user ID that started the agent. No password or access files are checked for requests coming via this connector.
The naming is a bit unfortunate because it in fact enables the local monitoring only.
Since you cannot change the options but can access the server via SSH, the only option is to use X server forwarding (ssh -X ...) and run jconsole (or better yet jvisualvm, which has specific optimisations for running remotely).
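A minimal sketch of that workflow (user, host, and the process id are placeholders):
ssh -X user@server
jps -l            # list the JVMs running as your user to find the target pid
jconsole <pid>    # attach through the local connector; no port or SSL needed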

Jenkins - remote access denied

I'm using the ArtifactDeployer plugin to deploy the build job artifacts to a remote location (a Windows SMB share).
However, Jenkins never manages to succeed, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ...
[ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ...
Build step '[ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
Local deployment works fine.
The Jenkins machine OS is Windows 7 32-bit Prof.
Jenkins is running as a service using a local system account.
I tried using another account (my own user account), but the service failed to start (Windows error 1069: the service did not start due to a logon failure).
The network service account did run, but then Jenkins threw errors that it couldn't access the .NET framework.
When trying the remote copy manually, it works fine: I can create directories and write to them. From the same machine, of course.
I tried two different remote references in Jenkins:
1) \\targetdirectory
2) I:\ - by mapping a drive letter to the remote dir in windows
No success...
Any tips or suggestions? Thanks!
Update 15/02/2012:
Still no solution or workaround for this issue.
It's not only the plugin; I also hit this issue using "Execute Windows batch command".
I found a bug report that I want to share.
Solution
I found a solution. You have to grant access permission to the computer in the domain instead of to the user of that machine. Seems very logical if you look back at it.
A second solution is to run the service using a domain user account. Above, I made the mistake of using the local user .\user instead of DOMAIN\user.
If you don't have a domain, the following will work for sure. This should work even if you have a domain.
Background Info:
You need your mapped drive to be mapped for the same account that the service is using AND be available at the right time. Normally, mapped drives are mapped only for the logged-in user, at the time that they log in. Service user contexts don't get "logged in" per se; for example, if I map a drive as MyUser and the service runs as MyUser, the drive won't be available until I actually log in by typing in my password. However, we can use a script to map the drive at startup (instead of login) for a particular user.
Jenkins normally runs as the Local System account, so if you don't want to change that, you'll need to run the script below as the SYSTEM user. Alternatively, if you don't want to grant this mapped drive to all services/processes that run as SYSTEM, you can create a specific user for Jenkins to run as, and run both the service and the script below as that user (this is probably more secure).
Solution Steps:
In ArtifactDeployer you want to deploy to a mapped network drive. In my case this is S:.
There is no special setup for permissions on the remote share. (In my case, a Windows Server 2008 share with a username and password that is used for mapping the drive.)
Write a batch file MapDrives.bat in a place that your chosen user (default: SYSTEM) has access to, with the following in it:
net use S: "\\server_name\share_name" password_here /USER:username_here /persistent:yes
Note that I am mapping to S: in that line.
Via Task Scheduler, create a task that runs as the same user as the service (default: SYSTEM), triggers At Startup, and, as its action, runs the batch file MapDrives.bat.
Reboot and it should work!
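The same startup task can also be created from an elevated prompt instead of the Task Scheduler UI; a sketch, assuming the script lives at C:\Scripts\MapDrives.bat:
schtasks /Create /TN "MapDrives" /TR "C:\Scripts\MapDrives.bat" /SC ONSTART /RU SYSTEM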
Citations:
After diving through many pages and many tests, ultimately, the best suggestions were found here, and led me to the above solution.
https://stackoverflow.com/a/4763324/150794
Make sure your 'local system account' has access rights to the remote directory (including write access). Then use the notation
\\targetdirectory
Mapping drive letters to remote directories only applies to the user account you are currently working with. The drive letter mapping will not be available to any other account.
