SPSS: Switch server by syntax command

Mostly I run SPSS on a server, but there are occasions when it needs to be run locally.
I didn't find a way to tell SPSS by syntax whether it has to run on the server or locally. Any ideas how to solve that 'problem'?

There is no SPSS syntax to do that.
There may be methods in scripting to do it. From the Python Reference Guide for SPSS Statistics, I see this:
GetLocalServer Method
Returns an SpssServerConf object representing the local computer.
Syntax
SpssServerConf=SpssClient.GetLocalServer()
That would be the first thing to try.
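A minimal sketch of that, assuming the SpssServerConf object returned for the local computer can be connected to with ConnectWithSavedPassword just like a configured remote entry (check the reference guide for your version):
import SpssClient

SpssClient.StartClient()
# GetLocalServer returns the SpssServerConf object for the local computer.
local = SpssClient.GetLocalServer()
# Assumption: the local configuration accepts ConnectWithSavedPassword the same
# way a configured remote server does.
local.ConnectWithSavedPassword()
SpssClient.StopClient()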
I guess you could start the server locally and then use the following in a BEGIN PROGRAM ... END PROGRAM block to run stuff on the server:
Example: Connecting to a Server Using a Saved Configuration
import SpssClient
SpssClient.StartClient()
ServerConfList = SpssClient.GetConfiguredServers()
for i in range(ServerConfList.Size()):
    server = ServerConfList.GetItemAt(i)
    if server.GetServerName() == "myservername":
        server.ConnectWithSavedPassword()
SpssClient.StopClient()
SpssClient.GetConfiguredServers() gets an SpssServerConfList object that provides access to the list of configured servers.
The GetItemAt method of an SpssServerConfList object returns the SpssServerConf object at the specified index. Index values start from 0 and represent the order in which the servers were added to the list.
The ConnectWithSavedPassword method uses the connection information (domain, user ID, and password) to connect to the server.

Format of %TIME% environment variable

I have a batch script that starts with saving its start time for future use, which it does thusly:
set BATCHSTART=%TIME%
At the end of the batch, I'd like to re-convert it into a C# DateTime, and I need to do it in a locale-independent way. So, how can I do this?
I tried using the registry value described here (HKCU\Control Panel\International, value sTimeFormat), but unfortunately it doesn't work: on the Windows 10 Server machine I tried it on, the registry says "HH:mm:ss", yet %TIME% returns "HH:mm:ss.ff". (I also tried on a French Windows 10 machine; it had the same registry value, but %TIME% returned "HH:mm:ss,ff".)
So, is there any way to fetch the real time format used by %TIME%? Or failing that, to parse it in a way that won't run afoul of locale problems?
Or even another way to store the time during the execution of a batch file?
From experimentation, for now I'll simply assume this is the format used:
"HH:mm:ss" + CultureInfo.CurrentUICulture.NumberFormat.NumberDecimalSeparator + "ff"

FireDAC (FDQuery) - database with dot in its name

I have got this problem with FireDAC -> FDQuery component when it tries to select data from a database with '.' (dot) in its name.
The database name is TEST_2.0 and the error on Opening the dataset says:
Could not find server 'TEST_2' in sys.servers [...]
I have tried {TEST_2.0} (curly brackets) and [TEST_2.0] (square brackets). Also, setting the QuotedIdentifiers property (Format Options) to True does not seem to fix the problem. In the SQL query I can add 'SET QUOTED_IDENTIFIER ON;', but this breaks inserts to the dataset.
The FDConnection component can connect to that server and that database using the MSSQL driver without problems. It seems it is the dataset that doesn't handle it. UniDAC seems to handle everything without any problems.
I am using RadStudio 10.2.
Has anyone found any solution to this? Thanks in advance for any replies.
I got a response from Embarcadero and it works for me:
"The problem is not in FireDAC, but in SQL Server ODBC driver
SQLPrimaryKeys function. It fails to work with a catalog name
containing a dot. FireDAC uses this function to get primary key fields
for a result set, when fiMeta is included into FetchOptions.Items. So,
as a workaround / solution, please exclude fiMeta from
FetchOptions.Items."
What is wrong?
I was able to reproduce what you've described here. I ended up at the metainformation command, specifically the SQLPrimaryKeys ODBC function call. I used the SQL Server Native Client 11.0 driver connected to a local Microsoft SQL Server Express 12.0.2000.8 database server instance.
When I tried to execute the following SQL command (with TEST_2.0 database created) through a TFDQuery component instance with default settings (linked connection object was left with empty database connection parameter) in Delphi Tokyo application:
SELECT * FROM [TEST_2.0].INFORMATION_SCHEMA.TABLES
I got this exception raised when the SQLPrimaryKeys function was called with the CatalogName parameter set to TEST_2.0 (from within the metainformation statement method Execute):
[FireDAC][Phys][ODBC][Microsoft][SQL Server Native Client 11.0][SQL
Server]Could not find server 'TEST_2' in sys.servers. Verify that the
correct server name was specified. If necessary, execute the stored
procedure sp_addlinkedserver to add the server to sys.servers.'.
My next attempt was naturally to modify that CatalogName parameter value to [TEST_2.0] whilst debugging, but even that failed for a similar reason (it just failed for the name [TEST_2), so it seems that the SQLPrimaryKeys ODBC function implementation in the driver I used cannot properly handle dotted CatalogName parameter values (it appears to ignore everything after the dot).
What can I do?
The only solution seems to be fixing the ODBC drivers. The workaround I would suggest is not using dots in database names (as discussed e.g. in this thread). Another might be preventing FireDAC from getting dataset object metadata (by excluding the fiMeta option from the Items option set). That brings you the responsibility of supplying dataset object metadata yourself (at this time only the primary key definition).
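In code, that exclusion looks roughly like this; FDQuery1 is a placeholder name for your TFDQuery instance, and fiMeta comes from the FireDAC.Stan.Option unit (a sketch, not a drop-in fix):

// Stop FireDAC from issuing the SQLPrimaryKeys metadata call for this dataset.
FDQuery1.FetchOptions.Items := FDQuery1.FetchOptions.Items - [fiMeta];

The same option can also be changed at design time in the Object Inspector under FetchOptions.Items.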

Zabbix LLD Value should be a JSON object

Alright! Following is the scenario, with the respective queries:
1) I am using a bash script to generate a JSON object for the status of custom processes.
2) I provide the script in the zabbix_agentd.conf file:
UserParameter=service.check[*],/usr/lib/zabbix/externalscripts/service_check.bash
I want to provide the process names as parameters to the bash file here in UserParameter, how do I do that?
3) Restarting the zabbix-agent and checking with zabbix-get yields an empty JSON (because we have not given any process names):
{
"data":[
]
}
4) If I provide a process name into UserParameter as:
UserParameter=service.check[*],/usr/lib/zabbix/externalscripts/service_check.bash apache2 ntp cron
It yields the following:
{
"data":[
which I know is wrong, since I need to pass the process names in a different way. I tried passing them inside the bash script, and even then it generates an invalid JSON like the above.
5) The JSON generated will be taken care of by a Zabbix discovery rule of type "Zabbix agent", which will create different items out of the process names. Following is the JSON that my script should send:
{"data":[{"{#NAME}":"apache2","{#STATUS}":"RUNNING","{#VALUE}":"1"},{"{#NAME}":"ntp","{#STATUS}":"RUNNING","{#VALUE}":"1"},{"{#NAME}":"cron","{#STATUS}":"STOPPED","{#VALUE}":"0"}]}
I could have used zabbix-sender for this, but it would require running the sender for every key-value pair I need to send. Also, this way I only have to be concerned with manipulating data in one place, and the rest will be taken care of.
Hope this is clear enough and explains my situation.

Dataflow/Beam Templates, Productionization, Initialization, and ValueProviders

I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is).
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which you can perform initialization/validation that is "post-template creation" but "pre-pipeline start".
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or as part of the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is a batch pipeline, the pipeline will fail after 4 failures for a specific instance.
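For illustration, a minimal sketch of that @Setup approach; the check itself is a placeholder, not your actual validation:

import org.apache.beam.sdk.transforms.DoFn;

// Sketch: fail fast, once per DoFn instance, if a required resource is unusable.
class ValidatingDoFn extends DoFn<String, String> {
  @Setup
  public void setup() {
    // Placeholder check; substitute a real probe of your subscription, bucket, etc.
    // Throwing here surfaces the problem when workers start rather than per element.
    if (System.getenv("REQUIRED_RESOURCE") == null) {
      throw new IllegalStateException("Required resource is not available");
    }
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    c.output(c.element());
  }
}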
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received: NestedValueProvider.of returns a ValueProvider, so it isn't possible to get a String out of it at graph-construction time. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
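Concretely, that suggestion looks roughly like the following sketch; the emptiness check is just a placeholder for whatever validation the value needs:

import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.SerializableFunction;

// Keep the ValueProvider, but run a check whenever the value is read at runtime.
ValueProvider<String> projectId = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) value -> {
      if (value == null || value.isEmpty()) {
        throw new IllegalArgumentException("dataflowProjectId must not be empty");
      }
      return value;
    });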

How to debug this Lithium request?

I am trying to work out what's wrong with my current Lithium setup. I have installed Xdebug and verified that the remote host can establish the connection as requested.
http://myinstance.com/test/lithium/tests/cases/analysis/logger/adapter/CacheTest?filters[]=lithium\test\filter\Coverage
Please note that in a fresh installation in a local environment, the "Coverage" filter works as expected.
I added some test code inside the apply() function in coverage.php, but it is not even called! Does anyone have experience debugging the above URL?
I am not able to understand why the Coverage filter is not called and executed. Any hints are highly appreciated!
The filters in the query string are added to the options in lithium\test\Controller::__invoke() and then passed into the test Report object created by the test Dispatcher. The Report object finds the test filter class and then runs the applyFilter() method for that test filter, as can be seen in lines 140 to 143 of the current code. So those lines would be another place to debug. They should wrap the run() method of your tests with the filter code inside the apply() method, which uses xdebug_get_code_coverage() and related functions.
You said you added test code in the apply method and it isn't called, so I'm not sure what the issue is. Are you sure you are pointing to the right server and code location?
It is also possible to run tests from the command line; maybe you should try that. See the code comments in lithium\console\command\Test or run li3 test --help for info on how to use the command-line test runner.
I can confirm that on nginx I also have /test/lithium/tests/cases/analysis/logger/adapter/CacheTest?filters[]=lithium\x5Ctest\x5Cfilter\x5CCoverage in my access log. The \x5C is just the log's escaped form of the backslash character.
