How to access the content of the log file after using Pyomo to call a solver on the NEOS server?

I am using Pyomo to run the BONMIN solver on the NEOS server. However, it returns only a short log file without any useful information, such as the number of iterations. Can you let me know what I should do to retrieve the full log file?
The code in Pyomo is as follows:
opt_prob = pyomo.opt.SolverFactory('bonmin', solver_io='minlp')
opt_prob.options['max_iter'] = self.max_iter
opt_prob.options['tol'] = self.tol
solver_manager = pyomo.opt.SolverManagerFactory('neos')
results = solver_manager.solve(self.model, keepfiles=True, tee=True, opt=opt_prob)
The content of the log file right now is as follows:
Job 6915952 dispatched
password: lBdrJjXS
---------- Begin Solver Output -----------
Condor submit: 'neos.submit'
Condor submit: 'watchdog.submit'
Job submitted to NEOS HTCondor pool.

Not sure if this question still needs answering, but would it help to save the logfile externally?
So add:
results = solver_manager.solve(self.model, keepfiles=True, tee=True, opt=opt_prob, logfile="name.csv")
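For context, here is a minimal end-to-end sketch of that suggestion. The placeholder model, the log file name, and the NEOS_EMAIL line are illustrative assumptions, not from the original question:
import os
import pyomo.environ as pyo
import pyomo.opt

# Newer Pyomo versions may require a contact email for NEOS (assumption)
os.environ['NEOS_EMAIL'] = 'you@example.com'

# Trivial placeholder model, for illustration only
model = pyo.ConcreteModel()
model.x = pyo.Var(bounds=(0, 10))
model.obj = pyo.Objective(expr=(model.x - 3) ** 2)

opt_prob = pyomo.opt.SolverFactory('bonmin', solver_io='minlp')
solver_manager = pyomo.opt.SolverManagerFactory('neos')
results = solver_manager.solve(model, opt=opt_prob, tee=True,
                               keepfiles=True, logfile="bonmin_neos.log")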

Related

In PySpark, reading CSV files fails if even one path does not exist. How can we avoid this?

In PySpark, reading CSV files from multiple paths fails if even one path does not exist.
Logs = spark.read.load(Logpaths, format="csv", schema=logsSchema, header="true", mode="DROPMALFORMED");
Here Logpaths is an array that contains multiple paths, and these paths are created dynamically depending on the given startDate and endDate range. If Logpaths contains 5 paths and the first 3 exist but the 4th does not, the whole extraction fails. How can I avoid this in PySpark, or how can I check their existence before reading?
In Scala I did this by checking file existence and filtering out non-existent paths using the Hadoop HDFS filesystem globStatus function.
val path = "/bilal/2018.12.16/logs.csv"
val hadoopConf = new org.apache.hadoop.conf.Configuration()
val fs = org.apache.hadoop.fs.FileSystem.get(hadoopConf)
val fileStatus = fs.globStatus(new org.apache.hadoop.fs.Path(path))
So I got what I was looking for. The code I posted in the question can be used in Scala for the file-existence check; in PySpark we can use the code below.
fs = sc._jvm.org.apache.hadoop.fs.FileSystem.get(sc._jsc.hadoopConfiguration())
fs.exists(sc._jvm.org.apache.hadoop.fs.Path("bilal/logs/log.csv"))
This is exactly the same code as in Scala: we are using the Hadoop Java library, and that Java code runs on the JVM that Spark itself runs on.
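Putting it together, a minimal sketch of the original goal: filter Logpaths down to the paths that exist before reading. It assumes an active SparkSession spark, its SparkContext sc, and the Logpaths and logsSchema variables from the question:
jvm = sc._jvm
fs = jvm.org.apache.hadoop.fs.FileSystem.get(sc._jsc.hadoopConfiguration())

# Keep only the paths that actually exist
existing = [p for p in Logpaths if fs.exists(jvm.org.apache.hadoop.fs.Path(p))]

if existing:
    Logs = spark.read.load(existing, format="csv", schema=logsSchema,
                           header="true", mode="DROPMALFORMED")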

Play 2.6, URI length exceeds the configured limit of 2048 characters

I am trying to migrate a Play 2.5 application to 2.6.2. I keep getting the URI-length-exceeds error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck.
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The prefix play.server is only used for a small subset of convenience features of the Akka HTTP integration in the Play framework, e.g. play.server.akka.requestTimeout. Those are documented in the "Configuring the Akka HTTP server backend" documentation.
I was getting the error because the header length exceeded the default of 8 KB (8192 characters). Adding the following to build.sbt worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try something similar for the URI length if the other options don't work, as sketched below.
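By analogy, the corresponding system property would presumably be the following (using the akka.http.parsing.max-uri-length key from the other answers; I have not verified this exact combination):
javaOptions += "-Dakka.http.parsing.max-uri-length=16k"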
This took me way too long to figure out. It is somehow NOT to be found in the documentation.
Here is a snippet (confirmed working with Play 2.8) to put in your application.conf. It is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then edit the config, or with an already-deployed application simply set PLAY_MAX_URI_LENGTH; the limit is then dynamically configurable without the need to modify command-line arguments.
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in ("HTTP header value exceeds the configured limit of 8192 characters"):
Go to Chrome Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites.
Search for the specific website and, for that site, choose "Clear all data".

Log standard out to file in airflow

I want to save the standard output from my DAG separately from the logging that Airflow generates.
My scripts print to standard out and I want to capture that (and only that) into a specified log file each time it runs. I've tried
import sys

logfile = 'mylog_01.log'

def myfunc(**context):
    ...  # hundreds of print statements

log = open(logfile, "ab")
sys.stdout = log

print_params = PythonOperator(task_id="print_params",
                              python_callable=myfunc,
                              provide_context=True,
                              dag=dag)
... but sys.stdout is reset within each PythonOperator. How do I control where standard output goes within all Python operators?
Also, I am using hundreds of print statements within the scripts, so modifying each of those isn't practical in this case.
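(Not an answer from the original thread, just a sketch of one possible approach: redirect stdout inside the callable itself with contextlib.redirect_stdout, so the redirection applies regardless of how Airflow rebinds sys.stdout between tasks. The logged helper is a hypothetical name.)
import contextlib

def logged(func, logfile='mylog_01.log'):
    # Wrap a callable so everything it prints is appended to logfile
    def wrapper(**context):
        with open(logfile, 'a') as log, contextlib.redirect_stdout(log):
            return func(**context)
    return wrapper

print_params = PythonOperator(task_id="print_params",
                              python_callable=logged(myfunc),
                              provide_context=True,
                              dag=dag)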

Jenkins/Gerrit: Multiple builds with different labels from one gerrit event

I created two labels in one of our projects that requires builds on Windows and Linux, so the project.config for that project now looks as follows:
[label "Verified"]
function = NoBlock
[label "Verified-Windows"]
function = MaxWithBlock
value = -1 Fails
value = 0 No score
value = +1 Verified
[label "Verified-Unix"]
function = MaxWithBlock
value = -1 Fails
value = 0 No score
value = +1 Verified
This works as intended: submits require that one successful build reports Verified-Windows and another reports Verified-Unix [1].
However, the two builds are now triggered by the same Gerrit event (from 'different' servers, see note), but when they report back, only one of the two labels 'survives'.
It seems as though the plugin collates the two messages that arrive into one comment and only accepts whichever label was the first one to be set.
Is this by design or a bug? Can I work around this?
This is using the older version of the trigger: 2.11.1
[1] I got this to work by adding more than one server and then reconfiguring the messages that are sent back via SSH to Gerrit. This is cumbersome and quite non-trivial. I think jobs should be able to override the label that a successful build sets on Gerrit.
This can be addressed by using more than one user name, so the verdicts on labels don't get mixed up. However, this is only partially satisfactory, since multiple server connections for the same server also duplicate events from the event stream.
I am currently working on a patch for the Gerrit Trigger plugin for Jenkins to address this issue and make using different labels more efficient.
Maybe you can solve this challenge by using a post-build Groovy script.
I provided an example at another topic: https://stackoverflow.com/a/32825278
To be more specific, as mentioned by arman1991:
Install the Groovy Postbuild Plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin
Use the following example script as a post-build action in each of your jobs. Modify it to your needs for the Linux verification.
It will do for you:
collect necessary environment variables and status of the job
build feedback message
build ssh command
execute ssh command -> send feedback to gerrit
//Collect all environment variables of the current build job
def env = manager.build.getEnvironment(manager.listener)
//Get the Gerrit change number
def change = env['GERRIT_CHANGE_NUMBER']
//Get the Gerrit patchset number
def patch = env['GERRIT_PATCHSET_NUMBER']
//Get the URL of the current job
def buildUrl = env['BUILD_URL']
//Build the URL to the console output
def buildConsoleUrl = buildUrl + "/console"
//Verification is set to succeeded (+1) and a feedback message is generated...
def result = +1
def message = "\\\"Verification for Windows succeeded - ${buildUrl}\\\""
//...except when the job failed (-1)...
if (manager.build.result.isWorseThan(hudson.model.Result.SUCCESS)) {
    result = -1
    message = "\\\"Verification for Windows failed - ${buildUrl}\\\""
}
//...or the job was aborted (0)
if (manager.build.result == hudson.model.Result.ABORTED) {
    result = 0
    message = "\\\"Verification for Windows aborted - ${buildConsoleUrl}\\\""
}
//Send feedback to Gerrit via SSH
//-i - path to the private SSH key
def ssh_message = "ssh -i /path/to/jenkins/.ssh/key -p 29418 user@gerrit-host gerrit review ${change},${patch} --label=verified-Windows=${result} --message=${message}"
manager.listener.logger.println(new ProcessBuilder('bash','-c',"${ssh_message}").redirectErrorStream(true).start().text)
I hope this will help you to solve your challenge without using the Gerrit Trigger Plugin to report the results.

How to execute a .bat file from a C# windows form app?

What I need to do is have a C# 2005 GUI app call a .bat file and several VBScript files at the user's request. This is just a stop-gap solution until the end of the holidays, when I can rewrite it all in C#. I can get the VBScript files to execute with no problem, but I am unable to execute the .bat file. When I click in the C# app to execute the .bat file, a DOS window opens and closes very fast and the test .bat file does not execute; "Windows does not recognize bat as an internal or external command" is the error returned in the DOS box. If I simply double-click the .bat file or manually run it from the command prompt, it does execute.
I also need the .bat file to execute silently unless user interaction is required: this script copies 11k+ files to folders on a networked machine, and occasionally Windows "forgets" whether the destination is a file or a directory and asks the user to tell it which it is (this is a whole other issue not for discussion here... needless to say, I am annoyed by it).
So far in my C# source I have this:
Process scriptProc = new Process();
if (File.Exists("c:\\scripts\\batchfile1.bat"))
{
    scriptProc.StartInfo.FileName = @"cscript";
    scriptProc.StartInfo.Arguments = ("cmd.exe", "/C C:\\scripts\\batchfile1.bat"); // Wacky pseudo code //
    scriptProc.Start();
    scriptProc.WaitForExit(1500000);
    scriptProc.Close();
}
if (!File.Exists("c:\\scripts\\batchfile1.bat"))
{
}
I am aware that this code does not work, but it is essentially what I want it to do. What I am looking for is something like this for .bat files. I assume I have to tell the system to use cmd to run the .bat file, but I am at a loss as to how to do this. I have checked out this site, which is for C# 2003; not much help for me, as I am very green with C#.
EDIT: Using Kevin's post, I attempted it again. It is the same solution script from that post, but modified for me since I do not need to redirect output:
System.Diagnostics.Process proc = new System.Diagnostics.Process();
proc.StartInfo.FileName = "C:\\scripts\\batchfile1.bat";
proc.StartInfo.RedirectStandardError = false;
proc.StartInfo.RedirectStandardOutput = false;
proc.StartInfo.UseShellExecute = false;
proc.Start();
proc.WaitForExit();
Here is what you are looking for:
Service hangs up at WaitForExit after calling batch file
It's about a question as to why a service can't execute a file, but it shows all the code necessary to do so.
For the problem you're having with the batch file asking the user whether the destination is a folder or a file: if you know the answer in advance, you can do as follows.
If destination is a file:
echo f | [batch file path]
If folder:
echo d | [batch file path]
It will essentially just pipe the letter after "echo" to the input of the batch file.
