How to set optuna's study.optimize verbosity to 0?

I want to set the verbosity of optuna's study.optimize to 0. I thought optuna.logging.set_verbosity(0) might do it, but I still get the "Trial 0 finished with value ..." updates for every trial.
What is the correct way to do this? Unfortunately, extensive searching through the docs still only results in the above method.
Many thanks in advance

Try this:
optuna.logging.set_verbosity(optuna.logging.WARNING)
It raises Optuna's logging level to WARNING, which suppresses the per-trial INFO messages.
For other level choices, check Optuna's official logging documentation.
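For context, a minimal sketch of where that call goes; the objective function and search space below are made up purely for illustration:

import optuna

optuna.logging.set_verbosity(optuna.logging.WARNING)  # silence the per-trial INFO messages

def objective(trial):
    x = trial.suggest_float("x", -10, 10)  # hypothetical one-dimensional search space
    return (x - 2) ** 2

study = optuna.create_study()
study.optimize(objective, n_trials=20)  # runs quietly; results are still in study.trials
print(study.best_value)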

Optuna's warnings tend to be raised with the standard Python warnings.warn() (which explains why optuna.logging.set_verbosity() does not always suppress them), so you can silence them all at once with:
import warnings

# ignore every warning raised via warnings.warn()
warnings.filterwarnings("ignore")
Be aware, however, that this also silences the useful and infrequent ones, such as deprecation warnings.
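If silencing everything feels too broad, the filter can be narrowed to particular categories; the two below are just examples of what you might choose to suppress:

import warnings

# ignore only deprecation-style warnings, keeping everything else visible
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=FutureWarning)

Optuna also defines warning categories of its own (for example an ExperimentalWarning, if your version provides one) that can be targeted the same way.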

Related

How to run a set of TWEANN benchmarks in Erlang correctly?

I've been reading Gene I. Sher's Handbook of Neuroevolution Through Erlang and trying to replicate all the experiments described there.
It all worked, until I came to chapter 19, which reads:
Having set everything up, we execute the benchmark for every noted experimental setup, and run it to completion. To do this, we simply modify the constraints used in our benchmarker module, and then execute benchmarker:start(Experiment_Name), for every of our experimental setups.
There are 14 experiments altogether, named:
1. [SlidingWindow5]
2. [SlidingWindow10]
3. [SlidingWindow20]
4. [SlidingWindow50]
5. [SlidingWindow100]
and
1. [ChartPlane5X10]
2. [ChartPlane5X20]
3. [ChartPlane10X10]
4. [ChartPlane10X20]
5. [ChartPlane20X10]
6. [ChartPlane20X20]
7. [ChartPlane50X10]
8. [ChartPlane50X20]
9. [ChartPlane100x10]
Just starting a polis and running benchmarker:start(SlidingWindow5). in an Erlang shell results in * 1: variable 'SlidingWindow5' is unbound.
I'm probably being obtuse and not doing something obvious, but I really wish to understand which arguments to pass to benchmarker:start().
Reading the source code didn't help so far.
I think that reading the whole book is not necessary to figure out how to make this particular piece of code work.
All right. I needed to use an all-lowercase name: benchmarker:start(slidingwindow5).
This answer is correct, but inconclusive; it gives the solution but does not explain why the solution is as it is from an Erlang perspective. Please don't upvote it until it does.

Code commenter gem

I was sure a few months ago there was a gem that made sure you'd commented every single line of code, or at least every single action. If you hadn't, it brought that to your attention after you'd run some sort of rake task.
Can't remember for the life of me what it was called.
But is it a good idea to comment every single line of code? I say yes: it solidifies your knowledge, gives you a last chance to catch bugs/security holes, and eases future development.
However, projects on GitHub are really sparsely commented. Personally, I need to comment most lines before I start to realise what a piece of code does. Is this not the case for most? Do comments just trip the code ninjas up?
Commenting every single line of code is a horrible idea:
It's noisy and obfuscates the source
It becomes out of date trivially easily
Things to comment:
Tricky and/or hacky code
Complicated algorithms
Why something is doing what it's doing
Those comments should almost always live at the method level.
If a code block needs a comment it should probably be a method
If a line of code needs a comment it should probably be refactored
Code should speak for itself as much as possible. Appropriate naming goes a long way to eliminating the need for wads of commenting. Some documentation may absolutely be necessary, but in the case of large structural comments, it may make more sense to keep it out of the code, and in your wiki.
No, it's not a good idea to comment every line of code. A lot of code is self-explanatory. In fact, you should strive to make your code self-explanatory.
For example, you would never want to comment the following:
sum = 1 + 3
You should save your comments for things that need explaining.
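A purely illustrative sketch of that principle; the values and comments are made up and not from any real project:

price, quantity = 9.99, 3  # sample values, only to make the snippet runnable

# Noise: the comment restates what the code already says
total = price * quantity  # multiply price by quantity

# Useful: the comment explains why, which the code cannot say by itself
total = price * quantity  # prices are stored pre-tax; tax is applied at checkout

print(total)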
What I think you mean is a gem that enforces proper documentation. Documentation is a comment that explains the purpose of a method or class and details its parameters and return values.
Regarding the gem you're thinking of, it may be rubocop.

Any way to find more detail about WARNING: ID3D10Buffer::SetPrivateData: Existing private data of same name with different size found!

I'm encountering this error when I'm running my DirectX10 program in debug mode:
D3D10: WARNING: ID3D10Buffer::SetPrivateData: Existing private data of same name with different size found! [ STATE_SETTING WARNING #55: SETPRIVATEDATA_CHANGINGPARAMS ]
I'm trying to make the project highly OOP as a learning exercise, so there's a chance that this may be occurring, but is there a way to get some more details?
It appears this warning is raised by D3DX10CreateSprite, which is internally called by font->DrawText.
You can ignore this warning; it seems to be a bug in the MS code :)
Direct3D11 doesn't have built-in text rendering anymore, so you won't encounter it in the future.
Since this is a D3D11 warning, you could always turn it off using ID3D11InfoQueue:
D3D11_MESSAGE_ID hide[] = {
    D3D11_MESSAGE_ID_SETPRIVATEDATA_CHANGINGPARAMS,
    // Add more message IDs here as needed
};

D3D11_INFO_QUEUE_FILTER filter;
memset(&filter, 0, sizeof(filter));
filter.DenyList.NumIDs = _countof(hide);
filter.DenyList.pIDList = hide;

// Stop the listed messages from being stored (and reported) by the debug layer
d3dInfoQueue->AddStorageFilterEntries(&filter);
See this page for more. I found your question while googling for the answer and had to search a bit more to find the above snippet, hopefully this will help someone :)
What other data are you looking for or interested in?
The warning is pretty clear about what is going on, but if you want to hunt down a bit more data, there may be a few things to try.
Try calling ID3D10Buffer::GetPrivateData with the same name or do some other check to see if there is data with that name already, and if so, what the contents are. Print your results to a file, output window, or console. This may be combined with breakpoints to see where the duplicate is occurring (break when there's already data).
You may (not positive) be able to set the D3D runtimes to debug mode and to break on warnings (not sure if it can do warnings or just errors). Debug your app in VS or your preferred debugger, and when the warning is shown, it will break and you can look at the parameters.
Go through your code and track down all calls to ID3D10Buffer::SetPrivateData and look to see if there are any obvious duplicates. If there are, work up the program flow and see why and what you can do about them (this may work best after you use one of the former methods to know where to start).
How are your data names set up, and what is the buffer used for? Examining one or both may lead you to a conflict somewhere.
You may also try unicorns; they've been known to help with this kind of problem.

Rhino - Set FEATURE_LOCATION_INFORMATION_IN_ERROR in code?

I'd like fileName, lineNumber and stack traces to automatically be provided by Rhino for any errors.
I've been told that I need to set FEATURE_LOCATION_INFORMATION_IN_ERROR on the current context, but I'm not sure how to do this in code.
Does anybody have an example of turning this feature on so that I can see stacktrace dumps on crashes?
I'm using Rhino as part of Narwhal/Jack, and so that complicates things a bit, and I think the easiest way to at least get moving forward is if I can set it through code.
Thanks.
I solved this by overriding Context and providing my own implementation of hasFeature(int) that returns true for the feature(s) I want. Pretty lame that Mozilla didn't put that in a config somewhere.

Examples of getting it wrong first, on purpose

I just caught myself doing something I do a lot, and wanted to generalize it, express it, share it and see who else is following this general practice, to find some other example situations where it might be relevant.
The general practice is getting something wrong first, on purpose, to establish that everything else is right before undertaking the current task.
What I was trying to do, specifically, was to find examples in our code base where the dojo TextArea widget was used. I knew (because I had it in front of me - existence proof) that the TextBox widget was present in at least one file. So I looked first for what I knew was there:
grep -r digit.form.TextBox | grep -v svn
This wasn't right - I had made a common (for me) mistake of leaving off the star, so I fixed that:
grep -r digit.form.TextBox * | grep -v svn
which found no results! Quick comparison with the file I was looking at showed me I had misspelled "dijit":
grep -r dijit.form.TextBox * | grep -v svn
And now I got results. Cool; doing it wrong first on purpose meant my query was correct except for looking for the wrong thing, so now I could construct the right query:
grep -r dijit.form.TextArea * | grep -v svn
and be confident that when it gave me no results, it was because there are no such files, and not because I had malformed the query.
I'll add three other examples as answers; please add any others you're aware of.
TDD
The red-green-refactor cycle of test-driven development may be the archetype of this practice. With red, demonstrate that the functionality doesn't exist; then make it exist and demonstrate that you've done so by witnessing the green bar.
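A tiny sketch of the red-green step, using a made-up add() function rather than anything from a real codebase:

# Red: run this test before add() exists; the failure proves the check itself is exercised.
def test_add():
    assert add(2, 3) == 5

# Green: now supply the implementation and watch the same test pass.
def add(a, b):
    return a + b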
http://support.microsoft.com/kb/275085
This VBA routine turns off the "subdatasheets" property for every table in your MS Access database. The user is instructed to make sure error-handling is set to "Break only on unhandled errors." The routine identifies tables needing the fix by the error that is thrown. I'm not sure this precisely fits your question, but it's always interesting to me that the error is being used in a non-error way.
Here's an example from VBA:
I also use camel case when I Dim my variables: ThisIsAnExampleOfCamelCase. As soon as I exit the VBA code line, if Access doesn't change the lower-case variable to camel case, then I know I've got a typo. [OR, Option Explicit isn't set, which is the post topic.]
I also use this trick, several times an hour at least.
arrange - assert - act - assert
I sometimes like, in my tests, to add a counter-assertion before the action to show that the action is actually responsible for producing the desired outcome demonstrated by the concluding assertion.
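A minimal sketch of the pattern, with a hypothetical Cart class standing in for the real system under test:

class Cart:
    """Hypothetical system under test, defined here only so the example runs."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def item_count(self):
        return len(self._items)

def test_adding_item_increases_count():
    cart = Cart()                  # arrange
    assert cart.item_count() == 0  # assert: counter-assertion, the outcome isn't already true
    cart.add("apple")              # act
    assert cart.item_count() == 1  # assert: the action produced the change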
When in doubt of my spelling, and of my editor's spell-checking
We use many editors. Many of them highlight misspelled words as I type them - some do not. I rely on automatic spell checking, but I can't always remember whether the editor of the moment has that feature. So I'll enter, say, "circuitx" and hit space. If it highlights, I'll back up over the space and the "x" and type another space - and learn that I spelled circuit correctly - but if it doesn't, I'll copy the word and paste it into a known spell-checker to see whether I did.
I'm not sure it's the best approach, as it does not prevent you from misspelling the final command, for example typing "TestArea" or something like that instead of "TextArea" (your fingers just have to slip a little for such a mistake).
IMHO the best way is to run your "final" command, but on two sample files first: one containing the requested text and one that doesn't.
In other words, instead of running a "similar" command, run the real one, but over "similar" data.
(Not sure if this would be a good idea to try for real!)
For example, you might give the system to the users for testing and tell them the password to get started is "Apple".
You know the users are fully up and ready to test (everything is installed and connections to databases working) when they contact you and say the password doesn't work (it's actually "Orange").
