I can't find an exact description of the differences between the OpenAI Gym environments 'CartPole-v0' and 'CartPole-v1'.
Both environments have separate official pages dedicated to them (see 1 and 2), though I can only find one implementation without a version identifier in the gym GitHub repository (see 3). I also checked with the debugger which files are actually loaded, and both versions seem to load the same file mentioned above. The only difference appears to be in their internally assigned max_episode_steps and reward_threshold, which can be accessed as shown below. CartPole-v0 has the values 200/195.0 and CartPole-v1 has the values 500/475.0. The rest seems identical at first glance.
import gym
env = gym.make("CartPole-v1")
print(env.spec.max_episode_steps)  # 500 for v1
print(env.spec.reward_threshold)   # 475.0 for v1
I would therefore appreciate it if someone could describe the exact differences for me, or point me to a website that does so. Thank you very much!
As you have probably noticed, OpenAI Gym sometimes provides different versions of the same environment. The different versions usually share the main environment logic, but some parameters are configured with different values. These versions are managed using a feature called the registry.
In the case of the CartPole environment, you can find the two registered versions in this source code. As you can see in lines 50 to 65, there exist two CartPole versions, tagged as v0 and v1, whose differences are the parameters max_episode_steps and reward_threshold:
register(
    id='CartPole-v0',
    entry_point='gym.envs.classic_control:CartPoleEnv',
    max_episode_steps=200,
    reward_threshold=195.0,
)

register(
    id='CartPole-v1',
    entry_point='gym.envs.classic_control:CartPoleEnv',
    max_episode_steps=500,
    reward_threshold=475.0,
)
Both parameters confirm your guess about the difference between CartPole-v0 and CartPole-v1.
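If you want to confirm this programmatically without instantiating the environments, you can also query the registered specs directly (a minimal sketch; gym.spec is available in the classic gym releases):

import gym

# Look up the registered spec of each version without creating the env.
for env_id in ("CartPole-v0", "CartPole-v1"):
    spec = gym.spec(env_id)
    print(env_id, spec.max_episode_steps, spec.reward_threshold)

# Expected output, per the register() calls above:
# CartPole-v0 200 195.0
# CartPole-v1 500 475.0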
I am making use of MantisBT to track issues and have so far collected a number of issues. However, my changelog remains empty:
No Change Log information available. Issues are included once projects
have versions and issues are resolved with "fixed in version" set.
Each bug report has a product version, a target version (needed for the roadmap) and a fixed-in version (needed for the changelog).
Likewise I have released certain versions.
I have customized my workflow and I suspect this is part of the reason.
# custom access list
$g_access_levels_enum_string = '10:VIEWER,20:REPORTER,30:ENGINEER,40:CCB,90:ADMINISTRATOR';
# custom resolution list
$g_resolution_enum_string = '10:OPEN,20:REOPEN,30:WONTFIX,60:DISPOSITIONED,70:FIXED';
From what I have been able to determine, for a changelog to appear you need:
1) a released version (done)
2) a bug with a fixed-in version matching this (done)
3) a bug closed as "fixed"
Now, in a fresh MantisBT (where testing shows the changelog works), FIXED is a constant with the value 20, so part of me suspects my g_resolution_enum_string is to blame, but this would also imply that there is another variable that sets which threshold should be used:
$g_bug_resolution_fixed_threshold = FIXED;
This does not work.
What am I missing? Also, if it is of importance... my versions are labeled v0.0, v0.1, v0.2 (i.e. prefixed with 'v').
I suggest you read the documentation's Enumerations section, in particular:
The strings included in the enumerations here are just for documentation purposes
So, your enum definition of 70:FIXED does not actually match the constant FIXED which, as you have pointed out, is still set to 20. This means that $g_bug_resolution_fixed_threshold actually points to your 20:REOPEN... You may want to define your own constants.
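For example, something along these lines (a hedged sketch, assuming a standard MantisBT setup where custom constants can be defined in custom_constants_inc.php; MY_FIXED is a made-up name chosen so the built-in FIXED, which keeps its value of 20, is left untouched):

<?php
# custom_constants_inc.php -- constants that match the custom resolution enum
define( 'DISPOSITIONED', 60 );
define( 'MY_FIXED', 70 );

# config_inc.php -- point the thresholds at those values
$g_bug_resolution_fixed_threshold = MY_FIXED;   # resolutions >= 70 count as fixed
$g_bug_resolution_not_fixed_threshold = 80;     # must stay above MY_FIXED, as no resolution in this enum is "resolved unsuccessfully"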
Also note that there is another important threshold in this context, $g_bug_resolution_not_fixed_threshold - resolutions above it are considered to be resolved in an unsuccessful way. By default it is set to UNABLE_TO_REPRODUCE (40).
In other words, for an issue to appear in the Changelog, it must match all of the following criteria (reference):
status >= bug_resolved_status_threshold
resolution >= bug_resolution_fixed_threshold
resolution < bug_resolution_not_fixed_threshold
Note that this standard behavior can be changed with a custom function.
So your problem is indeed caused by your custom resolution_enum_string, most likely in combination with the values of the two above-mentioned thresholds.
I'm trying to use the new 'evaluation' action after inference to generate some metrics for my output. However, the .csv files just show scores of '0' for average_distance and '1' for Jaccard and Dice for each of my data volumes. I can't seem to find any documentation for the evaluation action, so I'm not sure what I'm doing wrong. Also, the --dataset_to_infer=Validation option doesn't seem to work; both inference and evaluation are being applied to all data rather than just the validation set.
Thanks!
For the evaluation issue, we're working on the documentation. The dataset_to_infer option is only tested for the applications in NiftyNet/niftynet/application; applications from the model zoo have not been upgraded to support it yet (please file an issue with more details at https://github.com/NifTK/NiftyNet/issues if you believe it's a bug).
For the time being, pointing directly to the inference result in the config.ini worked for me, e.g.:
[inferred]
csv_file = model_dir/save_seg_dir/inferred.csv
I believe this file is currently not found, and evaluation then defaults to comparing labels to labels. See the issue on GitHub.
I use one Neos installation for multiple domains with different content.
Duplicating the package TYPO3.NeosDemoTypo3Org, removing the node identifier and doing some replacements got me nearly everything I need.
But only the first Settings.yaml found in Packages/Sites/ seems to be parsed. All changes to the Settings.yaml found in other packages (Test1 and Test2 in the following example) are ignored.
Packages/Sites/TYPO3.NeosDemoTypo3Org/Configuration/Settings.yaml

TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://TYPO3.NeosDemoTypo3Org/Private/Form/'

Packages/Sites/UDF.Test1/Configuration/Settings.yaml

TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'

Packages/Sites/UDF.Test2/Configuration/Settings.yaml

TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test2/Private/Form/'
When I delete the first Settings.yaml (Packages/Sites/UDF.Test2/Configuration/Settings.yaml), the next Settings.yaml in alphabetical order (Packages/Sites/UDF.Test1/Configuration/Settings.yaml) is used for all 3 site packages. When I also delete this file, the Settings.yaml from TYPO3.NeosDemoTypo3Org is used, and so on.
It would be awesome if somebody could enlighten me. I am new to Flow and Neos, and any help is welcome. RTFM, I know, but as described here I have to believe that it should work the way I did it?
Alternative way?
Is it possible not to set the savePath in the site package configuration, but in the common settings ./Packages/Application/TYPO3.Form/Configuration/Settings.yaml instead?
I see a {#package} placeholder in
### BASE ELEMENTS ###
# NAMING: base class for everything is RENDERABLE
'TYPO3.Form:Base':
  renderingOptions:
    templatePathPattern: 'resource://{#package}/Private/Form/{#type}.html'
but this doesn't work here:
TYPO3:
  Form:
    yamlPersistenceManager:
      #savePath: '%FLOW_PATH_DATA%Forms/'
      savePath: 'resource://{#package}/Private/Form/'
As you can see, I am not really experienced with this stuff, but I am very motivated.
All Settings.yaml files are used, but the settings are merged in the order of package loading.
The loading order of packages again is based on their dependencies.
All three packages probably have the same dependencies, so they are loaded one after the other (one would need to check in which order), so the third Settings.yaml is loaded, then the second one is loaded and overwrites the third, then the first one is loaded and again overwrites the second. Each settings path can only hold one value in the merged configuration; that's why.
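For illustration, assuming a loading order in which UDF.Test2 happens to be loaded last (the actual order would need checking, as said): both files set the same settings path, so after merging only the last-loaded value remains for the whole installation:

# from UDF.Test1, loaded earlier -- this value gets overwritten
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'

# from UDF.Test2, loaded last -- this value wins everywhere
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test2/Private/Form/'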
In any case, what you are trying to achieve probably won't work. This is one of the things we have to fix (site-package-dependent configuration).
A possible workaround is either using a common package with the form configuration and just setting the savePath to this package, or using different subcontexts (like Production/Domain1, Production/Domain2) and setting this option differently per subcontext; then you could define the subcontext by domain (as the sites are triggered by domain anyway). A sketch of the latter follows.
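A minimal sketch of the subcontext variant (assuming Flow's standard context mechanism; the context names and the path are made up for illustration):

# Configuration/Production/Domain1/Settings.yaml -- only loaded when the
# application runs in the Production/Domain1 context
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'

The context itself would then be selected per virtual host, e.g. by setting the FLOW_CONTEXT environment variable (the exact variable name depends on your Flow version) to Production/Domain1 in the vhost of that domain.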
I'm using the iPhone library for MeCab found at https://github.com/FLCLjp/iPhone-libmecab . I'm having some trouble getting it to tokenize all possible words. Specifically, I cannot tokenize "吉本興業" into the two pieces "吉本" and "興業". Are there any options that I could use to fix this? The iPhone library does not expose anything, but it uses C++ underneath the Objective-C wrapper. I assume there must be some sort of setting I could change to give more fine-grained control, but I have no idea where to start.
By the way, if anyone wants to tag this 'mecab' that would probably be appropriate. I'm not allowed to create new tags yet.
UPDATE: The iOS library is calling mecab_sparse_tonode2() defined in libmecab.cpp. If anyone could point me to some English documentation on that file it might be enough.
There is nothing iOS-specific in this. The dictionary you are using with mecab (probably ipadic) contains an entry for the company name 吉本興業. Although both parts of the name are listed as separate nouns as well, mecab has a strong preference to tag the compound name as one word.
Mecab lacks a feature that allows the user to choose whether or not compounds should be split into parts. Note that such a feature is generally hard to implement because not everyone agrees on which compounds can be split and which ones can't. E.g. is 容疑者 a compound made up of 容疑 and 者? From a purely morphological point of view perhaps yes, but for most practical applications probably no.
If you have a list of compounds you'd like to get segmented, a quick fix is to create a user dictionary for the parts they consist of, and make mecab use this in addition to the main dictionary.
There is Japanese documentation on how to do this here. For your particular example, it would involve the steps below.
Make a user dictionary with two entries, one for 吉本 and one for 興業:
吉本,,,100,名詞,固有名詞,人名,名,*,*,よしもと,ヨシモト,ヨシモト
興業,,,100,名詞,一般,*,*,*,*,こうぎょう,コウギョウ,コウギョウ
I suspect that both entries exist in the default dictionary already, but by adding them to a user dictionary and specifying a relatively low cost (I've used 100 for both; the lower the cost, the more likely the entry is to be used), you can get mecab to prefer the parts over the whole.
Compile the user dictionary:
$> $MECAB/libexec/mecab/mecab-dict-index -d /usr/lib64/mecab/dic/ipadic -u mydic.dic -f utf-8 -t utf-8 ./mydic
You may have to adjust the command. The above assumes:
Mecab was installed from source in $MECAB. If you use mecab installed by a package manager, you might have difficulties finding the mecab-dict-index tool. Best install from source.
The default dictionary is in /usr/lib64/mecab/dic/ipadic (matching the -d path above). It is not part of the mecab package itself; it comes as a separate package (e.g. this) and you may have difficulties finding this, too.
mydic is the name of the user dictionary created in step 1. mydic.dic is the name of the compiled dictionary you'll get as output (needs not exist).
Both the system dictionary (-t option) and the user dictionary (-f option) are encoded in UTF-8. This may be wrong, in which case you'll get an error message later when you use mecab.
Modify the mecab configuration. In a system-wide installation, this is a file named /usr/lib64/mecab/dic/ipadic/dicrc or similar. In your case it may be located somewhere else. Add the following line to the end of the configuration file:
userdic = /home/myhome/mydic.dic
Make sure the absolute path to the dictionary compiled above is correct.
If you then run mecab against your input, it will split the compound into its parts (I tested it, using mecab 0.994 on a Linux system).
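For example, a quick command-line check could look like this (hypothetical paths; the -u option points mecab at the compiled user dictionary, in case it is not picked up from the dicrc):

$> echo "吉本興業" | mecab -u /home/myhome/mydic.dic
吉本	名詞,固有名詞,人名,名,*,*,よしもと,ヨシモト,ヨシモト
興業	名詞,一般,*,*,*,*,こうぎょう,コウギョウ,コウギョウ
EOS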
A more thorough fix would be to get the source of the default dictionary, manually remove all compound nouns you want to get split, and then recompile the dictionary. As a general remark, using a CJK tokenizer for a serious application in production over a longer period of time usually involves a certain amount of regular dictionary maintenance (adding/removing entries).
I'm using a closed-source application that loads Lua scripts and allows some customization through modifying these scripts. Unfortunately that application is not very good at generating useful log output (all I get is 'script failed') if something goes wrong in one of the Lua scripts.
I realize that dynamic languages are pretty much resistant to static code analysis in the way C++ code can be analyzed for example.
I was hoping though, there would be a tool that runs through a Lua script and e.g. warns about variables that have not been defined in the context of a particular script.
Essentially what I'm looking for is a tool that for a script:
local a
print(b)
would output:
warning: script.lua(1): local 'a' is not used
warning: script.lua(2): 'b' may not be defined
It can only really be warnings for most things, but that would still be useful! Does such a tool exist? Or maybe a Lua IDE with a feature like that built in?
Thanks, Chris
Automated static code analysis for Lua is not an easy task in general. However, for a limited set of practical problems it is quite doable.
Quick googling for "lua lint" yields these two tools: lua-checker and Lua lint.
You may want to roll your own tool for your specific needs however.
Metalua is one of the most powerful tools for static Lua code analysis. For example, please see metalint, the tool for global variable usage analysis.
Please do not hesitate to post your question on Metalua mailing list. People there are usually very helpful.
There is also lua-inspect, which is based on metalua that was already mentioned. I've integrated it into ZeroBrane Studio IDE, which generates an output very similar to what you'd expect. See this SO answer for details: https://stackoverflow.com/a/11789348/1442917.
For checking globals, see this lua-l posting. Checking locals is harder.
You need to find a parser for Lua (one should be available as open source) and use it to parse the script into a proper AST. Use that tree together with a simple variable-visibility tracker to find out when a variable is or isn't defined.
Usually the scoping rules are simple:
start with the top AST node and an empty scope
look at the child statements for that node. Every variable declaration should be added to the current scope.
if a new scope starts (in Lua, for example via a do ... end block or a function body), create a new variable scope inheriting the variables of the current scope.
when a scope ends (at the matching end), discard the current child scope and return to the parent.
Iterate carefully.
This will tell you which variables are visible where inside the AST. If you also inspect the expression AST nodes (reads/writes of variables), you can derive the information you are after, as in the sketch below.
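A minimal sketch of that idea in Python, over a hypothetical toy AST (the node shapes are made up for illustration; a real tool would walk the AST produced by an actual Lua parser):

# Toy AST nodes (a real Lua parser would produce richer ones):
#   ('block', [children])   -- a new lexical scope (do ... end, function body)
#   ('local', name, line)   -- local variable declaration
#   ('use', name, line)     -- variable read

def analyze(node, scopes):
    """Walk the toy AST, tracking the declared names per scope."""
    warnings = []
    kind = node[0]
    if kind == 'block':
        scopes.append(set())          # a new scope starts
        for child in node[1]:
            warnings += analyze(child, scopes)
        scopes.pop()                  # the scope ends: drop its locals
    elif kind == 'local':
        scopes[-1].add(node[1])       # declare in the current scope
    elif kind == 'use':
        # visible if declared in this scope or any enclosing one
        if not any(node[1] in s for s in scopes):
            warnings.append("script.lua(%d): '%s' may not be defined"
                            % (node[2], node[1]))
    return warnings

# The example from the question: local a; print(b)
tree = ('block', [
    ('local', 'a', 1),
    ('use', 'print', 2),
    ('use', 'b', 2),
])
# Seed the outer scope with known globals so 'print' is not flagged.
for w in analyze(tree, [{'print'}]):
    print('warning: ' + w)   # -> warning: script.lua(2): 'b' may not be defined

The unused-local warning from the question would additionally require recording whether each declared name is ever read before its scope ends; that bookkeeping is omitted here.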
I just started using luacheck and it is excellent!
The first release was from 2015.