I'm currently training models with AllenNLP 1.2 using the commands API:
allennlp train -f --include-package custom-exp /usr/training_config/mock_model_config.jsonnet -s test-mock-out
I'm trying to execute a forward pass on a test dataset after training is completed. I know how to add an epoch_callback, but am not sure about the syntax for the end_callback.
In my config.json, I have the following:
{
    ...
    "trainer": {
        ...
        "epoch_callbacks": [{"type": "log_metrics_to_wandb"}]
    }
    ...
}
I've tried:
"end_callback": [{"type": 'my_custom_function',},]
but got an illegal argument error. I'm also not sure how to specify the exact custom function and make the trainer aware of it.
I think you can create a new callback class that inherits from TrainerCallback, override the on_end method, and register it the same way as log_metrics_to_wandb above; then it should work as expected.
Here is a slightly more complete example for people who are as lost in AllenNLP as I was; this worked for me:
Define the callback, register it, and override whichever method you want to hook into:
from allennlp.training.callbacks.callback import TrainerCallback

@TrainerCallback.register("log_metrics_to_wandb")
class LogMetricCallback(TrainerCallback):
    def on_end(self, trainer, metrics, epoch, is_primary=True, **kwargs):
        ...
And add it in the config file under trainer -> callbacks
{
    ...
    "trainer": {
        ...
        "callbacks": [{"type": "log_metrics_to_wandb"}]
    }
    ...
}
I tested it with version 2.4.0, but according to the documentation it should not have changed much.
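For the original goal of running a forward pass over a test dataset once training finishes, a minimal sketch of such a callback might look like the following. The registered name my_custom_function is taken from the question's config, and the commented-out evaluation line is an assumption; check the TrainerCallback signatures for your AllenNLP version.

from allennlp.training.callbacks.callback import TrainerCallback

@TrainerCallback.register("my_custom_function")
class RunForwardPassOnEnd(TrainerCallback):
    def on_end(self, trainer, metrics=None, epoch=None, is_primary=True, **kwargs):
        if not is_primary:
            return  # in distributed training, evaluate only once
        model = trainer.model
        model.eval()  # disable dropout etc. before the forward pass
        # Build or load your test instances here (how depends on your
        # dataset reader), then run them through the trained model, e.g.:
        # outputs = model.forward_on_instances(test_instances)

It is then referenced in the config under trainer -> callbacks as {"type": "my_custom_function"}.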
I have Grails 2.4.4 and Cobertura for test coverage.
I have code like:
lstPerspectives = Perspectives.findAllByDbAndSysDelete(dbInstance, new Long(0))
But Cobertura doesn't pass the test because it doesn't query my DB. How can I get this line to pass? How can I override this value? I pass in lstPerspectives, but it isn't picked up.
Thanks
Try something like the following:
import grails.test.mixin.Mock
import grails.test.mixin.TestFor
import spock.lang.Specification

@TestFor(Perspectives)
@Mock([Perspectives])
class PerspectivesSpec extends Specification {

    void "test Perspectives"() {
        given:
        def dbInstance = 'aDbInstance' // don't know what this is
        def sysDelete = false          // is this a boolean?
        new Perspectives( dbInstance: dbInstance, sysDelete: sysDelete ).save( failOnError: true )

        when:
        // run your bit of code that executes the snippet in your question

        then:
        // check your desired outcome
    }
}
I don't know whether you are testing your Perspectives class directly here or something else (a controller? a service?), so I had to make a few assumptions.
I have the following setup:
one es-docker (live)
one es-docker (working)
I want the working docker container to run some data changes and save them in Elasticsearch (these changes will run over a few hours).
After the changes are done, I want to copy the working container (with all its data) over the live container.
That way I can run the changes over several hours without downtime on live (or with minimal downtime).
But I don't know how to "copy" the original container including all its data.
Thank you for your hints.
The Elasticsearch Definitive Guide outlines a process to achieve zero downtime for use cases like yours, making use of Index Aliases.
The idea is to create an Index Alias that your applications will always use to access the live data.
Given an alias named "alias1" that is pointing to an index named "index1", perform the following steps:
Create a new index, named "index2"
Run your batch indexing process
Swap "alias1" to point to "index2"
Clean up "index1"
The alias swapping occurs in a single call, and Elasticsearch performs the action atomically, giving you the zero downtime you desire. The call looks something like this:
POST /_aliases
{
    "actions" : [
        { "remove" : { "index" : "index1", "alias" : "alias1" } },
        { "add" : { "index" : "index2", "alias" : "alias1" } }
    ]
}
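If the alias does not exist yet, you first create the new index and point the alias at the current live index. A minimal sketch, using the placeholder names from above:

PUT /index2

POST /_aliases
{
    "actions" : [
        { "add" : { "index" : "index1", "alias" : "alias1" } }
    ]
}

From then on, your applications query alias1 only, so the later swap is invisible to them.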
In my Test, I have some feature methods that only need to run in certain situations. My code looks something like this:
class MyTest extends GebReportingSpec {

    def "Feature method 1"() {
        when:
        blah()
        then:
        doSomeStuff()
    }

    def "Feature method 2"() {
        if (someCondition) {
            when:
            blah()
            then:
            doSomeMoreStuff()
        }
    }

    def "Feature method 3"() {
        when:
        blah()
        then:
        doTheFinalStuff()
    }
}
I should note that I am using a custom Spock extension that allows me to run all feature methods of a spec even if a previous feature method fails.
What I just realized, and the reason I am making this post, is that "Feature method 2" does not show up in my test results, while methods 1 and 3 do. Even when someCondition is set to true, it does not appear in the build results. So I am wondering why this is, and how I can make this feature method conditional.
Spock has special support for conditionally executing features; take a look at @IgnoreIf and @Requires.
@IgnoreIf({ os.windows })
def "I'll run everywhere but on Windows"() { ... }
You can also use static methods in the condition closure; they need to be referenced by their qualified name.
class MyTest extends GebReportingSpec {
    @Requires({ MyTest.myCondition() })
    def "I'll only run if myCondition() returns true"() { ... }

    static boolean myCondition() { true }
}
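Applied to the spec in the question, "Feature method 2" could look like the sketch below; it assumes someCondition is made static (or otherwise reachable from the closure) so that @Requires can evaluate it:

class MyTest extends GebReportingSpec {
    static boolean someCondition = true // assumption: hoisted to a static field

    @Requires({ MyTest.someCondition })
    def "Feature method 2"() {
        when:
        blah()
        then:
        doSomeMoreStuff()
    }
}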
Your test is not appearing in the report because you can't have the given/when/then blocks inside a conditional.
You should always run the test, but allow it to fail gracefully:
Use the @FailsWith annotation: http://spockframework.org/spock/javadoc/1.0/spock/lang/FailsWith.html
@FailsWith(value = SpockAssertionError, reason = "Feature is not enabled")
def "Feature method 2"() {
    when:
    blah()
    then:
    doSomeMoreStuff()
}
It is important to note that this test will be reported as passed when it fails with the specified exception, and it will also be reported as passed if the feature is enabled and the test actually passes.
To fix this, I simply put a when/then block with a 10 ms sleep before the if statement, and now that feature method is executed.
I want to test that the organizer interactor below calls the two specified interactors (SaveRecord, PushToService) without executing their code.
class Create
  include Interactor::Organizer

  organize SaveRecord, PushToService
end
I found a few examples where the overall result of all the interactors' logic (the record should be saved and pushed to the other service) is tested. But I don't want to execute the other interactors' logic, as they will be tested in their own separate specs.
1. Is it possible to do so?
2. Which way of testing (testing the overall result vs. testing only this particular organizer's behavior) is better practice?
I believe we need to test the organizer for its included interactors without executing them. I was able to stub and test the organizer with the lines below.
To Stub:
allow(SaveRecord).to receive(:call!) { :success }
allow(PushToService).to receive(:call!) { :success }
To Test:
it { expect(interactor).to be_kind_of(Interactor::Organizer) }
it { expect(described_class.organized).to eq([SaveRecord, PushToService]) }
I found the call! method and the organized variable in the Interactor::Organizer source files, where they are called and used internally. Stubbing the call! method and testing the organized variable fulfilled my requirement.
You can also test that they are called, and in the right order:
it 'calls the interactors' do
  expect(SaveRecord).to receive(:call!).ordered
  expect(PushToService).to receive(:call!).ordered

  described_class.call
end
See: https://relishapp.com/rspec/rspec-mocks/docs/setting-constraints/message-order
Just iterating on @prem's answer.
To Test:
it { expect(interactor).to be_kind_of(Interactor::Organizer) }
it { expect(described_class.organized).to eq([SaveRecord, PushToService]) }
interactor in this case is an instance of the described interactor class, or in RSpec syntax:
let(:interactor) { described_class.new }
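Putting the stubbing and the assertions together, the whole spec might look like this sketch (the stubbed return values are the ones from the answer above):

RSpec.describe Create do
  let(:interactor) { described_class.new }

  before do
    # stub the organized interactors so their own logic never runs
    allow(SaveRecord).to receive(:call!) { :success }
    allow(PushToService).to receive(:call!) { :success }
  end

  it { expect(interactor).to be_kind_of(Interactor::Organizer) }
  it { expect(described_class.organized).to eq([SaveRecord, PushToService]) }
end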
I am getting the following error when including the Build mixin in unit tests:
TestDataConfig.groovy not found, build-test-data plugin proceeding without config file
It works like a charm in the integration tests but not in the unit tests. I mean, the 'build' plugin itself works in unit tests, but TestDataConfig is not populating default values.
Thank You
First, you should verify the build-test-data version in your BuildConfig.groovy:
test ":build-test-data:2.0.3"
Second, check your test. If you want to build objects you need:
import grails.buildtestdata.mixin.Build
...

@TestFor(TestingClass)
@Build([TestingClass, SupportClass, AnotherClass])
class TestingClassTest {

    @Test
    void testMethod() {
        def tc1 = TestingClass.build()
        def sc1 = SupportClass.build()
        def ac1 = AnotherClass.build()
    }
}
Third, check the domain constraints: you could have property validations such as unique that fail when you build two instances. You need to set those properties in code:
def tc1 = TestingClass.build(uniqueProperty: 'unique')
def tc2 = TestingClass.build(uniqueProperty: 'special')
I guess the dependency scope should be:
test ":build-test-data:2.0.3"
since it is just used for testing, right?
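It may also help to confirm that a TestDataConfig.groovy actually exists under grails-app/conf, since the error says it was not found. A minimal sketch of what it could look like; the domain class and property names are placeholders, so check the plugin's documentation for the exact structure:

// grails-app/conf/TestDataConfig.groovy
testDataConfig {
    sampleData {
        'com.example.TestingClass' {
            name = 'default name'
        }
    }
}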