Could not reload changes to file [C:\...\UserController.groovy]: Cannot get property 'instanceControllerTagLibraryApi' on null object - grails

Environment:
- Windows 8
- IntelliJ IDEA 14.14
- Grails 2.5.1
- JDK 8u51
*** Using IntelliJ IDEA:
1) I create a new project.
2) I add a simple controller, "User", with a property "String name".
3) I run the application.
4) I make an easy modification to the controller: when I create a new User, "Hello!" should be printed on the console.
5) After the change, this appears on the console:
"08/04/2015 12:43:51.315 [Thread-9] ERROR plugins.AbstractGrailsPluginManager - Plugin [controllers:2.5.1] Could not reload changes to file [C:\Users\Ivan Velazquez\IdeaProjects\Demo\grails-app\controllers\demo\UserController.groovy]: Cannot get property 'instanceControllerTagLibraryApi' on null object"
6) The change was not applied: when I create a new User, no "Hello!" appears on the console.
*** Using the Windows console:
The error is different:
"08/04/2015 12:43:51.315 [Thread-9] ERROR plugins.AbstractGrailsPluginManager - Plugin [controllers:2.5.1] Could not reload changes to file [C:\Users\Ivan Velazquez\IdeaProjects\Demo\grails-app\controllers\demo\UserController.groovy]: Cannot invoke method getPropertyValue() on null object"
I have searched for this error in several forums but cannot find a solution.
Thank You!

This is a bug with Grails on Windows: it thinks it failed to reload the controller when you make changes to it. In most cases the controller has actually been updated and reloaded, but the error is still shown. If your changes are not picked up, you can restart the Grails server to see them.


Eventarc triggers for cross-project

I have created a Cloud Run service, but my Eventarc trigger is not firing cross-project to read the data. How do I set the event filter for resourceName in Eventarc, using InsertJob / job completed, to trigger on a BigQuery table?
gcloud eventarc triggers create ${SERVICE}-test1 \
--location=${REGION} --service-account ${SVC_ACCOUNT} \
--destination-run-service ${SERVICE} \
--destination-run-region=${REGION} \
--event-filters type=google.cloud.audit.log.v1.written \
--event-filters methodName=google.cloud.bigquery.v2.JobService.InsertJob \
--event-filters serviceName=bigquery.googleapis.com \
--event-filters-path-pattern resourceName="/projects/destinationproject/locations/us-central1/jobs/*"
I have tried multiple options for giving the resource name, like:
"projects/projectname/datasets/outputdataset/tables/outputtable"

How to use a custom dataset for T5X?

I've created a custom seqio task and added it to the TaskRegistry, following the instructions in the documentation. When I set the gin parameters to account for the new task I've created, I receive an error saying that my task does not exist.
No Task or Mixture found with name [my task name]. Available:
Am I using the correct Mixture/Task module that needs to be imported? If not, what is the correct statement that would allow me to use my custom task?
--gin.MIXTURE_OR_TASK_MODULE=\"t5.data.tasks\"
Here is the full eval script I am using.
python3 t5x/eval.py \
--gin_file=t5x/examples/t5/t5_1_0/11B.gin \
--gin_file=t5x/configs/runs/eval.gin \
--gin.MIXTURE_OR_TASK_NAME=\"task_name\" \
--gin.MIXTURE_OR_TASK_MODULE=\"t5.data.tasks\" \
--gin.partitioning.PjitPartitioner.num_partitions=8 \
--gin.utils.DatasetConfig.split=\"test\" \
--gin.DROPOUT_RATE=0.0 \
--gin.CHECKPOINT_PATH=\"${CHECKPOINT_PATH}\" \
--gin.EVAL_OUTPUT_DIR=\"${EVAL_OUTPUT_DIR}\"

Plastic SCM Branch Visual Bug

I was trying to delete an empty branch that was accidentally created when, to my surprise, it said that this weird little branch had a child! That was strange, because visually it did not appear to have a child; it didn't even have any changesets in it. I looked into its children, and somehow a branch that was split off from this empty branch's parent had become the child of the empty branch! Is there a way to move this child back to the parent of the empty branch, where it belongs, or should I just resign myself to having a weird outlier?
What branch explorer visually says I have:

----main branch---|ch1|---------|ch2|--------------------|ch5|
       |                           \                    /
       |                            |ch3|---real branch---|ch4|
       |
       \----empty branch---

What I actually have:

----main branch---|ch1|----------------------|ch2|--------------------|ch5|
       \                                                             /
        \----empty branch---                                        /
                   \                                               /
                    |ch3|---real branch---|ch4|-------------------/

What I want:

----main branch---|ch1|---------|ch2|--------------------|ch5|
                                   \                    /
                                    |ch3|---real branch---|ch4|

Need to make POST request through robot framework

I am new to Robot Framework and I have a requirement to make a POST request through Robot Framework. I am able to successfully run the POST request through the Postman tool. Below is the curl command I generated through Postman:
curl -X POST \
http://ip:port/ai/data/upload \
-H 'content-type: multipart/form-data' \
-F 'fileData=@C:\Users\xyz\Desktop\report.html' \
-F clientId=client \
-F subClientId=test \
-F fileType=compliance
Can somebody help me out with the equivalent of the above curl request in Robot Framework?
As Alex suggested, you could have a look at
https://github.com/bulkan/robotframework-requests
Alternatively:
store the command (curl ...) in a command.sh file and then execute this command.sh file through the Process Library: http://robotframework.org/robotframework/2.8.6/libraries/Process.html
Code:
*** Settings ***
Library    Process

*** Variables ***

*** Test Cases ***
Test Requests
    Test Process

*** Keywords ***
Test Process
    ${handle}=    Start Process    command.sh    # make sure your robot file and command.sh are in the same directory, or give the path to the sh file
Use Robot RequestsLibrary.
Here is a related link for the multipart file upload case:
https://github.com/bulkan/robotframework-requests/issues/131
Try out this:
*** Settings ***
Library    RequestsLibrary
Library    OperatingSystem

*** Test Cases ***
Test One
    &{data}=    Create Dictionary    foo=bar
    Create File    foobar    content=foobarcontent
    &{files}=    Evaluate    {'foofile': open('foobar')}
    ${resp}=    Evaluate
    ...    requests.post('http://localhost:8000/foo', data=$data, files=$files)
    ...    modules=requests
    Log    ${resp}    console=${TRUE}

Neo4j duplicate input id exception

I am new to Neo4j and I am trying to construct a bitcoin transaction graph with it. I am following behas/bitcoingraph to do so, and I came across the neo4j-import command to create a database:
$NEO4J_HOME/bin/neo4j-import --into $NEO4J_HOME/data/graph.db \
--nodes:Block blocks_header.csv,blocks.csv \
--nodes:Transaction transactions_header.csv,transactions.csv \
--nodes:Output outputs_header.csv,outputs.csv \ .......
After executing the above command, I encountered an error:
Exception in thread "Thread-1" org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.DuplicateInputIdException: Id '00000000f079868ed92cd4e7b7f50a5f8a2bb459ab957dd5402af7be7bd8ea6b' is defined more than once in Block, at least at /home/nikhil/Desktop/Thesis/bitcoingraph/blocks_0_1000/blocks.csv:409 and /home/nikhil/Desktop/Thesis/bitcoingraph/blocks_0_1000/blocks.csv:1410
Here is the blocks_header.csv:
hash:ID(Block),height:int,timestamp:int
Does anyone know how to fix it? I read there is a solution available with ID spaces, but I am not quite sure how to use it. Thanks in advance for any help.
The --skip-duplicate-nodes flag will skip importing nodes with the same ID instead of aborting the import.
For example:
$NEO4J_HOME/bin/neo4j-import --into $NEO4J_HOME/data/graph.db \
--nodes:Block blocks_header.csv,blocks.csv --skip-duplicate-nodes \
--nodes:Transaction transactions_header.csv,transactions.csv \
--nodes:Output outputs_header.csv,outputs.csv \ .......
