I am using Neo4j in embedded mode, so for some operations on the database on the server I am trying to execute a Groovy script. The Groovy script runs successfully without any errors, but it does not create any new records when I check with the neo4j-community tool.
Script
/**
* Created by prabjot on 7/1/17.
*/
@Grab(group="org.neo4j", module="neo4j-kernel", version="2.3.6")
@Grab(group="org.neo4j", module="neo4j-lucene-index", version="2.3.6")
@Grab(group='org.neo4j', module='neo4j-shell', version='2.3.6')
@Grab(group='org.neo4j', module='neo4j-cypher', version='2.3.6')
import org.neo4j.graphdb.factory.GraphDatabaseFactory
import org.neo4j.graphdb.Node
import org.neo4j.graphdb.Result
import org.neo4j.graphdb.Transaction
class Neo4jEmbeddedAccess {
    public static void main(String[] args) {
        def map = [:]
        map.put("allow_store_upgrade", "true")
        map.put("remote_shell_enabled", "true")
        def db = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder("/opt/neo4j-community-3.0.4/data/databases/graph.db")
                .setConfig(map)
                .newGraphDatabase()

        Transaction tx = db.beginTx()
        Node person = db.createNode()
        person.setProperty("name", "prabjot")
        print("id---->" + person.id)

        Result result = db.execute("Match (country:Country) where id(country)=73 SET country.modified=true return country")
        print(result)
        tx.success()

        println """starting embedded graph db
use bin/neo4j-shell from a new distribution to connect
we're keeping the graphdb open for 120 secs"""
        db.shutdown()
    }
}
Please help me figure out what I am doing wrong here. I have checked my db location and it is the same one I am using in both the script and the tool.
Thanks
You forgot tx.close(), which commits the transaction.
success() only marks it as successful.
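For example, a minimal sketch of the fix, using the same db variable as in your script, would be:

Transaction tx = db.beginTx();
try {
    Node person = db.createNode();
    person.setProperty("name", "prabjot");
    tx.success();   // mark the transaction as successful
} finally {
    tx.close();     // actually commits (or rolls back if success() was not called)
}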
According to the Beam website,
Often it is faster and simpler to perform local unit testing on your
pipeline code than to debug a pipeline’s remote execution.
I want to use test-driven development for my Beam/Dataflow app that writes to Bigtable for this reason.
However, following the Beam testing documentation I hit an impasse: PAssert isn't useful because the output PCollection contains org.apache.hadoop.hbase.client.Put objects, which don't override the equals method.
I can't get the contents of the PCollection to do validation on them either, since
It is not possible to get the contents of a PCollection directly - an
Apache Beam or Dataflow pipeline is more like a query plan of what
processing should be done, with PCollection being a logical
intermediate node in the plan, rather than containing the data.
So how can I test this pipeline, other than manually running it? I'm using Maven and JUnit (in Java since that's all the Dataflow Bigtable Connector seems to support).
The Bigtable Emulator Maven plugin can be used to write integration tests for this:
Configure the Maven Failsafe plugin and change your test case's ending from *Test to *IT to run as an integration test.
Install the Bigtable Emulator from the gcloud SDK on the command line:
gcloud components install bigtable
Note that this required step is going to reduce code portability (e.g. will it run on your build system? On other devs' machines?) so I'm going to containerize it using Docker before deploying to the build system.
Add the emulator plugin to the pom per the README
Use the HBase Client API and see the example Bigtable Emulator integration test to set up your session and table(s).
Write your test as normal per the Beam documentation, except instead of using PAssert actually call CloudBigtableIO.writeToTable and then use the HBase Client to read the data from the table to verify it.
Here's an example integration test:
package adair.example;
import static org.apache.hadoop.hbase.util.Bytes.toBytes;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.hamcrest.collection.IsIterableContainingInAnyOrder;
import org.junit.Assert;
import org.junit.Test;
import com.google.cloud.bigtable.beam.CloudBigtableIO;
import com.google.cloud.bigtable.beam.CloudBigtableTableConfiguration;
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
/**
* A simple integration test example for use with the Bigtable Emulator maven plugin.
*/
public class DataflowWriteExampleIT {

  private static final String PROJECT_ID = "fake";
  private static final String INSTANCE_ID = "fakeinstance";
  private static final String TABLE_ID = "example_table";
  private static final String COLUMN_FAMILY = "cf";
  private static final String COLUMN_QUALIFIER = "cq";

  private static final CloudBigtableTableConfiguration TABLE_CONFIG =
      new CloudBigtableTableConfiguration.Builder()
          .withProjectId(PROJECT_ID)
          .withInstanceId(INSTANCE_ID)
          .withTableId(TABLE_ID)
          .build();

  public static final List<String> VALUES_TO_PUT = Arrays
      .asList("hello", "world", "introducing", "Bigtable", "plus", "Dataflow", "IT");

  @Test
  public void testPipelineWrite() throws IOException {
    try (Connection connection = BigtableConfiguration.connect(PROJECT_ID, INSTANCE_ID)) {
      Admin admin = connection.getAdmin();
      createTable(admin);

      List<Mutation> puts = createTestPuts();

      // Use Dataflow to write the data--this is where you'd call the pipeline you want to test.
      Pipeline p = Pipeline.create();
      p.apply(Create.of(puts)).apply(CloudBigtableIO.writeToTable(TABLE_CONFIG));
      p.run().waitUntilFinish();

      // Read the data from the table using the regular hbase api for validation
      ResultScanner scanner = getTableScanner(connection);
      List<String> resultValues = new ArrayList<>();
      for (Result row : scanner) {
        String cellValue = getRowValue(row);
        System.out.println("Found value in table: " + cellValue);
        resultValues.add(cellValue);
      }

      Assert.assertThat(resultValues,
          IsIterableContainingInAnyOrder.containsInAnyOrder(VALUES_TO_PUT.toArray()));
    }
  }

  private void createTable(Admin admin) throws IOException {
    HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(TABLE_ID));
    tableDesc.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
    admin.createTable(tableDesc);
  }

  private ResultScanner getTableScanner(Connection connection) throws IOException {
    Scan scan = new Scan();
    Table table = connection.getTable(TableName.valueOf(TABLE_ID));
    return table.getScanner(scan);
  }

  private String getRowValue(Result row) {
    return Bytes.toString(row.getValue(toBytes(COLUMN_FAMILY), toBytes(COLUMN_QUALIFIER)));
  }

  private List<Mutation> createTestPuts() {
    return VALUES_TO_PUT
        .stream()
        .map(this::stringToPut)
        .collect(Collectors.toList());
  }

  private Mutation stringToPut(String cellValue) {
    String key = UUID.randomUUID().toString();
    Put put = new Put(toBytes(key));
    put.addColumn(toBytes(COLUMN_FAMILY), toBytes(COLUMN_QUALIFIER), toBytes(cellValue));
    return put;
  }
}
In Google Cloud you can easily do e2e testing of your Dataflow pipeline using real cloud resources like Pub/Sub topics and BigQuery tables.
By using the JUnit 5 Extension Model (https://junit.org/junit5/docs/current/user-guide/#extensions) you can create custom classes that handle the creation and deletion of the resources your pipeline requires.
You can find a demo/seed project here https://github.com/gabihodoroaga/dataflow-e2e-demo and a blog post here https://hodo.dev/posts/post-31-gcp-dataflow-e2e-tests/.
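As a rough sketch (not taken from the linked demo project, and assuming the google-cloud-pubsub client library is on the test classpath), such an extension could provision and tear down a Pub/Sub topic around a test class like this; the project ID and topic prefix are placeholders:

import com.google.cloud.pubsub.v1.TopicAdminClient;
import com.google.pubsub.v1.TopicName;
import org.junit.jupiter.api.extension.AfterAllCallback;
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hypothetical JUnit 5 extension: creates a throwaway Pub/Sub topic before the
// test class runs and deletes it afterwards.
public class PubSubTopicExtension implements BeforeAllCallback, AfterAllCallback {

  private static final String PROJECT_ID = "my-test-project"; // placeholder
  private final String topicId = "e2e-test-" + System.currentTimeMillis();

  @Override
  public void beforeAll(ExtensionContext context) throws Exception {
    try (TopicAdminClient admin = TopicAdminClient.create()) {
      admin.createTopic(TopicName.of(PROJECT_ID, topicId));
    }
  }

  @Override
  public void afterAll(ExtensionContext context) throws Exception {
    try (TopicAdminClient admin = TopicAdminClient.create()) {
      admin.deleteTopic(TopicName.of(PROJECT_ID, topicId));
    }
  }

  public String getTopicId() {
    return topicId;
  }
}

A test class would then register the extension, e.g. with @RegisterExtension, and point the pipeline under test at the generated topic.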
I am totally new to Jira. In fact I don't even know where to start. I went to the Jira Atlassian website but found nothing solid enough to help me. I would like to validate whether the information entered into a textbox already exists. I clicked around Jira and ended up on the screen below:
Now I would like to find out the following:
Which programming language should be used for validation? Is it Java?
If the name of the custom field (of type Textbox) is XYZ and I want to check whether the value entered into XYZ already exists, how do I go about doing that? Can I just write conditional statements in Java?
I wrote some stuff and nothing worked.
That's a screenshot from the Script Runner add-on.
There is documentation with examples for custom validators here.
You can also find an example here that shows how to query the JIRA (or an external) database from a Groovy script, i.e.:
import com.atlassian.jira.component.ComponentAccessor
import groovy.sql.Sql
import org.ofbiz.core.entity.ConnectionFactory
import org.ofbiz.core.entity.DelegatorInterface
import java.sql.Connection
def delegator = (DelegatorInterface) ComponentAccessor.getComponent(DelegatorInterface)
String helperName = delegator.getGroupHelperName("default");
def sqlStmt = """
SELECT project.pname, COUNT(*) AS kount
FROM project
INNER JOIN jiraissue ON project.ID = jiraissue.PROJECT
GROUP BY project.pname
ORDER BY kount DESC
"""
Connection conn = ConnectionFactory.getConnection(helperName);
Sql sql = new Sql(conn)
try {
    StringBuffer sb = new StringBuffer()
    sql.eachRow(sqlStmt) {
        sb << "${it.pname}\t${it.kount}\n"
    }
    log.debug sb.toString()
}
finally {
    sql.close()
}
For anything that gets a bit complex, it's easier to implement your script in a Groovy file and make it available to Script Runner via the file system. That also allows you to use a VCS like git to easily push/pull your changes. More info about how to go about that is here.
I want to use dynamic scheduling feature of Grails quartz plugin.
I am running grails 2.3.5 and the quartz plugin (quartz:1.0.2).
I am able to persist the quartz information to my mysql database and I am able to run normal quartz Jobs.
The problem is scheduling tasks dynamically. I am not getting this to work.
Here is my setup and what I am trying to do:
I have a simple Job in "grails-app/tao/marketing/MarketingJob" which looks like this:
package tao.marketing
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
class MarketingJob {

    static triggers = {}

    def execute(JobExecutionContext context) {
        try {
            def today = new Date()
            println today
        }
        catch (Throwable e) {
            throw new JobExecutionException(e.getMessage(), e);
        }
    }
}
Which I now try to schedule dynamically from a Service.
package tao
import grails.transaction.Transactional
import tao.marketing.CampaignSchedule
import tao.Person
import jobs.tao.marketing.*
class ScheduleService {

    def scheduleMarketingForPerson(CampaignSchedule campaignSchedule, Person person) {
        log.info("Schedule new Marketing for: " + person.last_name)
        campaignSchedule.scheduleActions.each {
            Date today = new Date()
            Date scheduleDate = today + it.afterXdays
            log.info("ScheduleAction: " + it.id + ": " + scheduleDate)
            MarketingJob.schedule(scheduleDate, ["scheduleActions.id": it.id, "person.apiKey": person.apiKey])
        }
    }
}
In my IDE (STS) MarketingJob cannot be found.
MarketingJob.schedule(scheduleDate, ["scheduleActions.id":it.id, "person.apiKey":person.apiKey])
How do I correctly import the MarketingJob?
Do I understand the dynamic scheduling feature correctly?
Could it be that your job is in package tao.marketing while your import is import jobs.tao.marketing.*? I mean, the import starts with jobs.
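Assuming the job class really is in the tao.marketing package shown above, the import in ScheduleService would simply be:

import tao.marketing.MarketingJob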
The problem I had was that in my STS IDE I didn't have the jobs directory marked as a code directory. Thanks for all your comments.
I am using Neo4j BatchInserters to insert nodes in the db. I am using LuceneBatchInserterIndexProvider for indexes. I have multiple files from which I am importing the data. If my process breaks, I want to be able to restart the process from the next file. But whenever I restart the process it creates a new db in the graph folder and new indexes. My initialization code looks like this:
Map<String, String> config = new HashMap<String, String>();
config.put("neostore.nodestore.db.mapped_memory", "2G");
config.put("batch_import.keep_db", "true");
BatchInserter db = BatchInserters.inserter("ttl.db", config);
BatchInserterIndexProvider indexProvider = new LuceneBatchInserterIndexProvider(db);
index = indexProvider.nodeIndex("ttlIndex", MapUtil.stringMap("type", "exact"));
index.setCacheCapacity(URI_PROPERTY, indexCache + 1);
Can somebody please help here?
To provide more details: I have multiple files (around 400) which I want to import into Neo4j.
I want to divide my process into batches, and after every batch I want to restart the process.
I used the Neo4j batch inserter config batch_import.keep_db = "true". This does not clear the graph, but after a restart the indexer has lost its information. I have this method to check for node existence, and I am sure I created the node before the restart:
private Long getNode(String nodeUrl) {
    IndexHits<Long> hits = index.get(URI_PROPERTY, nodeUrl);
    if (hits.hasNext()) { // node exists
        return hits.next();
    }
    return null;
}
I am able to create nodes and relationships through Java on a Neo4j database. When I try to access the created nodes in the next run I get this error:
Exception in thread "main" org.neo4j.graphdb.NotFoundException: Node 27 not found
In the webadmin interface the dashboard shows the number of nodes/relationships created through Java, but when I issue this query: START n=node(*) RETURN n; I get only 1 node in the output.
(FYI, I have installed Neo4j on my Windows machine (local) and am using embedded database Java code to create nodes.)
Java code I used to connect to db:
final String dbpath = "C:\\neo4j-community-1.9.4\\data\\graph.db";
GraphDatabaseService graphdb = new GraphDatabaseFactory().newEmbeddedDatabase(dbpath);
The settings I have used in ne04j-server.properties are:
org.neo4j.server.database.location=/C:/neo4j-community-1.9.4/data/graph.db/
org.neo4j.server.webserver.https.keystore.location=data/keystore
org.neo4j.server.webadmin.rrdb.location=data/rrd
org.neo4j.server.webadmin.data.uri=/C:/neo4j-community-1.9.4/data/graph.db/
org.neo4j.server.webadmin.management.uri=/db/manage/
When I create a node through Java the data/keystore file does not get populated; it only gets populated when creating a node through the webadmin interface. Changing the path of the keystore file to an absolute path also did not work.
Can anybody point out the mistake in this scenario? Thanks.
The problem was that the created nodes were not committed. To commit the nodes we have to call finish():
final String dbpath = "/C:/neo4j-community-1.9.4/data/graph.db/";
GraphDatabaseService graphdb = new GraphDatabaseFactory().newEmbeddedDatabase(dbpath);

Transaction tx = graphdb.beginTx();
try {
    Node n1 = graphdb.createNode();
    n1.setProperty("type", "company");
    n1.setProperty("location", "india");
    ....
    ...
    tx.success();
} catch (Exception e) {
    tx.failure();
} finally {
    tx.finish();
}
Ranjith's answer was correct until recently, but tx.finish() has now been deprecated.
tx.close(); is now the correct way to commit or rollback the transaction - it will do one or the other depending on whether you've previously called tx.success().
They changed this so the transaction is AutoCloseable and can be used in a try-with-resources block.
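For reference, a minimal sketch of the newer pattern (assuming graphdb is your GraphDatabaseService) looks like this:

// Transaction is AutoCloseable in newer Neo4j versions: close() runs automatically
// at the end of the block and commits if success() was called, otherwise rolls back.
try (Transaction tx = graphdb.beginTx()) {
    Node company = graphdb.createNode();
    company.setProperty("type", "company");
    company.setProperty("location", "india");
    tx.success();
}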
Have you tried:
String dbpath = "C:/neo4j-community-1.9.4/data/graph.db";