How to create a table ordered by an index in Informix

How can I create a table that is kept in index order in Informix? That is, when I insert a new element, it is first ordered and then inserted. In Oracle I can do something like this:
ORGANIZATION INDEX
Is there an equivalent in Informix?

No, there is no equivalent feature in Informix.
The option you have is to reorder the physical rows based on one index from time to time, by setting that index to cluster with the ALTER INDEX statement. (Setting an index back to NOT CLUSTER is always very quick.)
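As a minimal sketch of that approach (the table and index names here are just placeholders):

-- an ordinary index on the column(s) you want the rows ordered by
CREATE INDEX ix_orders_custno ON orders (customer_num);

-- physically rewrite the table so the rows follow the index order
-- (needs exclusive access to the table)
ALTER INDEX ix_orders_custno TO CLUSTER;

-- dropping the cluster attribute later is a quick catalog-only change;
-- to re-sort the table again, set it back TO CLUSTER afterwards
ALTER INDEX ix_orders_custno TO NOT CLUSTER;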
But there are some limitations to using it. The main ones, in my opinion, are:
It cannot be done online; this means you need exclusive access to the table during the operation.
If the table is big, it can take a while.
Quoting the source: IBM® Informix® 12.10, Index-type options
CLUSTER option usage
You cannot specify the CLUSTER option and the ONLINE keyword in the
same statement. In addition, some secondary-access methods (such as
R-tree) do not support clustering. Before you specify CLUSTER for your
index, be sure that the index uses an access method that supports
clustering. The CREATE CLUSTER INDEX statement fails if a CLUSTER
index already exists on the same table.
CREATE CLUSTER INDEX c_clust_ix ON customer (zipcode);
This statement creates an index on the customer table and physically
orders the rows according to their postal code values, in (by default)
ascending order.
If the CLUSTER option is specified and fragments exist on the data,
values are clustered only within each fragment, and not globally
across the entire table. If the CREATE CLUSTER INDEX statement also
includes the COMPRESSED keyword as a storage option, the database
server issues error -26950. To create a cluster index that supports
compression requires two steps:
Use the CREATE CLUSTER INDEX statement to define a cluster index with no index compression.
Call the SQL administration API task( ) or admin( ) function with the 'index compress' argument to compress the existing cluster index.
You cannot use the CLUSTER option on a forest of trees index.
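As a rough sketch of those two steps (the index, database, and owner names are placeholders, and the exact argument order of the task() call is an assumption here; check the SQL administration API documentation):

-- step 1: create the cluster index without compression
CREATE CLUSTER INDEX c_clust_ix ON customer (zipcode);

-- step 2: compress the existing index through the SQL administration API
EXECUTE FUNCTION task("index compress", "c_clust_ix", "stores_demo", "informix");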

Related

Azure Search: Group By or Distinct in OData context / Query?

Using the Azure Search service, I need to be able to group by, or select distinct values of, a field in the query.
Use case:
My business model has the concept of "resources", each of which has one or more revisions. One revision is one document in an Azure index. I need to simulate something like "select the most recently changed resources from the index while also allowing pagination", so I need some way to group the documents in the index into resources and search by them.
Azure Search does not support operators like distinct or group-by directly in the query language. However, there are potentially other ways to achieve what you want.
One way would be to make each document in your index a resource instead of a revision of a resource. Then you could have a complex collection field to represent the revisions of each resource. There are a few potential caveats to this approach:
It doesn't scale well if there are many (e.g., thousands of) revisions per resource. In fact, there is a limit of 3000 complex objects across all collections in a document.
To add a new revision, you'd have to read-modify-write the entire collection of revisions, since Azure Search doesn't support intra-collection merges.
If the primary unit of querying is really revisions and not resources, then modeling revisions as documents is more natural. However, you can always have more than one index depending on the query patterns you need.
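For illustration, an index definition for the resource-as-document approach might look roughly like this (the field names are made up for the example; lastChanged is kept sortable at the top level so you can order and page over resources):

{
  "name": "resources",
  "fields": [
    { "name": "resourceId", "type": "Edm.String", "key": true },
    { "name": "lastChanged", "type": "Edm.DateTimeOffset", "sortable": true },
    { "name": "revisions", "type": "Collection(Edm.ComplexType)", "fields": [
      { "name": "revisionId", "type": "Edm.String" },
      { "name": "changedAt", "type": "Edm.DateTimeOffset" },
      { "name": "content", "type": "Edm.String" }
    ]}
  ]
}

Sorting and paging would then be done with $orderby=lastChanged desc together with $top and $skip.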
Another approach would be to add a Boolean field like IsLatestVersion, but then you'd need to set the flag to false on the previous revision whenever you add a new revision to your index. The approach above using complex types would probably be more straightforward.
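With the Boolean-flag approach, the query side would just be a filter, along these lines (field names again illustrative):

search=*&$filter=IsLatestVersion eq true&$orderby=lastChanged desc&$top=50&$skip=0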

Use of index in Neo4j

Suppose I have 2 types of nodes :Server and :Client.
(Client)-[:CONNECTED_TO]->(Server)
I want to find the Female clients connected to some Server ordered by age.
I did this
Match (s:Server{id:"S1"})<-[:CONNECTED_TO]-(c{gender:"F"}) return c order by c.age DESC
Doing this, all the Client nodes linked to my Server node are traversed to find the highest age.
Is there a way to index the Client nodes on gender and age properties to avoid the full scan?
You can create an index on :Client(gender), as follows:
CREATE INDEX ON :Client(gender);
However, your particular query will probably benefit more from creating an index on :Server(id), since there are probably a lot of female clients but probably only a single Server with that id. So, you probably want to do this instead:
CREATE INDEX ON :Server(id);
But, even better, if every Server has a unique id property, you should create a uniqueness constraint (which also automatically creates an index for you):
CREATE CONSTRAINT ON (s:Server) ASSERT s.id IS UNIQUE;
Currently, neo4j does not use indexes to perform ordering, but there are some APOC procedures that do support that. However, the procedures do not support returning results in descending order, which is what you want. So, if you really need to use indexing for this purpose, a workaround would be to add an extra minusAge property to your Client nodes that contains the negative value of the age property. If you do this, then first create an index:
CREATE INDEX ON :Client(minusAge);
and then use this query:
MATCH (s:Server{id:"S1"})<-[:CONNECTED_TO]-(cl:Client {gender:"F"})
CALL apoc.index.orderedRange('Client', 'minusAge', -200, 0, false, -1) YIELD node AS c
RETURN c;
The 3rd and 4th parameters of that procedure are for the minimum and maximum values you want to use for matching (against minusAge). The 5th parameter should be false for your purposes (and is actually currently ignored by the implementation). The last parameter is for the LIMIT value, and -1 means you do not want a limit.
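If you go that route, the extra property can be populated once, and maintained whenever a client's age changes, with something like:

MATCH (c:Client) WHERE exists(c.age)
SET c.minusAge = -c.age;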
If that is a request you're running quite frequently, then you might want to write that data out. As you're experiencing, that query can be quite expensive, and it won't get better as you get more clients, since all of the connected nodes have to be retrieved and have a property check/comparison run on them.
As a workaround, you can add another connection to your clients when you modify their age data.
As a suggestion, you can create an Age node and create a MATURE relationship to your oldest clients.
(:Server)<-[:CONNECTED_TO]-(:Client)-[:MATURE]->(:Age)
You can do this for all the ages, and run queries off the Age nodes (with an indexed/unique age property on them) as needed. If you have 100,000 clients but are only interested in the top 100 ordered by age, there's no need to get all the clients and order them... It really depends on your use case and client apps.
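As an illustration of that pattern (the property names and the client id used here are assumptions):

// link a client to its Age node when the age is set or updated
// (remember to delete the old MATURE relationship first if one exists)
MATCH (c:Client {id: "C42"})
MERGE (a:Age {value: 67})
MERGE (c)-[:MATURE]->(a);

// query off the Age nodes instead of sorting all clients, e.g.
// the 67-year-old clients connected to a given server
MATCH (a:Age {value: 67})<-[:MATURE]-(c:Client)-[:CONNECTED_TO]->(s:Server {id: "S1"})
RETURN c;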
While this is certainly not a nice pattern, I've seen it work in production, and it is the only workaround that I've seen do well in different production environments.
If this answer didn't solve your problem (I'd rather use an age property, too), I hope it gave you at least an idea on what to do next.

How to determine the Max property on a Relationship in Neo4j 2.2.3

How do you quickly get the maximum (or minimum) value of a property across all instances of a relationship? You can assume the machine I'm running this on is well within the recommended specs for the CPU and memory size of the graph, and the heap size is set accordingly.
Facts:
Using Neo4j v2.2.3
Only have access to modify the graph via the Cypher query language, which I'm hitting via PHP or in the web interface -- would love to avoid any solution that requires Java coding.
I've got a relationship, call it likes, that has a single property id which is an integer.
There are about 100 million of these relationships, and growing.
Every day I grab new likes from a MySQL table to add to the graph in Neo4j.
The relationship property id is actually the primary key (auto incrementing integer) from the raw MySQL table.
I only want to add new likes, so before querying MySQL for the new entries I want to get the max id from the likes already in the graph, so I can use it in my SQL query as SELECT * FROM likes_table WHERE id > max_neo4j_like_property_id
How can I accomplish getting the max id property from Neo4j in an optimal way? Please indicate the CREATE statement needed for any index as well as the query you'd use to get the final result.
I've tried creating an index as follows:
CREATE INDEX ON :likes(id);
After the index is online I've tried:
MATCH ()-[r:likes]-() RETURN r.id ORDER BY r.id DESC LIMIT 1
as well as:
MATCH ()-[r:likes]->() RETURN MAX(r.id)
They work but take freaking forever, as the explain plans for both indicate no indexes being used.
UPDATE: Holy $?##$?!!!! It looks like the new schema indexes aren't functional for relationships, even though you can create them and show them with :schema. It also looks as if there's no way with Cypher directly to create Legacy Indexes, which look like they might solve this issue.
If you need to query relationship properties, it is generally a sign of a model issue.
The need for this query suggests that you would be better off extracting these properties into a node, which you'll then be able to query faster.
I'm not saying it is 100% the case, but certainly 99% of the people seen so far with the same problem have been demonstrating this modelling concern.
What is your model right now?
Also, you don't use labels at all in your query; likes have a context bound to the nodes.
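As a hedged sketch of what such a remodel might look like (the labels and relationship types here are made up), the like itself becomes a node carrying the id, so finding the maximum becomes a scan over the :Like nodes instead of over every relationship in the graph:

// an index on :Like(id) helps exact-id lookups
// (in 2.2 it will not be used for max(), which still scans the :Like nodes)
CREATE INDEX ON :Like(id);

// each imported like becomes a node between the two things it connects
MATCH (u:User {user_id: 1}), (p:Post {post_id: 2})
CREATE (u)-[:DID]->(:Like {id: 12345})-[:ON]->(p);

// highest like id imported so far
MATCH (l:Like)
RETURN max(l.id);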

Neo4j auto increase schema index

It is recommended not to use Neo4j's internal id property because it may change, but rather to create our own identifier. So, to identify my users, I plan to create a user_id property on the nodes labelled User and put an index on it. However, I cannot figure out a way to make it auto-increment.
After some searching, I noticed there are two kinds of indexes in Neo4j, the schema index and the legacy index. Could anyone explain the difference between them? And is there a way to make my user_id index auto-increment?
Schema indices are tied to labels, e.g. :User; you create them on the properties of nodes carrying those labels. There's also no need to specify which index you're using, as this is done automatically in this case.
Legacy indices are the node indices that were around prior to Neo4j 2.0. They're a traditional index where you can specify what you're indexing and which properties they apply to, but they're only used in START statements, which are optional (and on their way to deprecation).
For more detail, have a look here (http://docs.neo4j.org/chunked/stable/graphdb-neo4j-schema.html) and here (http://docs.neo4j.org/chunked/stable/indexing.html).
As for auto-incrementing, I'm unaware of any such functionality for user-defined index keys.
HTH

Importing data from oracle to neo4j using java API

Can you please share any links or sample source code for generating the graph in Neo4j from Oracle database table data?
My use case is: Oracle schema table names as nodes, and columns as properties. I also need to generate the graph in a tree structure.
Make sure you commit the transaction after creating the nodes, with tx.success() and tx.finish().
If you still don't see the nodes, please post your code and/or any exceptions.
Use JDBC to extract your Oracle DB data. Then use the Java API to build the corresponding nodes:
GraphDatabaseService db; // your embedded GraphDatabaseService instance
try (Transaction tx = db.beginTx()) {
    Node datanode = db.createNode(Labels.TABLENAME); // Labels is a user-defined enum implementing Label
    datanode.setProperty("column name", "column value"); // do this for each column
    tx.success();
}
Also remember to scale your transactions. I tend to use around 1500 creates per transaction and it works fine for me, but you might have to play with it a little bit.
Just page through the table 1000 rows at a time (Oracle doesn't have LIMIT/OFFSET, but you can use ROWNUM ranges or, on 12c, OFFSET X*1000 ROWS FETCH NEXT 1000 ROWS ONLY, with X being the number of times you've run the query before). Then keep those 1000 records in a collection or something so you can build your nodes from them. Repeat this until you've handled every record in your database.
Not sure what you mean by "also need to generate graph in tree structure", but if you mean you'd like to convert foreign keys into relationships, remember to just index the key and, instead of adding the FK as a property, create a relationship to the original node. You can find it by doing an index lookup, or you could just create your own little in-memory index with a HashMap. But since you're already storing 1000 SQL records in memory, plus you are building the transaction, you need to be a bit careful with your memory depending on your JVM settings.
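Putting those pieces together, a rough sketch under assumed names (the JDBC URL, credentials, table name and label are placeholders; instead of paging, this streams a single SELECT with a fetch size and commits roughly every 1500 creates):

import java.sql.*;
import org.neo4j.graphdb.*;

public class OracleToNeo4j {

    // user-defined labels, as in the snippet above
    enum Labels implements Label { MY_TABLE }

    public static void importTable(GraphDatabaseService db) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement st = con.createStatement()) {

            st.setFetchSize(1000); // stream rows instead of loading everything at once
            try (ResultSet rs = st.executeQuery("SELECT * FROM MY_TABLE")) {
                ResultSetMetaData md = rs.getMetaData();

                int created = 0;
                Transaction tx = db.beginTx();
                try {
                    while (rs.next()) {
                        Node row = db.createNode(Labels.MY_TABLE);
                        for (int i = 1; i <= md.getColumnCount(); i++) {
                            Object value = rs.getObject(i);
                            if (value != null) {
                                row.setProperty(md.getColumnName(i), value.toString());
                            }
                        }
                        if (++created % 1500 == 0) { // ~1500 creates per transaction
                            tx.success();
                            tx.close();
                            tx = db.beginTx();
                        }
                    }
                    tx.success();
                } finally {
                    tx.close();
                }
            }
        }
    }
}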
You need to code this ETL process yourself. Follow the steps below:
Write your first Neo4j example by following this article.
Understand how to model with graphs.
There are multiple ways of talking to Neo4j using Java. Choose the one that suits your needs.
