Is it possible to have a read-only / shared Equinox environment?

I would like to provision a series of OSGi bundles, and then instruct some of them to unpack data into their data area, as obtained with org.osgi.framework.BundleContext#getDataFile, and then treat the results as read-only and shared between processes.
Based on some aspects of Eclipse, I suspect this is possible, but it's not obvious which configuration properties (if any) need to be set.

Setting these properties puts the container into a read-only state:
// in spite of the seemingly standard names, these are Equinox-specific.
configProps.put("osgi.configuration.area.readOnly", "true");
configProps.put("osgi.sharedConfiguration.area.readOnly", "true");
configProps.put("osgi.instance.area.readOnly", "true");
configProps.put("osgi.user.area.readOnly", "true");

Related

GeneXus Extensions SDK - Is there a built-in helper to save data locally?

I would like to know if the GeneXus Extension SDK already implements something to store persistent data locally (KB-independent and per KB), something like PersistentDictionary from ManagedEsent.
I know that GeneXus uses SQL Server to store KB-related information; is there an interface for me to extend that?
I want to save data per GeneXus instance (locally) and use that data to load my extension's config every time the user runs GeneXus.
We don't use PersistentDictionary. I would advise against using it, as it's a Windows-specific API, and we are trying to make everything new cross-platform as part of our journey to make the GeneXus BL run on other operating systems.
There are different persistence options, depending on the specific details of your scenario.
If you want to store something like configuration settings for your extension, you can use the ConfigurationHelper class located in Artech.Common.Helpers. This class provides read access to the configuration defined in the GeneXus.exe.config file in the GeneXus installation folder, as well as read/write access to the Environment.config file located in %AppData%\GeneXus\GeneXus\<version>\Environment.config. Note that this file depends on the current user and is shared between different GeneXus instances of the same main version.
The ConfigurationHelper class provides operations to read and save settings of basic types string, int and bool.
const string MY_EXTENSION = "MyExtensionSettings";
const string SETTING1 = "Setting1";
const string SETTING1_DEFAULT_VALUE = "This is the default value";
const string SETTING2 = "Setting2";
const int SETTING2_DEFAULT_VALUE = 20;

// Read the stored values, falling back to the defaults when not present.
string setting1Value = ConfigurationHelper.GetUserSetting(MY_EXTENSION, SETTING1, SETTING1_DEFAULT_VALUE);
int setting2Value = ConfigurationHelper.GetUserSetting(MY_EXTENSION, SETTING2, SETTING2_DEFAULT_VALUE);

// Do something and maybe change the setting values

// Persist the (possibly updated) values back to the user configuration.
ConfigurationHelper.SetUserSetting(MY_EXTENSION, SETTING1, setting1Value);
ConfigurationHelper.SetUserSetting(MY_EXTENSION, SETTING2, setting2Value);
If you want to store something in a file based on the currently opened KB, there's no specific API to handle the persistence for you. You can use the Location and UserDirectory properties of the KnowledgeBase class to get the KB location, or a directory for the current user under the KB location, but the handling of the file is up to you. You'll have to decide on the file format (binary or text), the encoding in the case of text files, and handle all read and write operations on that file yourself.
We use the kb.UserDirectory path to store non-critical stuff, such as the set of objects that were opened the last time the KB was closed, or the filter values for different dialogs.
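As a sketch of that approach (the file name and helper methods below are hypothetical; only the KnowledgeBase class and its UserDirectory property come from the SDK, and error handling is omitted):

using System.IO;

// Hypothetical helpers: persist a small text payload under the KB's user directory.
void SaveLocalData(KnowledgeBase kb, string content)
{
    string path = Path.Combine(kb.UserDirectory, "MyExtension.settings"); // hypothetical file name
    File.WriteAllText(path, content);
}

string LoadLocalData(KnowledgeBase kb)
{
    string path = Path.Combine(kb.UserDirectory, "MyExtension.settings");
    return File.Exists(path) ? File.ReadAllText(path) : null;
}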
In case you'd like to store settings inside the KB, there are plenty of options.
You can add properties to existing objects, the KB version, or the environment. Making it a property doesn't necessarily mean you'll have to edit the value through the property grid, although that's usually the way to go.
You can define a new kind of entity. Entities are the basic elements that can be stored in a KB. An entity may be stored per version of the KB, or independently of the current version. Entities can have properties, whose serialization is handled by the property engine, and can also read and store a byte array whose format and content are handled by you.
You can add a part to an existing object. For instance, you may want to add a part to Procedure objects. To do this you'll have to extend KBObjectPart, define your part in a BL package, declare that the part composes objects of a certain type, and provide an editor for your new part in a UI package. KBObjectPart extends Entity, so the serialization of the part works as in the previous case. A caveat of this option is that you'll also have to handle how the part's content is imported, exported, and compared.
You can add a new kind of object. Objects extend the KBObject class, which extends Entity. Objects are not required to have parts (for instance, the Folder object doesn't have any). When choosing to provide a new kind of object you have to consider a few things, such as:
Do you want to be able to create new instances from the new object dialog?
Will it be shown in the folder view?
Can it be added into modules?
Can it have the same name as other objects of different types?
As a general guideline, if you choose to add a new property, add it to objects, versions, or environments, not parts. Adding properties to parts is not great for discoverability. Also, if you choose to add a new kind of object, even though it inherits from Entity, which as mentioned earlier can read and store a byte array, it's preferable not to use the byte array in the KBObject but to add a KBObjectPart to it instead. That way the KBObject remains as lightweight as possible, loading the object definition from the DB stays fast, and the blob content is loaded only when truly needed.
There's no rule of thumb; depending on the specifics of the scenario, one option may be better suited than the others.

Passing structured data between ABAP sessions?

It is generally known that ABAP memory (EXPORT/IMPORT) is used for passing data within an ABAP session across the call stack, while SAP memory (SET/GET) is session-independent and valid for all ABAP sessions of a user session.
The pitfall here is that SET PARAMETER supports only primitive flat types; otherwise it throws the error:
"LS_MARA" must be a character-type field (data type C, N, D or T).
Global assignment like ASSIGN ('(PrgmName)Globalvariable') TO FIELD-SYMBOL(<lo_data>) is not always an option, for example if one wants to pass a structure to a local method variable.
Creating SHMA shared memory objects seems like overkill for simple testing tasks.
So far I have found only this ancient thread where the issue was raised, but the solution there is a perfect example of how you shouldn't write code, a textbook anti-pattern.
What options (besides the DB) do we have if we want to pass a structure or a table to another ABAP session?
As usual, Sandra has a good answer.
EXPORT/IMPORT TO/FROM SHARED BUFFER/MEMORY is very powerful, but use it wisely, and make sure you understand that it lives on one application server and is not persistent.
You can use RFC to call the other application servers and fetch the buffer from them if necessary: CALL FUNCTION xyz DESTINATION ''.
See function TH_SERVER_LIST, i.e. what you see in SM59 as internal connections.
Clearly the lack of persistence of the shared buffer/memory is a key consideration.
What is not immediately obvious until you read the documentation carefully is that the shared buffer manager will abandon entries based on buffer size and available memory. You cannot assume a shared buffer entry will still be there when you go to access it. It most likely will be, but it can be "dropped", the server might be restarted, etc. Use it as a performance-helping tool, but always assume the entry might not be there.
Shared memory, as opposed to the shared buffer, suffers from the upper-limit issue, requiring other entries to be discarded before more can be added. Both have pros and cons.
In ST02, look for red entries, which indicate that buffer limits have been reached.
The current-parameters button tells you which profile parameters need to be changed.
A great use of this language element is logging, or high-performance buffering of data that could be reconstructed. It is also ideal for scenarios such as BAdIs where you cannot issue commits: you can "hold" info without issuing a commit or database commit.
You can also update/store your log without even using locking, based on the simple principle that the current work process number is unique:
DATA wp_index TYPE i. " work process number (declaration assumed)

CALL FUNCTION 'TH_GET_OWN_WP_NO'
  IMPORTING
    wp_index = wp_index.
Use the work process number as part of the key to your data.
If your kernel is 7.40 or later, see class CL_OBJECT_BUFFER; otherwise see function SBUF_OBJ_SHOW_OBJECT.
Have fun with Shared Buffers/Memory.
One major advantage of shared buffers over shared memory objects is the ABAP garbage collector: SAPSYS garbage collection can bite you!
Within the same application server, you may use EXPORT/IMPORT ... SHARED BUFFER/MEMORY ....
Probable statements for your requirement (EXPORT in the sending session, IMPORT in the receiving one):
EXPORT mara = ls_mara TO SHARED BUFFER indx(zz) ID 'MARA'.
IMPORT mara = ls_mara FROM SHARED BUFFER indx(zz) ID 'MARA'.
Between application servers, you may use ABAP Channels.

Spring Data Neo4j 5 and the application startup time

In my Spring Data Neo4j 5 project I have the following Neo4j Java configuration:
@Bean
public org.neo4j.ogm.config.Configuration configuration() {
    // @formatter:off
    return new org.neo4j.ogm.config.Configuration.Builder()
            .autoIndex("assert")
            .credentials(username, password)
            .uri(serverDatabaseUri)
            .build();
    // @formatter:on
}
Right now, as the data in my Neo4j database grows, I'm experiencing a significant slowdown during application startup.
I think one of the possible reasons for this issue is the following property:
autoIndex("assert")
How can I check this, and if I'm right, how can I improve the application startup time without losing the functionality provided by autoIndex("assert")?
It's very likely that you are right: creating and validating indexes takes time proportional to the size of your data. In other words, the more data you have, the longer it takes for the indexes to be created or validated each time the application starts up.
Index creation is a convenient feature of SDN. That said, given that adding or removing indexes is a fairly infrequent event, typically happening only when you add or remove a domain entity or start from an empty database, another option is to remove the @Index annotations, put the index creation and removal in a Cypher script, and execute that script only when needed. This approach lets the application start as quickly as possible, with the tradeoff of having to run a script manually on schema changes, which most find to be a reasonable balance.
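A minimal sketch of such a one-off script runner, assuming the Neo4j Java driver (1.x) is on the classpath and using a hypothetical Person entity with an indexed name property; the URI, credentials, and index statement are placeholders to adjust to your schema:

import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;

public class CreateIndexes {
    public static void main(String[] args) {
        // Run once per schema change instead of on every application startup.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            session.run("CREATE INDEX ON :Person(name)"); // hypothetical label/property
        }
    }
}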

Clear Neo4j Embedded database

With a new version of Spring Data Neo4j I can't use Neo4jHelper.cleanDb(db);
So, what is the most effective way to completely clear an embedded Neo4j database in my application?
I have implemented my own util method for this purpose, but this method is slow:
public static void cleanDb(Neo4jTemplate template) {
    template.query("MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n, r", null);
}
How do I properly clear/delete the database?
UPDATE
This is similar to the question How to reset / clear / delete neo4j database?, but I don't know how to programmatically shut down embedded Neo4j and how to start it again after deleting.
I use Spring Data Neo4j, and on a user request I'd like to clear/delete the existing database and recreate it with new data. How do I start up a new embedded database after the suggested invocation of the shutdown method?
USE CASE:
In the working application I have configured an embedded database:
GraphDatabaseService graphDb = new GraphDatabaseFactory()
        .newEmbeddedDatabaseBuilder(environment.getProperty(NEO4J_EMBEDDED_DATABASE_PATH_PROPERTY))
        .setConfig(GraphDatabaseSettings.node_keys_indexable, "name,description")
        .setConfig(GraphDatabaseSettings.node_auto_indexing, "true")
        .newGraphDatabase();
I also pre-populate this database with 1,000,000 nodes. On a user request I need to clear this database and populate it with new data. How do I clear the existing database correctly and quickly?
Can I call the Neo4j API to create new nodes after database.shutdown(), or do I need to initialize a new database first?
See the other answer on the related question. Inside of Java, you can shut down an embedded database with the GraphDatabaseService#shutdown() method.
From there, there are a pile of different ways you can delete the underlying directory; see this other answer.
So the general answer can still be the same (a sketch follows below):
Shut down the database using the Neo4j Java API
Delete the database contents off of the disk
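Putting the two steps together, a minimal sketch assuming the Neo4j 2.x embedded API used in the question; FileUtils here is Neo4j's org.neo4j.io.fs.FileUtils (older releases ship it in org.neo4j.kernel.impl.util), but any recursive directory delete works:

import java.io.File;
import java.io.IOException;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.io.fs.FileUtils;

static GraphDatabaseService recreateDatabase(GraphDatabaseService graphDb, String storeDir) throws IOException {
    graphDb.shutdown();                              // 1. stop the running instance
    FileUtils.deleteRecursively(new File(storeDir)); // 2. wipe the store from disk
    return new GraphDatabaseFactory()                // start a fresh, empty database
            .newEmbeddedDatabase(storeDir);
}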

Is it possible to use a JNDI dataSource read only in Grails?

I need to add a JNDI datasource from a legacy database to my Grails (1.2.2) application.
So far, the resource is added to my Tomcat (5.5) and DataSource.groovy contains:
development {
    dataSource {
        jndiName = "jdbc/lrc_legacy_db"
    }
}
I also created some domain objects mapping the different tables so I can comfortably load and handle data from the DB with GORM. But I now want to ensure that every connection to this DB is really read-only. My biggest concern here is the dbCreate property and the automatic database manipulation through GORM and the GORM classes.
Is it enough to just skip dbCreate?
How do I ensure that the database will only be read and never manipulated in any way?
You should use the validate option for dbCreate.
EDIT: The documentation is quite a bit different from when I first posted this answer, so the link doesn't take you straight to where the validate option is explained. A quick find on the page will get you to the right spot.
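For example, a sketch based on the question's own DataSource.groovy (validate makes Hibernate check the schema against the domain classes without modifying it):

development {
    dataSource {
        dbCreate = "validate" // never create, update, or drop tables
        jndiName = "jdbc/lrc_legacy_db"
    }
}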
According to the Grails documentation:
If your application needs to read but never modify instances of a persistent class, a read-only cache may be used
A read-only cache for a domain class can be configured as follows:
1. Enable Caching
Add something like the following to DataSource.groovy
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.provider_class = 'org.hibernate.cache.EhCacheProvider'
}
2. Make Cache Read-Only
For each domain class, you will need to add the following to the mapping closure:
static mapping = {
    cache usage: 'read-only', include: 'non-lazy'
}
