MapMessage not recognized as such - log4j2

I am using a software framework which itself uses log4j 2.7 (I can't update the framework's jars).
I have written a third party library which provides a RewritePolicy to re-format the log messages. The library uses log4j-2.7 as well.
Within the framework I do some logging with MapMessage. However, the rewrite policy receives them as SimpleMessage or other types of Message, but not as MapMessage.
Here is a code example from the framework:
MapMessage mapMessage = new MapMessage();
mapMessage.put("first", "first");
mapMessage.put("second", "second");
logger.info(mapMessage);
And here is the rewrite method of the RewritePolicy:
@Override
public LogEvent rewrite(LogEvent source) {
    final Message modifiedMessage;
    Message origMessage = source.getMessage();
    if (origMessage instanceof MapMessage) {
        Map<String, Object> map = new HashMap<String, Object>(((MapMessage) origMessage).getData());
        modifiedMessage = new SimpleMessage(createStringMessage((HashMap<String, Object>) map));
    } else {
        modifiedMessage = origMessage;
    }
    LogEvent modifiedLogEvent = new Log4jLogEvent.Builder(source).setMessage(modifiedMessage).build();
    return modifiedLogEvent;
}

My problem was due to the fact that the software platform mentioned in my post migrated from log4j 1 to 2, and as explained here https://logging.apache.org/log4j/2.x/manual/migration.html, I had to change the package from org.apache.log4j to org.apache.logging.log4j.
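For illustration, here is a minimal sketch of what that package change looks like in calling code (the class name MapMessageExample is invented for the example); once the org.apache.logging.log4j Logger is used directly, the MapMessage reaches the RewritePolicy as a MapMessage instead of the SimpleMessage/other types I was seeing:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.MapMessage;

// Before the fix, the logger came from org.apache.log4j (the 1.x API), which is why
// the rewrite policy did not receive the message as a MapMessage.
public class MapMessageExample {
    private static final Logger logger = LogManager.getLogger(MapMessageExample.class);

    public static void main(String[] args) {
        MapMessage mapMessage = new MapMessage();
        mapMessage.put("first", "first");
        mapMessage.put("second", "second");
        logger.info(mapMessage); // now arrives in rewrite() as a MapMessage
    }
}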


Deriving device connection string from the environment

IoT modules can be created from the environment using:
ModuleClient.CreateFromEnvironmentAsync(settings)
However, there does not seem to be an equivalent method for devices. For now, I am setting the device connection string in the program to test it out, but is there a better way to read the connection string from iotedge/config.yaml for all the edge devices deployed out there?
Methods to do so for .NET and Python would be appreciated.
You can use a YAML parsing library such as YamlDotNet to deserialize the document. In fact, you can refer to the YamlDocument class in IoT Edge, but that class does not provide a method to get a key's value. Please refer to the following code.
using System.Collections.Generic;
using System.IO;
using YamlDotNet.Serialization;

public class YamlDocument
{
    readonly Dictionary<object, object> root;

    public YamlDocument(string input)
    {
        var reader = new StringReader(input);
        var deserializer = new Deserializer();
        this.root = (Dictionary<object, object>)deserializer.Deserialize(reader);
    }

    public object GetKeyValue(string key)
    {
        if (this.root.ContainsKey(key))
        {
            return this.root[key];
        }

        foreach (var item in this.root)
        {
            var subItem = item.Value as Dictionary<object, object>;
            if (subItem != null && subItem.ContainsKey(key))
            {
                return subItem[key];
            }
        }

        return null;
    }
}
And then you can get the device connection string from config.yaml. If you use Python, you can import the yaml library to parse the file.
StreamReader sr = new StreamReader(@"C:\ProgramData\iotedge\config.yaml");
var yamlString = sr.ReadToEnd();
var yamlDoc = new YamlDocument(yamlString);
var connectionString = yamlDoc.GetKeyValue("device_connection_string");
Console.WriteLine("{0}", connectionString);
To get the config file from the host, add the following to the docker deployment file. Note that the source file is config1.yaml, which is the same as config.yaml except that it has read permissions for everyone, not just root.
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/etc/iotedge/config1.yaml:/app/copiedConfig.yaml\"]}}"
With the above line in place, the copiedConfig.yaml file can be used in the container, along with @Michael Xu's parsing code, to derive the connection string.
Long term, one may want to use the device provisioning service anyway, but hope this helps for folks using device connection strings for whatever reason.

NEO4J Spatial: tips about batch inserter

This is my scenario: we are building a routing system using Neo4j and the Spatial plugin. We start from an OSM file, read it, and import nodes and relationships into our graph (a custom graph model).
Now, without the Neo4j batch inserter, importing a compressed OSM file (around 140 MB compressed, around 2 GB uncompressed) takes around 3 days on a dedicated server with the following characteristics: CentOS 6.5 64-bit, quad core, 8 GB RAM. Please note that most of that time is spent creating the Neo4j nodes and relationships; in fact, if we read the same file without doing anything with Neo4j, it is read in around 7 minutes (I'm sure about this because our process first reads the file to store the correct OSM node ids, and then reads it again to create the Neo4j graph).
Obviously we need to improve the import process, so we are trying to use the BatchInserter. So far, so good (I still need to check how much faster the BatchInserter will be, but I expect it will be); so the first thing I did was try the batch inserter in a simple test case (very similar to our code, but without modifying our code directly).
I list my software versions:
Neo4j: 2.0.2
Neo4jSpatial: 0.13-neo4j-2.0.1
Neo4jGraphCollections: 0.7.1-neo4j-2.0.1
Osmosis: 0.43.1
Since I'm using Osmosis to read the OSM file, I wrote the following Sink implementation:
public class BatchInserterSinkTest implements Sink
{
    public static final Map<String, String> NEO4J_CFG = new HashMap<String, String>();

    private static File basePath = new File("/home/angelo/Scrivania/neo4j");
    private static File dbPath = new File(basePath, "db");

    private GraphDatabaseService graphDb;
    private BatchInserter batchInserter;
    // private BatchInserterIndexProvider batchIndexService;

    private SpatialDatabaseService spatialDb;
    private SimplePointLayer spl;

    static
    {
        NEO4J_CFG.put( "neostore.nodestore.db.mapped_memory", "100M" );
        NEO4J_CFG.put( "neostore.relationshipstore.db.mapped_memory", "300M" );
        NEO4J_CFG.put( "neostore.propertystore.db.mapped_memory", "400M" );
        NEO4J_CFG.put( "neostore.propertystore.db.strings.mapped_memory", "800M" );
        NEO4J_CFG.put( "neostore.propertystore.db.arrays.mapped_memory", "10M" );
        NEO4J_CFG.put( "dump_configuration", "true" );
    }

    @Override
    public void initialize(Map<String, Object> arg0)
    {
        batchInserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
        graphDb = new SpatialBatchGraphDatabaseService(batchInserter);
        spatialDb = new SpatialDatabaseService(graphDb);
        spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
        //batchIndexService = new LuceneBatchInserterIndexProvider(batchInserter);
    }

    @Override
    public void complete()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void release()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void process(EntityContainer ec)
    {
        Entity entity = ec.getEntity();
        if (entity instanceof Node) {
            Node osmNodo = (Node) entity;
            org.neo4j.graphdb.Node graphNode = graphDb.createNode();
            graphNode.setProperty("osmId", osmNodo.getId());
            graphNode.setProperty("latitudine", osmNodo.getLatitude());
            graphNode.setProperty("longitudine", osmNodo.getLongitude());
            spl.add(graphNode);
        } else if (entity instanceof Way) {
            //do something with the way
        } else if (entity instanceof Relation) {
            //do something with the relation
        }
    }
}
Then I wrote the following test case:
public class BatchInserterTest
{
    private static final Log logger = LogFactory.getLog(BatchInserterTest.class.getName());

    @Test
    public void batchInserter()
    {
        File file = new File("/home/angelo/Scrivania/MilanoPiccolo.osm");
        try
        {
            boolean pbf = false;
            CompressionMethod compression = CompressionMethod.None;

            if (file.getName().endsWith(".pbf"))
            {
                pbf = true;
            }
            else if (file.getName().endsWith(".gz"))
            {
                compression = CompressionMethod.GZip;
            }
            else if (file.getName().endsWith(".bz2"))
            {
                compression = CompressionMethod.BZip2;
            }

            RunnableSource reader;

            if (pbf)
            {
                reader = new crosby.binary.osmosis.OsmosisReader(new FileInputStream(file));
            }
            else
            {
                reader = new XmlReader(file, false, compression);
            }

            reader.setSink(new BatchInserterSinkTest());

            Thread readerThread = new Thread(reader);
            readerThread.start();

            while (readerThread.isAlive())
            {
                try
                {
                    readerThread.join();
                }
                catch (InterruptedException e)
                {
                    /* do nothing */
                }
            }
        }
        catch (Exception e)
        {
            logger.error("Errore nella creazione di neo4j con batchInserter", e);
        }
    }
}
By executing this code, I get this exception:
Exception in thread "Thread-1" java.lang.ClassCastException: org.neo4j.unsafe.batchinsert.SpatialBatchGraphDatabaseService cannot be cast to org.neo4j.kernel.GraphDatabaseAPI
at org.neo4j.cypher.ExecutionEngine.<init>(ExecutionEngine.scala:113)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:53)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:43)
at org.neo4j.collections.graphdb.ReferenceNodes.getReferenceNode(ReferenceNodes.java:60)
at org.neo4j.gis.spatial.SpatialDatabaseService.getSpatialRoot(SpatialDatabaseService.java:76)
at org.neo4j.gis.spatial.SpatialDatabaseService.getLayer(SpatialDatabaseService.java:108)
at org.neo4j.gis.spatial.SpatialDatabaseService.containsLayer(SpatialDatabaseService.java:253)
at org.neo4j.gis.spatial.SpatialDatabaseService.createLayer(SpatialDatabaseService.java:282)
at org.neo4j.gis.spatial.SpatialDatabaseService.createSimplePointLayer(SpatialDatabaseService.java:266)
at it.eng.pinf.graph.batch.test.BatchInserterSinkTest.initialize(BatchInserterSinkTest.java:46)
at org.openstreetmap.osmosis.xml.v0_6.XmlReader.run(XmlReader.java:95)
at java.lang.Thread.run(Thread.java:744)
This is related to this code:
spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
So now I'm wondering: how can I use the BatchInserter for my case? I have to add the created nodes to the SimplePointLayer... so how can I create it by using the batch inserter graph database service?
Is there any simple sample?
Any tip is really really appreciated
cheers
Angelo
The OSMImporter class in the code has an example of using the batch inserter to import OSM data. The main thing is that the batch inserter is not really supported by neo4j spatial, so you need to do a few things manually. If you look at the class OSMImporter.OSMBatchWriter, you will see how it does things. It is not using the SimplePointLayer at all, since that does not support the batch inserter. It is creating the graph structure it wants directly. The simple point layer is quite simple, certainly much simpler than the OSM model created by the code I'm referencing, so I think you should be able to write a batch-inserter compatible version yourself without too much trouble.
What I would recommend is that you create the layer and nodes using the batch inserter to create the correct graph structure, then switch to the normal embedded API and use that to iterate through the nodes and add them to the spatial index.
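To make that concrete, here is a rough, untested sketch of the two-phase approach (the class name and the points parameter are invented for the example; the property names and layer name are the ones from the question). For a full OSM extract, the second phase should commit in smaller batches rather than one big transaction:

import java.util.HashMap;
import java.util.Map;

import org.neo4j.gis.spatial.SimplePointLayer;
import org.neo4j.gis.spatial.SpatialDatabaseService;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.tooling.GlobalGraphOperations;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class TwoPhaseImportSketch
{
    public void importPoints(String dbPath, Map<String, String> neo4jCfg, Iterable<double[]> points)
    {
        // Phase 1: create the plain nodes with the batch inserter (no spatial layer yet).
        BatchInserter inserter = BatchInserters.inserter(dbPath, neo4jCfg);
        try
        {
            for (double[] p : points) // p[0] = latitude, p[1] = longitude
            {
                Map<String, Object> props = new HashMap<String, Object>();
                props.put("latitudine", p[0]);
                props.put("longitudine", p[1]);
                inserter.createNode(props);
            }
        }
        finally
        {
            inserter.shutdown(); // flushes the store to disk
        }

        // Phase 2: reopen the store with the embedded API and add the nodes to the layer.
        GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase(dbPath);
        try (Transaction tx = graphDb.beginTx())
        {
            SpatialDatabaseService spatialDb = new SpatialDatabaseService(graphDb);
            SimplePointLayer spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
            for (Node node : GlobalGraphOperations.at(graphDb).getAllNodes())
            {
                if (node.hasProperty("latitudine") && node.hasProperty("longitudine"))
                {
                    spl.add(node); // same call as in the Sink above, but inside a normal transaction
                }
            }
            tx.success();
        }
        graphDb.shutdown();
    }
}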

How to use Dart http_server:VirtualDirectory

I have been using the dart:route api for serving static files but I noticed that there is a core library called http_server that contains helper classes and functions for dart:io HttpServer.
Of particular interest to me is the class VirtualDirectory which, according to the docs, takes a String for the directory of static content, after which you call its serve() method:
var virtualDirectory = new VirtualDirectory('/var/www/');
virtualDirectory.serve(new HttpServer('0.0.0.0', 8080));
This doesn't work as there is no constructor for HttpServer - at least not in current versions.
virtualDirectory.serve(HttpServer.bind('0.0.0.0', 8080));
This is how I have been instantiating a server, but it also fails since virtualDirectory.serve() doesn't take a Future<HttpServer>. And finally:
virtualDirectory.serve(HttpServer.bind('0.0.0.0', 8080).asStream());
also fails with
The argument type 'Stream<HttpServer>' cannot be assigned to the parameter type 'Stream<HttpRequest>'
So how do I connect a VirtualDirectory to a Server? There are no examples that I can find online and the VirtualDirectory source code does not make it clear. I would RTFM if I could FTFM. Links are fine as answers.
The VirtualDirectory can work from inside the Future returned by HttpServer.bind. You can create a static file web server by using the following five lines of code:
HttpServer.bind('127.0.0.1', 8888).then((HttpServer server) {
  VirtualDirectory vd = new VirtualDirectory('../web/');
  vd.jailRoot = false;
  vd.serve(server);
});
You can make it more sophisticated by parsing the URI and pulling out service requests prior to serving out files.
import 'dart:io';
import 'package:http_server/http_server.dart';

main() {
  handleService(HttpRequest request) {
    print('New service request');
    request.response.write('[{"field":"value"}]');
    request.response.close();
  }

  HttpServer.bind('127.0.0.1', 8888).then((HttpServer server) {
    VirtualDirectory vd = new VirtualDirectory('../web/');
    vd.jailRoot = false;
    server.listen((request) {
      print("request.uri.path: " + request.uri.path);
      if (request.uri.path == '/services') {
        handleService(request);
      } else {
        print('File request');
        vd.serveRequest(request);
      }
    });
  });
}

Replace Sharp Architecture's NHibernate.config with a Fluent Configuration

By default, the solution generated from Sharp Architecture's templify package configures NHibernate using an NHibernate.config file in the {SolutionName}.Web project. I would like to replace it with a fluent configuration of my own and still have the rest of Sharp Architecture work correctly.
Any help will be much appreciated. :)
Solution: Here's how I got it to work:
IPersistenceConfigurer configurer = OracleClientConfiguration.Oracle10
    .AdoNetBatchSize(500)
    .ShowSql()
    .ConnectionString(c => c.FromConnectionStringWithKey("NHibernate.Localhost"))
    .DefaultSchema("MySchema")
    .ProxyFactoryFactory("NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle")
    .UseReflectionOptimizer();

NHibernateSession.Init(
    webSessionStorage,
    new string[] { Server.MapPath("~/bin/MyProject.Data.dll") },
    new AutoPersistenceModelGenerator().Generate(),
    null,
    null,
    null,
    configurer);
IIRC the NHibernateSession class that is used to configure NHibernate has a bunch of overloads, one of them giving you the ability to configure it via code.
Very old post. I'll leave it here in case someone else is interested. On SharpArch 1.9.6.0 you can add two methods to NHibernateSession.cs. This will let you pass in a FluentConfiguration object.
public static FluentConfiguration Init(ISessionStorage storage, FluentConfiguration fluentConfiguration)
{
    InitStorage(storage);
    try
    {
        return AddConfiguration(DefaultFactoryKey, fluentConfiguration);
    }
    catch
    {
        // If this NHibernate config throws an exception, null the Storage reference so
        // the config can be corrected without having to restart the web application.
        Storage = null;
        throw;
    }
}

private static FluentConfiguration AddConfiguration(string defaultFactoryKey, FluentConfiguration fluentConfiguration)
{
    var sessionFactory = fluentConfiguration.BuildSessionFactory();

    Check.Require(!sessionFactories.ContainsKey(defaultFactoryKey),
        "A session factory has already been configured with the key of " + defaultFactoryKey);

    sessionFactories.Add(defaultFactoryKey, sessionFactory);

    return fluentConfiguration;
}

Setting timeout for new URL(...).text in Groovy/Grails

I use the following Groovy snippet to obtain the plain-text representation of an HTML-page in a Grails application:
String str = new URL("http://www.example.com/some/path")?.text?.decodeHTML()
Now I want to alter the code so that the request will time out after 5 seconds (resulting in str == null). What is the easiest and most Groovy way to achieve that?
I checked the source code of Groovy 2.1.8; the following is available:
'http://www.google.com'.toURL().getText([connectTimeout: 2000, readTimeout: 3000])
The logic that processes the configuration map is located in the method org.codehaus.groovy.runtime.ResourceGroovyMethods#configuredInputStream:
private static InputStream configuredInputStream(Map parameters, URL url) throws IOException {
    final URLConnection connection = url.openConnection();
    if (parameters != null) {
        if (parameters.containsKey("connectTimeout")) {
            connection.setConnectTimeout(DefaultGroovyMethods.asType(parameters.get("connectTimeout"), Integer.class));
        }
        if (parameters.containsKey("readTimeout")) {
            connection.setReadTimeout(DefaultGroovyMethods.asType(parameters.get("readTimeout"), Integer.class));
        }
        if (parameters.containsKey("useCaches")) {
            connection.setUseCaches(DefaultGroovyMethods.asType(parameters.get("useCaches"), Boolean.class));
        }
        if (parameters.containsKey("allowUserInteraction")) {
            connection.setAllowUserInteraction(DefaultGroovyMethods.asType(parameters.get("allowUserInteraction"), Boolean.class));
        }
        if (parameters.containsKey("requestProperties")) {
            @SuppressWarnings("unchecked")
            Map<String, String> properties = (Map<String, String>) parameters.get("requestProperties");
            for (Map.Entry<String, String> entry : properties.entrySet()) {
                connection.setRequestProperty(entry.getKey(), entry.getValue());
            }
        }
    }
    return connection.getInputStream();
}
You'd have to do it the old way, getting a URLConnection, setting the timeout on that object, then reading in the data through a Reader
This would be a good thing to add to Groovy though (imho), as it's something I could see myself needing at some point ;-)
Maybe suggest it as a feature request on the JIRA?
I've added it as a RFE on the Groovy JIRA
https://issues.apache.org/jira/browse/GROOVY-3921
So hopefully we'll see it in a future version of Groovy...
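For Groovy versions without the getText(Map) support shown in the other answer, here is a minimal sketch of that manual URLConnection approach (a hypothetical helper built on the plain java.net API, returning null on failure or timeout as the question requires):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class UrlTextWithTimeout
{
    // Returns the page body, or null if the connection fails or times out.
    public static String fetch(String address, int timeoutMillis)
    {
        try
        {
            URLConnection connection = new URL(address).openConnection();
            connection.setConnectTimeout(timeoutMillis); // time allowed to establish the connection
            connection.setReadTimeout(timeoutMillis);    // time allowed between reads
            StringBuilder body = new StringBuilder();
            BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            try
            {
                String line;
                while ((line = reader.readLine()) != null)
                {
                    body.append(line).append('\n');
                }
            }
            finally
            {
                reader.close();
            }
            return body.toString();
        }
        catch (IOException e)
        {
            return null; // a SocketTimeoutException lands here, matching "resulting in str == null"
        }
    }
}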
