Querying an ontology without knowing its classes - Jena

I want to query the MeSH ontology, which contains more than 281,776 classes.
I want to retrieve, for example, all the classes related to the word "dentistry".
How should I write this query with Java and Jena?
This is the form of the data in the ontology:
<!-- http://bioonto.de/mesh.owl#D003813 -->
<owl:Class rdf:about="http://bioonto.de/mesh.owl#D003813">
<rdfs:label rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Dentistry</rdfs:label>
<rdfs:subClassOf rdf:resource="http://bioonto.de/mesh.owl#E06"/>
<rdfs:subClassOf rdf:resource="http://bioonto.de/mesh.owl#H02.163"/>
</owl:Class>
<!-- http://bioonto.de/mesh.owl#D003814 -->
<owl:Class rdf:about="http://bioonto.de/mesh.owl#D003814">
<rdfs:label rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Dentistry, Operative</rdfs:label>
<rdfs:subClassOf rdf:resource="http://bioonto.de/mesh.owl#E06.323"/>
<rdfs:subClassOf rdf:resource="http://bioonto.de/mesh.owl#H02.163.180"/>
</owl:Class>
<!-- http://bioonto.de/mesh.owl#D003815 -->
<owl:Class rdf:about="http://bioonto.de/mesh.owl#D003815">
<rdfs:label rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Dentists</rdfs:label>
<rdfs:subClassOf rdf:resource="http://bioonto.de/mesh.owl#M01.526.485.330"/>
<rdfs:subClassOf rdf:resource="http://bioonto.de/mesh.owl#N02.360.330"/>
</owl:Class>

import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.FileManager;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;
import java.io.InputStream;

public class Test {
    // absolute path to your owl file
    static final String inputFileName = "path and file name";

    public static void main(String[] args) {
        // creating ontology model without reasoner specification
        OntModel model = ModelFactory.createOntologyModel();
        // opening input owl file
        InputStream in = FileManager.get().open(inputFileName);
        // reading input owl file
        model.read(in, "");
        // getting all classes
        ExtendedIterator<OntClass> classes = model.listClasses();
        // iterating over the classes
        while (classes.hasNext()) {
            OntClass cls = classes.next();
            // the local name is just a MeSH code (e.g. D003813), so match on the rdfs:label instead
            String label = cls.getLabel(null);
            // case-insensitive containment check
            if (label != null && label.toLowerCase().contains("dentistry")) {
                System.out.println(cls.getLocalName() + " : " + label);
            }
        }
    }
}
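Alternatively, since it is the rdfs:label values (not the local names) that contain the words you want to match, you can run a SPARQL query over the same model with Jena ARQ. A minimal sketch, reusing the model loaded above; the query text and variable names are only illustrative:

import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;

// ... inside main, after model.read(in, "") ...
String queryString =
    "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
    "PREFIX owl:  <http://www.w3.org/2002/07/owl#> " +
    "SELECT ?cls ?label WHERE { " +
    "  ?cls a owl:Class ; rdfs:label ?label . " +
    "  FILTER regex(str(?label), \"dentistry\", \"i\") " +   // case-insensitive match on the label
    "}";
Query query = QueryFactory.create(queryString);
QueryExecution qexec = QueryExecutionFactory.create(query, model);
try {
    ResultSet results = qexec.execSelect();
    while (results.hasNext()) {
        QuerySolution soln = results.next();
        System.out.println(soln.getResource("cls") + " : " + soln.getLiteral("label"));
    }
} finally {
    qexec.close();
}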

Related

UserTypeResolver must not be null

I have been attempting to test out an insert of a Cassandra UDT, and I keep running into the following error:
Exception in thread "main" java.lang.IllegalArgumentException: UserTypeResolver must not be null
After trying to figure my own way through it, I attempted to exactly replicate the approach outlined in the following:
User Defined Type with spring-data-cassandra
However, I still get the same error.
I am able to insert into the target DB when I remove the UDT and just insert the simple types, so I know that I am connecting correctly. My config is as follows:
@Configuration
@PropertySource(value = { "classpath:cassandra.properties" })
//@EnableCassandraRepositories(basePackages = { "org.spring.cassandra.example.repo" })
public class CassandraConfig {

    private static final Logger LOG = LoggerFactory.getLogger(CassandraConfig.class);

    @Autowired
    private Environment env;

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(env.getProperty("cassandra.contactpoints"));
        cluster.setPort(Integer.parseInt(env.getProperty("cassandra.port")));
        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
        mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), "campaign_management"));
        return mappingContext;
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(env.getProperty("cassandra.keyspace"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
My Address and Employee classes are exactly as shown in the SO question I reference above, and my Main is simply:
public class MainClass {
    public static void main(String[] args) {
        ApplicationContext service = new AnnotationConfigApplicationContext(CassandraConfig.class);

        Employee employee = new Employee();
        employee.setEmployee_id(UUID.randomUUID());
        employee.setEmployee_name("Todd");

        Address address = new Address();
        address.setAddress_type("Home");
        address.setId("ToddId");
        employee.setAddress(address);

        CassandraOperations operations = service.getBean("cassandraTemplate", CassandraOperations.class);
        operations.insert(employee);
        System.out.println("Done");
    }
}
I am using:
datastax.cassandra.driver.version=3.1.3
spring.data.cassandra.version=1.5.1
spring.data.commons.version=1.13.1
spring.cql.version=1.5.1
The version referenced in the previous SO question is 1.5.0, but spring.io lists 1.5.1 as current, so I am using that; 1.5.0 is not shown as available.
Any help would be appreciated, as this is driving me somewhat nuts.
You typically get this error when a UserTypeResolver is missing from your Cassandra mapping context, which is itself used by the Cassandra converter, which is in turn used by the Spring Data Cassandra template.
For the details:
Assuming you have a basic Spring MVC Controller up and running elsewhere...
Since user-defined types in Cassandra are most useful inside SETs and MAPs, the example below uses one that way.
Example Spring Bean configuration with all defaults (Spring XML application context extract):
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cassandra="http://www.springframework.org/schema/data/cassandra"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.3.xsd
http://www.springframework.org/schema/data/cassandra http://www.springframework.org/schema/data/cassandra/spring-cassandra.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.3.xsd">
...
<!-- ===== CASSANDRA ===== -->
<!-- Loads the properties into the Spring Context and uses them to fill in placeholders in bean definitions below -->
<context:property-placeholder location="/WEB-INF/spring/cassandra.properties" />
<!-- REQUIRED: The Cassandra Cluster -->
<cassandra:cluster contact-points="${cassandra.contactpoints}"
port="${cassandra.port}" username="cassandra" password="cassandra"
auth-info-provider-ref="authProvider" />
<!-- REQUIRED: The Cassandra Session, built from the Cluster, and attaching to a keyspace -->
<cassandra:session keyspace-name="${cassandra.keyspace}" />
<!-- REQUIRED: The Default Cassandra Mapping Context used by CassandraConverter
DO include a userTypeResolver for UDT support -->
<cassandra:mapping entity-base-packages="fr.woobe.model">
<cassandra:user-type-resolver keyspace-name="${cassandra.keyspace}" />
</cassandra:mapping>
<!-- REQUIRED: The Default Cassandra Converter used by CassandraTemplate -->
<cassandra:converter />
<bean id="authProvider" class="com.datastax.driver.core.PlainTextAuthProvider">
<constructor-arg index="0" value="myCassandraUser" />
<constructor-arg index="1" value="somePassword" />
</bean>
<!-- REQUIRED: The Cassandra Template is the building block of all Spring Data Cassandra -->
<cassandra:template id="cassandraTemplate" />
...
And then in Java, typically within your Spring MVC controller:
import org.springframework.data.cassandra.core.CassandraOperations;
...
// acquire DB template
CassandraOperations cOps = this.beanFactory.getBean("cassandraTemplate", CassandraOperations.class);
// for instance: load everything
List<MyData> rows = cOps.select("SELECT * FROM mydatatable", MyData.class);
// assuming an entry with index i exists...
Set<Pair> mySetOfPairs = rows.get(i).pairSet;
if (mySetOfPairs != null) {
    for (Pair p : mySetOfPairs) {
        // ... handle p.first and p.second ...
    }
}
...
with entity mappings like this:
package example.model;

import java.util.Set;
import org.springframework.data.cassandra.core.mapping.CassandraType;
import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import com.datastax.driver.core.DataType.Name;

@Table
public class MyData {
    @PrimaryKey
    public String myKey;

    // some other basic fields...
    public String moreStuff;

    // a SET of the user defined 'pair type'
    @CassandraType(type = Name.SET, userTypeName = "pairType")
    public Set<Pair> pairSet;

    // your constructors and other methods here...
}
and a user defined entity like:
package example.model;

import org.springframework.data.cassandra.core.mapping.UserDefinedType;

@UserDefinedType("pairType")
public class Pair {
    public String first;
    public String second;

    public Pair() {
    }

    public Pair(String f, String s) {
        this.first = f;
        this.second = s;
    }
}
All of this is based on a Cassandra type and table created as:
CREATE TYPE pairType (first text, second text);
CREATE TABLE MyData (
myKey text,
moreStuff text,
pairSet set<frozen<pairType>>,
PRIMARY KEY (myKey)
) ;
INSERT INTO MyData (myKey, moreStuff, pairSet)
VALUES ('hello', 'world', {
{ first:'one', second:'two' },
{ first:'out', second:'there' } }
) ;
In terms of Maven artifacts or libraries, spring-webmvc is required if you run within a Spring Web MVC controller, plus spring-context-support and spring-data-cassandra. The DataStax Cassandra driver comes along as a transitive dependency.
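If you stay with Java configuration as in the question, the equivalent of the user-type-resolver element above is wiring a SimpleUserTypeResolver into the mapping context, which the question's config already does. A sketch reusing the question's own beans (just make sure the keyspace passed here is the one that actually defines the UDT):

@Bean
public CassandraMappingContext mappingContext() {
    BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
    // the keyspace must be the one in which the user-defined type was created
    mappingContext.setUserTypeResolver(
            new SimpleUserTypeResolver(cluster().getObject(), env.getProperty("cassandra.keyspace")));
    return mappingContext;
}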

Using a Groovy script in an Ant task to call another Ant target and passing Groovy variables to Ant target properties

I would like to know how to pass Groovy script variables (here: compName, compPath) to the Ant target (here: build.application).
I would like to make the values of compName and compPath available to all Ant targets in this build.xml.
<target name="xmlreader" description="Clean deployment directory">
<groovy>
import javax.xml.xpath.*
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
def ant = new AntBuilder()
File buildfile = new File("d:/Users/sk/workspace/build.xml")
fileContent = buildfile.getText()
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = factory.newDocumentBuilder();
Document doc = builder.parse(buildfile);
XPathFactory xPathfactory = XPathFactory.newInstance();
XPath xpath = xPathfactory.newXPath();
XPathExpression expr = xpath.compile("BuildConfig/Applications/ApplicationConfig");
NodeList nl = (NodeList) expr.evaluate(doc, XPathConstants.NODESET);
for (int i = 0; i < nl.getLength() ; i++) {
String compName = (String)nl.item(i).getElementsByTagName("Name").item(0).getChildNodes().item(0).getNodeValue();
String compPath = (String)nl.item(i).getElementsByTagName("SVN_Path").item(0).getChildNodes().item(0).getNodeValue();
ant.echo "${compName}"
ant.echo "${compPath}"
ant.ant( antfile: 'build.xml' ){
target(name: 'build.application')
}
}
</groovy>
</target>
To answer your direct question, the ant task accepts nested property elements that set properties in the new project used by the target you are calling:
ant.ant( antfile: 'build.xml', target: 'build.application') {
property(name:'compName', value:compName)
property(name:'compPath', value:compPath)
}
But you could also consider xmltask, whose "call" function can achieve the same thing without all the Groovy code.
<xmltask source="d:/Users/sk/workspace/build.xml">
<call path="BuildConfig/Applications/ApplicationConfig" target="build.application">
<param name="compName" path="Name" />
<param name="compPath" path="SVN_Path" />
</call>
</xmltask>

Mule flow: how to remove the BOM marker from an XML file

I have large, complex XML files as input for a Mule flow:
File endpoint -> Byte Array to String -> Splitter -> ...
I get org.xml.sax.SAXParseException: Content is not allowed in prolog when I try to process the input files with the Splitter component. When I create a new XML file and copy the content of the original file into it, the input files are processed.
Creating the new file removes the BOM marker: the original file starts with the bytes EF BB BF, while the local copy does not.
Mule config:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:tracking="http://www.mulesoft.org/schema/mule/ee/tracking"
xmlns:mulexml="http://www.mulesoft.org/schema/mule/xml"
xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans" version="EE-3.4.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/file
http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/xml http://www.mulesoft.org/schema/mule/xml/current/mule-xml.xsd
http://www.mulesoft.org/schema/mule/ee/tracking
http://www.mulesoft.org/schema/mule/ee/tracking/current/mule-tracking-ee.xsd">
<mulexml:dom-to-xml-transformer name="domToXml"/>
<flow name="SplitterFlow1" doc:name="SplitterFlow1">
<file:inbound-endpoint path="D:\WORK\Input"
moveToDirectory="D:\WORK\Output"
responseTimeout="10000" doc:name="File" fileAge="200" encoding="UTF-8"/>
<byte-array-to-string-transformer doc:name="Byte Array to String" />
<splitter evaluator="xpath" expression="/Invoices/invoice"
doc:name="Splitter"/>
<transformer ref="domToXml" doc:name="Transformer Reference"/>
<tracking:custom-event event-name="Invoice ID" doc:name="Custom Business event">
</tracking:custom-event>
<logger level="INFO" doc:name="Logger"/>
<file:outbound-endpoint path="D:\WORK\Output"
outputPattern="#[function:dateStamp:dd-MM-yyyy-HH.mm.ss]-#[header:OUTBOUND:MULE_CORRELATION_SEQUENCE]"
responseTimeout="10000" doc:name="File"></file:outbound-endpoint>
</flow>
</mule>
Please advise how I can do this in the Mule flow. Thank you in advance.
It's a pretty old post but here is my contribution.
In addition to the Java transformer approach suggested by @alexander-shapkin, I strongly recommend using Apache Commons' org.apache.commons.io.BOMInputStream to handle the BOM marker out of the box. The code would look something like this:
import java.io.InputStream;
import org.apache.commons.io.ByteOrderMark;
import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.BOMInputStream;
import org.mule.api.MuleMessage;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractMessageTransformer;

public class DeleteBOM extends AbstractMessageTransformer {

    @Override
    public Object transformMessage(MuleMessage message, String outputEncoding)
            throws TransformerException {
        try (InputStream in = new BOMInputStream(IOUtils.toInputStream(message.getPayloadAsString()), ByteOrderMark.UTF_8)) {
            return IOUtils.toString(in);
        } catch (Exception e) {
            throw new RuntimeException("Could not remove BOM marker");
        }
    }
}
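If you want to sanity-check the BOM stripping outside Mule first, a throwaway test along these lines should work (the class name and sample payload are made up for illustration):

import java.io.InputStream;
import org.apache.commons.io.ByteOrderMark;
import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.BOMInputStream;

public class BomCheck {
    public static void main(String[] args) throws Exception {
        // a sample payload that starts with a UTF-8 BOM, as your input files do
        String withBom = "\uFEFF<Invoices><invoice/></Invoices>";
        try (InputStream in = new BOMInputStream(
                IOUtils.toInputStream(withBom, "UTF-8"), ByteOrderMark.UTF_8)) {
            // prints true: the BOM has been stripped and the payload starts with the root element
            System.out.println(IOUtils.toString(in, "UTF-8").startsWith("<Invoices"));
        }
    }
}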
I partially reproduced your Mule app with the following configuration:
<file:connector name="File" autoDelete="false" streaming="true" validateConnections="true" doc:name="File" />
<mulexml:dom-to-xml-transformer name="DOM_to_XML" doc:name="DOM to XML"/>
<flow name="lalaFlow">
<file:inbound-endpoint path="D:\WORK\Input" moveToDirectory="D:\WORK\Output" responseTimeout="10000" doc:name="File" fileAge="200" encoding="UTF-8"/>
<component class="org.mule.bom.DeleteBOM" doc:name="Java"/>
<transformer ref="DOM_to_XML" doc:name="Transformer Reference"/>
...
</flow>
For further reference, go to https://commons.apache.org/proper/commons-io/javadocs/api-2.2/org/apache/commons/io/input/BOMInputStream.html
You can add a Java transformer with the following class before the splitter:
package importxmltoapis;

import org.mule.api.MuleMessage;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractMessageTransformer;

public class DeleteBOM extends AbstractMessageTransformer {

    public static final String BOM = "\uFEFF";

    @Override
    public Object transformMessage(MuleMessage message, String outputEncoding)
            throws TransformerException {
        String s = "";
        try {
            s = removeBOM(message.getPayloadAsString());
        } catch (Exception e) {
            e.printStackTrace();
        }
        return s;
    }

    private static String removeBOM(String s) {
        if (s.startsWith(BOM)) {
            s = s.substring(1);
        }
        return s;
    }
}
Try the following:
1. Use the File to String transformer instead of the Byte Array to String transformer.
2. Check whether your big XML is read completely; if not, use the fileAge property of the file endpoint, which will let you read the large file completely.

Jena Ontology API: how to retrieve an axiom that attaches an annotation to a class property relation

I have annotated the relationship described below:
TV subClassOf: Restriction {hasFeature some PowerConsumption} ::: #isNegative=true.
The TV class has an object property called hasFeature with values in the class PowerConsumption, and the annotation is applied to this property relation. The following axiom is added to the OWL file to represent the annotation. How can I retrieve this axiom and get the annotation value of isNegative using Jena?
<owl:Axiom>
<isNegative>true</isNegative>
<owl:annotatedSource rdf:resource="&product_ontolology;TV"/>
<owl:annotatedProperty rdf:resource="&rdfs;subClassOf"/>
<owl:annotatedTarget>
<owl:Restriction>
<owl:onProperty rdf:resource="&product_ontolology;hasFeature"/>
<owl:someValuesFrom rdf:resource="&product_ontolology;PowerConsumption"/>
</owl:Restriction>
</owl:annotatedTarget>
</owl:Axiom>
Jena is an RDF-centric API, although it provides some abstraction in the form of OntModel. Even so, OntModels don't provide a convenient way to access axioms and annotate them. You might have better luck using a more OWL-centric API, such as the aptly named OWL API.
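For illustration, a rough OWL API sketch (assuming OWL API 3.x and the ontology saved locally as products.owl; this is an alternative approach, not part of the Jena solution below) might look like:

import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.AxiomType;
import org.semanticweb.owlapi.model.OWLAnnotation;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.model.OWLSubClassOfAxiom;

public class OwlApiAnnotationExample {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("products.owl"));
        // in the OWL API, annotated axioms carry their annotations directly
        for (OWLSubClassOfAxiom axiom : ontology.getAxioms(AxiomType.SUBCLASS_OF)) {
            for (OWLAnnotation annotation : axiom.getAnnotations()) {
                System.out.println(annotation.getProperty() + " = " + annotation.getValue());
            }
        }
    }
}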
Nonetheless, OWL can be serialized as RDF, and while there may be pitfalls (because there might be variation in the way that an OWL ontology can be serialized into RDF), you can probably get the sort of results you want. Here's Java code that loads a small portion of your ontology, finds the owl:Axioms inside, and determines which of their properties are annotation properties.
import java.util.HashSet;
import java.util.Set;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.ResIterator;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.vocabulary.OWL2;
import com.hp.hpl.jena.vocabulary.RDF;

public class AnnotationExample {
    /**
     * @param args
     */
    public static void main(String[] args) {
        // create the model and load the data.
        Model model = ModelFactory.createDefaultModel().read( "products.owl" );

        // owlAnnotationProperties are the properties used to represent
        // annotated axioms in RDF/XML.
        Set<Property> owlAnnotationProperties = new HashSet<Property>() {{
            add( RDF.type );
            add( OWL2.annotatedProperty );
            add( OWL2.annotatedSource );
            add( OWL2.annotatedTarget );
        }};

        // Find the axioms in the model. For each axiom, iterate through its
        // properties, looking for those that are *not* used for encoding the
        // annotated axiom. Those that are left are the annotations.
        ResIterator axioms = model.listSubjectsWithProperty( RDF.type, OWL2.Axiom );
        while ( axioms.hasNext() ) {
            Resource axiom = axioms.next();
            StmtIterator stmts = axiom.listProperties();
            while ( stmts.hasNext() ) {
                Statement stmt = stmts.next();
                if ( !owlAnnotationProperties.contains( stmt.getPredicate() )) {
                    System.out.println( stmt );
                }
            }
        }
    }
}
The output shows the statement that you are interested in.
[630c9cd5:13f7b69db3c:-7ffe, http://www.example.com/products#isNegative, "true"^^http://www.w3.org/2001/XMLSchema#boolean]
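If you only want the annotation value itself rather than the whole statement, a small variant of the loop above (reusing the same model, and taking the isNegative property URI from the output) can read it directly:

// sketch: read just the isNegative value from each annotated axiom
Property isNegative = model.createProperty( "http://www.example.com/products#isNegative" );
ResIterator annotated = model.listSubjectsWithProperty( RDF.type, OWL2.Axiom );
while ( annotated.hasNext() ) {
    Statement stmt = annotated.next().getProperty( isNegative );
    if ( stmt != null ) {
        System.out.println( "isNegative = " + stmt.getBoolean() );
    }
}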
Here's the small OWL ontology I used:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:products="http://www.example.com/products#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
<owl:Ontology rdf:about="http://www.example.com/products"/>
<owl:Class rdf:about="http://www.example.com/products#TV">
<rdfs:subClassOf>
<owl:Restriction>
<owl:someValuesFrom>
<owl:Class rdf:about="http://www.example.com/products#PowerConsumption"/>
</owl:someValuesFrom>
<owl:onProperty>
<owl:ObjectProperty rdf:about="http://www.example.com/products#hasFeature"/>
</owl:onProperty>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<owl:AnnotationProperty rdf:about="http://www.example.com/products#isNegative"/>
<owl:Axiom>
<owl:annotatedTarget>
<owl:Restriction>
<owl:someValuesFrom rdf:resource="http://www.example.com/products#PowerConsumption"/>
<owl:onProperty rdf:resource="http://www.example.com/products#hasFeature"/>
</owl:Restriction>
</owl:annotatedTarget>
<owl:annotatedProperty rdf:resource="http://www.w3.org/2000/01/rdf-schema#subClassOf"/>
<owl:annotatedSource rdf:resource="http://www.example.com/products#TV"/>
<products:isNegative rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean"
>true</products:isNegative>
</owl:Axiom>
</rdf:RDF>

ActionScript: Correctly casting a bitmap to a class

I have a Tree instance calling an iconFunction, "getIconFromItem".
<mx:Tree dataProvider="{collection}" iconFunction="getIconFromItem" />
The getIconFromItem function is returning null even though the bitmap is not null.
public function getIconFromItem(item:Object):Class {
var result:Class = item.icon as Class ;
return result ;
}
Setting a breakpoint on the return result line reveals that item.icon is a Bitmap and result is null.
Any ideas or pointers on how to successfully cast a bitmap to a Class so that the bitmap is returned as an icon?
Cheers
Keith
The problem here is that item.icon is of type Bitmap, which does not extend Class but Object. Whenever you cast to a type that is not in the object's type hierarchy, null is returned.
You want to return the class of the icon, which will be instantiated by the tree control, not the icon instance itself, so you should change your function to:
public function getIconFromItem(item:Object):Class {
return item.icon.constructor as Class;
}
The issue was that iconFunction expects a class. Each embedded image has a subclass automatically generated by the compiler. Thanks to @weltraumpirat for pointing me in the right direction :-)
The goal was to dynamically display icons in a tree. It is possible to modify the class created by Ben Stucki (http://blog.benstucki.net/?p=42) to work with the TreeItemRenderer.data object, but this failed when I created a custom MXTreeItemRenderer.
I ended up creating a BitmapImage and a custom TreeItemRenderer that uses a predefined iconBitmap attribute within a class to dynamically load icons.
package components
{
import flash.display.BitmapData;
import mx.core.mx_internal;
import spark.core.IContentLoader;
import spark.primitives.BitmapImage;
public class RuntimeBitmapImage extends BitmapImage
{
public function RuntimeBitmapImage()
{
super();
}
public function set bitmapData(bitmapData:BitmapData):void
{
super.setBitmapData(bitmapData, false);
}
}
}
<?xml version="1.0" encoding="utf-8"?>
<s:MXTreeItemRenderer xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:mx="library://ns.adobe.com/flex/mx" >
<fx:Script>
<![CDATA[
override public function set data(value:Object):void {
super.data = value;
if ( value != null ) {
if ( value.iconBitmap != null ) {
runtimeBitmapImage.bitmapData = value.iconBitmap.bitmapData ;
}
}
}
]]>
</fx:Script>
<s:states>
<s:State name="normal" />
<s:State name="hovered" />
<s:State name="selected" />
</s:states>
<s:HGroup left="0" right="0" top="0" bottom="0" paddingTop="2" verticalAlign="middle">
<s:Rect id="indentationSpacer" width="{treeListData.indent}" percentHeight="100" alpha="0">
<s:fill>
<s:SolidColor color="0xFFFFFF" />
</s:fill>
</s:Rect>
<potentiate:RuntimeBitmapImage id="runtimeBitmapImage" left="2" width="18" />
<s:Label id="labelField" text="{treeListData.label}" paddingTop="2"/>
</s:HGroup>
</s:MXTreeItemRenderer>
