I want to expose my MXBeans on Apache-Tomcat 7.0.
Though my MXBean registers successfully, I am unable to add descriptions to the operations exposed by those MXBeans.
Registering MXBeans
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName m_mxbeanOName = new ObjectName( "MyMXBean:type=" + "MyComponent"+",name=MyMXBean");
MyMXBean m_mxbean = new MyMXBeanImpl();
if(!mbs.isRegistered(m_mxbeanOName))
mbs.registerMBean(m_mxbean, m_mxbeanOName);
MyMXBean Interface
public interface MyMXBean {
public int add (int x, int y);
}
MyMXBean Implementation
import com.sun.org.glassfish.gmbal.Description;
import com.sun.org.glassfish.gmbal.DescriptorFields;
import com.sun.org.glassfish.gmbal.Impact;
import com.sun.org.glassfish.gmbal.ManagedOperation;
public class MyMXBeanImpl implements MyMXBean {
@ManagedOperation(impact=Impact.ACTION_INFO)
@Description("Integer Addition: First parameter is the augend and second parameter is the addend.")
@DescriptorFields({"p1=augend","p2=addend"})
public int add(int x, int y) {
return x + y;
}
}
The annotations @ManagedOperation, @Description, and @DescriptorFields have no effect in JConsole; it continues to show the default values.
Please tell me how to show descriptions for my MXBean operations in JConsole.
The cleanest way I have found to do this is to use StandardMBean (or StandardEmitterMBean) as the actual object you register with JMX. Then, subclass StandardMBean and override the various getDescription methods. In those methods, read your annotations that contain the descriptions.
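A minimal sketch of that approach, using only the JDK's javax.management classes. The OperationDescription annotation and all class names here are made up for illustration (gmbal's @Description is not read by the platform MBean server): subclass StandardMBean, override getDescription(MBeanOperationInfo), and resolve the text from an annotation on the implementation class.

```java
import java.lang.annotation.*;
import java.lang.management.ManagementFactory;
import java.lang.reflect.Method;
import javax.management.*;

// hypothetical annotation carrying an operation description (not part of the JDK)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface OperationDescription {
    String value();
}

interface CalcMXBean {
    int add(int x, int y);
}

class Calc implements CalcMXBean {
    @OperationDescription("Adds two integers.")
    public int add(int x, int y) { return x + y; }
}

// StandardMBean subclass that pulls descriptions from annotations on the implementation
class DescribedMBean extends StandardMBean {
    private final Object impl;

    <T> DescribedMBean(T impl, Class<T> mbeanInterface) {
        super(impl, mbeanInterface, true); // true = expose as an MXBean
        this.impl = impl;
    }

    @Override
    protected String getDescription(MBeanOperationInfo op) {
        for (Method m : impl.getClass().getMethods()) {
            OperationDescription d = m.getAnnotation(OperationDescription.class);
            if (d != null && m.getName().equals(op.getName())) {
                return d.value();
            }
        }
        return super.getDescription(op);
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example:type=Calc");
        mbs.registerMBean(new DescribedMBean(new Calc(), CalcMXBean.class), name);
        // JConsole reads the same MBeanInfo, so it will show this description
        for (MBeanOperationInfo op : mbs.getMBeanInfo(name).getOperations()) {
            System.out.println(op.getName() + ": " + op.getDescription());
        }
    }
}
```

StandardMBean also has getDescription overloads for attributes, parameters, and the MBean itself, so the same pattern covers those.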
I found this very nice blog entry with code for @Description and @Name annotations and an AnnotatedStandardMXBean wrapper that handles this.
http://actimem.com/java/jmx-annotations/
Sample MXBean using this:
@MXBean
@Description("A test resource")
public interface SampleMXBean {
@Description("string#1")
String getString1();
@Description("string#2")
String getString2();
@Description("string#3")
String string3(@Description("int i") @Name("i") int i, @Description("long j") @Name("j") long j);
}
Related
I am using Spring Data Neo4j RX. And I have a query like this:
@Query("MATCH (a:Repo)-[:REPO_DEPEND_ON]->(b:Repo) WHERE a.name= $name RETURN a.name, b.name")
String[] getSingleRepoDependencyTo(String name);
I know the return type is wrong here, since it cannot be a String array. But how can I properly get the result, which contains two fields?
I have searched online for a long time but cannot find an answer. The @QueryResult annotation is not supported in this RX version yet.
Thanks for your help.
Assuming that you have a mapped @Node Repo with its relationships like
@Node
public class Repo {
// other things
String name;
@Relationship("REPO_DEPEND_ON") Repo repo;
}
and defining this method in a ...extends Neo4jRepository<Repo,...>,
you could use Projections.
public interface RepoProjection {
String getName();
DependingRepo getRepo();
/**
* nested projection
*/
interface DependingRepo {
String getName();
}
}
Keep in mind that the query has to return the nodes and the relationship for this to work.
You could also remove the custom query and do something like:
RepoProjection findByName(String name);
if you do not have the need for a findByName in this repository for the entity itself.
Take a look here: https://neo4j.github.io/sdn-rx/current/#projections.interfaces
It seems to describe exactly what you want. From those docs:
interface NamesOnly {
String getFirstName();
String getLastName();
}
interface PersonRepository extends Neo4jRepository<Person, Long> {
List<NamesOnly> findByFirstName(String firstName);
}
There are some other variations too.
You can use the @QueryResult annotation on your expected model. For instance, you can do it this way.
DTO:
import org.springframework.data.neo4j.annotation.QueryResult;
@QueryResult
public class SomeDto {
private int someInt;
private SomeObject sobj;
private double sdouble;
private AnotherObject anObj;
//getters setters
}
Neo4jRepository:
public interface DomainObjectRepository extends Neo4jRepository<DomainObject, Long> {
@Query("MATCH(n:SomeTable) RETURN someInt, sobj, sdouble, anObj") // return a few columns
Optional<SomeDto> getSomeDto();
}
I'm from a VB.NET background and want to learn ASP.NET MVC.
For example, the function below: how do I create it in a global folder/class file, and how do I then call and use it in a controller?
Function pRound(Number ,NumDigits)
Dim dblPower, vPSTEmp, intSgn
dblPower = 10 ^ NumDigits
vPSTEmp = CDbl(Number * dblPower + 0.5)
pRound = Int(vPSTEmp) / dblPower
End Function
For VB I just add <!--#include file="include/function.asp"-->
and can then use it like pRound(number, 4).
Please teach me how to do it. Thanks a lot.
You could add a new class file to your solution and make it a static class:
namespace ProjectName.Functions
{
public static class Utility
{
public static float pRound(float number, int digits){
// port of the VBScript logic: floor(number * 10^digits + 0.5) / 10^digits
double power = System.Math.Pow(10, digits);
return (float)(System.Math.Floor(number * power + 0.5) / power);
}
}
}
Then in your controller, since a static class never needs to be instantiated, you can just call it:
using ProjectName.Functions;
public ActionResult Test()
{
// call Utility.pRound(), no need to instantiate the class
float round = Utility.pRound(1, 1);
return View();
}
I've edited Jerdine Saibo's answer. Until the edit gets approved, here's the updated code (the pRound method needs to be static):
namespace ProjectName.Functions
{
public static class Utility
{
public static float pRound(float number, int digits){
// port of the VBScript logic: floor(number * 10^digits + 0.5) / 10^digits
double power = System.Math.Pow(10, digits);
return (float)(System.Math.Floor(number * power + 0.5) / power);
}
}
}
You can make a wrapper class and call it from the controller. Another option is to create a base controller that all your other controllers inherit from; then you can call base.pRound(1,1).
There is a nameof operator in C#; it allows getting a property name at compile time:
var name = nameof(User.email);
Console.WriteLine(name);
//Prints: email
It is not possible to use reflection in Flutter, and I do not want to hardcode the names of properties, e.g. to be used for querying SQLite tables. Is there any workaround?
Currently I'm using the built_value library.
For the archives, I guess, this isn't possible as Dart doesn't store the names of variables after compiling, and as you mentioned, Flutter doesn't support reflection.
But you can still hardcode responsibly by grouping your properties as part of the object that they belong to, like with JSON:
class User {
final String email;
final String name;
const User({required this.email, required this.name});
Map toJson() => {
"email": email,
"name": name,
};
}
Instead of remembering to type out "email" and "name" whenever you use User, just call User.toJson(). Then, when you want to rename your variables, you can use your IDE's "rename all", or just skim over your User class to quickly change all of the names without missing any.
I'm currently monitoring progress on the dart:mirrors package, which offers some neat reflective properties and methods, though I haven't found a simple way to just get the name of a symbol like nameof() does.
Example:
import 'dart:mirrors';
class User {
final String email;
final String name;
const User({required this.email, required this.name});
}
void main() {
reflectClass(User).declarations.forEach((key, value) {
print(value.simpleName);
});
}
Output:
Symbol("email")
Symbol("name")
Symbol("User")
These are of type Symbol.
More here: https://api.dart.dev/stable/2.4.0/dart-mirrors/dart-mirrors-library.html
So, until they develop a nameof, I've created an extension on symbol:
extension SymbolExtensions on Symbol {
String get nameof =>
RegExp(r'"(.*?)"').firstMatch(toString())!.group(1)!;
}
So, you could do:
print(reflectClass(User)
.declarations[#email]!
.simpleName
.nameof);
Output:
email
It's a start. Dart is still growing.
You can use code generation.
The basic idea is to create a nameof annotation class and mark parts of your code with it. You also need to create a code generator that handles your annotated code. Look at the json_serializable package for an example and create your own code generator.
If you do not want to create your own generator, use a ready-made package nameof: https://pub.dev/packages/nameof
Short how-to with this package.
Mark your class with nameof annotation.
@nameof
class Car {
final double price;
final double weigth;
final int year;
final String model;
Car(this.price, this.weigth, this.year, this.model);
Car.sedan(double price, double weigth, int year)
: this(price, weigth, year, 'Sedan');
}
Run the code generator.
flutter pub run build_runner build
Then use the generated class, which will look something like this.
/// Container for names of elements belonging to the [Car] class
abstract class NameofCar {
static const String className = 'Car';
static const String constructor = '';
static const String constructorSedan = 'sedan';
static const String fieldPrice = 'price';
static const String fieldWeigth = 'weigth';
static const String fieldYear = 'year';
static const String fieldModel = 'model';
}
You can implement your own nameOf function:
String? nameOf(dynamic o) {
if (o == null) return "null";
try {
if (o is List) {
var first = o.isNotEmpty ? o[0] : null;
if (first != null) {
var elementType = nameOf(first)!;
if (!isMinified(elementType)) return "List<$elementType>";
}
} else {
// works for objects that expose a getTypeName() method
Function? getTypeName = o.getTypeName;
if (getTypeName != null) return getTypeName();
}
} catch (e) {
// ignore and fall back to o.runtimeType
}
return o.runtimeType.toString();
}
bool isMinified(String type) => type.startsWith("minified:");
I have many large unpartitioned BigQuery tables and files that I would like to partition in various ways, so I decided to try writing a Dataflow job to achieve this. The job, I think, is simple enough. I tried to write it with generics so that I could easily apply it to both TextIO and BigQueryIO sources. It works fine with small tables, but I keep getting java.lang.OutOfMemoryError: Java heap space when I run it on large tables.
In my main class I either read a file with target keys (made with another DF job) or run a query against a BigQuery table to get a list of keys to shard by. My main class looks like this:
Pipeline sharder = Pipeline.create(opts);
// a functional interface that shows the tag map how to get a tuple tag
KeySelector<String, TableRow> bqSelector = (TableRow row) -> (String) row.get("COLUMN") != null ? (String) row.get("COLUMN") : "null";
// a utility class to store a tuple tag list and hash map of String TupleTag
TupleTagMap<String, TableRow> bqTags = new TupleTagMap<>(new ArrayList<>(inputKeys),bqSelector);
// custom transform
ShardedTransform<String, TableRow> bqShard = new ShardedTransform<String, TableRow>(bqTags, TableRowJsonCoder.of());
String source = "PROJECTID:ADATASET.A_BIG_TABLE";
String destBase = "projectid:dataset.a_big_table_sharded_";
TableSchema schema = bq.tables().get("PROJECTID","ADATASET","A_BIG_TABLE").execute().getSchema();
PCollectionList<TableRow> shards = sharder.apply(BigQueryIO.Read.from(source)).apply(bqShard);
for (PCollection<TableRow> shard : shards.getAll()) {
String shardName = StringUtils.isNotEmpty(shard.getName()) ? shard.getName() : "NULL";
shard.apply(BigQueryIO.Write.to(destBase + shardName)
.withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
.withSchema(schema));
System.out.println(destBase+shardName);
}
sharder.run();
I generate a set of TupleTags to use in a custom transform. I created a utility class that stores a TupleTagList and HashMap so that I can reference the tuple tags by key:
public class TupleTagMap<Key, Type> implements Serializable {
private static final long serialVersionUID = -8762959703864266959L;
final private TupleTagList tList;
final private Map<Key, TupleTag<Type>> map;
final private KeySelector<Key, Type> selector;
public TupleTagMap(List<Key> t, KeySelector<Key, Type> selector) {
map = new HashMap<>();
for (Key key : t)
map.put(key, new TupleTag<Type>());
this.tList = TupleTagList.of(new ArrayList<>(map.values()));
this.selector = selector;
}
public Map<Key, TupleTag<Type>> getMap() {
return map;
}
public TupleTagList getTagList() {
return tList;
}
public TupleTag<Type> getTag(Type t){
return map.get(selector.getKey(t));
}
}
Then I have this custom transform, which basically has a function that uses the tuple map to produce a PCollectionTuple and then moves it into a PCollectionList to return to the main class:
public class ShardedTransform<Key, Type> extends
PTransform<PCollection<Type>, PCollectionList<Type>> {
private static final long serialVersionUID = 3320626732803297323L;
private final TupleTagMap<Key, Type> tags;
private final Coder<Type> coder;
public ShardedTransform(TupleTagMap<Key, Type> tags, Coder<Type> coder) {
this.tags = tags;
this.coder = coder;
}
@Override
public PCollectionList<Type> apply(PCollection<Type> in) {
PCollectionTuple shards = in.apply(ParDo.of(
new ShardFn<Key, Type>(tags)).withOutputTags(
new TupleTag<Type>(), tags.getTagList()));
List<PCollection<Type>> shardList = new ArrayList<>(tags.getMap().size());
for (Entry<Key, TupleTag<Type>> e : tags.getMap().entrySet()){
PCollection<Type> shard = shards.get(e.getValue()).setName(e.getKey().toString()).setCoder(coder);
shardList.add(shard);
}
return PCollectionList.of(shardList);
}
}
The actual DoFn is dead simple; it just uses the lambda provided in the main class to find the matching tuple tag in the hash map for side output:
public class ShardFn<Key, Type> extends DoFn<Type, Type> {
private static final long serialVersionUID = 961325260858465105L;
private final TupleTagMap<Key, Type> tags;
ShardFn(TupleTagMap<Key, Type> tags) {
this.tags = tags;
}
@Override
public void processElement(DoFn<Type, Type>.ProcessContext c)
throws Exception {
Type element = c.element();
TupleTag<Type> tag = tags.getTag(element);
if (tag != null)
c.sideOutput(tags.getTag(element), element);
}
}
The Beam model doesn't have good support for dynamic partitioning / large numbers of partitions right now. Your approach chooses the number of shards at graph construction time, and the resulting ParDos likely all fuse together, so each worker ends up trying to write to 80 different BQ tables at the same time. Each write requires some local buffering, so it's probably just too much.
There's an alternate approach which will do the parallelization across tables (but not across elements). This would work well if you have a large number of relatively small output tables. Use a ParDo to tag each element with the table it should go to and then do a GroupByKey. This gives you a PCollection<KV<Table, Iterable<ElementsForThatTable>>>. Then process each KV<Table, Iterable<ElementsForThatTable>> by writing the elements to the table.
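The ParDo-plus-GroupByKey shape can be sketched with plain Java collections. This is only an illustration of the grouping step, not runnable Dataflow code; the GroupByTableSketch class name is made up, and the table prefix and COLUMN key are borrowed from the question.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByTableSketch {
    // the ParDo step: derive the destination table from the row's key column;
    // grouping then yields the KV<Table, Iterable<ElementsForThatTable>> shape
    static Map<String, List<Map<String, String>>> groupByTable(List<Map<String, String>> rows) {
        return rows.stream()
            .collect(Collectors.groupingBy(
                r -> "a_big_table_sharded_" + r.getOrDefault("COLUMN", "null")));
    }

    public static void main(String[] args) {
        List<Map<String, String>> rows = List.of(
            Map.of("COLUMN", "a", "value", "1"),
            Map.of("COLUMN", "b", "value", "2"),
            Map.of("COLUMN", "a", "value", "3"));
        // in the real pipeline, each group would then be written to its own table
        groupByTable(rows).forEach((table, elems) ->
            System.out.println(table + " -> " + elems.size() + " rows"));
    }
}
```

The point of doing this inside the pipeline is that each grouped entry is processed by one worker, so no single worker has to hold buffers for every destination table at once.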
Unfortunately, for now you'll have to do the BQ write by hand to use this option. We're looking at extending the Sink APIs with built-in support for this. Since the Dataflow SDK is being further developed as part of Apache Beam, we're tracking the request here: https://issues.apache.org/jira/browse/BEAM-92
I want to have a Java class to bind to this relationship:
Vertex - Relationship - Vertex
(a:Clause)-[r:HasClause]-(b:Clause)
The problem is that the edge of class "HasClause" should have a property called "alias"; I don't know how I should annotate the class to handle that automatically:
@JsonDeserialize(as = Clause.class)
public interface IClause extends VertexFrame {
@Property("nodeClass")
public String getNodeClass();
@Property("nodeClass")
public void setNodeClass(String str);
/* that would be a property on the Vertex not on the Edge
@Property("alias")
public void setAlias(String id);
@Property("alias")
public String getAlias();
*/
@Adjacency(label = "HasClause", direction = Direction.OUT)
public Iterable<IClause> getClauses();
@Adjacency(label = "HasClause", direction = Direction.OUT)
public void setClauses(Iterable<IClause> clauses);
}
Thanks
I don't know if there's a way you can do this using the @Adjacency annotation (I can't see any way).
One way you could do this is by using a @JavaHandlerClass. This basically allows you to customise the implementation of your Frame's methods. In the following example, we'll join two vertices and add a custom property 'alias' to the edge.
Just to make things easier, I'll use the same classes from your other question - Why simple set and then get on Dynamic Proxy does not persist? (using TinkerPop Frames JavaHandler)
IVert
@JavaHandlerClass(Vert.class)
public interface IVert extends VertexFrame {
@JavaHandler
public void setTestVar(IVert vert);
}
Vert
abstract class Vert implements JavaHandlerContext<Vertex>, IVert {
public void setTestVar(IVert testVar){
Edge edge = asVertex().addEdge('foobar', testVar.asVertex())
edge.setProperty('alias', 'chickens')
}
}
Main method (Groovy)
IVert vert = framedGraph.addVertex('myuniqueid', IVert)
IVert vert2 = framedGraph.addVertex('myuniqueid2', IVert)
vert.setTestVar(vert2)
Edge e = g.getVertex('myuniqueid').getEdges(Direction.BOTH, 'foobar').iterator().next()
assert e.getProperty('alias') == 'chickens'