I am using multi-GPU training with PyTorch Lightning. The output below displays the model summary:
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
┏━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ ┃ Name ┃ Type ┃ Params ┃
┡━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 0 │ encoder │ Encoder │ 2.0 M │
│ 1 │ classifier │ Sequential │ 8.8 K │
│ 2 │ criterion │ BCEWithLogitsLoss │ 0 │
│ 3 │ train_acc │ Accuracy │ 0 │
│ 4 │ val_acc │ Accuracy │ 0 │
│ 5 │ train_auc │ AUROC │ 0 │
│ 6 │ val_auc │ AUROC │ 0 │
│ 7 │ train_f1 │ F1Score │ 0 │
│ 8 │ val_f1 │ F1Score │ 0 │
│ 9 │ train_mcc │ MatthewsCorrCoef │ 0 │
│ 10 │ val_mcc │ MatthewsCorrCoef │ 0 │
│ 11 │ train_sens │ Recall │ 0 │
│ 12 │ val_sens │ Recall │ 0 │
│ 13 │ train_spec │ Specificity │ 0 │
│ 14 │ val_spec │ Specificity │ 0 │
└────┴────────────┴───────────────────┴────────┘
Trainable params: 2.0 M
Non-trainable params: 0
I have set the encoder to be untrainable using the code below:
ckpt = torch.load(chk_path)
self.encoder.load_state_dict(ckpt['state_dict'])
self.encoder.requires_grad = False
Shouldn't trainable params be 8.8 K rather than 2.0 M?
My optimizer is the following:
optimizer = torch.optim.RMSprop(filter(lambda p: p.requires_grad, self.parameters()), lr=self.lr, weight_decay=self.weight_decay)
self.encoder.requires_grad = False doesn't do anything; in fact, torch Modules don't have a requires_grad flag.
What you should do instead is use the requires_grad_ method (note the trailing underscore), which will set requires_grad for all the parameters of this module to the desired value:
self.encoder.requires_grad_(False)
as described here: https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.requires_grad_
Alternatively, you can set requires_grad = False for each encoder parameter one by one:
for param in self.encoder.parameters():
    param.requires_grad = False
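Putting the two answers together, here is a minimal sketch of a LightningModule with a frozen encoder and a filtered optimizer. The class name, constructor arguments, and classifier layer sizes are made up for illustration; only the freezing and filtering pattern is the point:

import torch
import pytorch_lightning as pl

class FrozenEncoderClassifier(pl.LightningModule):
    def __init__(self, encoder, chk_path, lr=1e-3, weight_decay=0.0):
        super().__init__()
        self.encoder = encoder
        ckpt = torch.load(chk_path)
        self.encoder.load_state_dict(ckpt['state_dict'])
        # Freeze every parameter of the encoder, not just the module attribute
        self.encoder.requires_grad_(False)
        # Hypothetical head; sizes chosen only for illustration
        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
        self.lr = lr
        self.weight_decay = weight_decay

    def configure_optimizers(self):
        # Only parameters that still require grad reach the optimizer
        return torch.optim.RMSprop(
            (p for p in self.parameters() if p.requires_grad),
            lr=self.lr, weight_decay=self.weight_decay)

With this, the model summary should report the encoder's 2.0 M parameters as non-trainable and only the classifier's 8.8 K as trainable.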
Notice that if you execute the following piece of code:
import torch
from torch.nn import functional as F
from pytorch_lightning import LightningModule
from pytorch_lightning.utilities.model_summary import ModelSummary

class MNISTModel(LightningModule):
    def __init__(self):
        super().__init__()
        # three layers so the l2 demo below makes sense; hidden sizes are illustrative
        self.l1 = torch.nn.Linear(28 * 28, 1570)
        self.l2 = torch.nn.Linear(1570, 1567)
        self.l3 = torch.nn.Linear(1567, 10)
    def forward(self, x):
        x = torch.relu(self.l1(x.view(x.size(0), -1)))
        x = torch.relu(self.l2(x))
        return self.l3(x)
    def training_step(self, batch, batch_nb):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return loss
    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)
mnist_model = MNISTModel()
mnist_model.l2.requires_grad = False
print(mnist_model.l2.weight.requires_grad)
print(mnist_model.l2.bias.requires_grad)
ModelSummary(mnist_model)
You will get:
True
True
| Name | Type | Params
--------------------------------
0 | l1 | Linear | 1.2 M
1 | l2 | Linear | 2.5 M
2 | l3 | Linear | 15.7 K
--------------------------------
3.7 M Trainable params
0 Non-trainable params
3.7 M Total params
14.827 Total estimated model params size (MB)
which means that this is actually not deactivating requires_grad for the parameters in that layer. So you have two options, according to the autograd notes (https://pytorch.org/docs/stable/notes/autograd.html#setting-requires-grad):
Apply .requires_grad_() to the module, as suggested by burzam above (the more correct option):
mnist_model = MNISTModel()
mnist_model.l2.requires_grad_(False)
ModelSummary(mnist_model)
| Name | Type | Params
--------------------------------
0 | l1 | Linear | 1.2 M
1 | l2 | Linear | 2.5 M
2 | l3 | Linear | 15.7 K
--------------------------------
1.2 M Trainable params
2.5 M Non-trainable params
3.7 M Total params
14.827 Total estimated model params size (MB)
Or loop through the parameters of the module:
mnist_model = MNISTModel()
for param in mnist_model.l2.parameters():
param.requires_grad = False
ModelSummary(mnist_model)
you will see:
| Name | Type | Params
--------------------------------
0 | l1 | Linear | 1.2 M
1 | l2 | Linear | 2.5 M
2 | l3 | Linear | 15.7 K
--------------------------------
1.2 M Trainable params
2.5 M Non-trainable params
3.7 M Total params
14.827 Total estimated model params size (MB)
In short: you need to set requires_grad to False for all the parameters of the specific layers you want to deactivate.
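As a quick sanity check before training, you can also count the parameters yourself; this is plain PyTorch, nothing Lightning-specific assumed:

frozen = sum(p.numel() for p in mnist_model.l2.parameters() if not p.requires_grad)
trainable = sum(p.numel() for p in mnist_model.parameters() if p.requires_grad)
print(f"frozen: {frozen:,}  trainable: {trainable:,}")

If the freeze worked, the frozen count matches l2's 2.5 M parameters.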
I'm sorry if I'm missing something, but I don't understand why this doesn't work:
using DataFrames, MLJ
julia> df = DataFrame(A = 1:4, B = ["M", "F", "F", "M"])
4×2 DataFrame
│ Row │ A │ B │
│ │ Int64 │ String │
├─────┼───────┼────────┤
│ 1 │ 1 │ M │
│ 2 │ 2 │ F │
│ 3 │ 3 │ F │
│ 4 │ 4 │ M │
julia> hot_model = OneHotEncoder()
julia> hot = machine(hot_model, df)
julia> fit!(hot)
julia> Xt = MLJ.transform(hot, df)
Xt is exactly the same as df; it didn't transform the columns.
I tried specifying the features in OneHotEncoder(), but that doesn't change anything.
I also saw that you can wrap it in a pipeline and fit everything only at the end with the model, but it should work like this too, no? Is it maybe because of the type of the columns? What scitype should they be? Categorical? How can I change them to that?
Yes, you will need to change the scitypes of the columns. You can check the scitype of each column by using schema on the data frame:
julia> schema(df)
┌─────────┬─────────┬────────────┐
│ _.names │ _.types │ _.scitypes │
├─────────┼─────────┼────────────┤
│ A │ Int64 │ Count │
│ B │ String │ Textual │
└─────────┴─────────┴────────────┘
_.nrows = 4
Here you can see that the scitype of column B is Textual, so you will need to change that to Multiclass. You can use the coerce function to change the scitypes of the columns. Note that in MLJ integer columns are interpreted as count data, so you will also need to coerce column A if you want it to represent continuous data. The coerce method can be used as follows:
julia> coerce!(df, :A => Continuous, :B => Multiclass)
4×2 DataFrame
│ Row │ A │ B │
│ │ Float64 │ Cat… │
├─────┼─────────┼──────┤
│ 1 │ 1.0 │ M │
│ 2 │ 2.0 │ F │
│ 3 │ 3.0 │ F │
│ 4 │ 4.0 │ M │
Now the one-hot encoder will work properly.
ohe = machine(OneHotEncoder(), df)
fit!(ohe)
Xt = MLJ.transform(ohe, df)
4×3 DataFrame
│ Row │ A │ B__F │ B__M │
│ │ Float64 │ Float64 │ Float64 │
├─────┼─────────┼─────────┼─────────┤
│ 1 │ 1.0 │ 0.0 │ 1.0 │
│ 2 │ 2.0 │ 1.0 │ 0.0 │
│ 3 │ 3.0 │ 1.0 │ 0.0 │
│ 4 │ 4.0 │ 0.0 │ 1.0 │
See the section of the MLJ manual on working with categorical data for more information.
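As for the pipeline idea mentioned in the question, here is a hedged sketch, assuming a recent MLJ with the |> pipeline syntax; the coercion still has to happen before the machine sees the data:

using MLJ, DataFrames

df = DataFrame(A = 1:4, B = ["M", "F", "F", "M"])
coerce!(df, :A => Continuous, :B => Multiclass)

# standardize the Continuous column, then one-hot encode the Multiclass one
pipe = Standardizer() |> OneHotEncoder()
mach = machine(pipe, df)
fit!(mach)
Xt = MLJ.transform(mach, df)   # A (standardized) plus B__F / B__M columns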
I have a Julia TimeArray, let's say ta, and I want to build a TimeArray sub_ta by extracting some of its columns. Some months ago I used code similar to the minimal example below, but it doesn't work anymore:
import TimeSeries
import Dates
dates_index = [ Dates.Date(1970,1,day) for day in [1,2,3,4,5] ]
values = [ [1.0 2.0 3.0 4.0 5.0] ; [10.0 20.0 30.0 40.0 50.0] ; [ 100.0 200.0 300.0 400.0 500.0] ]
ta = TimeSeries.TimeArray( dates_index, transpose(values), [ :col1, :col2, :col3 ] )
sub_ta = ta[ [ :col1 , :col2 ] ]
ERROR: MethodError: no method matching getindex(::TimeSeries.TimeArray{Float64,2,Dates.Date,LinearAlgebra.Transpose{Float64,Array{Float64,2}}}, ::Array{Symbol,1})
Closest candidates are:
getindex(::TimeSeries.TimeArray, ::Integer) at /home/guilhem/.julia/packages/TimeSeries/bbwst/src/timearray.jl:259
getindex(::TimeSeries.TimeArray, ::UnitRange{#s30} where #s30<:Integer) at /home/guilhem/.julia/packages/TimeSeries/bbwst/src/timearray.jl:268
getindex(::TimeSeries.TimeArray, ::AbstractArray{#s30,1} where #s30<:Integer) at /home/guilhem/.julia/packages/TimeSeries/bbwst/src/timearray.jl:276
What seems strange to me is that the source of the TimeSeries library (in the file timearray.jl) contains a getindex method that should work for selecting several columns by name:
# array of columns by name
function getindex(ta::TimeArray, ss::Symbol...)
    ns = [findcol(ta, s) for s in ss]
    TimeArray(timestamp(ta), values(ta)[:, ns], collect(ss), meta(ta))
end
But I don't think I'm calling it the right way, probably because of the splat operator (...), which I haven't really mastered.
The problem occurs on both Julia 1.1.0 and Julia 1.3.1, with TimeSeries v0.14.0.
Finally, I think I found the solution; I was quite close:
sub_ta = ta[[:col1, :col2]...]
The best introduction I found to ..., the splat operator, is in the Julia manual (search for "..." or "splat").
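For reference, here is a tiny self-contained illustration of what splatting does; nothing TimeSeries-specific:

# splatting turns the elements of a collection into separate arguments
f(a, b) = a + b
xs = [1, 2]
f(xs...)    # equivalent to f(1, 2); returns 3

So ta[[:col1, :col2]...] is just another way of writing ta[:col1, :col2].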
What version of TimeSeries are you using?
(tmp) pkg> status
Status `/tmp/Project.toml`
[9e3dc215] TimeSeries v0.16.1
In version 0.16.1, both syntaxes that you mention seem to work:
julia> ta
5×3 TimeSeries.TimeArray{Float64,2,Dates.Date,LinearAlgebra.Transpose{Float64,Array{Float64,2}}} 1970-01-01 to 1970-01-05
│ │ col1 │ col2 │ col3 │
├────────────┼───────┼───────┼───────┤
│ 1970-01-01 │ 1.0 │ 10.0 │ 100.0 │
│ 1970-01-02 │ 2.0 │ 20.0 │ 200.0 │
│ 1970-01-03 │ 3.0 │ 30.0 │ 300.0 │
│ 1970-01-04 │ 4.0 │ 40.0 │ 400.0 │
│ 1970-01-05 │ 5.0 │ 50.0 │ 500.0 │
julia> ta[[:col1, :col2]]
5×2 TimeSeries.TimeArray{Float64,2,Dates.Date,Array{Float64,2}} 1970-01-01 to 1970-01-05
│ │ col1 │ col2 │
├────────────┼───────┼───────┤
│ 1970-01-01 │ 1.0 │ 10.0 │
│ 1970-01-02 │ 2.0 │ 20.0 │
│ 1970-01-03 │ 3.0 │ 30.0 │
│ 1970-01-04 │ 4.0 │ 40.0 │
│ 1970-01-05 │ 5.0 │ 50.0 │
julia> ta[[:col1, :col2]...]
5×2 TimeSeries.TimeArray{Float64,2,Dates.Date,Array{Float64,2}} 1970-01-01 to 1970-01-05
│ │ col1 │ col2 │
├────────────┼───────┼───────┤
│ 1970-01-01 │ 1.0 │ 10.0 │
│ 1970-01-02 │ 2.0 │ 20.0 │
│ 1970-01-03 │ 3.0 │ 30.0 │
│ 1970-01-04 │ 4.0 │ 40.0 │
│ 1970-01-05 │ 5.0 │ 50.0 │
Note that this last version is a rather convoluted way of writing ta[:col1, :col2]:
julia> ta[:col1, :col2]
5×2 TimeSeries.TimeArray{Float64,2,Dates.Date,Array{Float64,2}} 1970-01-01 to 1970-01-05
│ │ col1 │ col2 │
├────────────┼───────┼───────┤
│ 1970-01-01 │ 1.0 │ 10.0 │
│ 1970-01-02 │ 2.0 │ 20.0 │
│ 1970-01-03 │ 3.0 │ 30.0 │
│ 1970-01-04 │ 4.0 │ 40.0 │
│ 1970-01-05 │ 5.0 │ 50.0 │
Suppose there are 8 PCs and 1 switch, and I want to divide them into three subnets. How can I model this in the Alloy language? Can you give an example?
The following models a small network.
// an IP address; a subnet owns a set of addresses
sig IP {}
some sig Subnet {
    range : some IP
}
// a network node holds one or more IP addresses
abstract sig Node {
    ips : some IP
}
// a router has at most one IP on each subnet, taken from that subnet's range
sig Router extends Node {
    subnets : IP -> lone Subnet
} {
    ips = subnets.Subnet
    all subnet : Subnet {
        lone subnets.subnet
        subnets.subnet in subnet.range
    }
}
// a PC has exactly one IP
sig PC extends Node {} {
    one ips
}
// two subnets can route to each other if some router connects both
let routes = { disj s1, s2 : Subnet | some r : Router | s1+s2 in r.subnets[IP] }
let subnet[ip] = range.ip
let route[a,b] = subnet[a]->subnet[b] in ^ routes
fact NoOverlappingRanges { all ip : IP | one range.ip }
fact DHCP { all disj a, b : Node | no (a.ips & b.ips) }
fact Reachable { all disj a, b : IP | route[a,b] }
run {
    # PC = 8
    # Subnet = 3
    # Router = 1
} for 12
If you run it:
┌───────────┬────────────┐
│this/Router│subnets │
├───────────┼────┬───────┤
│Router⁰ │IP² │Subnet¹│
│ ├────┼───────┤
│ │IP³ │Subnet⁰│
│ ├────┼───────┤
│ │IP¹¹│Subnet²│
└───────────┴────┴───────┘
┌───────────┬─────┐
│this/Subnet│range│
├───────────┼─────┤
│Subnet⁰ │IP³ │
│ ├─────┤
│ │IP⁴ │
├───────────┼─────┤
│Subnet¹ │IP¹ │
│ ├─────┤
│ │IP² │
│ ├─────┤
│ │IP⁵ │
│ ├─────┤
│ │IP⁶ │
│ ├─────┤
│ │IP⁷ │
│ ├─────┤
│ │IP⁸ │
│ ├─────┤
│ │IP⁹ │
│ ├─────┤
│ │IP¹⁰ │
├───────────┼─────┤
│Subnet² │IP⁰ │
│ ├─────┤
│ │IP¹¹ │
└───────────┴─────┘
┌─────────┬────┐
│this/Node│ips │
├─────────┼────┤
│PC⁰ │IP¹⁰│
├─────────┼────┤
│PC¹ │IP⁹ │
├─────────┼────┤
│PC² │IP⁸ │
├─────────┼────┤
│PC³ │IP⁷ │
├─────────┼────┤
│PC⁴ │IP⁶ │
├─────────┼────┤
│PC⁵ │IP⁵ │
├─────────┼────┤
│PC⁶ │IP⁴ │
├─────────┼────┤
│PC⁷ │IP¹ │
├─────────┼────┤
│Router⁰ │IP² │
│ ├────┤
│ │IP³ │
│ ├────┤
│ │IP¹¹│
└─────────┴────┘
You'd probably like to see which PCs are assigned to which subnet. Go to the evaluator and type:
ips.~range
┌───────┬───────┐
│PC⁰ │Subnet¹│
├───────┼───────┤
│PC¹ │Subnet¹│
├───────┼───────┤
│PC² │Subnet¹│
├───────┼───────┤
│PC³ │Subnet¹│
├───────┼───────┤
│PC⁴ │Subnet¹│
├───────┼───────┤
│PC⁵ │Subnet¹│
├───────┼───────┤
│PC⁶ │Subnet⁰│
├───────┼───────┤
│PC⁷ │Subnet¹│
├───────┼───────┤
│Router⁰│Subnet⁰│
│ ├───────┤
│ │Subnet¹│
│ ├───────┤
│ │Subnet²│
└───────┴───────┘
Disclaimer: This was quickly hacked together so there might be modeling errors.
Alloy is a modelling language used mainly to reason about designs, so forget about "programming".
What you can do in Alloy is define the general rules of how PCs, switches and subnets relate to each other. You can then verify whether those rules allow the PCs to be divided into three subnets, and whether the division matches your expectations. If it does not, congratulations: you have found a "bug" in your specification, and solving it will improve your understanding of the constraints inherent in the system you are modelling.
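For instance, here is a hedged sketch, building on the model above, of turning one such expectation into something the Analyzer can verify (the assertion name is made up):

// assert that no PC ever sits on more than one subnet,
// then ask the Analyzer for counterexamples
assert OnePCOneSubnet {
    all pc : PC | lone pc.ips.~range
}
check OnePCOneSubnet for 12

If the Analyzer finds a counterexample, one of the facts is weaker than you expected.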
I'm working on a project where each user is represented as a node in Neo4j. Users can 'endorse' other users, creating a relationship. I want to be able to rank users based on their trust, where the weighting of each relationship is based on the weighting of the user which endorsed them. For example, a user who has been endorsed by 20+ users should have more weighting to their own endorsements than another user with only a couple of endorsements.
The way I'm querying at the moment gives me the number of nodes at each depth, but it doesn't group them by parent node (e.g. all level-3 nodes are returned in one array, so you don't know which level-2 node each one relates to).
MATCH (n)-[r:TRUSTS*]->(u)
WHERE u.name = 'XYZ'
WITH n.name AS n, LENGTH(r) AS depth
RETURN COLLECT(DISTINCT n) AS endorsers, depth
ORDER BY depth
Here's what the network looks like, along with the result of a query for Ben.
As you can see, there are 2 first-level endorsers of Ben, and two 2nd-level endorsers of JM, which you can see from the graph image, but not from the query result.
Can anyone advise on either how to return the results grouped by parent node AND depth, so I can calculate the trust ranking in my code, or a better way to perform a weighted average to achieve the goal in the first paragraph?
This is an example of the sort of tree structure output I'm imagining for Ben:
Ben
├── JM
│ ├── Simon
│ └── Rus
│ ├── Robbie
│ │ ├── Ben
│ │ │ └──/ should terminate here
│ │ ├── Simon
│ │ └── JM
│ └── Ben
│ └──/ should terminate here
└── Simon
Here is another one for Rus:
Rus
├── Robbie
│ ├── Simon
│ ├── Ben
│ │ ├── Simon
│ │ └── JM
│ │ ├── Simon
│ │ └── Rus
│ └── JM
│ ├── Simon
│ └── Rus
└── Ben
├── Simon
└── JM
├── Simon
└── Rus
Obviously it should terminate when it reaches the user I'm querying for (otherwise it would be a circular structure).
The closest match I've found is a query provided by Tezra, which is:
MATCH (target:User{name:"Rus"}), (person:User), p=((person)-[:TRUSTS*]->(target))
WHERE ALL(n in NODES(p)[1..-1] WHERE n<>target)
RETURN NODES(p)[-2].name as endorser, COLLECT(person.name) as endorsed_by, SIZE(RELATIONSHIPS(p)) as depth
ORDER BY depth
This query returns the 1st-level endorsers of "Rus", and then, for each greater depth, the people whose endorsement chains run through each 1st-level endorser:
| endorser | endorsed_by | depth |
|----------|-----------------------|-------|
| Robbie | Robbie | 1 | // 1st level endorsers of Rus
| Ben | Ben | 1 | // 1st level endorsers of Rus
| Robbie | JM, Simon, Ben | 2 | // 1st level endorsers of Robbie
| Ben | JM, Simon | 2 | // 1st level endorsers of Ben
| Ben | Rus, Simon | 3 | // 2nd level endorsers of Ben
| Robbie | Rus, Simon, JM, Simon | 3 | // 2nd level endorsers of Robbie
| Robbie | Rus, Simon | 4 | // 3rd level endorsers of Robbie
This isn't quite correct: you only know who has indirectly endorsed Ben and Robbie, but not the nodes in between.
For example, from that output we know that the 1st-level endorsers of Robbie are JM, Simon and Ben, and that the 2nd-level endorsers are Rus, Simon, JM and Simon (column 4 in the tree); however, there is no way to know the relationship between the 1st- and 2nd-level endorsers. As far as this query is concerned, the following trees are identical:
Rus
└── Robbie
├── Simon
├── Ben <--- here Ben has 3 children (so should be weighted higher)
│ ├── Simon
│ ├── Rus
│ └── JM
└── JM
└── Simon
Rus
└── Robbie
├── Simon
├── Ben
│ └── Simon
└── JM <--- here JM has 3 children instead
├── Simon
├── Rus
└── JM
What I'm looking for is a query which returns something like this (with the parent of each endorsement so the full tree can be reconstructed), this is the imagined output for Rus:
+--------+----------+-------+
| parent | children | depth |
+--------+----------+-------+
| Rus | Robbie | 1 |
+--------+----------+-------+
| Rus | Ben | 1 |
+--------+----------+-------+
| Robbie | Simon | 2 |
+--------+----------+-------+
| Robbie | Ben | 2 |
+--------+----------+-------+
| Robbie | JM | 2 |
+--------+----------+-------+
| Ben | Simon | 3 |
+--------+----------+-------+
| Ben | JM | 3 |
+--------+----------+-------+
| JM | Simon | 4 |
+--------+----------+-------+
| JM | Rus | 4 |
+--------+----------+-------+
| JM | Simon | 3 |
+--------+----------+-------+
| JM | Rus | 3 |
+--------+----------+-------+
| Ben | Simon | 2 |
+--------+----------+-------+
| Ben | JM | 2 |
+--------+----------+-------+
| JM | Simon | 3 |
+--------+----------+-------+
| JM | Rus | 3 |
+--------+----------+-------+
First, here is a console to play/test with the data.
Here are some commented queries, ordered by relevance. Let me know which one most closely meets your needs.
// Match the query target, and everyone who can endorse
MATCH (target:User{name:"Ben"}), (person:User),
// Match all endorse chains, length limit 5
p=((person)-[:TRUSTS*..5]->(target))
// Our target may start, and will end, the chain, so no other node on the
// path can be the target. Normal matching will not match cycles.
// Adjust further path-termination conditions here.
WHERE ALL(n in NODES(p)[1..-1] WHERE n<>target)
// Return the target (extra), the 1st-tier endorser, their endorsers, and the rank (depth) of each of those endorsers.
RETURN target.name, NODES(p)[-2] as endorser, COLLECT(person.name), SIZE(RELATIONSHIPS(p)) as depth
ORDER BY depth
// one line copy for copy-paste into console
MATCH (target:User{name:"Ben"}), (person:User), p=((person)-[:TRUSTS*..5]->(target)) WHERE ALL(n in NODES(p)[1..-1] WHERE n<>target) RETURN target.name, NODES(p)[-2] as endorser, COLLECT(person.name), SIZE(RELATIONSHIPS(p)) as depth ORDER BY depth
An alternate return format
WITH NODES(p)[-2] as endorser, {people:COLLECT(person.name), depth:SIZE(RELATIONSHIPS(p))} as auth
RETURN endorser, COLLECT(auth)
// one line copy for copy-paste into console
MATCH (target:User{name:"Ben"}), (person:User), p=((person)-[:TRUSTS*..5]->(target)) WHERE ALL(n in NODES(p)[1..-1] WHERE n<>target) WITH NODES(p)[-2] as endorser, {people:COLLECT(person.name), depth:SIZE(RELATIONSHIPS(p))} as auth RETURN endorser, COLLECT(auth)
UPDATE: Alternate return format to match OP's return table
MATCH (target:User{name:"Rus"}), (person:User), p=((person)-[:TRUSTS*]->(target)) WHERE ALL(n in NODES(p)[1..-1] WHERE n<>target) WITH NODES(p) as n, SIZE(RELATIONSHIPS(p)) as depth RETURN DISTINCT n[-depth] as parent, n[-depth-1] as child, depth ORDER BY depth
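If what you ultimately want is a single number per user rather than the whole tree, here is a hedged sketch of a depth-discounted aggregate reusing the same path pattern; the 1/depth weighting and the trustScore name are assumptions for illustration, not a recommendation:

// weight each endorsement chain by the inverse of its depth (assumed weighting)
MATCH (target:User {name: "Rus"}), p = (person:User)-[:TRUSTS*]->(target)
WHERE ALL(n IN NODES(p)[1..-1] WHERE n <> target)
WITH SIZE(RELATIONSHIPS(p)) AS depth
RETURN SUM(1.0 / depth) AS trustScore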
I've been given the task of implementing identicons in Delphi. I have searched the internet and still haven't found anything.
So where do I begin? Is there someone here who can give an explanation?
This is just an explanation to give you the idea behind identicons.
Identicons are graphical representations of a bunch of bytes, most likely a hash value.
Let's take a sample MD5 hash value (16 bytes):
abf5787309f3c4d5b255237c0b67dd5e
OK, let's arrange them differently:
ab f5 78 73
09 f3 c4 d5
b2 55 23 7c
0b 67 dd 5e
Now we have 16 fields, each representing one byte. We could build an image from 256 different small tiles, one per possible byte value, but maybe we can break it down into a less complicated method.
Let's take one byte (the first one, ab) and its binary representation:
10101011
OK, let's arrange it differently :o)
10 10
10 11
Now we have 4 fields, and each field can have one of four states. Four different images are very easy to manage:
00 = empty
01 = /
10 = \
11 = X
Going back to our byte, we get:
┌─────┐
│ \ \ │
│ \ X │
└─────┘
And back to the whole hash, we get:
┌─────┬─────┬─────┬─────┐
│ \ \ │ X X │ / X │ / X │
│ \ X │ / / │ \ │ X │
├─────┼─────┼─────┼─────┤
│ │ X │ X │ X / │
│ \ / │ X │ / │ / / │
├─────┼─────┼─────┼─────┤
│ \ X │ / / │ \ │ / X │
│ \ │ / / │ X │ X │
├─────┼─────┼─────┼─────┤
│ │ / \ │ X / │ / / │
│ \ X │ / X │ X / │ X \ │
└─────┴─────┴─────┴─────┘
The whole point here is reduction into small, easy-to-handle parts.
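To make the mapping concrete, here is a hedged Delphi sketch of just the byte-to-glyph step described above; the function name is made up, only standard RTL types are used, and drawing the four glyphs onto a canvas is left out:

// split one hash byte into four 2-bit glyph codes:
// 0 = empty, 1 = /, 2 = \, 3 = X
function ByteToGlyphs(B: Byte): TArray<Byte>;
begin
  SetLength(Result, 4);
  Result[0] := (B shr 6) and 3;  // bits 7..6
  Result[1] := (B shr 4) and 3;  // bits 5..4
  Result[2] := (B shr 2) and 3;  // bits 3..2
  Result[3] := B and 3;          // bits 1..0
end;

For the sample byte $AB (10101011) this yields 2, 2, 2, 3, i.e. \ \ \ X, matching the first tile above.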
I once started to port the original library to Delphi / Graphics32. However, I never found the time to finish the small project until now. The source code, together with a sample application, can be found on my website (see Delphi -> Graphics32).
Originally it was meant to show off some new vector-graphics features of the upcoming version 2.0. Although that version has still not been released, the source code can already be compiled against the code in the trunk repository.