I am using TinyYOLO with deeplearning4j, having read through this tutorial: http://emaraic.com/blog/yolo-custom-object-detector. I am not quite sure whether I need more configuration to handle 720p images, since the images in that example are 416x416. Is that a hard requirement? I am also struggling to fully understand some of the configuration values noted below:
private static final int CHANNELS = 3;
private static final int GRID_WIDTH = 13;
private static final int GRID_HEIGHT = 13;
private static final int CLASSES_NUMBER = 1;
private static final int BOXES_NUMBER = 5;
private static final double[][] PRIOR_BOXES = {{1.5, 1.5}, {2, 2}, {3, 3}, {3.5, 8}, {4, 9}}; // anchor boxes
private static final int BATCH_SIZE = 4;
private static final int EPOCHS = 50;
private static final double LEARNING_RATE = 0.0001;
private static final int SEED = 1234;
private static final double LAMBDA_COORD = 1.0;
private static final double LAMBDA_NO_OBJECT = 0.5;
I have used darkflow with my labels and images before and had some strong success. I wanted to use deeplearning4j for tighter integration with a Java project I have, but I have struggled to import the models I created: they still work with my Python code, so there seems to be some nuance to exporting them.
If someone could shed some light on this, I would appreciate it. I believe there should be a way to handle 720p images, possibly by resizing; I know darknet and darkflow performed this resize themselves. Also, if I do resize the images, will my label annotation XML files then need changes as well?
Thanks for any help.
You need to resize your 720p input to TinyYOLO's expected input dimensions, 416x416:
resize(rgbaMat, resizedImage, new Size(tinyyolowidth, tinyyoloheight));
Ref: https://github.com/yptheangel/dl4j-android-demo/blob/master/app/src/main/java/com/yptheangel/dl4jandroid/yolo_objdetection/ObjDetection.java
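Alternatively, if you load frames through DataVec, the image loader can do the resize for you. A minimal sketch, assuming DataVec's NativeImageLoader and a hypothetical file name frame_720p.jpg:

import java.io.File;
import java.util.Arrays;

import org.datavec.image.loader.NativeImageLoader;
import org.nd4j.linalg.api.ndarray.INDArray;

public class ResizeTo416 {
    public static void main(String[] args) throws Exception {
        // NativeImageLoader scales whatever it loads (e.g. a 1280x720 frame)
        // to the requested height x width x channels, here 416x416x3.
        NativeImageLoader loader = new NativeImageLoader(416, 416, 3);
        INDArray input = loader.asMatrix(new File("frame_720p.jpg")); // hypothetical path
        System.out.println(Arrays.toString(input.shape())); // prints [1, 3, 416, 416]
    }
}

As for the annotation files: VOC-style XML boxes are in pixel coordinates of the original image, so if you resize images yourself before training (rather than letting the pipeline rescale the labels), the box coordinates need to be scaled by the same factors.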
We found a very strange difference between Dataflow SDK 1.9 and 2.0/2.1 for a very simple pipeline.
We have a CoGroupByKey step that joins two PCollections by their keys and outputs two PCollections (via TupleTags). For instance, one PCollection may contain {"str1", "str2"} and the other may contain {"str3"}.
These two PCollections are written to GCS (at different locations), and their union (basically, the PCollection produced by applying Flatten to the two PCollections) is used by subsequent steps in the pipeline. Using the previous example, we will store {"str1", "str2"} and {"str3"} in GCS at their respective locations, and the pipeline will further transform their union (the flattened PCollection) {"str1", "str2", "str3"}, and so on.
In Dataflow SDK 1.9, that is exactly what is happening, and we've built our pipelines around this logic.
As we were slowly migrating to 2.0/2.1, we noticed that this behavior is no longer observed. Instead, all the steps that follow the Flatten step run correctly and as expected, but the two PCollections being flattened are no longer written to GCS, as if they were nonexistent. In the execution graph, though, the steps are shown, which is very strange to us.
We were able to reproduce this issue reliably so that we can share the data and code as an example.
We have two text files stored in GCS:
data1.txt:
k1,v1
k2,v2
data2.txt:
k2,w2
k3,w3
We will read these two files to create two PCollections, a PC for each file.
We'll parse each line to create KV<String, String> (so the keys are k1, k2, k3 in this example).
We then apply CoGroupByKey and produce PCollections to be output to GCS.
Two PCollections are produced after the CoGroupByKey step, depending on whether the number of values associated with each key is even or odd (it's a contrived example, but it demonstrates the issue we are experiencing).
So one of the PCs will contain the keys "k1" and "k3" (with some value strings appended to them; see the code below), as they have one value each, and the other will contain the single key "k2", as it has two values (one found in each file).
These two PCs are written to GCS at different locations, and the flattened PC of the two will also be written to GCS (but it could have been further transformed).
The three output files are expected to contain the following contents (rows may not be in order):
output1:
k2: [v2],(w2)
output2:
k3: (w3)
k1: [v1]
outputMerged:
k3: (w3)
k2: [v2],(w2)
k1: [v1]
This is exactly what we see (and expected) in Dataflow SDK 1.9.
In 2.0 and 2.1, however, output1 and output2 come out empty (and the TextIO steps are not even executed, as if no elements were input to them; we verified this by adding a dummy ParDo in between, and it is not invoked at all).
This makes us very curious as to why this behavior change was suddenly introduced between 1.9 and 2.0/2.1, and what the best way would be for us to achieve what we have been doing with 1.9.
Specifically, we produce output1/2 for archiving purposes, while we flatten the two PCs to transform the data further and produce another output.
Here is Java code you can run (you will have to import properly, change the bucket name, set Options properly, etc.).
Working code for 1.9:
//Dataflow SDK 1.9 compatible.
public class TestJob {
public static void execute(Options options) {
Pipeline pipeline = Pipeline.create(options);
PCollection<KV<String, String>> data1 =
pipeline.apply(TextIO.Read.from(GcsPath.EXPERIMENT_BUCKET + "/data1.txt")).apply(ParDo.of(new doFn()));
PCollection<KV<String, String>> data2 =
pipeline.apply(TextIO.Read.from(GcsPath.EXPERIMENT_BUCKET + "/data2.txt")).apply(ParDo.of(new doFn()));
TupleTag<String> inputTag1 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
TupleTag<String> inputTag2 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
TupleTag<String> outputTag1 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
TupleTag<String> outputTag2 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
PCollectionTuple tuple = KeyedPCollectionTuple.of(inputTag1, data1).and(inputTag2, data2)
.apply(CoGroupByKey.<String>create()).apply(ParDo.of(new doFn2(inputTag1, inputTag2, outputTag2))
.withOutputTags(outputTag1, TupleTagList.of(outputTag2)));
PCollection<String> output1 = tuple.get(outputTag1);
PCollection<String> output2 = tuple.get(outputTag2);
PCollection<String> outputMerged = PCollectionList.of(output1).and(output2).apply(Flatten.<String>pCollections());
outputMerged.apply(TextIO.Write.to(GcsPath.EXPERIMENT_BUCKET + "/test-job-1.9/outputMerged").withNumShards(1));
output1.apply(TextIO.Write.to(GcsPath.EXPERIMENT_BUCKET + "/test-job-1.9/output1").withNumShards(1));
output2.apply(TextIO.Write.to(GcsPath.EXPERIMENT_BUCKET + "/test-job-1.9/output2").withNumShards(1));
pipeline.run();
}
static class doFn2 extends DoFn<KV<String, CoGbkResult>, String> {
private static final long serialVersionUID = 1L;
final TupleTag<String> inputTag1;
final TupleTag<String> inputTag2;
final TupleTag<String> outputTag2;
public doFn2(TupleTag<String> inputTag1, TupleTag<String> inputTag2, TupleTag<String> outputTag2) {
this.inputTag1 = inputTag1;
this.inputTag2 = inputTag2;
this.outputTag2 = outputTag2;
}
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
String key = c.element().getKey();
List<String> values = new ArrayList<String>();
int numValues = 0;
for (String val1 : c.element().getValue().getAll(inputTag1)) {
values.add(String.format("[%s]", val1));
numValues++;
}
for (String val2 : c.element().getValue().getAll(inputTag2)) {
values.add(String.format("(%s)", val2));
numValues++;
}
final String line = String.format("%s: %s", key, Joiner.on(",").join(values));
if (numValues % 2 == 0) {
c.output(line);
} else {
c.sideOutput(outputTag2, line);
}
}
}
static class doFn extends DoFn<String, KV<String, String>> {
private static final long serialVersionUID = 1L;
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
String[] tokens = c.element().split(",");
c.output(KV.of(tokens[0], tokens[1]));
}
}
}
Code for 2.0/2.1 (compiles and runs, but exhibits the issue):
// Dataflow SDK 2.0 and 2.1 compatible.
public class TestJob {
public static void execute(Options options) {
Pipeline pipeline = Pipeline.create(options);
PCollection<KV<String, String>> data1 =
pipeline.apply(TextIO.read().from(GcsPath.EXPERIMENT_BUCKET + "/data1.txt")).apply(ParDo.of(new doFn()));
PCollection<KV<String, String>> data2 =
pipeline.apply(TextIO.read().from(GcsPath.EXPERIMENT_BUCKET + "/data2.txt")).apply(ParDo.of(new doFn()));
TupleTag<String> inputTag1 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
TupleTag<String> inputTag2 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
TupleTag<String> outputTag1 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
TupleTag<String> outputTag2 = new TupleTag<String>() {
private static final long serialVersionUID = 1L;
};
PCollectionTuple tuple = KeyedPCollectionTuple.of(inputTag1, data1).and(inputTag2, data2)
.apply(CoGroupByKey.<String>create()).apply(ParDo.of(new doFn2(inputTag1, inputTag2, outputTag2))
.withOutputTags(outputTag1, TupleTagList.of(outputTag2)));
PCollection<String> output1 = tuple.get(outputTag1);
PCollection<String> output2 = tuple.get(outputTag2);
PCollection<String> outputMerged = PCollectionList.of(output1).and(output2).apply(Flatten.<String>pCollections());
outputMerged.apply(TextIO.write().to(GcsPath.EXPERIMENT_BUCKET + "/test-job-2.1/outputMerged").withNumShards(1));
output1.apply(TextIO.write().to(GcsPath.EXPERIMENT_BUCKET + "/test-job-2.1/output1").withNumShards(1));
output2.apply(TextIO.write().to(GcsPath.EXPERIMENT_BUCKET + "/test-job-2.1/output2").withNumShards(1));
PipelineResult pipelineResult = pipeline.run();
pipelineResult.waitUntilFinish();
}
static class doFn2 extends DoFn<KV<String, CoGbkResult>, String> {
private static final long serialVersionUID = 1L;
final TupleTag<String> inputTag1;
final TupleTag<String> inputTag2;
final TupleTag<String> outputTag2;
public doFn2(TupleTag<String> inputTag1, TupleTag<String> inputTag2, TupleTag<String> outputTag2) {
this.inputTag1 = inputTag1;
this.inputTag2 = inputTag2;
this.outputTag2 = outputTag2;
}
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
String key = c.element().getKey();
List<String> values = new ArrayList<String>();
int numValues = 0;
for (String val1 : c.element().getValue().getAll(inputTag1)) {
values.add(String.format("[%s]", val1));
numValues++;
}
for (String val2 : c.element().getValue().getAll(inputTag2)) {
values.add(String.format("(%s)", val2));
numValues++;
}
final String line = String.format("%s: %s", key, Joiner.on(",").join(values));
if (numValues % 2 == 0) {
c.output(line);
} else {
c.output(outputTag2, line);
}
}
}
static class doFn extends DoFn<String, KV<String, String>> {
private static final long serialVersionUID = 1L;
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
String[] tokens = c.element().split(",");
c.output(KV.of(tokens[0], tokens[1]));
}
}
}
Also, in case it is useful, here is what the execution graph looks like (and for Google engineers, the job IDs are also specified):
With 1.9 (job id 2017-09-29_14_35_42-15149127992051688457): [execution graph screenshot]
With 2.1 (job id 2017-09-29_14_31_59-991964669451027883): [execution graph screenshot]
TextIO.Write 2 and 3 are not producing any output under 2.0/2.1. Flatten and its subsequent step work fine.
This is indeed a defect. A fix is in flight; once it's available, it should be documented in the Service Release Notes.
A workaround in the meantime is to use the 1.9.1 SDK, as this error only affects 2.x SDKs.
Users interested in picking up the fix early can also use the latest nightly build from Beam (recommended to unblock development, not for production, since it's a daily build). Instructions here.
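Since the workaround is just a dependency change, here is what pinning to 1.9.1 looks like for a Maven build (a sketch; the artifact coordinates are my assumption, so verify them against Maven Central):

<!-- Dataflow SDK 1.9.x line, which is unaffected by this defect. -->
<dependency>
  <groupId>com.google.cloud.dataflow</groupId>
  <artifactId>google-cloud-dataflow-java-sdk-all</artifactId>
  <version>1.9.1</version>
</dependency>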
I'm new to deeplearning4j. I already experimented with its word2vec functionality and everything was fine. But now I am a little bit confused about image classification. I was playing with this example:
https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/AnimalsClassification.java
I changed the "save" flag to true and my model is stored into model.bin file.
Now comes the problematic part. (I am sorry if this sounds like a silly question; maybe I am missing something really obvious here.)
I created a separate class called AnimalClassifier whose purpose is to load the model from the model.bin file, restore the neural network from it, and then classify a single image using the restored network. For this single image I created a "temp" folder -> dl4j-examples/src/main/resources/animals/temp/ where I put a picture of a polar bear that was previously used in the training process in AnimalsClassification.java (I wanted to be sure the image would be classified correctly, so I reused a picture from the "bear" folder).
This is my code trying to classify the polar bear:
protected static int height = 100;
protected static int width = 100;
protected static int channels = 3;
protected static int numExamples = 1;
protected static int numLabels = 1;
protected static int batchSize = 10;
protected static long seed = 42;
protected static Random rng = new Random(seed);
protected static int listenerFreq = 1;
protected static int iterations = 1;
protected static int epochs = 7;
protected static double splitTrainTest = 0.8;
protected static int nCores = 2;
protected static boolean save = true;
protected static String modelType = "AlexNet"; //
public static void main(String[] args) throws Exception {
String basePath = FilenameUtils.concat(System.getProperty("user.dir"), "dl4j-examples/src/main/resources/");
MultiLayerNetwork multiLayerNetwork = ModelSerializer.restoreMultiLayerNetwork(basePath + "model.bin", true);
ParentPathLabelGenerator labelMaker = new ParentPathLabelGenerator();
File mainPath = new File(System.getProperty("user.dir"), "dl4j-examples/src/main/resources/animals/temp/");
FileSplit fileSplit = new FileSplit(mainPath, NativeImageLoader.ALLOWED_FORMATS, rng);
BalancedPathFilter pathFilter = new BalancedPathFilter(rng, labelMaker, numExamples, numLabels, batchSize);
InputSplit[] inputSplit = fileSplit.sample(pathFilter, 1);
InputSplit analysedData = inputSplit[0];
ImageRecordReader recordReader = new ImageRecordReader(height, width, channels);
recordReader.initialize(analysedData);
DataSetIterator dataIter = new RecordReaderDataSetIterator(recordReader, batchSize, 0, 4);
while (dataIter.hasNext()) {
DataSet testDataSet = dataIter.next();
String expectedResult = testDataSet.getLabelName(0);
List<String> predict = multiLayerNetwork.predict(testDataSet);
String modelResult = predict.get(0);
System.out.println("\nFor example that is labeled " + expectedResult + " the model predicted " + modelResult + "\n\n");
}
}
After running this, I get the following error:
java.lang.UnsupportedOperationException
at org.datavec.api.writable.ArrayWritable.toInt(ArrayWritable.java:47)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.getDataSet(RecordReaderDataSetIterator.java:275)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:186)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:389)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:52)
at org.deeplearning4j.examples.convolution.AnimalClassifier.main(AnimalClassifier.java:66)
Disconnected from the target VM, address: '127.0.0.1:63967', transport: 'socket'
Exception in thread "main" java.lang.IllegalStateException: Label names are not defined on this dataset. Add label names in order to use getLabelName with an id.
at org.nd4j.linalg.dataset.DataSet.getLabelName(DataSet.java:1106)
at org.deeplearning4j.examples.convolution.AnimalClassifier.main(AnimalClassifier.java:68)
I can see there is a method public void setLabels(INDArray labels) in MultiLayerNetwork.java, but I don't get how to use it (especially since it takes an INDArray as its argument).
I am also confused about why I have to specify the number of possible labels in the constructor of RecordReaderDataSetIterator. I would expect the model to already know which labels to use (shouldn't it automatically use the labels from training?). I guess maybe I am loading the picture in a completely wrong way...
So to summarize, I would simply like to achieve the following:
restore network from model (this is working)
load image to be classified (also working)
classify this image using the same labels that were used during training (bear, deer, duck, turtle) (tricky part)
Thank you in advance for your help or any hints!
So, summarizing your multiple questions here:
A record for images is 2 entries in a collection; the second one is the label. The label index is relative to the kind of record you pass in.
For the second part of your question: multiple entries can be part of a dataset. The list refers to the label for the item at a particular row in the minibatch.
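To make that concrete, here is a minimal sketch of classifying a single image with the restored network, assuming the standard DL4J/DataVec API; the image path is hypothetical, and the label list must match the order the training iterator used (ParentPathLabelGenerator yields labels alphabetically, hence bear, deer, duck, turtle):

import java.io.File;
import java.util.Arrays;
import java.util.List;

import org.datavec.image.loader.NativeImageLoader;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;
import org.nd4j.linalg.factory.Nd4j;

public class SingleImageClassifier {
    public static void main(String[] args) throws Exception {
        MultiLayerNetwork network = ModelSerializer.restoreMultiLayerNetwork(new File("model.bin"), true);
        // Same height/width/channels as during training (100x100x3 in the example).
        NativeImageLoader loader = new NativeImageLoader(100, 100, 3);
        INDArray image = loader.asMatrix(new File("animals/temp/polarbear.jpg")); // hypothetical path
        new ImagePreProcessingScaler(0, 1).transform(image); // same 0..1 scaling as training
        INDArray output = network.output(image);             // one probability per class
        int predicted = Nd4j.argMax(output, 1).getInt(0);
        List<String> labels = Arrays.asList("bear", "deer", "duck", "turtle"); // training order
        System.out.println("Predicted: " + labels.get(predicted));
    }
}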
Is it possible to access OpenGL ES on iOS from RoboVM without using LibGDX? If so, are there any useful references?
The only thing I can find is this super-simple demo from over 2 years ago: http://robovm.com/ios-opengles-in-java-on-robovm/
But it doesn't provide any functions besides glClearColor and glClear.
The Apple GLKit framework seems to be implemented, though. I just can't find all the actual glWhatever(...) functions...
Yes, it is possible. You need two things for this: 1. Access to the OpenGL ES functions (like glClear(...), etc.) and 2. a UIView in your app that can draw the GL image.
Turns out the second point is very easy. You can either use a GLKView (requires iOS 5.0) or a CAEAGLLayer (requires iOS 2.0) if you're feeling nostalgic. For both, there are tons of tutorials online on how to use them in Objective-C, which can readily be translated to RoboVM. So, I won't spend too much time on this point here.
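Still, for completeness, here is a rough sketch of the GLKView route; I am writing the RoboVM binding class and method names from memory, so treat them as assumptions to verify against your RoboVM version:

import org.robovm.apple.coregraphics.CGRect;
import org.robovm.apple.glkit.GLKView;
import org.robovm.apple.glkit.GLKViewDelegateAdapter;
import org.robovm.apple.opengles.EAGLContext;
import org.robovm.apple.opengles.EAGLRenderingAPI;
import org.robovm.apple.uikit.UIScreen;

public class GLViewSetup {
    public static GLKView createGLView() {
        // An ES 2.0 context plus a full-screen GLKView that redraws via its delegate.
        EAGLContext context = new EAGLContext(EAGLRenderingAPI.OpenGLES2);
        GLKView glView = new GLKView(UIScreen.getMainScreen().getBounds(), context);
        glView.setDelegate(new GLKViewDelegateAdapter() {
            @Override
            public void draw(GLKView view, CGRect rect) {
                // Calls into the Bro-based GLES20 bindings shown below.
                GLES20.glClearColor(0.0f, 0.0f, 0.3f, 1.0f);
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            }
        });
        return glView;
    }
}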
Access to the OpenGL ES functions is a little more difficult, as RoboVM doesn't ship with the definitions file out of the box, so we'll have to build our own using Bro. Turns out, once you wrap your head around how Bro handles C strings, variable pointers, IntBuffers and such (which is actually quite beautiful!), it's really pretty straightforward. The super-simple demo I linked to in the original question is the right starting point.
In the interest of brevity, let me post here just a very abridged version of the file I wrote to illustrate the way the different data types can be handled:
import java.nio.Buffer;
import java.nio.IntBuffer;
import org.robovm.rt.bro.Bro;
import org.robovm.rt.bro.Struct;
import org.robovm.rt.bro.annotation.Bridge;
import org.robovm.rt.bro.annotation.Library;
import org.robovm.rt.bro.ptr.BytePtr;
import org.robovm.rt.bro.ptr.BytePtr.BytePtrPtr;
import org.robovm.rt.bro.ptr.IntPtr;
#Library("OpenGLES")
public class GLES20 {
public static final int GL_DEPTH_BUFFER_BIT = 0x00000100;
public static final int GL_STENCIL_BUFFER_BIT = 0x00000400;
public static final int GL_COLOR_BUFFER_BIT = 0x00004000;
public static final int GL_FALSE = 0;
public static final int GL_TRUE = 1;
private static final int MAX_INFO_LOG_LENGTH = 10*1024;
private static final ThreadLocal<IntPtr> SINGLE_VALUE =
new ThreadLocal<IntPtr>() {
@Override
protected IntPtr initialValue() {
return Struct.allocate(IntPtr.class, 1);
}
};
private static final ThreadLocal<BytePtr> INFO_LOG =
new ThreadLocal<BytePtr>() {
@Override
protected BytePtr initialValue() {
return Struct.allocate(BytePtr.class, MAX_INFO_LOG_LENGTH);
}
};
static {
Bro.bind(GLES20.class);
}
@Bridge
public static native void glClearColor(float red, float green, float blue, float alpha);
@Bridge
public static native void glClear(int mask);
@Bridge
public static native void glGetIntegerv(int pname, IntPtr params);
// DO NOT CALL THE NEXT METHOD WITH A pname THAT RETURNS MORE THAN ONE VALUE!!!
public static int glGetIntegerv(int pname) {
IntPtr params = SINGLE_VALUE.get();
glGetIntegerv(pname, params);
return params.get();
}
@Bridge
private static native int glGetUniformLocation(int program, BytePtr name);
public static int glGetUniformLocation(int program, String name) {
return glGetUniformLocation(program, BytePtr.toBytePtrAsciiZ(name));
}
@Bridge
public static native int glGenFramebuffers(int n, IntPtr framebuffers);
public static int glGenFramebuffer() {
IntPtr framebuffers = SINGLE_VALUE.get();
glGenFramebuffers(1, framebuffers);
return framebuffers.get();
}
@Bridge
private static native void glShaderSource(int shader, int count, BytePtrPtr string, IntPtr length);
public static void glShaderSource(int shader, String code) {
glShaderSource(shader, 1, new BytePtrPtr().set(BytePtr.toBytePtrAsciiZ(code)), null);
}
@Bridge
private static native void glGetShaderInfoLog(int shader, int maxLength, IntPtr length, BytePtr infoLog);
public static String glGetShaderInfoLog(int shader) {
BytePtr infoLog = INFO_LOG.get();
glGetShaderInfoLog(shader, MAX_INFO_LOG_LENGTH, null, infoLog);
return infoLog.toStringAsciiZ();
}
@Bridge
public static native void glGetShaderPrecisionFormat(int shaderType, int precisionType, IntBuffer range, IntBuffer precision);
@Bridge
public static native void glTexImage2D(int target, int level, int internalformat, int width, int height, int border, int format, int type, IntBuffer data);
@Bridge
private static native void glVertexAttribPointer(int index, int size, int type, int normalized, int stride, Buffer pointer);
public static void glVertexAttribPointer(int index, int size, int type, boolean normalized, int stride, Buffer pointer) {
glVertexAttribPointer(index, size, type, normalized ? GL_TRUE : GL_FALSE, stride, pointer);
}
}
Note how most methods are exposed via trivial @Bridge-annotated native definitions, but for some it's convenient to define a wrapper method in Java that converts a String to a char* or unpacks a result from an IntPtr, for example.
I didn't post my whole library file, since it is still very incomplete and it'll just make it harder to find the examples of how different parameter types are handled.
To save yourself some work, you can copy the GL constant definitions from libGDX's GL20.java. And the OpenGL ES docs are a great reference for the calling signature of the methods (the data types GLenum and GLbitfield correspond to a Java int).
You can then call the gl-methods statically by prepending GLES20. (just like on Android), e.g.:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
Turns out Bro is so smart that you don't even need to include the <framework>OpenGLES</framework> tag in robovm.xml any more, like you would with libGDX.
And - What do you know? - my app starts about 3 times as quickly as it did when it was still using libGDX. And it fixed another issue I had (see LibGDX displays black screen while app is paused but still visible (e.g. during in-app purchase password dialog) on iOS). "Yay!" for getting rid of unnecessary baggage.
The one thing that makes life a little annoying is that if you mess up the call signature of a method or the memory allocation, your app simply crashes with a very unhelpful "Terminated due to signal 11" message in the IDE console, with no information about where the app died.
I am implementing an interface for creating graph nodes and connecting them using JUNG.
I want to create some nodes that can move from one existing node to another, using the edge between the two nodes as their path. (This will be used to show data packets being transferred between nodes that act like hosts.)
There is some information on the internet about how to make JUNG nodes (vertices) movable by mouse, but there is no info about moving them programmatically by modifying values in code.
Even if there is some way to move the nodes, is it possible and efficient in the JUNG library to move a node between two nodes along the edge between them?
Any suggestions would be appreciated.
You can forcibly move a vertex with the layout's setLocation method. I have built something quite similar to your request: it produces a vertex that moves from vertex A to vertex B in a straight line. If your edges are drawn straight, it might work for you:
import java.awt.geom.Point2D;
import edu.uci.ics.jung.algorithms.layout.AbstractLayout;
import edu.uci.ics.jung.algorithms.util.IterativeProcess;
import edu.uci.ics.jung.visualization.VisualizationViewer;
public class VertexCollider extends IterativeProcess {
private static final String COLLIDER = "Collider";
private AbstractLayout<String, Number> layout;
private VisualizationViewer<String, Number> vv;
private Point2D startLocation;
private Point2D endLocation;
private Double moveX;
private Double moveY;
public VertexCollider(AbstractLayout<String, Number> layout, VisualizationViewer<String, Number> vv, String vertexA, String vertexB) {
this.layout = layout;
this.vv = vv;
startLocation = layout.transform(vertexA);
endLocation = layout.transform(vertexB);
}
public void initialize() {
setPrecision(Double.MAX_VALUE);
layout.getGraph().addVertex(COLLIDER);
layout.setLocation(COLLIDER, startLocation);
moveX = (endLocation.getX() - startLocation.getX()) / getMaximumIterations();
moveY = (endLocation.getY() - startLocation.getY()) / getMaximumIterations();
}
@Override
public void step() {
layout.setLocation(COLLIDER, layout.getX(COLLIDER) + moveX, layout.getY(COLLIDER) + moveY);
vv.repaint();
setPrecision(Math.max(Math.abs(endLocation.getX() - layout.transform(COLLIDER).getX()),
Math.abs(endLocation.getY() - layout.transform(COLLIDER).getY())));
if (hasConverged()){
layout.getGraph().removeVertex(COLLIDER);
}
}
}
You could instantiate this, for example, with the following code:
VertexCollider vtxCol = new VertexCollider(layout, vv, "nameOfVertexA", "nameOfVertexB");
vtxCol.setMaximumIterations(100);
vtxCol.setDesiredPrecision(1);
vtxCol.initialize();
Animator animator = new Animator(vtxCol);
animator.start();
Painting straight edges:
Code Example
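In case that link goes stale, here is a one-line sketch of forcing straight edges in JUNG 2.x (class names from memory, so treat them as an assumption), applied to the same VisualizationViewer vv used above:

import edu.uci.ics.jung.visualization.decorators.EdgeShape;

// Render every edge as a straight line, so the collider vertex's linear
// path from A to B visually follows the edge between them.
vv.getRenderContext().setEdgeShapeTransformer(new EdgeShape.Line<String, Number>());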
I'm trying to draw a single triangle on-screen using SharpDX (DX11). For whatever reason, the triangle only seems to be drawn every second frame. My device initialization code looks like this:
public void Init()
{
renderForm = new RenderForm(Engine.GameTitle);
renderForm.ClientSize = new Size(Engine.Settings.Screen.Width, Engine.Settings.Screen.Height);
renderForm.MaximizeBox = false;
var desc = new SwapChainDescription()
{
BufferCount = 2,
ModeDescription = new ModeDescription(renderForm.ClientSize.Width, renderForm.ClientSize.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
IsWindowed = true,
OutputHandle = renderForm.Handle,
SampleDescription = new SampleDescription(1, 0),
SwapEffect = SwapEffect.Sequential,
Usage = Usage.RenderTargetOutput
};
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, desc, out device, out swapChain);
deviceContext = device.ImmediateContext;
var factory = swapChain.GetParent<Factory>();
factory.MakeWindowAssociation(renderForm.Handle, WindowAssociationFlags.IgnoreAll);
backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain, 0);
renderView = new RenderTargetView(device, backBuffer);
backBuffer.Dispose();
deviceContext.OutputMerger.SetTargets(renderView);
deviceContext.Rasterizer.SetViewports(new Viewport(0, 0, renderForm.ClientSize.Width, renderForm.ClientSize.Height, 0.0f, 1.0f));
ProjectionMatrix = Matrix.PerspectiveFovLH(
(float)(Math.PI / 4),
(float)renderForm.ClientSize.Width / renderForm.ClientSize.Height, // cast first to avoid integer division
nearPlane,
farPlane);
WorldMatrix = Matrix.Identity;
renderForm.Location = new Point(Screen.PrimaryScreen.Bounds.Width / 2 - Engine.Settings.Screen.Width / 2, Screen.PrimaryScreen.Bounds.Height / 2 - Engine.Settings.Screen.Height / 2);
}
The code for rendering the triangle looks like this:
public void Render()
{
DeviceContext context = D3DRenderer.Instance.GetDevice().ImmediateContext;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<Vertex>(), 0));
context.InputAssembler.SetIndexBuffer(IndexBuffer, Format.R32_UInt, 0);
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
}
public void RenderShader(int indexCount)
{
device.ImmediateContext.DrawIndexed(indexCount, 0, 0);
}
Render() is called before RenderShader().
No error message is returned by any function, except for a Direct3D warning:
D3D11: WARNING: ID3D11DeviceContext::DrawIndexed: The size of the Constant Buffer at slot 0 of the Vertex Shader unit is too small (64 bytes provided, 192 bytes, at least, expected).
My MatrixBuffer structure looks like the following (three 4x4 float matrices: 3 x 64 = 192 bytes, which matches what the shader expects):
[StructLayout(LayoutKind.Sequential)]
internal struct MatrixBuffer
{
public Matrix world;
public Matrix view;
public Matrix projection;
}
I have been clearing the backbuffer with a different color every other frame to make sure it's not an issue with incorrectly swapping the backbuffer. This is working fine.
I am quite baffled as to why this isn't working. I hope someone knows an answer to this.
Ladies and gentlemen, learn a lesson today: never copy and paste tutorial code. Ever.
Turns out the issue was in the shader rendering code. The tutorial I copied the declarations from (and didn't post here, otherwise it might have been pretty obvious to you guys) had the world/view/projection matrices declared as follows:
public Matrix WorldMatrix { get; private set; }
Then I tried to do this:
D3DRenderer.Instance.WorldMatrix.Transpose();
Which, for now obvious reasons, doesn't work: SharpDX's Matrix is a value type, so the property getter returns a copy, and Transpose() modifies only that temporary copy rather than the stored matrix. Interestingly, it did seem to work every other frame; why that is, I have no idea. But after changing the matrix definitions from private set to set, everything is now working fine.