CGPDF drawPDFPage with rotation support - iOS

I'm saving a PDF file with CGPDF under iOS 10. For this I load an existing PDF page and write it to a new file with a context. While doing so, the rotation information gets lost and all pages in the resulting PDF file end up at 0°.
let writeContext: CGContext = CGContext(finalPDFURL, mediaBox: nil, nil)!
// Loop through all pages
let page: CGPDFPage = ...
var mediaBox = page.getBoxRect(.mediaBox)
writeContext.beginPage(mediaBox: &mediaBox)
writeContext.drawPDFPage(page)
writeContext.endPage()
// Loop finished
writeContext.closePDF()
Then I came up with this code, which handles rotation just fine but seems to draw the content with a slight offset. Using it with a PDF that has text or anything else close to the margins results in cut-off content. Later I also tried setting x, y, etc. in the pageInfo dict, but I guess I misunderstood something there; see the 2nd question below.
let page: CGPDFPage = ...
// Set the rotation
var pageDict = [String: Int32]()
pageDict["Rotate"] = CGFloat.init(page.rotationAngle)
writeContext.beginPDFPage(pageDict as CFDictionary?)
writeContext.drawPDFPage(page)
writeContext.endPDFPage()
So my questions:
1) How to use the first approach but with rotation support? Or the second one, but without cropping of content?
2) Where would I find a complete listing of all available pageInfo key-value pairs for this method? https://developer.apple.com/reference/coregraphics/cgcontext/1456578-beginpdfpage
Thanks!

The question is old, but I hope the following answer will be useful for someone in the future. It preserves the rotation information of the original PDF page.
How to use the first approach but with rotation support?
let writeContext: CGContext = CGContext(finalPDFURL, mediaBox: nil, nil)!
// Loop through all pages
let page: CGPDFPage = ...
var mediaBox = page.getBoxRect(.mediaBox)
writeContext.beginPage(mediaBox: &mediaBox)
let m = page.getDrawingTransform(.mediaBox, rect: mediaBox, rotate: 0, preserveAspectRatio: true)
// The following 3 lines apply the rotation so that the page content faces the right direction
writeContext.translateBy(x: 0.0, y: mediaBox.size.height)
writeContext.scaleBy(x: 1, y: -1)
writeContext.concatenate(m)
writeContext.drawPDFPage(page)
writeContext.endPage()
// Loop finished
writeContext.closePDF()

I got feedback from a fellow iOS coder who suggested the following principle:
These three steps should allow you to recreate the original page in the destination, while maintaining the page rotation field: (1) set the source page rotation in the destination page via the page dictionary, (2) set that rotation (or possibly the rotation * -1?) in the CGContext you’re drawing into, and finally (3) explicitly set the media box in the destination to be identical to the source (no rotation).
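A rough, untested Swift sketch of what those three steps might look like, for reference; it reuses the undocumented "Rotate" page-dictionary key from the second approach above, and the sign (and pivot point) of the context rotation may well need adjusting:
// Untested sketch of the three-step suggestion above
let page: CGPDFPage = ... // source page, as in the snippets above
var mediaBox = page.getBoxRect(.mediaBox)
// (3) media box of the destination page identical to the source, with no rotation applied to it
let mediaBoxData = Data(bytes: &mediaBox, count: MemoryLayout<CGRect>.size)
// (1) carry the source page's rotation into the destination page dictionary
// ("Rotate" is the undocumented key the second approach above relies on)
var pageInfo = [String: Any]()
pageInfo[kCGPDFContextMediaBox as String] = mediaBoxData
pageInfo["Rotate"] = page.rotationAngle
writeContext.beginPDFPage(pageInfo as CFDictionary)
// (2) rotate the context itself; per the suggestion this may need to be rotation * -1
let angle = CGFloat(page.rotationAngle) * .pi / 180
writeContext.translateBy(x: mediaBox.midX, y: mediaBox.midY)
writeContext.rotate(by: -angle)
writeContext.translateBy(x: -mediaBox.midX, y: -mediaBox.midY)
writeContext.drawPDFPage(page)
writeContext.endPDFPage()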

Related

SceneKit - vector/tangent displacement map

Important: please note that this question is about a VECTOR displacement map, not a height map.
I'm trying to implement vector displacement in SceneKit, as described in this Apple presentation:
https://www.youtube.com/watch?v=uli814Qugm8&app=desktop
Apple presentation on SceneKit vector displacement
My code is:
material?.diffuse.contents = UIImage(named: "\(materialFilePrefix)-albedo.jpg")
material?.displacement.contents = UIImage(named: "\(materialFilePrefix)-displacement.exr")
material?.displacement.textureComponents = .all
My Xcode project:
But I don't get the displacement... Anything wrong with the code?
From checking out SCNMaterial’s header and the presentation, it looks like you might need to enable tessellation on your node’s geometry for displacement to work. That’d look like this:
let tessellator = SCNGeometryTessellator()
tessellator.edgeTessellationFactor = 2 // may not need this line (default is 1), or may need to set it higher to get a smooth result
tessellator.insideTessellationFactor = 2 // ditto
sphereNode.geometry?.tessellator = tessellator
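For completeness, here is a rough, untested sketch that combines the tessellation above with the vector displacement material from the question; the asset prefix and node are placeholders, and as far as I know this path needs the Metal renderer on iOS 11 or later:
import SceneKit
import UIKit
let materialFilePrefix = "rock" // placeholder asset prefix, not a real asset name
let sphereNode = SCNNode(geometry: SCNSphere(radius: 1))
// Tessellate so there is enough geometry for the displacement to move
let tessellator = SCNGeometryTessellator()
tessellator.edgeTessellationFactor = 8
tessellator.insideTessellationFactor = 8
sphereNode.geometry?.tessellator = tessellator
// Vector displacement: sample all texture components, not just a single height channel
let material = sphereNode.geometry?.firstMaterial
material?.diffuse.contents = UIImage(named: "\(materialFilePrefix)-albedo.jpg")
material?.displacement.contents = UIImage(named: "\(materialFilePrefix)-displacement.exr")
material?.displacement.textureComponents = .all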

Rotate OxyPlot 90 Degrees in iOS View

I've tried everything I can find suggested elsewhere, as well as every permutation of the code you can see below, and I just cannot crack this.
The plot itself is working great; there isn't an issue there. I have a certain screen in my application where I want to draw an OxyPlot onto the view but rotated 90 degrees to suit the data better (for various reasons the application is currently locked to portrait).
The code in the view is:
private void CreatePlotChart()
{
var normalRect = new CGRect(0,0, View.Frame.Width, View.Frame.Height);
var rotatedRect = new CGRect(0, 0, View.Frame.Height, View.Frame.Width); // height and width swapped
var plot = new PlotView();
var radians = -90d.ToRadians();
plot.Transform = CGAffineTransform.MakeRotation((nfloat)radians);
// plot.Frame = normalRect;
plot.Model = ViewModel.Model;
// plot.InvalidatePlot(true); // no discernable effect
Add(plot);
// plot.Draw(rotatedRect); // context error
View.SubviewsDoNotTranslateAutoresizingMaskIntoConstraints();
View.AddConstraints(
plot.AtTopOf(View),
plot.AtLeftOf(View),
plot.WithSameHeight(View),
plot.WithSameWidth(View),
plot.AtBottomOf(View)
);
}
The code above results in the chart shown below. I also get this exact same chart if I pass the rotatedRect into the constructor, new PlotView(rotatedRect):
If I remove the use of constraints and pass in the rotatedRect like this:
private void CreatePlotChart()
{
var normalRect = new CGRect(0,0, View.Frame.Width, View.Frame.Height);
var rotatedRect = new CGRect(0, 0, View.Frame.Height, View.Frame.Width); // height and width swapped
var plot = new PlotView(rotatedRect);
var radians = -90d.ToRadians();
plot.Transform = CGAffineTransform.MakeRotation((nfloat)radians);
// plot.Frame = normalRect;
plot.Model = ViewModel.Model;
// plot.InvalidatePlot(true); // no discernable effect
Add(plot);
// plot.Draw(rotatedRect); // context error
// View.SubviewsDoNotTranslateAutoresizingMaskIntoConstraints();
//View.AddConstraints(
// plot.AtTopOf(View),
// plot.AtLeftOf(View),
// plot.WithSameHeight(View),
// plot.WithSameWidth(View),
// plot.AtBottomOf(View)
// );
}
I get a lot closer to the desired effect, as can be seen below:
If I go another step and "reset" its frame to the cached normalRect, I get even closer, with this:
All of these attempts feel too hacky. What is the best way of achieving the chart manipulation I need while maintaining the use of constraints to make sure things are positioned properly?
Update
If, after the first chart is drawn, I kick off another selection of data, the exact same code renders the chart exactly correctly:
This is also 100% repeatable, so I think this might be a bug in OxyPlot or some odd side effect of the way I'm using it.

How to rotate a particle with a specific angle programmatically in SceneKit?

I would like to rotate a particle; it is a simple line, emitted once in the center of the screen.
After I touch the screen, the method is called, and the rotation changes all the time. With 10° or 180°, around the x or z axis, the result is the same: the angle is N°, then Y°, then Z° (always a different number, with a random difference between them: with 10°, it is not offset by 10 each time, but by a random number). Would you know why?
func addParticleSceneKit(str: String) {
var fire = SCNParticleSystem(named: str, inDirectory: "art.scnassets/Particles")
fire.orientationMode = .Free
fire.particleAngle = 90
//fire.propertyControllers = [ SCNParticlePropertyRotationAxis : [1,0,0] ] // should it be a SCNParticlePropertyController? I don't know how to use it then. But it would not be for an animation in my case.
emitter.addParticleSystem(fire)
}
Thanks
The particleAngleVariation property controls the random variation in initial particle angles. Normally that defaults to zero, meaning particle angle isn't randomized, but you're loading a particle system from a file, so you're getting whatever is in that file — setting it to zero should stop the randomization you're seeing. (You can also do that to the particle system in the file you're loading it from by editing that file in Xcode.)
By the way, you're not adding another new particle system to the scene every time you want to emit a single particle, are you? Sooner or later that'll cause problems. Instead, keep a single particle system, and make it emit more particles when you click.
Presumably you've already set its emissionDuration, birthRate, and loops properties in the Xcode Particle System Editor so that it emits a single particle when you add it to the scene? Then just call its reset method, and it'll start over, without you needing to add another one to the scene.
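Here's a rough, untested sketch of that "one system, reset on demand" idea; the node and .scnp file names are placeholders, and it assumes the particle file is already configured to emit a single particle:
import SceneKit
// Rough sketch, not verified against your project
let emitterNode = SCNNode() // placeholder; use your existing emitter node
// "line.scnp" is a placeholder name for your particle file in art.scnassets/Particles
let fire = SCNParticleSystem(named: "line.scnp", inDirectory: "art.scnassets/Particles")!
fire.particleAngleVariation = 0 // stop the random per-particle angle offsets
fire.orientationMode = .free
fire.particleAngle = 90
emitterNode.addParticleSystem(fire) // add it once, e.g. during scene setup
// On each touch, restart the same system instead of adding another one
func emitAgain() {
    fire.reset()
}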
Also, regarding your comment...
fire.propertyControllers = [ SCNParticlePropertyRotationAxis : [1,0,0] ]
should it be a SCNParticlePropertyController? I don't know how to use it then. But it would not be for an animation in my case.
Reading the documentation might help with that. But here's the gist of it: propertyControllers should be a dictionary of [String: SCNParticlePropertyController]. I know, it says [NSObject : AnyObject], but that's because this API is imported from ObjC, which doesn't have typed collections. That's why documentation is important — it says "Each key in this dictionary is one of the constants listed in Particle Property Keys, and the value for each key is a SCNParticlePropertyController object..." which is just long-winded English for the same thing.
So, passing a dictionary where the key is a string and the value is an array of integers isn't going to help you.
The docs also say that property controllers are for animating properties, and that you create one from a Core Animation animation. So you'd use a property controller for angle if you wanted each particle to rotate over time:
let angleAnimation = CABasicAnimation()
angleAnimation.fromValue = 0 // degrees
angleAnimation.toValue = 90 // degrees
angleAnimation.duration = 1 // sec
let angleController = SCNParticlePropertyController(animation: angleAnimation)
fire.propertyControllers = [ SCNParticlePropertyAngle: angleController ]
Or for rotation axis if you wanted particles (that were already spinning freely due to orientation mode and angular velocity) to smoothly transition from one axis of rotation to another:
let axisAnimation = CABasicAnimation()
axisAnimation.fromValue = NSValue(SCNVector3: SCNVector3(x: 0, y: 0, z: 1))
axisAnimation.toValue = NSValue(SCNVector3: SCNVector3(x: 0, y: 1, z: 0))
axisAnimation.duration = 1 // sec
let axisController = SCNParticlePropertyController(animation: axisAnimation)
fire.propertyControllers = [ SCNParticlePropertyRotationAxis: axisController ]

Add mouse events to WebGL objects

I'm using xtk to visualize medical data in a WebGL canvas. Currently I'm playing around with this lesson:
lesson 10
This library is pretty good but not very well documented. I want to get rid of that GUI and add some mouse events. If I load the mesh from the GUI, how can I add a mouse event to the mesh? I actually don't know where to start; it's a little bit confusing to get started with this library.
I tried
mesh.click(function(){
alert("yes");
})
or
mesh.mousedown(function(){
alert("yes");
})
Objects rendered in WebGL are not part of the DOM, and as such don't generate events like DOM elements do. This means that for events like these you have to implement the mouse interaction code yourself.
Traditionally in WebGL/OpenGL this process is known as "Picking", and there are several decent resources for it online. (For example: http://webgldemos.thoughtsincomputation.com/engine_tests/picking) The core process is something like this, though:
For each pickable object in your scene, assign it a color. Put this in a lookup table somewhere.
Re-render the entire scene to a texture, rendering each pickable object with its assigned color.
Once the scene is rendered, determine your mouse coordinates and read back the color of the texture at that X/Y.
Fetch the object associated with that color from your lookup table. This is the object your mouse cursor is pointing at!
As you can see, while not a difficult method conceptually, this also involves several mid-level WebGL topics, such as rendering to a texture, and as such is not usually recommended for beginners. I'm not sure if there are any features in xtk to assist with this (honestly I had never heard of the library before your post), but I would guess that this is something that you'll have to implement on your own.
DOM events are not supported, but you can do it with xtk. Check out this JSFiddle:
http://jsfiddle.net/haehn/r7Ugf/
// create and initialize a 3D renderer
var r = new X.renderer3D();
r.init();
// create a cube and a sphere
cube = new X.cube();
sphere = new X.sphere();
sphere.center = [-20, 0, 0];
r.interactor.onMouseMove = function() {
// grab the current mouse position
var _pos = r.interactor.mousePosition;
// pick the current object
var _id = r.pick(_pos[0], _pos[1]);
if (_id != 0) {
// grab the object and turn it red
r.get(_id).color = [1, 0, 0];
} else {
// no object under the mouse
cube.color = [1, 1, 1];
sphere.color = [1, 1, 1];
}
r.render();
}
r.interactor.onMouseDown = function(left, middle, right) {
// only observe right mouse clicks
if (!right) return;
// grab the current mouse position
var _pos = r.interactor.mousePosition;
// pick the current object
var _id = r.pick(_pos[0], _pos[1]);
if (_id == sphere.id) {
// turn the sphere green
sphere.color = [0, 1, 0];
r.render();
}
}
r.add(cube); // add the cube to the renderer
r.add(sphere); // and the sphere as well
r.render(); // ..and render it
Easy, no?
XTK implements picking the way Toji explained (i.e. with a framebuffer where every object is rendered in a different RGBA "color"). It will work while you have fewer than 255^4 objects, so almost always. There are other methods, like unprojecting, but they would be longer I think.
So with X.renderer.pick and X.renderer.get you can find the object under the mouse and change its properties. However, for the moment you can only change visualization properties (see the setGetter and setSetter in every class); you cannot move an X.object (since the X.object._transform attribute is private and there is no getter/setter for it yet).
That's something interesting to deal with: adding a getter/setter pair for X.object's transform would allow, for example, a user to put medical objects (modeled by a mesh or something else) in the scene and place them to measure distances or to see whether something will fit for an operation. Wouldn't that be a good idea, Haehn? And it's a minor change in the framework.

How do I render a string on an image in Windows Phone Mango?

I am trying to render a string over an image chosen by the user via the PhotoChooser task. I have seen various replies to similar questions, but none of them have nailed it.
This is what I have come up with -
void photochoosertask_Completed(object sender, PhotoResult e)
{
if (e.TaskResult == TaskResult.OK)
{
System.Windows.Media.Imaging.BitmapImage bmp = new System.Windows.Media.Imaging.BitmapImage();
bmp.SetSource(e.ChosenPhoto);
image1.Source = bmp;
string steamer = "SO!";
System.Windows.Media.Imaging.WriteableBitmap bmps = new System.Windows.Media.Imaging.WriteableBitmap(bmp);
RenderString(bmps, steamer);
}
}
private void RenderString(System.Windows.Media.Imaging.WriteableBitmap bitmap, string steamer)
{
textBlock1.Text = steamer;
bitmap.Render(textBlock1, null);
bitmap.Invalidate();
}
The code, however, doesn't work. I am most likely making a major mistake here. Any help appreciated, thanks!
According to the documentation:
If an empty transform is supplied [i.e. the null you're passing as the second parameter], the bits representing the element show up at the same offset as if they were placed within their parent.
So if I understand what's happening correctly (and I probably don't), your textBlock1 element is being rendered with the same offset as it has on your parent form. So it may be that textBlock1 is so far down from the top and left that it doesn't show up in your writeable bitmap.
BTW, I'm not familiar with WriteableBitmap, but what you're doing (putting text into a UI element and then rendering that element onto your bitmap) seems like a strange way to add text to a bitmap.
I just figured it out. Thought I should post the solution code here, might help somebody - someday :)
//setup a writeable bitmap with required dimensions
System.Windows.Media.Imaging.WriteableBitmap wbmps = new System.Windows.Media.Imaging.WriteableBitmap(x,y);
//set up a transform, we'll use ScaleTransform and we'll keep things simple here, 1x on both the axis
ScaleTransform transform = new System.Windows.Media.ScaleTransform();
transform.ScaleX=1;
transform.ScaleY=1;
//now we need to render the image on the writeablebitmap and follow it up by rendering a string
wbmps.Render(imageelement,transform);
//Now render the string which is equivalent to TextBlock.Text
wbmps.Render(texblock,transform);
//Finally - redraw the writeablebitmap to complete the rendering
wbmps.Invalidate();
