Kivy Image and Widget in different locations after rotation - kivy

I tried to rotate my widget with:
with self.canvas.before:
    PushMatrix()
    Rotate(origin=self.center, angle=90)
with self.canvas.after:
    PopMatrix()
But my widget and the image it contains end up in different places. I display the widget's position with:
with self.root.ids.background.canvas:
    Color(1, 0, 0, 1, mode='rgba')
    Rectangle(pos=self.root.ids.ship.pos, size=self.root.ids.ship.size)
The widget's collide method also suggests the same finding I get from painting the position.
It doesn't matter whether I use the Scatter or Image class; either way, when I rotate, the image ends up in a different place. Here is an example:
I also tried to control the position with:
self.children[0].pos = self.center
Here is the complete Git repo so you can test it yourself:
The link to the GitHub repo

I am now using https://github.com/kivy-garden/garden.rotabox for my rotation issue, and it works great.

Related

Konva object snapping with transformer jitters

I'm trying to make an editor using Konva.js.
In the editor I show a smaller draw area which becomes the final image. For this I'm using a group with a clipFunc. This gives a better UX since the transform controls of the transformer can be used "outside" of the canvas (the part visible to the user), and it allows the user to zoom and drag the draw area around (imagine a frame in Figma).
I want to implement object snapping based on this: https://konvajs.org/docs/sandbox/Objects_Snapping.html (just edges and center for now). However, I want it to work when multiple elements are selected in my Transformer.
The strategy I'm using is basically to calculate the snapping based on the .back element created by the transformer. Once I know how much to snap, I apply it to every node within the transformer.
However, when doing this, the items start jittering when the cursor moves close to the snapping lines.
My previous implementation had the draw area fill the entire Stage, and I did manage to get that working with the same strategy (no jitter).
I don't really know what the issue is, and I hope some of you can point me in the right direction.
I created a sandbox to illustrate my issue: https://codesandbox.io/s/konva-transformer-snapping-1vwjc2?file=/src/index.ts

How can I add a 3d object as a marker on Google Maps like Uber does

I want to add a 3D marker for showing cars on the map with rotation, like Uber does, but I can't find any information on adding 3D objects with the Google Maps SDK for iOS.
Would appreciate any help.
Apparently no one is seeing what OP and I are seeing, so here's a video of an Uber car turning 90 degrees. Play it frame by frame and you'll notice that it's not a simple image rotation. Either Uber went to the trouble of rendering ~360 angles of each vehicle, or it really is a 3D model. Doing 360 images of every car seems foolish to me.
From what I can tell, they are not using 3D objects. They are also not animating between 400 images of a car at different angles. They're doing a mix of rotating image assets and animating between ~50-70 images of a car at different angles. The illusion is perfect because it really does look like they used 3D car models!
Look at this GIF of an Uber car turning a corner (Dropbox link):
We can clearly see that the shadow and the car's viewing angle don't update as often as the car's rotation.
Here I overlaid 2 images of the car at different angles, but using the same car image:
We can see that the map is rotated ~5 degrees, but the car image is perfectly clear because it hasn't changed; it was simply rotated.
Uber just released a blog post documenting this.
It looks like the vehicles were modeled in 3D software and then image assets depicting different angles were exported for the app. Depending on where the vehicle is on the map and its heading, a different asset is used.
First, they are NOT 3D objects, if that's what you're referring to (it's possible to create one, though it's a waste of time). They are simply 3D-looking images, created mostly in Photoshop or Illustrator, that have 3D perspective (they are also retina-optimized, which is why they look very clear).
The reason you see the car rotate is that the UIImageView holding the image is rotated (mostly using CABasicAnimation), based on calculations from the device's position and heading (the same technology running apps use to track your location, etc.), which you can retrieve with Core Location.
It's a process, but very doable. Good luck!
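As a rough illustration of that rotation approach, here is a minimal sketch, assuming you already have a car UIImageView on screen; it uses a UIView animation instead of CABasicAnimation for brevity, and the class name and duration are just placeholders.

import UIKit
import CoreLocation

final class CarMarkerRotator: NSObject, CLLocationManagerDelegate {

    let carImageView: UIImageView               // the car image shown over the map
    private let locationManager = CLLocationManager()

    init(carImageView: UIImageView) {
        self.carImageView = carImageView
        super.init()
        locationManager.delegate = self
        locationManager.startUpdatingLocation() // assumes location permission was already granted
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // course is the direction of travel in degrees; -1 means it is unknown
        guard let course = locations.last?.course, course >= 0 else { return }
        let radians = CGFloat(course * .pi / 180.0)
        // Animate the image view toward the new heading.
        UIView.animate(withDuration: 0.3) {
            self.carImageView.transform = CGAffineTransform(rotationAngle: radians)
        }
    }
}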
Thanks. All answers are valid.
If you want, you can see the video of it running and how it works.
You can generate a sprite sheet (around 60 tiles).
How I implemented it and the tools you need:
A 3D source car model.
Blender: animate the camera using a path animation (ellipse).
The camera rotates around the car from a top to bottom view.
Render the 3D marker using the sprites generated with Blender; for the angle, use the bearing change on location updates (see the sketch after the result link below).
The vehicle needs to be rendered to support most screens, so the base size for each tile was 64 px, and I scaled it according to the screen DPI.
Result implementation:
https://twitter.com/ronfravi/status/1133226618024022016?s=09
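To make the bearing step concrete, here is a minimal sketch of picking a pre-rendered tile from the bearing, assuming 60 tiles named car_0 through car_59 covering 360 degrees and a GMSMarker from the Google Maps SDK; the asset names and frame count are assumptions.

import UIKit
import CoreLocation
import GoogleMaps

let frameCount = 60                              // number of pre-rendered tiles
let degreesPerFrame = 360.0 / Double(frameCount)

// Pick the tile whose rendered angle is closest to the current bearing.
func spriteFrame(for bearing: CLLocationDirection) -> UIImage? {
    let normalized = (bearing.truncatingRemainder(dividingBy: 360) + 360)
        .truncatingRemainder(dividingBy: 360)
    let index = Int((normalized / degreesPerFrame).rounded()) % frameCount
    return UIImage(named: "car_\(index)")        // e.g. car_0 ... car_59
}

// On each location update, swap the marker icon to the matching tile.
func updateCarMarker(_ marker: GMSMarker, with location: CLLocation) {
    guard location.course >= 0 else { return }   // course is -1 when unknown
    marker.position = location.coordinate
    marker.icon = spriteFrame(for: location.course)
}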
I believe a pair of marker images, one being the real marker and the other a darker, blurry shadow, can do the trick in a cheaper way. Put the shadow marker beneath the marker, shift it on the X and Y axes by an amount that places the shadow appropriately, and finally move and rotate them at the same time (on the web version you may need separate rotated images); that should be able to [re]create the illusion.
As @Felix Lapalme has already explained it as simply as possible, I am not diving any deeper into explaining it.
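A rough sketch of that marker-plus-shadow idea with the Google Maps SDK for iOS; the image names, the anchor offset, and the mapView are assumptions.

import UIKit
import CoreLocation
import GoogleMaps

let position = CLLocationCoordinate2D(latitude: 51.5, longitude: -0.127)

// The shadow marker sits beneath the car and is nudged off-center via its anchor.
let shadow = GMSMarker(position: position)
shadow.icon = UIImage(named: "car_shadow")       // darker, blurry copy of the car image
shadow.groundAnchor = CGPoint(x: 0.45, y: 0.45)  // small offset so the shadow peeks out
shadow.zIndex = 0
shadow.map = mapView

let car = GMSMarker(position: position)
car.icon = UIImage(named: "car")
car.groundAnchor = CGPoint(x: 0.5, y: 0.5)
car.zIndex = 1
car.map = mapView

// Move and rotate both markers together so the shadow follows the car.
func update(bearing: CLLocationDirection, coordinate: CLLocationCoordinate2D) {
    for marker in [shadow, car] {
        marker.position = coordinate
        marker.rotation = bearing
    }
}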
Check out my repo.
I used a DAE model and turned it according to the heading variable:
https://github.com/hilalalhakani/HHMarker
I achieved this in Xamarin by rendering three.js in a WebView and then sending the image buffer directly to the marker instead of drawing it to the screen. It only took a couple of days, and for my use case it was needed because I want the drivers to be able to change the color and kind of car; but if you don't need that functionality, you're better off just using a sequence of rendered images.
If you want to rotate your image as the marker and show a moving object, you can use a .gif image. It would be helpful for you.
Swift 3
let position = CLLocationCoordinate2D(latitude: 51.5, longitude: -0.127)
//Rotate a marker
let degrees = 90.0
let london = GMSMarker(position: position)
london.title = "London"
//Customize the marker image
london.icon = UIImage(named: "YourGifImageName")
london.groundAnchor = CGPoint(x: 0.5, y: 0.5)
london.rotation = degrees
london.map = mapView
For more info, please check here.

Resize from 4-inch to 3.5 using Sprite Kit

I'm doing my first iOS game using SpriteKit and Swift.
I started positioning all my sprites and labels like:
sprite.position = CGPoint(x: 0, y: 200)
When I run it on a 4-inch device, it looks really good, but when I run it on a 3.5-inch device the game looks incomplete.
Are there any good solutions to resize all the layers instead of redesigning all my scenes?
If you didn't change the initial setting for your scene's scaleMode, you should have it set to .aspectFill. This setting will size your scene keeping its aspect ratio but trying to fill your screen; it will basically chop part of it off. There's little you can do except go for an .aspectFit setting. It will keep the aspect ratio but fit all of your scene on the screen, leaving you with black letterboxing bands.
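For reference, a minimal sketch of where that setting goes when presenting the scene; the scene size and the view controller are placeholders.

import UIKit
import SpriteKit

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        if let skView = self.view as? SKView {
            // Replace SKScene with your own scene subclass.
            let scene = SKScene(size: CGSize(width: 320, height: 568))
            // .aspectFill crops to fill the screen; .aspectFit letterboxes the whole scene instead.
            scene.scaleMode = .aspectFit
            skView.presentScene(scene)
        }
    }
}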
Most people do not use .aspectFit, but if you know how to resize the scene depending on the screen size, you can add the needed padding on either side to remove the black letterboxing. I created a framework that does that for you and also calculates the original anchor point for your scene, so you lose nothing of your current coordinate implementation. All with 2 methods. I highly suggest you take a look at it: SceneSizer
Just:
Download the ZIP file for the Repository
Open the "SceneSizer" sub-folder
Drag the SceneSizer.framework "lego block" in your project
Make sure that the framework is Embedded and not just Linked
Import the framework somewhere in your code: import SceneSizer
And you're done; you can now call the sizer class with: SceneSizer.calculateSceneSize(#initialSize: CGSize, desiredWidth: CGFloat, desiredHeight: CGFloat) -> CGSize (a usage sketch follows these steps)
Documentation is in the folder for a clean and full use with a standard scene. Hope this helps!
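A usage sketch based on the signature quoted above; the argument labels, the design size, and how the result is applied are assumptions on my part.

import SpriteKit
import SceneSizer

// Hypothetical design size for a 4-inch layout.
let designSize = CGSize(width: 320, height: 568)

// Ask SceneSizer for the padded scene size, then build the scene with it.
let paddedSize = SceneSizer.calculateSceneSize(initialSize: designSize,
                                               desiredWidth: 320,
                                               desiredHeight: 568)
let scene = SKScene(size: paddedSize)
scene.scaleMode = .aspectFit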

Why is object.y not positioning the image in Corona SDK?

displaycontent = display.newImageRect (rawdata[currentpath][3], screenW*1.1, ((screenW*1.1/1654)*rawdata[currentpath][6]))
displaycontent.anchorY = 0
displaycontent.y = screenH*0.78
My program loads an image from a database to be displayed on the mobile phone's screen. Everything works correctly apart from being able to position it with the y coordinate.
The only thing that changes its position is the anchor point: 0 puts the top of the image in the centre of the screen, and values from 0.1 to 1 all position it higher. Changing the y position via object.y has zero effect regardless of what I set it to.
(the size settings probably look a bit weird in the first line, but this is because the images are different sizes and need to show the correct proportions on different screen types).
By the way, I am using a tab bar widget as the UI (in case that is relevant).
Any help would be appreciated.
Edit: I am aware that displaycontent is a bad name for a variable because of its similarity to things like display.contentCenterY; this will be changed to prevent any confusion when I look over the code in the future.
I went through my code and tried disabling sections to find the culprit and a content mask was preventing me from setting the position of the loaded images within it.
I will look over my masking code and fix it (should be straight forward now I know where the problem started).
If anyone else has a similar problem (where an image or object won't position itself at the given coordinates), check your content mask, as that may be the issue!

Pattern fill that "follows" the path?

On iOS, I'm using core-plot to draw a line graph. I want the line to be bordered top and bottom with a solid color and a different color in the middle, a striping effect. This cannot be done using CPTLineStyle, so I created a custom line style that uses CGContextSetStrokePattern to draw the line.
I thought I could achieve the desired effect by creating a striped image and using it as the stroke pattern. This works, but the image orientation does not follow the direction of the path: the stripes are always horizontal even if the direction of the path is 45 degrees.
How can I tell Quartz to auto-rotate the pattern fill so that it follows the vector direction of the graph segment? Or alternatively, how can I get core-plot to do this for me?
We recently added the lineGradient property to CPTLineStyle that gives you a very flexible way to do this. See the "Gradient Scatter Plot" demo in the Plot Gallery example app.
Note that this change was added after the 1.3 release and is not part of a downloadable release yet. You will need to pull the latest code with Mercurial to get the change or wait for the next release.
The best solution I've found is to use two plots: the first with the wider line style, the second with the narrower one. That achieves the desired effect.
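A rough Swift sketch of that layering idea with Core Plot; the graph, data source, colors, and line widths are assumptions, and both plots are expected to return the same data points.

import UIKit
import CorePlot

// Draw a wide "border" plot underneath and a narrower plot on top of it,
// both fed by the same data source, to get a bordered line.
func addLayeredLinePlots(to graph: CPTXYGraph, dataSource: CPTScatterPlotDataSource) {
    let borderStyle = CPTMutableLineStyle()
    borderStyle.lineWidth = 6.0
    borderStyle.lineColor = CPTColor(cgColor: UIColor.darkGray.cgColor)

    let innerStyle = CPTMutableLineStyle()
    innerStyle.lineWidth = 2.0
    innerStyle.lineColor = CPTColor(cgColor: UIColor.orange.cgColor)

    let borderPlot = CPTScatterPlot()
    borderPlot.identifier = "border" as NSString
    borderPlot.dataLineStyle = borderStyle
    borderPlot.dataSource = dataSource           // both plots share one data source

    let innerPlot = CPTScatterPlot()
    innerPlot.identifier = "inner" as NSString
    innerPlot.dataLineStyle = innerStyle
    innerPlot.dataSource = dataSource

    graph.add(borderPlot)                        // added first, drawn underneath
    graph.add(innerPlot)
}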
