I'm making a game similar to Terraria in Godot, and I want a lighting system like Terraria's. The light source is a circle, but the light can still penetrate through a couple of blocks before stopping. I tried using a Light2D and a LightOccluder2D for the blocks, but the light just stops right at the occluder. How can I make one similar to Terraria's?
Terraria Lighting
This is the same question as this question, but that one was about Unity.
I can't share much code because it's proprietary, but this is a bug that's been haunting me for a while. We have SceneKit geometry added to the ARKit face node and displayed inside an ARSCNView. It works perfectly almost all of the time, but about 1 in 100 times nothing shows up at all. The ARSession is running, and none of the parent nodes are set to hidden. Further, when I use the Debug Memory Graph function in Xcode, the geometry appears to be entirely visible there (and doesn't seem to be set to hidden). I can see all the nodes attached to the face node perfectly within the ARSCNView in the memory graph, but on the screen nothing shows up. This has been an issue for multiple iOS versions, so it didn't just appear with a recent update.
Has anybody run into a similar problem, or does anybody have any ideas to look into? Is it an Apple bug, or is there a timing issue I might not be aware of? It's been really hard to debug because of how infrequent it is, and I haven't found it discussed on any other forums (but point me in the right direction if there is a previous discussion). Thanks!
This is pretty common practice if AR tracking is poor for some reason.
I ran into a similar problem too. I think it's a tracking error that arises through the fault of the AR app's user. Sometimes, if you're using a World Tracking configuration in ARKit and you scan the surrounding environment carelessly, or if you're tracking under inappropriate conditions, you get sloppy tracking data, which results in a situation where your world grid/axes may be unpredictably shifted aside and your model may fly off somewhere. If such a situation arises, look for your model somewhere nearby; it may be behind you.
If you're using a device with a LiDAR scanner, the aforementioned situation is almost impossible, but if you're using a device without LiDAR you need to track your room thoroughly. There must also be good lighting conditions and high-contrast real-world objects with distinguishable, non-repetitive textures.
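As a minimal sketch (assuming an ARKit version that supports scene reconstruction), you can check at runtime whether the current device has LiDAR-backed scene reconstruction and only enable it when available:

```swift
import ARKit

func makeWorldTrackingConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]

    // Scene reconstruction is only supported on LiDAR-equipped devices,
    // so guard the assignment behind the capability check.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    return configuration
}
```

On devices where the check fails, the configuration falls back to plain world tracking, which is where the careful-scanning advice above matters most.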
I am new to WebAR, but I have experience with building AR scenes with Unity3D and software such as Vuforia and 8thWall.
I have a question about the markers in AR.js. Why are they stuck with the thick black border? Is the software not able to just recognize a unique image, the way Vuforia and Wikitude work? I apologize for how naive I may be when it comes to WebAR; however, I see this as an issue for the adoption rate of this technology if developers cannot use truly custom images and patterns. Is there a solution available that I may have missed somewhere? What happens if someone deletes/erases the big black border on the marker? Does it still work?
Thanks to anyone who can shed some light on this!
AR.js uses ARToolKit and is therefore marker-based. If you want to use it, there is not much you can do about that. You can still have unique images inside the black box, but not much else.
It's worth mentioning that it is possible to make the borders thinner; there's even a work-in-progress branch worth looking into.
aframe-argon tried integrating A-Frame and Vuforia, but I'm not sure whether it's up to date.
As an introduction and context, I'm currently a novice iOS app developer and I want to make sure I'm not reinventing the wheel too much as I make this app (reinventing wheels can get very expensive.)
The app will allow the user to download our videos off the internet and will allow storage for offline usage. The problem with storing these videos on the device is that many of them will be too long and thus too big to be practical to store.
The videos are quite simple however, consisting of a couple short "real" video clips at the beginning and end, with the bulk of the video being still images animated around the screen. The animations would consist solely of opacity and simple transformation keyframes (translate, scale, rotate around static anchor point), and would require a variety of easing functions for each transition.
The hardest part will likely be that the "video" player will also have to track an audio player's timecode, and will have to support seeking to any arbitrary point like a normal video player.
So, now that I've described the problem, here's the solution I've come up with so far. Hopefully doing it this way will reduce the probability of XY problems. :)
The idea is to basically do a dumbed-down version of what Final Cut and other editing programs do with animations—have a bunch of clips, sometimes overlapping, and be able to animate the position, scale, rotation, and opacity of each using keyframes.
My first instinct as far as implementation goes is to use some of iOS's game-engine tools to do the animations (maybe SceneKit, because it seems to allow animations to use scene time instead of real time, even though it's primarily 3D and I'm doing 2D animations), and to manually handle syncing time with the audio player, as well as adding and removing nodes from the scene when seeking through the video and when clips begin and end.
What are some built-in systems, plugins, etc. that I can take advantage of to make this easier and faster to develop and maintain? Double points if I don't have to transcode the animations by hand to some custom format.
As I mentioned in my comment, your question is rather broad and contains multiple questions in one, so I will address what you mentioned is likely the hardest part:
https://developer.apple.com/documentation/avfoundation/avplayeritem
https://developer.apple.com/documentation/avfoundation/avasset
Instead of SceneKit, take a look at SpriteKit and its SKVideoNode.
Also, research Metal video processing. There are quite a few example projects available that you could use as a starting point.
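As a rough sketch of that direction (the file path and timescale are placeholder assumptions, not anything from your question), an SKVideoNode can be backed by an AVPlayer, which gives you precise seeking and lets you follow an external audio player's timecode via AVFoundation:

```swift
import SpriteKit
import AVFoundation

class VideoScene: SKScene {
    // The URL is a placeholder; point it at one of your downloaded clips.
    private let player = AVPlayer(url: URL(fileURLWithPath: "/path/to/clip.mp4"))
    private var videoNode: SKVideoNode!

    override func didMove(to view: SKView) {
        videoNode = SKVideoNode(avPlayer: player)
        videoNode.position = CGPoint(x: frame.midX, y: frame.midY)
        videoNode.size = frame.size
        addChild(videoNode)
        videoNode.play()
    }

    // Seek the video to an arbitrary timecode (in seconds),
    // e.g. to stay in lockstep with an audio player.
    func seek(toSeconds seconds: Double) {
        let time = CMTime(seconds: seconds, preferredTimescale: 600)
        player.seek(to: time, toleranceBefore: .zero, toleranceAfter: .zero)
    }
}
```

The same AVPlayer can also be observed with addPeriodicTimeObserver(forInterval:queue:using:) if you need to drive your keyframe animations from the playback clock.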
When building a platformer game with SpriteKit (like Doodle Jump, for example), is it preferable to move the camera up, or the background nodes down?
What is the standard practice in other frameworks?
MOVE THE CAMERA!!!
One of the weirdest things about 2D game engines is that it often takes them a series of versions to get a camera.
They should be born with them.
SpriteKit was no different; it took forever to get a camera.
Now that it has one, never ever think of not using it.
It will make your life a million times simpler.
I can think of no exceptions, but look forward to being proven wrong.
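As a minimal sketch (the node setup and the follow rule are assumptions for illustration), adding an SKCameraNode and moving it instead of the backgrounds looks roughly like this:

```swift
import SpriteKit

class PlatformerScene: SKScene {
    // A stand-in player node; in a real game this would be your character.
    private let player = SKSpriteNode(color: .white, size: CGSize(width: 32, height: 32))
    private let cameraNode = SKCameraNode()

    override func didMove(to view: SKView) {
        addChild(player)
        addChild(cameraNode)
        camera = cameraNode   // tell the scene to render through this camera
    }

    override func update(_ currentTime: TimeInterval) {
        // Follow the player upward (Doodle Jump style); backgrounds and
        // platforms never have to move.
        cameraNode.position = CGPoint(x: frame.midX,
                                      y: max(cameraNode.position.y, player.position.y))
    }
}
```

Because the scene renders through `camera`, everything else (backgrounds, platforms) can stay exactly where it was placed.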
Moving the background was a hotfix until proper camera support was added. Use the camera; it's easy and fun. No reason not to, IMO.
I'm trying to replicate the pendulum effect seen in the old game "Gold Miner", if you've ever played it. Do I need to use a physics engine for that, or not? I heard that Box2D is preferred over Chipmunk when it comes to ropes, but do I really have to join two objects together with a rope to accomplish this pendulum effect? I'd love to do it without a physics engine, but if I have to use one I think I would prefer Chipmunk, as it comes with Cocos2d v3. (In short, what's the best way of making a pendulum that swings forever and can be lowered and raised?)
I'm a complete n00b when it comes to physics engines; I've never used one before and have only made non-game apps before :/ Any help is very appreciated =)
You mean the grappling hook in that game? That's absolutely not physics, or at least it can be done easily without a physics engine. At most you may want to use the physics engine's raycast collision-detection methods (if any).
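As a hedged sketch of that idea (the types, names, and constants are illustrative assumptions, not Cocos2d API), a Gold Miner-style swing can be driven directly from a sine of elapsed time, with the rope length kept as a separate variable you raise and lower:

```swift
import Foundation

/// A simple pendulum driven by trigonometry instead of a physics engine.
/// Assumes a coordinate system where y points up and the hook hangs below the pivot.
struct Pendulum {
    var pivot: (x: Double, y: Double)   // where the rope hangs from
    var length: Double                  // current rope length; change it to lower/raise the hook
    var amplitude: Double = .pi / 3     // maximum swing angle in radians
    var period: Double = 2.0            // seconds for one full back-and-forth swing

    /// Position of the hook at a given elapsed time.
    func hookPosition(at time: Double) -> (x: Double, y: Double) {
        // The angle oscillates forever between -amplitude and +amplitude.
        let angle = amplitude * sin(2.0 * .pi * time / period)
        return (pivot.x + length * sin(angle),
                pivot.y - length * cos(angle))
    }
}

// Example: each frame, feed in the elapsed time and read back the hook position.
let pendulum = Pendulum(pivot: (x: 160, y: 480), length: 80)
let position = pendulum.hookPosition(at: 1.5)
```

Lowering or raising the hook is then just a matter of changing `length` over time, and the swing continues forever because the angle is a pure function of elapsed time.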