Any KNX-speaking simple Decora rocker switches? - home-automation

I'm rebuilding my house later this year. I plan to do all the lighting using KNX controls, so every light will be driven by a KNX actuator.
For wall switches, I want something much simpler and more familiar than touchscreens or capacitive switches: a Decora-style rocker switch that sends KNX signals over twisted-pair wiring. Properties I'm looking for:
Decora style look
Momentary action "on" and "off" buttons (depress the top or bottom of the paddle momentarily)
Signals via KNX over twisted pair wiring
Powered via twisted pair wiring (I don't care if it's the same wires as the signaling)
Has anyone ever seen such a product?

Related

How to add "Action Points" to a SceneKit game using GameplayKit or something similar

This is a very common question, and I am not asking for technical details. What I am looking for is an approach or best practice guidance for the following situation:
Imagine a jump-and-run game made entirely with SceneKit. The game is played by controlling a character who runs left or right, climbs up walls, and jumps over obstacles. (It is a 2½-D style game where the character always runs in one direction or its opposite, mainly along the x-axis.)
At certain "locations" (on the scene's x-axis) I need to trigger specific actions as soon as the main character walks, jumps, or runs past that point.
What I have done so far is add simple invisible SCNPlanes with static physics bodies, with the bitmask configured to detect only contacts (no collisions). In the physics contact handler (the physics delegate) I can then detect which objects the character passes through. Currently I have a large switch statement on the names of these "action walls", as I call the planes the character walks through, so I can trigger whatever action should happen at that location. A sketch of this setup is shown below.
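A minimal Swift sketch of that setup (the bitmask values, node names, and triggered actions are placeholders for illustration, not the actual project code):

```swift
import SceneKit

// Category bitmasks for the character and the trigger planes (assumed values).
let characterCategory = 1 << 0
let actionWallCategory = 1 << 1

// Build one invisible "action wall" at a given x position on the running axis.
func makeActionWall(named name: String, atX x: Float) -> SCNNode {
    let plane = SCNPlane(width: 1.0, height: 5.0)
    plane.firstMaterial?.transparency = 0.0          // invisible in the rendered scene

    let node = SCNNode(geometry: plane)
    node.name = name
    node.position = SCNVector3(x, 2.5, 0)

    // Static body that only reports contacts and never collides.
    let body = SCNPhysicsBody(type: .static, shape: nil)
    body.categoryBitMask = actionWallCategory
    body.collisionBitMask = 0
    body.contactTestBitMask = characterCategory
    node.physicsBody = body
    return node
}

// Physics delegate that dispatches on the wall's name, as described above.
class ActionWallHandler: NSObject, SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        let names = [contact.nodeA.name, contact.nodeB.name].compactMap { $0 }
        guard let wallName = names.first(where: { $0.hasPrefix("actionWall") }) else { return }
        switch wallName {
        case "actionWall_openDoor":  print("trigger: open door")    // placeholder actions
        case "actionWall_spawnItem": print("trigger: spawn item")
        default: break
        }
    }
}
// Remember to assign scene.physicsWorld.contactDelegate to an ActionWallHandler instance.
```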
It works this way, and actually quite well, but I wonder whether there is a better way to add such "action points", e.g. using GameplayKit, yet I only find information about agents, behaviors, and pathfinding.
What I am looking for is a static point that triggers an associated action when the character crosses it.
I have no clue about the possibilities of GameplayKit. Can anyone suggest an approach using GameplayKit, or whatever Apple has in its magic box of useful tools, that would be better than SCNPlane action walls and the physics handler?

HoloLens replaces GameObjects after spatial map scan

I have made an application for the HoloLens with the HoloToolkit. The app gets an array of 3D positions and spawns GameObjects at these positions.
After walking around the room a bit and changing some real-world objects (maybe opening a door or something), the HoloLens recognizes the change, a window opens saying 'Finding your space', and all of the GameObjects end up in the wrong positions.
How can I use SpatialMapping without it repositioning the GameObjects I place?
After walking around the room a bit and changing some real-world objects (maybe opening a door or something), the HoloLens recognizes the change
I'm not sure of the details of the changes you made to real-world objects, but some changes in the real world can have a big impact on lighting conditions. For example, after opening an exterior door in a dimly lit room, a lot of daylight will enter the room.
Lighting can make a difference in the visual features that HoloLens detects. So changeable lighting conditions might cause HoloLens to lose tracking.
Besides, environments where the visual features change because most objects move will cause the same problem.
I recommend that you try the steps in "I see a message that says Finding your space" to fix this problem. In short, make sure your artificial lighting conditions and Wi-Fi signal are stable, and try moving more slowly.

How to handle performance for 1000s of components on the screen in React Native?

I am making an app where there is a 32x64 grid and when you click any square it will light up. I want the user to be able to drag their finger and it fills up all the squares the finger touches.
I am basically setting the state of my component to be an array of all of the squares and when a user touches the screen it switches the state to be a new array of squares (with one more filled) and renders the view.
With so many squares (components) on screen, the re-rendering makes performance really bad on my phone. It is decent in the phone simulator on my computer, but could be better. I have tried adding a key to all of the squares in the array and I changed the square from a regular Component to a PureComponent; although that did help performance, it still could be a lot better.
After researching for a while, I decided I needed to reach out for guidance. I am targeting iPhone, so do you think I should do the whole thing in Swift if I want better performance, or are there other ways to optimize the performance of this many components in React Native?
Keys are required when returning multiple nodes.
PureComponent can help, but you can use two more techniques.
Use simpler functional, stateless components.
Introduce a <Row /> component to split up the updating work.

Placing Virtual Object Behind the Real World Object

In ARKit for iOS, if you display a virtual item it always appears in front of any real item. This means that even if I stand between the camera and the virtual item, I still see the virtual item. How can I fix this scenario?
The bottle should be visible, but it is being cut off.
You cannot achieve this with ARKit alone. It offers no off-the-shelf solution for occlusion, which is a hard problem.
Ideally you'd know the depth of each pixel projected onto the camera, and would use that to determine which parts are in front and which are behind. I would not try anything with the feature points ARKit exposes, since 1) their positions are inaccurate and 2) there's no way to know which feature point in frame A corresponds to which feature point in frame B. The data is far too noisy to do anything good with.
You might be able to achieve something with third-party options that process the captured image and estimate depth, or different depth levels in the scene, but I don't know of any good solution. There are SLAM techniques that yield a dense depth map, like DTAM (https://www.kudan.eu/kudan-news/different-types-visual-slam-systems/), but that would mean redoing most of what ARKit is doing. There might be other approaches I'm not aware of. Apps like Snapchat do this in their own way, so it is possible!
So basically your question is about mapping the coordinates of the virtual item into the real-world coordinate system; in short, you want the virtual item to be blocked by the real item, and to see the virtual item only once you have passed the real item.
If so, you need to know the spatial relationship between every object in the environment, and then you need to know exactly where you are to decide whether the virtual item is blocked.
It's not an intuitive way to fix this; however, it's the only way I can think of.
Cheers.
What you are trying to achieve is not easy.
You need to detect the parts of the real world that "should be visible" using some kind of image processing, or maybe the ARKit feature points that carry depth information. Based on this, you then add "an invisible virtual object" that cuts off the drawing of things behind it. This object represents your "real object" inside the "virtual world", so the background (camera feed) remains visible wherever the invisible virtual object is present. A sketch of this idea is shown below.
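One way to realise that "invisible virtual object" in SceneKit is an occlusion material that writes to the depth buffer but not to the color buffer. A hedged sketch (the occluder's shape, size, and position here are assumptions; you still have to place it where the real object actually is):

```swift
import SceneKit

// Give a node an "occlusion material": it draws nothing visible, but still
// occludes virtual content behind it, so the camera feed shows through.
func applyOcclusionMaterial(to node: SCNNode) {
    let occlusion = SCNMaterial()
    occlusion.colorBufferWriteMask = []      // write nothing into the color buffer
    occlusion.writesToDepthBuffer = true     // but still block what is behind it
    occlusion.isDoubleSided = true
    node.geometry?.materials = [occlusion]
    node.renderingOrder = -1                 // render before the visible virtual objects
}

// Usage sketch: a box roughly matching a real object's size and pose.
// Placing and sizing it is up to you (or your own detection code).
let occluder = SCNNode(geometry: SCNBox(width: 0.3, height: 0.3, length: 0.3, chamferRadius: 0))
occluder.position = SCNVector3(0, 0, -0.5)
applyOcclusionMaterial(to: occluder)
// sceneView.scene.rootNode.addChildNode(occluder)
```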

Simple shape recognition

I want to achieve something that looks like the wizard's ability in the game Trine.
I want to create a game where the player uses the mouse to create certain objects, so I will need to compare the shape the player drew to a predefined shape of my own and check if it's close.
I have no idea how to achieve this or where to look. I assume it has something to do with shape recognition, as in image processing and computer vision, but it should be much simpler and work in real time.
Does anyone have a clue how this can be done, or where I can look for something like that?
Is this what you're going for? http://www.youtube.com/watch?v=7Zh79q_xvZw
I would start by researching gesture recognition. I think that's the phrase you need to get good info. http://en.wikipedia.org/wiki/Gesture_recognition
Also, sketch recognition: http://en.wikipedia.org/wiki/Sketch_recognition
Have a look at this question. What you are looking for in particular is on-line handwriting recognition, meaning that you follow every move of the user from beginning to end.
Now, you might want to simplify it a whole lot, so one way is to define 9 areas, like a 3x3 grid. Then convert the user's movement into a list of how they moved through these cells (use thresholds to make sure the cursor stayed in an area for a while). You will end up with a sequence like this: 1-1, 1-2, 2-2, 2-3 (meaning the user went from the upper-left corner to the upper-middle, and so on).
This information is now fairly easy to match against a set of gestures, as sketched below. If it performs poorly, you can either go further and introduce a Hidden Markov Model, which tolerates some mistakes in the gesture (while still matching the most likely one in your gesture set), or you could simply display the grid to the user, so that the user learns the gestures like number codes.
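A minimal Swift sketch of the grid idea, assuming a 3x3 grid, "row-col" cell labels, and a made-up "L" template; it records cell transitions only and skips the dwell-time threshold mentioned above:

```swift
import CoreGraphics

// Map a stroke of points to the sequence of 3x3 grid cells it passes through.
func cellSequence(for points: [CGPoint], in bounds: CGRect, grid: Int = 3) -> [String] {
    var cells: [String] = []
    for p in points {
        let col = min(grid - 1, max(0, Int((p.x - bounds.minX) / bounds.width * CGFloat(grid))))
        let row = min(grid - 1, max(0, Int((p.y - bounds.minY) / bounds.height * CGFloat(grid))))
        let cell = "\(row + 1)-\(col + 1)"            // "1-1" is the upper-left cell
        if cells.last != cell { cells.append(cell) }  // record transitions only
    }
    return cells
}

// Match against a predefined template, e.g. an "L" drawn top-left -> down -> right.
let templateL = ["1-1", "2-1", "3-1", "3-2", "3-3"]
let stroke = [CGPoint(x: 10, y: 10), CGPoint(x: 12, y: 150), CGPoint(x: 15, y: 290),
              CGPoint(x: 150, y: 292), CGPoint(x: 290, y: 295)]
let drawn = cellSequence(for: stroke, in: CGRect(x: 0, y: 0, width: 300, height: 300))
let matches = (drawn == templateL)   // a real matcher would tolerate small deviations
```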
