Lance GG: Is it possible to activate game pause? - lance

Is it possible to add a pause function to the game without setting every DynamicObject's velocity to 0? E.g. in the pong game.

Lance is a multiplayer game engine, so it wouldn't make sense to pause the game just for a single client - as the game must continue for all other players.
Nevertheless, there is an ignorePhysics attribute on the GameEngine class. Setting this attribute to true on the server would stop the physics, and should work in interpolation mode.

To some extent, a global pause feature would make sense. Think of any modern MOBA - if a client disconnects, the game is paused to wait for a reconnect. Unless, of course, Lance.gg is only meant for persistent worlds :)

Related

How to make a killer bot

I'm actually new to Roblox Studio and I want to make my own basic shooter game. I know that when I publish it, only one or two players will come at the beginning, and I don't want them to leave without anyone to play with. So I decided to create killer bots who play just like players [kill the players, and escape from getting killed]. Is there any way I can do that? Creating an NPC is easy, but making it kill players is difficult, and moreover NPCs are not counted as players and are not shown in the leaderstats. How can I show that?
Using pathfinding won't work that well, so I recommend using a raycast and strafing. And if the NPC touches a player, check whether the hit part's parent IS a player, and then set its health to 0 or damage its health.
For leaderstats, the Roblox sword has a tag called "creator" that is there for KOs. Add the creator tag for the leaderstats to work.

Performance-wise, should I use one AVAudioEngine for multiple sound effects, or one per effect?

Performance-wise, is it better to use AVAudioPlayerNode instances with:
A. one AVAudioEngine instance, connecting multiple sound effects to its mixer, or
B. a separate AVAudioEngine instance for each sound effect?
The reason I'm using AVAudioEngine is because I'm doing some audio processing with AVAudioUnitVaryspeed, but each sound effect can be (and is) independent, so I was wondering if anyone knows what's best?
Is it OK to have an AVAudioEngine and its nodes for each effect or should I manage a single engine instance and connect / disconnect nodes as sounds play?
You should manage a single AVAudioEngine instance to which all your effects and player are connected, and connect/disconnect nodes as sounds play.
Having multiple AVAudioEngine instances isn't a problem because of performance overhead, but because it'll become too complicated to manage them all in real time: having them all respond to AVAudioSession.routeChangeNotification and AVAudioSession.interruptionNotification, keeping all the players in sync, testing each one of them using renderOffline, and so on.
I recommend you watch WWDC 2014 Session 502 - AVAudioEngine in Practice, for a good introduction to this API. One of the use cases they bring up is very similar to yours – and they use just one AVAudioEngine instance :-)
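To make option A concrete, here is a minimal sketch of one shared engine with a per-effect player-plus-varispeed chain. The `EffectsPlayer` class and its structure are illustrative assumptions, not part of AVFoundation; only the AVAudioEngine/AVAudioPlayerNode/AVAudioUnitVarispeed calls are real API.

```swift
import AVFoundation

// One engine, one (player -> varispeed) chain per sound effect.
final class EffectsPlayer {
    private let engine = AVAudioEngine()
    private var chains: [String: (player: AVAudioPlayerNode, varispeed: AVAudioUnitVarispeed)] = [:]

    func load(name: String, file: AVAudioFile) {
        let player = AVAudioPlayerNode()
        let varispeed = AVAudioUnitVarispeed()
        engine.attach(player)
        engine.attach(varispeed)
        // player -> varispeed -> main mixer; each effect keeps its own chain,
        // but they all share the single engine and its mixer.
        engine.connect(player, to: varispeed, format: file.processingFormat)
        engine.connect(varispeed, to: engine.mainMixerNode, format: file.processingFormat)
        chains[name] = (player, varispeed)
    }

    func play(name: String, file: AVAudioFile, rate: Float) throws {
        if !engine.isRunning { try engine.start() }
        guard let chain = chains[name] else { return }
        chain.varispeed.rate = rate          // independent speed per effect
        chain.player.scheduleFile(file, at: nil)
        chain.player.play()
    }
}
```

With this layout there is a single engine to start, stop, and rebuild on route changes or interruptions, while each effect still gets its own independent varispeed setting.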

AudioKit metronome synced with time-pitched audio loops

I'm making an app that plays synced audio loops with a metronome. For example, I might have 3 files like this:
bass_60bpm.m4a
drums_60bpm.m4a
guitar_60bpm.m4a
And a metronome sound tick.m4a, which I play with AKSamplerMetronome.
I need to play them back at arbitrary tempos, so I use AKTimePitcher on the AKAudioFiles (so playing at 90bpm, I'd play bass_60bpm.m4a at 1.5x).
This almost works, but after 3-5 loops, the metronome gets out of sync with the audio loops. I think I understand why that happens (audio_sample_length * floating_point_number is not equivalent to AKSamplerMetronome's tempo calculations), but I don't know how to fix it.
What I suspect I need to do is manually reimplement some or all of AKSamplerMetronome, playing the metronome ticks based on AKTimePitcher's output, but I can't piece together enough info from the API, docs, and examples to make it happen.
An alternate approach might be to use AKSequencer instead of AKSamplerMetronome. The MIDI output of the sequencer's track could be sent to an AKCallbackInstrument, and the sequencer's events would invoke the callback function, which could trigger both the time-stretched sample and the metronome ticks (and you could also trigger synchronized UI events from there as a bonus). This would guarantee that they stay in sync.
Apple's MusicSequence, which is what AKSequencer uses under the hood, is a little flaky with its timing immediately after you call play, but it's pretty solid after that. If you start the sequencer just before its looping point (i.e., if you have a 1-bar loop, start it one sixteenth note before the end of the first bar), then you can get past that flakiness before the actual loop starts.
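The sequencer-plus-callback wiring could look something like the sketch below. This assumes AudioKit 4's API (in 4.8+ AKSequencer was renamed AKAppleSequencer, and names may differ in later versions); `playTick` and `playLoop` are stand-ins for your own sample-playback code, not AudioKit functions.

```swift
import AudioKit

func playTick() { /* trigger the tick.m4a metronome sample here */ }
func playLoop() { /* restart the time-stretched audio loops here */ }

let sequencer = AKSequencer()
let callbackInst = AKCallbackInstrument()
callbackInst.callback = { statusByte, noteNumber, velocity in
    guard statusByte == 0x90, velocity > 0 else { return }  // note-on only
    playTick()                            // metronome tick on every beat
    if noteNumber == 60 { playLoop() }    // downbeat: restart the loops
}

let track = sequencer.newTrack()
track?.setMIDIOutput(callbackInst.midiIn)
// One event per beat; note 60 marks the downbeat where the loops restart.
for beat in 0..<4 {
    track?.add(noteNumber: MIDINoteNumber(beat == 0 ? 60 : 61),
               velocity: 100,
               position: AKDuration(beats: Double(beat)),
               duration: AKDuration(beats: 0.25))
}
sequencer.setLength(AKDuration(beats: 4))
sequencer.enableLooping()
sequencer.setTempo(90)
sequencer.play()
```

Because both the tick and the loop restart are fired from the same MIDI event stream, changing the sequencer tempo moves them together, so the fractional-sample-length drift between two independent clocks never accumulates.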

iOS 8: Real-Time Sound Processing and Sound Pitching - OpenAL or Another Framework

I'm trying to build an app which plays a sequence of tones in a loop.
So far I've used OpenAL, and my experience with that framework has been positive, as I can also perform pitch shifting.
Here's the scenario:
load a short sound (3 seconds) from a CAF file
play that sound in a loop, applying a pitch shift as well.
This works well, provided that the tick rate isn't too high - I mean a duration of more than 10 milliseconds per tone.
However, my NSTimer (which drives the sound sequence) should be configurable - and as soon as the tick rate increases (I mean less than 10 ms per tone), the sound is no longer played correctly - some tones are even dropped in a seemingly random way.
It seems that real-time sound processing becomes an issue.
I'm still a novice in iOS programming, but I believe that Apple sets a limit concerning time consumption and/or semaphores.
Now my questions:
OpenAL is written in C - so far I haven't understood the whole code and philosophy behind that framework. Is there a way to resolve the problem mentioned above by making some modifications - I mean setting flags/values or overriding certain methods?
If not, do you know another iOS sound framework more appropriate for this kind of real-time sound processing?
Many thanks in advance!
I know that this is a quite unusual and difficult problem - maybe some of you have resolved a similar one? Just to emphasize: the pitch shifting must be guaranteed!
It is not immediately clear from the explanation precisely what you're trying to achieve. Some code is expected.
However, your use of NSTimer to sequence audio playback is clearly problematic. It is neither intended as a reliable nor a high resolution timer.
NSTimer delivers events through a run-loop queue - probably your application's main queue - where they contend with user interface events.
As the main thread is not a real-time thread, it may not even be scheduled to run for some time.
There may be quantisation effects on the delay you requested, meaning that short delays effectively round to zero clock ticks and get scheduled immediately.
Periodic timers have deleterious effects on battery life. iOS and macOS both take steps to reduce their impact through timer coalescing.
The clock you should be using for sequencing events is the playback sample clock - which is available in the render handler of whatever framework you use. As well as being reliable, this is efficient: the render handler will be running periodically anyway, and in a real-time thread.
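As a sketch of what sequencing off the sample clock looks like, here is a render block that counts frames and triggers tone starts on exact sample boundaries. This uses AVAudioSourceNode, which requires modern iOS (13+); on older systems the equivalent is a Core Audio render callback. The 44.1 kHz rate and the silent placeholder sample are assumptions - real synthesis code would go where the comment indicates.

```swift
import AVFoundation

let sampleRate = 44_100.0
let toneIntervalFrames = AVAudioFramePosition(sampleRate * 0.010) // one tone per 10 ms
var framesRendered: AVAudioFramePosition = 0
var nextToneAt: AVAudioFramePosition = 0

let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        if framesRendered == nextToneAt {
            // Start the next tone exactly on its sample boundary -
            // no run-loop scheduling jitter, no dropped events.
            nextToneAt += toneIntervalFrames
        }
        let sample: Float = 0  // your tone synthesis for this frame goes here
        for buffer in buffers {
            buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
        }
        framesRendered += 1
    }
    return noErr
}

let engine = AVAudioEngine()
engine.attach(source)
engine.connect(source, to: engine.mainMixerNode,
               format: AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 2))
do { try engine.start() } catch { print("engine failed to start: \(error)") }
```

Even a 1 ms tone interval is exact here, because events are placed at frame positions rather than dispatched from a timer on a non-real-time thread.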

Where does game logic go in Rails apps?

(Disclosure: I'm very new to Rails)
I am trying to make a RISK-style board-game in Rails, though this question may apply for any MVC-style framework.
I have players and games. Players can join games until the game is full. Trying to join a full game (or the same game twice) signals an error, yada yada.
Below are three pieces of game logic that I am unsure where to place, or at least where they are typically placed.
1. If a game is full, it should start and do things related to that (i.e., messaging all players that the game has begun, randomly dispersing armies across the map).
2. When a player executes moves during his turn, it seems reasonable to have that logic in the controller. What about when his turn ends and it is time to message the next player? Would that code go in the same controller as well?
3. Suppose that a player forfeits his turn if he does not finish it within 24 hours. I'd need to periodically look at all the games in my app and see if a player started a turn more than 24 hours ago. Where would this logic go?
My question is: Where does logic for items like the above go in a Rails/MVC app?
In one sense I could stuff all of it except 3 into the controllers for the last action taken. For instance, I could place the logic for 1 in the player-joins-game controller method (check if the game is full after every player joins; if it is, run the game-start logic). This seems like it might be the wrong place, but maybe that's how it is typically done.
Rails' convention is "fat model, thin controller", so I would suggest that the state of the game should be held by the Game model.
Your webapp consists of zero or more games, and each game consists of 1 or more players.
The state of "full", or the state of "game begun" are properties of the game, and should be held by that model.
So for 1: when the final player joins (or perhaps, when all current players vote to start the game), the game state (a property of Game) would be set to "begun", the property that holds the currently active player would be set, and a delayed job would be queued to message all the players.
For 2, the game has an "execute move" method in the Game controller that would check that the player executing the move is the current player, then execute the move against the Game model. The Game model internally would know if the move is valid, what the result is, and what the next step(s) would be. It would, again, use something like a delayed job to message the next player.
For 3, again, a delayed job could be set to execute the timeout. I'm not 100% sure how to schedule delayed jobs, or if there's another gem/plugin that would work better. But the job would call a method on the Game controller to check the status of the game at the required time. If the player has not moved, then execute your forfeit logic, which would, again, be a method in the Game model.
The state of each player could be held in the Player model, or in the Game model, I suppose, depending on the game, and how much interaction between Player models there might be.
In the case of a Risk game, I would think the Player model would be rather thin, as the state of the board is more about which player owns a country and how many armies they have there - that's more a state of the game than a state of each individual player. I would expect the Player model in a Risk game to be more oriented towards metadata about the actual player - username, wins/losses, skill level, etc.
In a game like Supremacy, where the player has resources, nukes, etc., then there's more data to store in the Player model.