I want to create a simple addon that will play sound files when the player kills an enemy player (gets a killing blow). I've looked around on Google but haven't found much in terms of documentation or guides.
Can anyone point me to some up-to-date documentation or some places where I can just find better guides?
Getting started: http://www.wowpedia.org/Getting_started_with_writing_addons
API: http://www.wowpedia.org/World_of_Warcraft_API
What you want to do is add a trigger for the combat log event for a killing blow, and then play a sound using the API for that. Shouldn't be too hard.
Addons for the game are created, most simply, by making a new folder in the Interface/AddOns directory in your game folder and populating it with the core files for your addon. These files should include a "Table of Contents" (.toc) file, which contains information about your addon, and one or more scripts written in the Lua scripting language (with some custom WoW functions, tables, and other bits). To properly get started with this, Wowpedia is generally a pretty good guide, and I also recommend this tutorial.
In your specific situation you should just be able to listen for a game event and then do your custom stuff (i.e. playing a sound) in the desired situation. There isn't actually an event for killing blows at the time of writing; however, if you register the COMBAT_LOG_EVENT_UNFILTERED event, look for the PARTY_KILL combat event, and call PlaySoundFile when the sourceName (arg4) matches the player's name (UnitName("Player")), you should be set.
Edit: The answer is to use the Firebase Realtime Database.
I wrote a library for the next person:
https://github.com/flipflopapp/turnbased-games-with-firebase
-- Question --
I am implementing a two-player chess game (www.halfchess.com) and am considering using Firebase messaging (instead of using sockets to create rooms and two-player matches). The game would involve sending 60-100 chess moves as messages between two devices (Android or iOS) in two to three minutes. My Node.js server would have code that enables device-to-device messaging (receiving from one player and sending to the other).
I cannot use Google Game Services because I don't have Google login implemented in my app (I only plan to keep Facebook login). The advantage of using Firebase (compared to sockets) is that I would have to write much less code (reconnections, etc.) and it would take care of scalability issues.
My questions are:
(1) Will there be problems when the users playing against each other are on two iOS devices (instead of Androids), such as higher latency?
(2) If a user is changing location physically and a message that contains a chess move is undelivered, when will it be retried?
(3) For a fast game of chess, will the latency be manageable? This is like 8-10 times the speed of normal chatting.
While I read more on the topic, perhaps someone who has already experimented can comment.
Firebase Cloud Messaging is not meant for this kind of usage. In addition to non-guaranteed delivery times (some research from 2013-2014 shows more than one second per message on average), FCM will probably throttle you in such a use case.
See also this SO post
I'm sure the answers above will work, but I was having a tough time getting them to function. This is what eventually worked for me and my Firebase chat app!
Hopefully, this will help some folks out there.
I was able to add a chess game to my Firebase chat app, and all I used was an iframe! It didn't work the first time, though, because all I did was add the iframe markup to my app.
This is how I got my iframe to work in a firebase app...
First, change directory (cd) into your chat app's "public" folder (where you would typically run the "firebase deploy" command) and add your iframe to the "index.html" document located there. Use this address for your iframe's source URL (src):
src="chess/index.html"
It won't work right if you do not include the "index.html" page name!
Next, I created a new folder named "chess" in the same public directory and added the chess game's "index.html" doc and its dependencies (js, css, images, etc.) to it.
And last, but not least, open a terminal in the same "public" folder and run "firebase deploy" to upload the whole thing to your Firebase account and console.
Done!
I'm pretty sure that including your chess app docs inside your Firebase app is what made the iframe finally work. I also wrapped the iframe in a couple of 'div' tags, but I'm not sure if that made any difference.
Please feel free to come and take a look, but you'll have to sign in with Google to gain entry!
After that, just right-click anywhere on the page and select "view source" to see the code. Cheers!
https://friendly-chat-b2d6a.firebaseapp.com/
Instead of having the other player send a message to the client, why not just have the client display a message based on what's happening in the game? It seems like an easier solution for you, as the only thing that needs to be sent is the actual move, and you can piggyback off of that if you need to.
I realized I should be looking at the Firebase Realtime Database (and not messaging). A minimal sketch of that approach follows the links below.
Useful links:
https://firebase.googleblog.com/2016/07/have-you-met-realtime-database.html
Is firebase realtime json database suitable for data broadcasting?
https://firebase.google.com/pricing/
https://groups.google.com/forum/#!topic/firebase-talk/n_B1nrgp580
(according to this talk the latencies can be < 200 ms most of the time)
https://twitter.com/jonikorpi/status/733560092780462080
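As a starting point, here is a minimal TypeScript sketch of relaying moves through the Realtime Database with the modular JavaScript SDK; the games/demo-game/moves path and the move shape are invented for illustration, not halfchess's actual schema:

import { initializeApp } from "firebase/app";
import { getDatabase, ref, push, onChildAdded } from "firebase/database";

const app = initializeApp({ databaseURL: "https://YOUR-PROJECT.firebaseio.com" });
// Hypothetical schema: one append-only list of moves per game.
const movesRef = ref(getDatabase(app), "games/demo-game/moves");

// Sender: write each move as soon as it is made; push() keys sort chronologically.
function sendMove(from: string, to: string): void {
  push(movesRef, { from, to, ts: Date.now() });
}

sendMove("e2", "e4"); // example: pawn to e4

// Receiver: every connected client gets child_added for each new move,
// typically well under a second on a reasonable connection.
onChildAdded(movesRef, (snapshot) => {
  const move = snapshot.val() as { from: string; to: string };
  console.log(`opponent moved ${move.from} -> ${move.to}`);
});

Because both clients keep an open connection, there is no per-message store-and-forward hop as with FCM, and writes made while briefly offline are queued and sent on reconnect.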
I am using Julius speech recognition for my application, and I have a question about it:
I downloaded the latest version and was successful in using its library and making it work. The problem I am facing is this: once the app starts and I call the voice-recognition function, it takes input from the mic and displays whatever is said into the mic, but then it keeps doing so again and again. Control never returns from that function, so I am not able to proceed further. What I want is for the engine to take one input from the mic, recognize it, and stop there. I tried to do this by deleting the callback function, but was unsuccessful.
Can anyone please guide me on what I need to do to get the desired output?
As discussed in the same post on VoxForge:
You have a couple of choices: first, use the Julius -input option to get the sound data from a list of files (see the sample .jconf file), so that when the list (even if it is only of length one) is exhausted, Julius stops. It is quite easy to record the voice input to a file and then feed the file into Julius. Second, you can put a dialog manager in control. If you need more information on what a dialog manager does, there are many posts on this forum on that subject accessible by a search.
The basic function of Julius is to start up and then keep on decoding input. When you get more experience you can run Julius as a server, and then tell the server to respond, not respond or shut down as required. It's more efficient than having Julius start and stop all the time.
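For the server route, Julius can be run in module mode (started with the -module flag), where it listens on a TCP port (10500 by default), emits results to a connected client, and accepts control commands. What follows is a rough Node/TypeScript sketch of a client, not a tested one; the command strings and result tags reflect my reading of the module protocol docs, so verify them against your Julius version.

import * as net from "net";

// Connect to a Julius instance started with `julius -C app.jconf -module`.
const sock = net.connect(10500, "127.0.0.1", () => {
  console.log("connected to Julius module server");
});

sock.on("data", (chunk: Buffer) => {
  const text = chunk.toString("utf8");
  // Results arrive as XML-ish <RECOGOUT>...</RECOGOUT> blocks; a real client
  // should buffer chunks, since a block can be split across packets.
  if (text.includes("<RECOGOUT>")) {
    console.log("recognized:", text);
    // One-shot behaviour: pause recognition instead of letting it loop.
    sock.write("PAUSE\n");
  }
});

This keeps your application in control: Julius keeps decoding, but your process decides what happens after the first result instead of being stuck inside a callback.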
When a complex application can already produce the required result through an effective combination of run-time options, editing the application, while possible, might involve a lot of unnecessary work. The emphasis then shifts to passing the options correctly in whatever script is being used to launch Julius.
My goal is to create a sampler instrument for iPhone/iOS.
The instrument should play back sound files at different pitches/notes, and it should have a volume envelope.
A volume envelope means that the sound's volume fades in when it starts to play.
I tried countless ways of creating that. The desired way is to use an AVAudioEngine with an AVAudioPlayerNode, then process the individual samples of that node in real time.
Unfortunately I had no success on that approach so far. Could you give me some pointers on how this works in iOS?
Thanks,
Tobias
PS: I have not learned the Core Audio framework. Maybe it is possible to access an AVAudioNode's audio unit to do this job, but I have not had the time to read into the framework yet.
A more low-level way is to read the audio from the file and process the audio buffers.
You store the ADSR envelope in an array or, better, as a mathematical function that outputs the envelope value for the sample index you pass it (using interpolation), so the envelope maps to any sound's duration.
Then you multiply each audio sample by the returned envelope value to get the filtered sample.
One way would be to use the AVAudioNode and link a processing node to it.
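To make the multiply-by-envelope idea concrete, here is a small sketch in TypeScript; the breakpoint positions and gains are made-up illustrations, and the same loop translates directly into a render callback in Swift or C:

// A piecewise-linear ADSR-style envelope over normalized position [0, 1].
type Breakpoint = { pos: number; gain: number };

const envelope: Breakpoint[] = [
  { pos: 0.0, gain: 0.0 }, // start of attack
  { pos: 0.1, gain: 1.0 }, // attack peak
  { pos: 0.3, gain: 0.7 }, // decay to sustain level
  { pos: 0.8, gain: 0.7 }, // sustain
  { pos: 1.0, gain: 0.0 }, // release
];

// Linear interpolation between breakpoints, so the curve fits any duration.
function envelopeAt(pos: number): number {
  for (let i = 1; i < envelope.length; i++) {
    if (pos <= envelope[i].pos) {
      const a = envelope[i - 1];
      const b = envelope[i];
      const t = (pos - a.pos) / (b.pos - a.pos);
      return a.gain + t * (b.gain - a.gain);
    }
  }
  return 0;
}

// Multiply each raw sample by the envelope value at its position,
// e.g. const shaped = applyEnvelope(rawSamples);
function applyEnvelope(samples: Float32Array): Float32Array {
  const out = new Float32Array(samples.length);
  const last = Math.max(samples.length - 1, 1);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] * envelopeAt(i / last);
  }
  return out;
}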
I looked at another post of yours; I think AUSampler - Controlling the Settings of the AUSampler in Real Time is what you're looking for.
I haven't yet used AVAudioUnitSampler, but I believe it is just a wrapper for the AUSampler. To configure an AUSampler you must first make and export a preset file on your Mac using AULab. This file is a plist which contains file references plus the sampler's decay, volume, pitch, cutoff, and all of the other good stuff that the AUSampler is built for. Then this file is put into your app bundle. You then create a directory named "Sounds", copy all of the referenced audio samples into that folder, and put it in your app bundle as well (as a folder reference). Then you create your audio graph (or in your case AVAudioEngine) and sampler, and load the preset from the preset file in your app bundle. It's kind of a pain. The links I'm providing are what I used to get up and running, but they are a little dated; if I were to start now, I would definitely look into AVAudioUnitSampler first to see if there are easier ways.
To get AULab, go to Apple's developer downloads and select "Audio Tools for Xcode". Once it has downloaded, just open the DMG and drag the folder anywhere (I drag it to my Applications folder). Inside is AULab.
Here is a technical note describing how to load presets, another technical note on how to change parameters (such as attack/decay) in real time, and here is a WWDC Video that walks you through the whole thing including the creation of your preset using AULab.
Is it possible to implement a feature that allows users to watch videos as they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However, it will be very hard to implement.
Also, I am not so into Python and I am not aware of a library or service suiting your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded, not live streams, because for those there are obviously thousands of solutions out there...
In the simplest case, the video being uploaded is already ready to be served to your clients and has a so-called "faststart" atom up front. Atoms are container-format specific and there are sometimes a bunch of them; the most common is the moov atom. It contains a lot of data and is very complex, but in our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away using only the data available from the beginning of the file.
You need that for progressive-download videos (YouTube-style), meaning a file served from a web server: you obviously have not downloaded the full file, yet the player can already start playing.
If the faststart atom were not present at the front, that would not be possible.
Sometimes playback still works, but the player cannot display a progress bar, for example, because it doesn't know how long the file is.
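You can check the atom layout of an uploaded MP4 yourself. Here is a rough Node/TypeScript sketch that walks the top-level boxes (each starts with a 4-byte big-endian size and a 4-byte type code) to see whether moov precedes mdat; it skips details like 64-bit box sizes, so treat it as a starting point:

import * as fs from "fs";

function topLevelBoxes(path: string): string[] {
  const fd = fs.openSync(path, "r");
  const header = Buffer.alloc(8);
  const types: string[] = [];
  let offset = 0;
  while (fs.readSync(fd, header, 0, 8, offset) === 8) {
    const size = header.readUInt32BE(0);
    types.push(header.toString("ascii", 4, 8));
    if (size < 8) break; // 0 / 1 mean "to EOF" / 64-bit size; omitted here
    offset += size;
  }
  fs.closeSync(fd);
  return types;
}

const boxes = topLevelBoxes("upload.mp4"); // hypothetical file name
console.log(boxes); // e.g. [ "ftyp", "moov", "mdat" ] is faststart-ready
if (boxes.indexOf("moov") > boxes.indexOf("mdat")) {
  console.log("moov comes after mdat: run a faststart tool before serving");
}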
Having that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or to a file (a file will be easier...).
This is almost always the case; for example, PHP creates a file in the tmp_dir. You can also specify it if you want to find the video while it's being uploaded.
Now you can start reading that file byte by byte and writing that data to a connection to another client. Just be sure not to get ahead of what has already been received and written. You would probably initiate your upload with metadata kept in memory that holds the currently received byte position and the location of the file.
Anyone who requests the file after the upload has completed can just receive the entire file; if the upload is not yet finished, they get it from your application (see the sketch below).
You will have to throttle the data delivery or pause it when the data runs short. This will appear to the client almost as a "slow connection". However, you will have to send some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
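A minimal Node/TypeScript sketch of that serving loop, assuming the upload handler maintains an uploads map with the bytes received so far; the names, paths, and timings here are illustrative, not a tested implementation:

import * as fs from "fs";
import * as http from "http";

// Hypothetical bookkeeping the upload handler would update as bytes arrive.
const uploads = new Map<string, { path: string; received: number; done: boolean }>();

http.createServer((req, res) => {
  const up = uploads.get(req.url ?? "");
  if (!up) { res.statusCode = 404; res.end(); return; }

  res.writeHead(200, { "Content-Type": "video/mp4" });
  let sent = 0;

  const pump = (): void => {
    if (sent < up.received) {
      // Never read past what the uploader has already written to disk.
      const slice = fs.createReadStream(up.path, { start: sent, end: up.received - 1 });
      slice.on("data", (chunk) => { sent += (chunk as Buffer).length; });
      slice.pipe(res, { end: false });
      slice.on("end", () => setTimeout(pump, 250));
    } else if (up.done) {
      res.end(); // upload finished and fully relayed
    } else {
      setTimeout(pump, 250); // caught up with the uploader; wait for more bytes
    }
  };
  pump();
}).listen(8080);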
Now, if you want something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK, ffmpeg has neat APIs which let you deal directly with data streams.
HandBrake is also a very good tool, but you would need to take the long road of invoking external executables.
I am not really aware of your requirements, but if your clients are already tuned in, for example on a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think that you will find an "out of the box" solution that will require little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to server by others
Well, this could be interpreted two different ways:
Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
Or do you mean that you want one user to be able to watch a partially uploaded video (aka another user is still in the process of uploading it and right now the server only contains a partial upload of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user in whatever way you choose. HTML5 vs. Flash isn't really a consideration here.
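For instance, a minimal polling sketch in TypeScript; the /videos/recent endpoint and its JSON shape are hypothetical stand-ins for whatever your server exposes:

let lastSeen = 0;

// Ask the server for uploads newer than the last one we rendered.
async function pollForNewVideos(): Promise<void> {
  const res = await fetch(`/videos/recent?since=${lastSeen}`);
  const videos: { id: number; url: string }[] = await res.json();
  for (const v of videos) {
    lastSeen = Math.max(lastSeen, v.id);
    const player = document.createElement("video");
    player.src = v.url;
    player.controls = true;
    document.body.appendChild(player);
  }
}

setInterval(pollForNewVideos, 5000); // check every five seconds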
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say that you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would think of this as a large task.
I haven't done any coding of this kind and would like some pointers on how to start. The service will eventually do several things, and perhaps someone has already thought of it and made it happen.
The big picture is this: detect if a PowerPoint presentation has been updated on the server. If it has, extract the slides, save them as individual JPEGs, and then upload them to a specific image list in SharePoint. All this has to happen without human intervention.
I assume this would be a Windows service project, right? Then some file-watching component that deals with changes to the file?
As far as dissecting .pptx/.ppsx files and getting the slides converted, is there an API or some DLL class?
What about uploading files to a library list on SharePoint automatically?
Thanks,
Risho
I've done this with Topshelf (http://topshelf-project.com/), a Windows service host for .NET.
https://github.com/Topshelf/Topshelf/blob/master/src/Topshelf/FileSystem/FileSystemEventProducer.cs
Since Windows has an event-pump issue if event handling takes too long, we also implemented polling on top of this, because the FileSystemWatcher gets disconnected during those times.
https://github.com/Topshelf/Topshelf/blob/master/src/Topshelf/FileSystem/PollingFileSystemEventProducer.cs
Now, these producers are meant to be tied to actors, so they might seem a bit overly complicated for just checking on file system events; it's up to you whether that whole model is useful or just the core part. Remember that you can often receive events while the file is still locked or not done being written, so handle those exceptions.
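Purely as an illustration of the same watch-plus-polling idea (the Topshelf producers linked above are the real C# versions), here is a sketch in TypeScript/Node; the path and timings are made up, and it assumes the file already exists:

import * as fs from "fs";

const target = "C:/presentations/deck.pptx"; // hypothetical path
let lastMtime = 0;

function handleChange(): void {
  try {
    // The file is often still locked mid-copy; retry until it opens.
    fs.closeSync(fs.openSync(target, "r"));
    console.log("presentation updated; kick off the slide export");
  } catch {
    setTimeout(handleChange, 1000);
  }
}

// Event-driven path: cheap, but events can be dropped.
fs.watch(target, () => handleChange());

// Polling fallback, mirroring the PollingFileSystemEventProducer above.
setInterval(() => {
  const mtime = fs.statSync(target).mtimeMs;
  if (mtime > lastMtime) {
    lastMtime = mtime;
    handleChange();
  }
}, 5000);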
SharePoint has what is called a timer service for just these types of situations. Andrew Connell has an article regarding creating your own timer jobs.
http://www.andrewconnell.com/blog/archive/2007/01/10/5704.aspx