AKTubularBells() and AKRhodesPiano() together cause error on the second - audiokit

I'm using AKRhodesPiano() and AKTubularBells(). Both work alone. When I try to initialize both, I get the following error.
AKRhodesPiano.swift:init(frequency:amplitude:):88:Parameter Tree Failed
Notably, if I change the order of initialization, the error occurs for whichever of the two is instantiated last.
Adding the following line to the AKTubularBells playground right under the initialization of AKTubularBells is enough to trigger the error.
let tubularBells = AKTubularBells()
let temp = AKRhodesPiano() /// <- Add this line.
I saw in another post, "AKRhodesPiano error (crush) on AudioKit v4.2", that there was a recent bug in the STK physical models, so perhaps this is part of that. Any insight is appreciated, as always.

Thanks for noticing this. It only occurred when those two nodes were used simultaneously, and it was basically just a cut-and-paste job gone bad. I fixed it on develop, so if you can rebuild the framework you'll be fine; otherwise, wait for the next release, which should be out soon.
Here's the fix:
https://github.com/AudioKit/AudioKit/commit/05651ff97a7ea7815a27de6a53eee0b5f7998920

Related

Pine Script code gives "no data" on TradingView or a "can't find a certain function" error

This is quite new to me, so I've been trying to make it work for the past few days, but I keep getting errors.
I'm trying to write a strategy script for several indicators, but I either get a "Could not find function or function reference 'sma'" error, or it compiles fine but shows no data in the overview pane...
I decided to try the most basic code to see what happens:
strategy("Simple Moving Average Strategy", overlay=true)
// Define the moving average
ma = sma(close, 20)
// Buy signal
strategy.entry("Buy", strategy.long, when=crossover(close, ma))
// Sell signal
strategy.exit("Sell", when=crossunder(close, ma))
This yields no data.
Can someone please assist?
Also, if you can explain why I get "Could not find function or function reference 'sma'" (or other functions) even though the rest of the code is fine, that would be great.
Thanks!
I tried reducing the code to its basics to pinpoint the error.
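For what it's worth (this isn't confirmed in the thread): in Pine Script v5 the built-in indicator functions live in the ta.* namespace, so a bare sma call in a script compiled as v5 produces exactly that "Could not find function" error, and the when= parameters are replaced by if blocks. A rough, untested v5 sketch of the same strategy, keeping the names from the question:
//@version=5
strategy("Simple Moving Average Strategy", overlay=true)
// Define the moving average (ta.* namespace in v5)
ma = ta.sma(close, 20)
// Buy when price crosses above the moving average
if ta.crossover(close, ma)
    strategy.entry("Buy", strategy.long)
// Close the position when price crosses back below
if ta.crossunder(close, ma)
    strategy.close("Buy")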

Drake -- Core Dump in Simulation When There Is Contact

I have a Kuka arm and some objects set up in my simulation (very similar to the manipulation station example), and I have been running into the core dump error below whenever there is contact between the robot and the objects.
"abort: Failure at multibody/plant/multibody_plant.cc:1640 in CalcImplicitStribeckResults(): condition 'info == ImplicitStribeckSolverResult::kSuccess' failed.
Aborted (core dumped)"
Decreasing the integration step size for the simulator did not help, so I ended up tracing the error back and commenting out the condition that causes it ("DRAKE_DEMAND(info == ImplicitStribeckSolverResult::kSuccess);"), which seems to core dump a lot less often.
However, I am guessing that condition is there for a reason, so would commenting the line out cause any other issues in the simulation? What is the proper way to fix the core dump problem?
In Drake PR #12503, the ImplicitStribeck code was refactored to reflect the notation in the TAMSI arXiv paper, and in #12361 it was changed to provide a more helpful exception with troubleshooting tips:
https://github.com/RobotLocomotion/drake/blob/v0.15.0/multibody/plant/multibody_plant.cc#L1866-L1878
Can you try a later version (e.g., 0.15.0) and then follow the troubleshooting instructions there? (You've already tried changing the step size in the simulator, but you may also want to check the stiffness of your overall system, etc.)
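As one concrete, hypothetical way to act on the stiffness point above (not part of the original answer): MultibodyPlant exposes knobs that soften the contact model, which tends to make the implicit Stribeck/TAMSI solve easier. A rough C++ sketch; the exact defaults and the call ordering relative to Finalize() vary between Drake versions:
#include "drake/multibody/plant/multibody_plant.h"

using drake::multibody::MultibodyPlant;

// Hedged sketch: relax the contact model so the contact solver has an
// easier problem. The values below are illustrative, not recommendations.
void SoftenContact(MultibodyPlant<double>* plant) {
  // Allow slightly deeper penetration, i.e. softer contact stiffness.
  plant->set_penetration_allowance(1e-3);
  // Loosen the stiction tolerance used for regularized friction.
  plant->set_stiction_tolerance(1e-3);
}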

Random bad_object_header mnesia/dets error

I am having a very weird error with mnesia. I have about 10 tables that mnesia is recording, and usually it works fine. However, in a certain place in my code, whenever I try to read from a particular table (trying to read from other tables is fine) I get a DETS error.
I reduced my code to:
{atomic, ok} = mnesia:transaction(fun() ->
    [Entry] = mnesia:read(table_name, Key),
    ok
end)
I have a try/catch block around the transaction, and the error I get is this:
error:{badmatch,
       {aborted,
        {{badmatch,
          {error, {bad_object_header, "/path/to/table_name.DAT"}}},
         [{callback, '-handle/2-fun-0-', 1,
           [{file, "src/src.erl"}, {line, 234}]},
          {mnesia_tm, apply_fun, 3,
           [{file, "mnesia_tm.erl"}, {line, 830}]},
          {mnesia_tm, execute_transaction, 5,
           [{file, "mnesia_tm.erl"}, {line, 810}]}]}}}
Unfortunately I cannot reproduce the error with a short example. Even if I call the function from the REPL, it doesn't error. It only errors when it happens in my actual code. But it does happen reliably every time.
If I take out the mnesia:read line, everything works fine. I have tried remaking the schema and the tables, and that didn't help. It's really weird because my code goes on later to use the table successfully. It is only if it's used from this one place that it fails.
What could be going wrong?
Update
I experimented some more and it seems that the error only happens when two of these transactions happen (in different processes) nearly simultaneously. Isn't mnesia meant to be used in this way?
Update 2
It turned out that the problem was fixed by downgrading my Erlang installation on Arch Linux from R16B-6 to R16B-3. Hopefully this bug will be ironed out soon.
The symptoms mean that either the file operation that reads a specific part of the file runs out of memory, or you are trying to read from a non-existent position in the file. So if you are not running out of memory (which you would have noticed), it is likely that there is a race condition in DETS's handling of that file.
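As a tangential sanity check, not suggested in this answer: DETS can be forced to verify and repair a table file offline, which at least rules out on-disk corruption. A hedged sketch, where the table must not be open anywhere else while this runs and the path is the placeholder from the error above:
%% Force a repair pass over the suspect table file, then close it again.
{ok, Tab} = dets:open_file(table_name_check,
                           [{file, "/path/to/table_name.DAT"},
                            {repair, force}]),
ok = dets:close(Tab).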
I get the same error from time to time. It has been occurring since I upgraded my Debian server, maybe one or two months ago.
Here is my error:
Error in process <0.84.0> on node 'yaws@overnux' with exit value: {{case_clause,{error,{bad_object_header,"/var/www/d-lan/db/d_lan_downloads_count.dets"}}},[{d_lan_db,loop,0,[]},{string,strip,1,[]}]}
I think it's an Erlang regression because I didn't change the code for a long time and it was working fine before the upgrade.
I'm only using DETS, not Mnesia. I have no concurrent access to the file.
Here is my code, it's very simple: https://github.com/Ummon/D-LAN/blob/website/modules/erl/d_lan_db.erl#L103

Simperium Received an Invalid Change

I am developing an integration with Simperium. The integration is complete and has been running on test machines for some time. I am now starting to get this error on one of the devices, and it keeps repeating constantly. Any ideas?
MeetingPad[891:1103] Simperium error (ActionLinks82), received an invalid change for (a18852011efe4964a6fdeb1853c790f3)
2013-02-07 10:07:05:277 MeetingPad[891:1103] Simperium client ios-7f43b434754d882923e966df5d885755 received change (ActionLinks82) ios-4176925448fa8ae0a2f1d0937627aa6b: {
ccids = (
3f3b4550b23147d49e194038feea09a6
);
clientid = "ios-4176925448fa8ae0a2f1d0937627aa6b";
cv = 5112df4b37a401031dcc5be1;
ev = 2;
id = 9ca0b7ad04314ab9888d75691be784b5;
o = "-";
}
If this occurred on a user account, what would be the guidance?
One thing to check is that you're using the latest version of the code. The repository was recently open sourced on GitHub. There used to be a bug related to nil values that could cause an invalid diff to appear in your change stream, but it should be fixed now.
Assuming you're using the latest code though, does the error repeat continuously for the same "id" value, or are there different values?
Looking at the Simperium code where this error occurs, it's possible it could be displayed if you've deleted an object locally and remotely at around the same time. In your app, might ActionLinks fit that pattern, i.e. are you creating and deleting a lot of them across multiple clients?
If that is indeed the cause, then the error is innocuous and we should patch the code. Let me know what you discover.

node.js process out of memory error

FATAL ERROR: CALL_AND_RETRY_2 Allocation Failed - process out of memory
I'm seeing this error and not quite sure where it's coming from. The project I'm working on has this basic workflow:
1. Receive an XML post from another source
2. Parse the XML using xml2js
3. Extract the required information from the newly created JSON object and create a new object
4. Send that object to connected clients (using socket.io); see the sketch below
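A minimal sketch of that pipeline, where io is the socket.io server and extractFields stands in for step 3 (both names are assumptions, not from the question):
var xml2js = require('xml2js');

// Hedged sketch of the workflow above: parse the XML, extract what the
// clients need, and broadcast it over socket.io.
function handlePacket(xml, io, extractFields) {
  xml2js.parseString(xml, function (err, parsed) {
    if (err) return console.error('XML parse failed:', err);
    io.sockets.emit('packet', extractFields(parsed)); // steps 3 and 4
  });
}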
Node Modules in use are:
xml2js
socket.io
choreographer
mysql
When I receive an XML packet, the first thing I do is write it to a log.txt file in case something needs to be reviewed later. I first call fs.readFile to get the current contents, then write the new contents plus the old. The log.txt file was probably around 2400 KB at the last crash, but upon restarting the server it works fine again, so I don't believe this is the issue.
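As an aside, a read-then-rewrite log holds the entire file in memory on every packet, while fs.appendFile only holds the new entry. A hedged sketch; the function and variable names are assumptions:
var fs = require('fs');

// Hedged sketch: append the incoming packet to the log instead of
// reading the whole file and rewriting it.
function logPacket(xmlPacket) {
  fs.appendFile('log.txt', xmlPacket + '\n', function (err) {
    if (err) console.error('Failed to write log.txt:', err);
  });
}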
I don't see a packet in the log right before the crash happened, so I'm not sure what's causing the crash... No new clients connected, no messages were being sent... nothing was being parsed.
Edit
Seeing as Node is running constantly, should I be using delete <object> after every object I use has served its purpose, such as var now = new Date(), which I use to compare against things that happened in the past? Or the result object from step 3 after I've passed it to the callback?
Edit 2
I am keeping a master object in case a new client connects, since they need to see past messages. Objects are deleted, though; they don't stay for the life of the server, just until they're completed on the client side. Currently, I'm doing something like this:
function parsingFunction(callback) {
    // Construct object
    callback(theConstructedObject);
}

parsingFunction(function (data) {
    masterObject[someIdentifier] = data;
});
Edit 3
As another troubleshooting step, I dumped process.memoryUsage().heapUsed right before the parser starts, at the parser.on('end', function() {..});, and parsed several XML packets. The highest heap usage was around 10-12 MB throughout the test, although under normal conditions the program sits at about 4-5 MB. I don't think this is particularly a deal breaker, but it may help in finding the issue.
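For reference, the measurement described above amounts to roughly the following (the parser construction is assumed; it is not shown in the question):
var xml2js = require('xml2js');
var parser = new xml2js.Parser();

// Hedged sketch of the heap check described above.
parser.on('end', function (result) {
  var usedKB = Math.round(process.memoryUsage().heapUsed / 1024);
  console.log('heapUsed after parse:', usedKB + ' KB');
});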
Perhaps you are accidentally closing over objects recursively. A crazy example:
function f() {
    var shouldBeDeleted = function(x) { return x; };
    return function g() { return shouldBeDeleted(shouldBeDeleted); };
}
To find out what is happening, fire up node-inspector and set a breakpoint just before the suspected out-of-memory error. Then click on "Closure" (below Scope Variables, near the right border). Perhaps if you click around, something will click and you'll realize what is happening.
