Stack error when working with Anchor on Solana

I am developing an on-chain program on Solana with the Anchor framework, but I have run into a stack error.
#[derive(Accounts)]
pub struct ClaimNftContext<'info> {
    #[account(mut)]
    pool: Account<'info, Pool>,
    pool_signer: AccountInfo<'info>,
    vault: AccountInfo<'info>,
    user: Signer<'info>,
    mint: Account<'info, Mint>,
    #[account(mut)]
    nft_from: Account<'info, TokenAccount>,
    #[account(mut)]
    nft_to: Box<Account<'info, TokenAccount>>,
    #[account(mut)]
    token_from: Account<'info, TokenAccount>,
    #[account(mut)]
    token_to: Account<'info, TokenAccount>,
    token_program: Program<'info, Token>,
}
As you can see, there are 10 accounts in ClaimNftContext, but if I remove one, the error goes away.
I think the stack size is limited in Anchor.
What can I do?

Anchor itself does not limit the stack, but Solana programs run with a tight stack budget: each stack frame is capped at 4KB, and the generated Accounts deserialization places every account in the context on the stack. So it is not a hard limit of 9 accounts; it is the combined size of the account types that overflows.
Luckily, there is a way to reduce stack usage: Box. Boxing an account allocates it on the heap, so only a pointer remains on the stack. For example:
token_from: Box<Account<'info, TokenAccount>>
Box the remaining large Account<'info, T> fields (nft_to is already boxed) and the context will fit, letting you use more accounts.

Related

Error: "CrashReportError: Preview is missing environment object "TabViewModel""

I am building an app in SwiftUI and I've encountered a confusing error:
CrashReportError: Preview is missing environment object "TabViewModel"
Crashed due to missing environment of type: TabViewModel. To resolve this add `.environmentObject(TabViewModel(...))` to the appropriate preview.
It's asking me to insert the modifier but doesn't mention where to put it.
This happens when I use this modifier further down in my code:
.overlay(PGDetailView(animation: animation).environmentObject(tabData))
Note: 'tabData' is declared as: @EnvironmentObject var tabData: TabViewModel

Orbeon 2018.1 bug in initialization of opsXFormsProperties

I've loaded a vanilla install of Orbeon 2018.1 in Tomcat 9.0.11. The main /home/ page works, but when I click through to Form Runner (http://localhost:8080/orbeon2018/fr/)
I get this in the browser console and the page doesn't load.
The (main) issue turns out to be a malformed line, generated by ScriptBuilder.scala:
var opsXFormsProperties = {, "session-heartbeat-delay": 34560000 "format.input.time": "[h]:[m]:[s] [P,*-2]", "retry.delay-increment": 5000, "retry.max-delay": 30000, "delay-before-ajax-timeout": 30000
};
Note the leading comma before "session-heartbeat-delay": 34560000; the comma should instead come right after that value.
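For reference, assuming the remaining key/value pairs are unchanged, the corrected output would presumably read (with the comma moved to follow the "session-heartbeat-delay" value):

```js
var opsXFormsProperties = { "session-heartbeat-delay": 34560000, "format.input.time": "[h]:[m]:[s] [P,*-2]", "retry.delay-increment": 5000, "retry.max-delay": 30000, "delay-before-ajax-timeout": 30000
};
```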
Good news: we have identified this issue; see #3736. We will produce a 2018.1.1 release with this fix.

ALM JIRA Integration

I have created a link between ALM and JIRA to sync defects from ALM to JIRA. The integrity check passed, but after enabling the link and trying to sync, I get the following error:
05/22/2017,02:27:31,654 INFO (Create.From1To2.Source-1) Synchronize: JIRA: Creating new issue
05/22/2017,02:27:32,874 INFO (Create.From1To2.Source-1) Synchronize: JIRA: refreshing the issue id:26906
05/22/2017,02:27:32,925 INFO (Create.From1To2.Source-1) Synchronize: JIRA: Updating issue id:26906
05/22/2017,02:27:33,494 ERROR (Create.From1To2.Source-1) Create: Fatal exception caught,operation terminated. Cause: create: fatal error update: fatal error {"errorMessages":["one of 'fields' or 'update' required"],"errors":{}}
05/22/2017,02:27:33,499 INFO (Disconnection.Adapter1) DisconnectAdapter: Disconnecting adapter HPE-ALM
05/22/2017,02:27:33,499 INFO (Disconnection.Adapter1) DisconnectAdapter: HPE-ALM: disconnect() called
05/22/2017,02:27:33,502 INFO (Disconnection.Adapter1) DisconnectAdapter: HPE-ALM: Call to disconnect
05/22/2017,02:27:34,550 INFO (Disconnection.Adapter1) DisconnectAdapter: HPE-ALM: Call to logout
I'm not sure what this really means, as I'm relatively new to both products. Could anyone help with this, please?
It looks like the issue with ID 26906 has some required fields that are not being supplied.
You can look at this issue via this URL: http://your-jira-server/rest/api/2/issue/26906. If you know the project of this issue and which of its fields are required, that may already be enough.
If that doesn't reveal the actual cause of the error, you can check which fields are actually required using this URL: http://your-jira-server/rest/api/2/issue/26906/editmeta. This returns a JSON object containing metadata about the fields of the given issue. Search for "required": true (or use the jq utility) and work out which fields are missing.
If the sync is failing during issue creation, you can try another URL: http://your-jira-server/rest/api/2/issue/createmeta?projectKeys=~PROJECT_KEY~&issuetypeNames=~ISSUE_TYPE_NAME~&expand=projects.issuetypes.fields, which returns the JSON metadata for creating a new issue. Using this information, you can check whether some of the required fields are missing from the source data.
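As a sketch of that last step, here is one way to scan an editmeta-shaped response for required fields. The JSON shape below mirrors JIRA's editmeta format, but the sample field names and values are made up for illustration:

```python
import json

def required_fields(editmeta):
    """Return the names of fields marked "required": true in an editmeta response."""
    return [name for name, meta in editmeta.get("fields", {}).items()
            if meta.get("required")]

# Sample response in editmeta shape (field names are hypothetical).
sample = json.loads('''
{
  "fields": {
    "summary":    {"required": true,  "name": "Summary"},
    "priority":   {"required": false, "name": "Priority"},
    "components": {"required": true,  "name": "Component/s"}
  }
}
''')

print(required_fields(sample))  # ['summary', 'components']
```

Comparing such a list against the fields your ALM synchronizer actually maps should point to the one the error is complaining about.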

neo4j batch import cache type issue

I am pretty new to neo4j and am facing the following issue. When executing the batch-import command (Michael Hunger's batch importer) I get the error below about the cache_type setting. It recommends gcr, but that is only available in the Enterprise Edition.
Help is very much appreciated, thanks.
System Info:
win7 32bit 4G RAM (3G usable), jre7, neo4j-community-1.8.2
Data: (very small test data)
nodes.csv (tab-separated) 13 nodes
rels.csv (tab-separated) 16 relations
Execution and Error:
C:\Daten\Studium\LV HU Berlin\SS 2013\Datamanagement and BI\Neuer Ordner>java -server -Xmx1G -jar target\batch-import-jar-with-dependencies.jar target\db nodes.csv rels.csv
Using Existing Configuration File
Exception in thread "main" java.lang.IllegalArgumentException: Bad value 'none' for setting 'cache_type': must be one of [gcr]
    at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:788)
    at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:708)
    at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:215)
    at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:189)
    at org.neo4j.kernel.configuration.ConfigurationValidator.validate(ConfigurationValidator.java:50)
    at org.neo4j.kernel.configuration.Config.applyChanges(Config.java:121)
    at org.neo4j.kernel.configuration.Config.<init>(Config.java:89)
    at org.neo4j.kernel.configuration.Config.<init>(Config.java:79)
    at org.neo4j.unsafe.batchinsert.BatchInserterImpl.<init>(BatchInserterImpl.java:83)
    at org.neo4j.unsafe.batchinsert.BatchInserterImpl.<init>(BatchInserterImpl.java:67)
    at org.neo4j.unsafe.batchinsert.BatchInserters.inserter(BatchInserters.java:60)
    at org.neo4j.batchimport.Importer.createBatchInserter(Importer.java:40)
    at org.neo4j.batchimport.Importer.<init>(Importer.java:26)
    at org.neo4j.batchimport.Importer.main(Importer.java:54)
Batch.properties:
dump_configuration=false
cache_type=none
use_memory_mapped_buffers=true
neostore.propertystore.db.index.keys.mapped_memory=5M
neostore.propertystore.db.index.mapped_memory=5M
neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=500M
neostore.propertystore.db.mapped_memory=200M
neostore.propertystore.db.strings.mapped_memory=200M
I ran into the same problem and changed the line in batch.properties from cache_type=none to cache_type=gcr, and it worked. I'm not sure how this change affects speed, or why the other options (none, soft, weak, strong) are not accepted.
Maybe Michael can give an answer to this?
I got the answer from the neo4j documentation:
http://docs.neo4j.org/chunked/stable/configuration-caches.html#_object_cache
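Concretely, the fix described above is a one-line change in the batch.properties shown earlier (everything else stays as posted):

```properties
# was cache_type=none; the validator in this setup only accepts gcr
cache_type=gcr
```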

Magento - Fatal error: Maximum function nesting level of '100' reached, aborting

I don't know what is causing this error. I was working on a custom module on my Magento store and didn't check the homepage of the store regularly. Out of the blue today I am getting this error on my homepage.
Fatal error: Maximum function nesting level of '100' reached, aborting! in C:\Program Files\EasyPHP-5.3.8.0\www\indieswebs\lib\Zend\Db\Adapter\Pdo\Mysql.php on line 1045
The funny thing is there is NO line 1045 in this file! So I am guessing it's some sort of looping error, but I don't know what is causing it. Can anyone help me figure out what might be causing this particular error and how I can fix it?
Edit: I deleted the cache from the store and reloaded the homepage. The error has changed now. It says:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 261904 bytes) in C:\Program Files\EasyPHP-5.3.8.0\www\indieswebs\lib\Zend\Db\Select.php on line 281
Does anyone know how to resolve this?
This error only occurs when Xdebug is installed. Use the following setting in php.ini:
xdebug.max_nesting_level = 200
I was able to resolve a related issue (one that causes the same error message) by checking the files in [webroot]/app/etc/.
It was happening (on Enterprise Edition) because
config.xml
enterprise.xml
were missing from that directory. Once I put them back, that fixed the problem.
I also read elsewhere that a malformed local.xml can cause this issue.
On Enterprise Edition, use something like:
<default_setup>
    <connection>
        <host><![CDATA[localhost]]></host>
        <username><![CDATA[some_user]]></username>
        <password><![CDATA[some_pass]]></password>
        <dbname><![CDATA[database_name]]></dbname>
        <active>1</active>
    </connection>
</default_setup>
On CE, use something like:
<default_setup>
    <connection>
        <host><![CDATA[localhost]]></host>
        <username><![CDATA[your_user]]></username>
        <password><![CDATA[your_pass]]></password>
        <dbname><![CDATA[your_db]]></dbname>
        <initStatements><![CDATA[SET NAMES utf8]]></initStatements>
        <model><![CDATA[mysql4]]></model>
        <type><![CDATA[pdo_mysql]]></type>
        <pdoType><![CDATA[]]></pdoType>
        <active>1</active>
    </connection>
</default_setup>
I got this in my Collection.php model, and the culprit turned out to be the call to parent::__construct(). Once I commented it out, the error went away. PS: Raising the Xdebug nesting level limit did not work for me.