Lightmass crashed in UE5 - Lighting

I was trying to build lightmaps for a relatively small scene, and the error occurred right after the scene data finished exporting:
=== Critical error: ===
Assertion failed: (Index >= 0) & (Index < ArrayNum) [File:D:\build\++UE5\Sync\Engine\Source\Runtime\Core\Public\Containers\Array.h] [Line: 691]
23:17:31: Array index out of bounds: 2742 from an array of size 2742
By the way, I've already tried cleaning and validating the cache in Swarm, but it didn't help.
However, GPU Lightmass handles this scene fine. Does anyone have a clue how to solve this?

Related

ERROR: Unable to create preboard manifest (idevicerestore.exe)

Running idevicerestore.exe ios15.3.1.ipsw returns:
Checking if device requires stashbag...
ERROR: img4_create_local_manifest: Unhandled component 'Ap,SystemVolumeCanonicalMetadata' - can't create manifest
ERROR: Unable to create preboard manifest.
I downloaded the latest binary from imobiledevice-net, and I am on Windows 10.
I also checked the idevicerestore source code and found this:
if (needs_preboard) {
    info("Checking if device requires stashbag...\n");
    plist_t manifest;
    if (get_preboard_manifest(client, build_identity, &manifest) < 0) {
        error("ERROR: Unable to create preboard manifest.\n");
        return -1;
    }
Then I checked the get_preboard_manifest function, and I understand that the problem comes from img4_create_local_manifest.
Can anyone help me fix this issue, or find some way to solve it programmatically?
You can download the latest binary from the GitHub Actions artifacts:
https://github.com/libimobiledevice/idevicerestore/actions
You may also need these dependencies:
libplist
libimobiledevice-glue
libusbmuxd
libimobiledevice
libirecovery
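
If you want to investigate programmatically, one thing you can do is confirm that the firmware's BuildManifest really lists the component img4_create_local_manifest chokes on. A rough Python sketch, assuming the usual IPSW layout (a zip archive with BuildManifest.plist at the top level):

import plistlib
import zipfile

IPSW = "ios15.3.1.ipsw"

# An IPSW is a zip archive; the build manifest sits at its top level.
with zipfile.ZipFile(IPSW) as z:
    manifest = plistlib.loads(z.read("BuildManifest.plist"))

# Each build identity lists its firmware components under "Manifest";
# look for the "Ap,..." entries that idevicerestore complains about.
for identity in manifest["BuildIdentities"]:
    for name in sorted(identity.get("Manifest", {})):
        if name.startswith("Ap,"):
            print(name)  # e.g. Ap,SystemVolumeCanonicalMetadata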

neo4j admin import - Error in import: Requested index -1, but length is 1000000

I have a set of CSVs that I have been able to use with LOAD CSV to create a database. This set is the small version (1 GB) of a much larger data set (120 GB) I intend to load into neo4j using admin import. I am trying to run the admin import on the smaller dataset first, since I have already successfully created a graph with that data; I assume that if admin import runs for the small version, it will hopefully run without problems for the large dataset. I've read through the admin import instructions and set up header files.
The import loads the nodes just fine but fails on the relationship files. Can anyone help me understand what is happening here so that I can figure out how to fix it? I've tried simply removing the offending file and its associated nodes, but this only results in the same error being thrown by the next file in the relationships list.
IMPORT FAILED in 9s 121ms.
Data statistics is not available.
Peak memory usage: 1.015GiB
Error in input data
Caused by:ERROR in input
data source: BufferedCharSeeker[source:/var/lib/neo4j/import/rel_cchg_dimcchg.csv, position:3861455, line:77614]
in field: :START_ID(cchg-ID):1
for header: [:START_ID(cchg-ID), :END_ID(dim_cchg-ID), :TYPE]
raw field value: 106715432018-09-010.01.00.0
original error: Requested index -1, but length is 1000000
org.neo4j.internal.batchimport.input.InputException: ERROR in input
data source: BufferedCharSeeker[source:/var/lib/neo4j/import/rel_cchg_dimcchg.csv, position:3861455, line:77614]
in field: :START_ID(cchg-ID):1
for header: [:START_ID(cchg-ID), :END_ID(dim_cchg-ID), :TYPE]
raw field value: 106715432018-09-010.01.00.0
original error: Requested index -1, but length is 1000000
at org.neo4j.internal.batchimport.input.csv.CsvInputParser.next(CsvInputParser.java:234)
at org.neo4j.internal.batchimport.input.csv.LazyCsvInputChunk.next(LazyCsvInputChunk.java:98)
at org.neo4j.internal.batchimport.input.csv.CsvInputChunkProxy.next(CsvInputChunkProxy.java:75)
at org.neo4j.internal.batchimport.ExhaustingEntityImporterRunnable.run(ExhaustingEntityImporterRunnable.java:57)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.neo4j.internal.helpers.NamedThreadFactory$2.run(NamedThreadFactory.java:110)
Caused by: java.lang.ArrayIndexOutOfBoundsException: Requested index -1, but length is 1000000
at org.neo4j.internal.batchimport.cache.OffHeapRegularNumberArray.addressOf(OffHeapRegularNumberArray.java:42)
at org.neo4j.internal.batchimport.cache.OffHeapLongArray.get(OffHeapLongArray.java:43)
at org.neo4j.internal.batchimport.cache.DynamicLongArray.get(DynamicLongArray.java:46)
at org.neo4j.internal.batchimport.cache.idmapping.string.EncodingIdMapper.dataValue(EncodingIdMapper.java:767)
at org.neo4j.internal.batchimport.cache.idmapping.string.EncodingIdMapper.findFromEIdRange(EncodingIdMapper.java:802)
at org.neo4j.internal.batchimport.cache.idmapping.string.EncodingIdMapper.binarySearch(EncodingIdMapper.java:750)
at org.neo4j.internal.batchimport.cache.idmapping.string.EncodingIdMapper.binarySearch(EncodingIdMapper.java:305)
at org.neo4j.internal.batchimport.cache.idmapping.string.EncodingIdMapper.get(EncodingIdMapper.java:205)
at org.neo4j.internal.batchimport.RelationshipImporter.nodeId(RelationshipImporter.java:134)
at org.neo4j.internal.batchimport.RelationshipImporter.startId(RelationshipImporter.java:109)
at org.neo4j.internal.batchimport.input.InputEntityVisitor$Delegate.startId(InputEntityVisitor.java:228)
at org.neo4j.internal.batchimport.input.csv.CsvInputParser.next(CsvInputParser.java:117)
... 9 more
The error is actually quite explicit: have a look at line 77614 in rel_cchg_dimcchg.csv. It's usually caused by an incorrect endpoint ID. For example, if the END_ID is supposed to be a number but is something like 4171;4172;4173;4174;4175;4176, this will raise the InputException. Here the raw field value 106715432018-09-010.01.00.0 looks like several columns run together, which suggests a delimiter or quoting problem on that line.
One would assume that --skip-bad-relationships would ignore these issues, but it doesn't. So the only remedy is to ensure that all START_ID/END_IDs are correct (i.e. the right data type and format).
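Before rerunning the import, it can help to pre-validate the relationship files. Here is a rough Python sketch of that idea; the file names are hypothetical, and for simplicity it pools all node IDs into one set rather than tracking the cchg-ID and dim_cchg-ID id-spaces separately:

import csv

NODE_FILES = ["node_cchg.csv", "node_dim_cchg.csv"]  # hypothetical names
REL_FILE = "rel_cchg_dimcchg.csv"

# Collect every declared node ID, assuming the :ID(...) column comes first.
node_ids = set()
for path in NODE_FILES:
    with open(path, newline="") as f:
        rows = csv.reader(f)
        next(rows)  # skip the header row
        for row in rows:
            node_ids.add(row[0])

# Flag relationship rows that are malformed or whose endpoints don't resolve.
with open(REL_FILE, newline="") as f:
    rows = csv.reader(f)
    next(rows)  # header: :START_ID(cchg-ID), :END_ID(dim_cchg-ID), :TYPE
    for lineno, row in enumerate(rows, start=2):
        if len(row) != 3 or row[0] not in node_ids or row[1] not in node_ids:
            print(f"line {lineno}: {row!r}")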

Failing to run the example in the OpenCV tutorial "Load Caffe framework models"

I've tried the example in the opencv_contrib tutorial on dnn: http://docs.opencv.org/master/d5/de7/tutorial_dnn_googlenet.html
But it returns two errors. I've searched for hours and haven't found any solution. Could anyone give me some help?
OpenCV Error: Assertion failed (retval == 0) in cv::ocl::Kernel::set, file D:\opencv32\opencv\modules\core\src\ocl.cpp, line 3366
OpenCV Error: Assertion failed (The following error occured while making forward() for layer "loss3/classifier": retval == 0) in cv::ocl::Kernel::set, file D:\opencv32\opencv\modules\core\src\ocl.cpp, line 3366
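The assertion comes out of the OpenCL kernel path (ocl.cpp), so one workaround worth trying is to disable OpenCL before running the net. A minimal Python sketch of the tutorial's flow under that assumption, using the model files the tutorial has you download (readNetFromCaffe requires a reasonably recent OpenCV build):

import cv2
import numpy as np

# Workaround: force the plain CPU path instead of the failing OpenCL kernels.
cv2.ocl.setUseOpenCL(False)

net = cv2.dnn.readNetFromCaffe("bvlc_googlenet.prototxt",
                               "bvlc_googlenet.caffemodel")

img = cv2.imread("space_shuttle.jpg")
# GoogLeNet expects a 224x224 BGR input with the ImageNet mean subtracted.
blob = cv2.dnn.blobFromImage(img, 1.0, (224, 224), (104, 117, 123))
net.setInput(blob)
prob = net.forward().ravel()

class_id = int(np.argmax(prob))
print("class id:", class_id, "confidence:", float(prob[class_id]))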

Mongoid - Cursor not found errors

I am randomly, but regularly, running into errors like this in my application.
An exception occurred: 'Cursor not found, cursor id: 46772737523145 (43)' on '/var/www/mysite-prod/shared/bundle/ruby/2.3.0/gems/mongo-2.2.5/lib/mongo/operation/result.rb:256:in `validate!''
It seems to be triggered by map-reduce tasks like this one:
collection.map_reduce(map, reduce).out(replace: 'mr-results').each do |res|
  some_var[res['_id']] = Integer(res['value'])
end
Any idea what might be the cause?
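MongoDB reaps idle cursors after about ten minutes by default, so if the per-document work in that loop is slow, the server can drop the cursor mid-iteration. Since the map-reduce output already goes to the mr-results collection, one option is to read that collection back with no_cursor_timeout. A rough sketch of the idea in pymongo (the database name is hypothetical):

from pymongo import MongoClient

db = MongoClient()["mydb"]  # hypothetical database name

# Read the map-reduce output with no_cursor_timeout so a slow consumer
# does not outlive the server-side cursor.
results = {}
cursor = db["mr-results"].find({}, no_cursor_timeout=True, batch_size=1000)
try:
    for res in cursor:
        results[res["_id"]] = int(res["value"])
finally:
    cursor.close()  # no-timeout cursors must be closed explicitly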

Cannot get SURF example in EMGU.CV to work?

I am trying to detect a pattern shown in two images, so I have been trying to use the SURF algorithm found in Emgu.CV, but the "SURFFeature" example that is given throws the following error:
An unhandled exception of type 'Emgu.CV.Util.CvException' occurred in Emgu.CV.dll
Additional information: OpenCV: norm == NORM_L1 || norm == NORM_L2 || norm == NORM_HAMMING
Any ideas how to fix this?
When I try the "Hello World" example and the face detection example, both seem to work fine.
Thanks for any advice!
Fouad.
PS: Emgu.CV can be downloaded from here: http://www.emgu.com/wiki/index.php/Main_Page
Apparently the build was messed up.
http://www.emgu.com/bugs/show_bug.cgi?format=multiple&id=74
Aha, found it. The error is in Emgu.CV.Gpu/GpuBruteForceMatcher.cs, lines 22 and 27.
Line 22 currently reads:
L2Dist,
It should read:
L2Dist = 4,
Line 27 currently reads:
HammingDist
It should read:
HammingDist = 6
(These values need to match OpenCV's native NORM_L2 = 4 and NORM_HAMMING = 6 constants, which is what the norm assertion above is checking.)
Rebuild the Emgu.CV.Gpu DLL with those changes and it works.
