Xorg crashing in custom image - geode driver

I generated an image for an Advantech PCM-9375 board using the Yocto system (dunfell branch).
The resulting image uses Xorg as the display server, but the application crashes because of the geode driver installed with it.
I debugged it and found that the crash happens when the driver function LXReadMSR is called with the parameters addr=0x80002000, lo=0xbffff994 and hi=0xbffff998. The last two are pointers, and their contents are 5136 and 0, respectively.
The snippet below is the gdb backtrace:
(gdb) bt
#0 0xb7693ba7 in LXReadMSR (hi=0xbffff998, lo=0xbffff994, addr=2147491840) at ../../xf86-video-geode-2.11.20/src/lx_driver.c:131
#1 LXReadMSR (addr=2147491840, lo=0xbffff994, hi=0xbffff998) at ../../xf86-video-geode-2.11.20/src/lx_driver.c:126
#2 0xb7681eef in msr_create_geodelink_table (gliu_nodes=0xb76b2880 <gliu_nodes>) at ../../xf86-video-geode-2.11.20/src/cim/cim_msr.c:199
#3 0xb7682400 in msr_init_table () at ../../xf86-video-geode-2.11.20/src/cim/cim_msr.c:82
#4 0xb7693282 in LXPreInit (pScrni=0x6a79e0, flags=0) at ../../xf86-video-geode-2.11.20/src/lx_driver.c:349
#5 0x00480986 in InitOutput (pScreenInfo=0x688280 <screenInfo>, argc=12, argv=0xbffffc44) at ../../../../xorg-server-1.20.14/hw/xfree86/common/xf86Init.c:522
#6 0x00444525 in dix_main (argc=12, argv=0xbffffc44, envp=0xbffffc78) at ../../xorg-server-1.20.14/dix/main.c:193
#7 0x0042d89b in main (argc=12, argv=0xbffffc44, envp=0xbffffc78) at ../../xorg-server-1.20.14/dix/stubmain.c:34
Looking in the processor documentation, I found that addr points to the GLD_MSR_CAP register (section "6.6.1.1 GLD Capabilities MSR (GLD_MSR_CAP)"), but I couldn't figure out what is happening.
Solutions tried:
Adding the kernel command-line parameter "iomem=relaxed", as suggested by item 6 of the driver's README file in its GitHub repository;
Replacing the kernel configuration "CONFIG_BLK_DEV_CS5535=y" with "CONFIG_BLK_DEV_CS5536=y".
Neither of them worked.
Xorg version: 1.20.14
Geode driver version: 2.11.20
Has anyone had a similar problem? Does anyone know what is happening?
My next attempt will be to modify kernel config parameters, but there are a lot of them and I don't know which ones are related to the problem.

The problem was solved when the kernel option "CONFIG_X86_IOPL_IOPERM" was enabled.
I came to this solution after reading this post.
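For anyone hitting the same crash in a Yocto build, here is a minimal sketch of enabling that option with a kernel configuration fragment, assuming a linux-yocto style kernel recipe; the layer and file names are illustrative, not from my build:

# meta-custom/recipes-kernel/linux/linux-yocto_%.bbappend (illustrative paths)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://ioperm.cfg"

# meta-custom/recipes-kernel/linux/files/ioperm.cfg
# Allows user space (here, Xorg with the geode driver) to use the
# iopl()/ioperm() syscalls for raw port I/O.
CONFIG_X86_IOPL_IOPERM=y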

Related

DCDG Dart Diagrams: Unhandled exception: Bad state: Unable to find the context to foldername\env.dart

I have been trying to generate Dart diagrams for my code, but I get the error below:
C:\Users\Foldername\AppData\Local\Pub\Cache\bin>dart pub global run dcdg
C:\Users\Foldername\AppData\Local\Pub\Cache\bin\lib
Unhandled exception:
Bad state: Unable to find the context to C:\Users\Foldername\AppData\Local\Pub\Cache\bin\lib\.env.dart
#0 AnalysisContextCollectionImpl.contextFor (package:analyzer/src/dart/analysis/analysis_context_collection.dart:106:5)
#1 findClassElements (package:dcdg/src/find_class_elements.dart:46:39)
#2 main (file:///C:/Users/Foldername/AppData/Local/Pub/Cache/hosted/pub.dartlang.org/dcdg-4.0.1/bin/dcdg.dart:35:25)
#3 _delayEntrypointInvocation.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:295:32)
#4 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:192:12)
1- I have activated the package.
2- I have updated it to the latest version.
3- Environment variables are correctly set.
I still don't understand why it does not allow me to create class diagrams for my code. Does anyone have any answers?
I think your project probably has dependencies that use code generation (with build_runner), such as freezed, auto_route, localizely or hive. At the moment (version 4.1.0) the dcdg package has issues generating a PlantUML file for such projects. If you want to use dcdg, you need to refactor your code and remove such dependencies.
Here is a GitHub issue related to your case.
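To make it concrete, these are the kinds of entries to look for in pubspec.yaml; the package names come from the list above and the version constraints are placeholders:

# pubspec.yaml (illustrative excerpt)
dependencies:
  hive: ^2.0.0
  auto_route: ^5.0.0

dev_dependencies:
  build_runner: ^2.0.0
  freezed: ^2.0.0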

Contiki 6lbr router produces not enough memory error

When running 6lbr in router mode on an OpenMote I get the error ERROR: NODE: Not enough memory to create node [node ipv6 addr].
I cannot find this error message in any source file, so it is hard to see why it might happen.
Has anyone seen this error before or know how to solve it?
I run the router with RDC: contikimac_driver and MAC: csma_driver. The branch of the cetic/6lbr git repo is develop.
The info on the router's web page tells me:
Memory
Flash : 86990 (16 %)
Code : 84781
Initialized data : 2209
RAM : 29429 (89 %).
and the routing table with my 4 motes:
Routes
[del] aaaa::212:4b00:60d:9b57/128 via fe80::212:4b00:60d:9ac3 7668 s
[del] aaaa::212:4b00:60d:9b59/128 via fe80::212:4b00:60d:9ac3 7659 s
[del] aaaa::212:4b00:60d:9abb/128 via fe80::212:4b00:60d:9abb 7600 s
[del] aaaa::212:4b00:60d:9ac3/128 via fe80::212:4b00:60d:9ac3 7586 s

luajit/physicsfs mutex deadlock

I've got the following code:
local ffi = require "ffi"
local M = ffi.load "physfs"
ffi.cdef [[ //basically the preprocessed content of physfs.h, see http://icculus.org/physfs/docs/html/physfs_8h.html ]]
M.PHYSFS_init(arg[0])
M.PHYSFS_setSaneConfig("a","b","zip",0,0)
function file2str(path)
  local cpath = ffi.cast("const char *", path)
  print(1) -- debug
  if M.PHYSFS_exists(cpath) == 0 then return nil, "file not found" end
  print(2) -- debug
  -- some more magic
end
assert(file2str("someFile.txt"))
When calling it, I expect debug output 1 and 2, or at least the assert triggering, but I only get:
1
["endless" (i pressed ^C after about a minute) freeze]
when i finally got luajit to run in gdb, this is the backtrace when freezing:
(gdb) bt
#0 0x00007ffff37a5c40 in __pause_nocancel ()
at ../sysdeps/unix/syscall-template.S:81
#1 0x00007ffff379bce6 in __pthread_mutex_lock_full (mutex=0x68cbf0)
at ../nptl/pthread_mutex_lock.c:354
#2 0x00007ffff606951f in __PHYSFS_platformGrabMutex (mutex=0x68cbf0)
at /home/kyra/YDist/src/physfs-2.0.3/platform/unix.c:403
#3 0x00007ffff606410d in PHYSFS_getWriteDir ()
at /home/kyra/YDist/src/physfs-2.0.3/physfs.c:913
#4 0x000000000045482b in ?? ()
#5 0x000000000043a829 in ?? ()
#6 0x000000000043af17 in ?? ()
#7 0x00000000004526a6 in ?? ()
#8 0x0000000000446fb0 in lua_pcall ()
#9 0x00000000004047dc in _start ()
So it seems to me that something is holding the mutex, which is rather strange because, while there are two threads running, only one of them even touches physfs (the second thread doesn't even ffi.load "physfs").
What could/should I do?
I still don't really know what is going on, but while trying to further debug the mutex in gdb I LD_PRELOADed libpthread.so into the gdb process, and suddenly it worked.
Then I tried just preloading it into luajit without gdb, which also works.
Then I dug further into physfs and lualanes (a pthread FFI wrapper I'm using for threading) and found that they both try to load libpthread if it is not already loaded, but physfs does it from C and lualanes uses the FFI, which somehow doesn't see the copy loaded by physfs, so the process ends up with two copies of the library loaded.
So the fix is to explicitly do ffi.load "pthread" before ffi.load "physfs": while lanes can't see the version loaded by physfs, physfs is happy with the version loaded by us and doesn't try to load it again, and the LuaJIT FFI ignores the further load attempts made by lanes.
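In code, the fix is just the explicit load before physfs (a minimal sketch of the relevant lines):

local ffi = require "ffi"

-- Load libpthread explicitly first; physfs then reuses this copy instead
-- of pulling in a second one, and the FFI ignores the later load attempts
-- made by lanes.
ffi.load "pthread"
local M = ffi.load "physfs"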

bindSecure() with InternetAddress.ANY_IP_V4 as dynamic address argument

I'm playing around a little with these examples. I get a proper response from the server on a tablet using:
HttpServer.bind(InternetAddress.ANY_IP_V4, 4040)
Then I wanted to try the secure sockets example. For localhost it works as expected:
HttpServer.bindSecure('localhost', 4047...
But then it will not respond to requests from other computers. So I tried this:
HttpServer.bindSecure(InternetAddress.ANY_IP_V4, 4047,
and get a runtime error:
Breaking on exception: object of type TypeError
Unhandled exception:
type '_InternetAddress' is not a subtype of type 'String' of 'address'.
#0 RawSecureServerSocket.bind (secure_server_socket.dart:182)
#1 SecureServerSocket.bind (secure_server_socket.dart:70)
#2 _HttpServer.bindSecure (http_impl.dart:2025)
#3 HttpServer.bindSecure (http.dart:179)
#4 main (file:///D:/Documents/dart/dart-tutorials-samples-master/httpserver/bin/hello_world_server_secure.dart:16:24)
#5 _startIsolate.isolateStartHandler (dart:isolate-patch/isolate_patch.dart:216)
#6 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:115)
I tried printing 'InternetAddress.ANY_IP_V4' and got '0.0.0.0'. So I tried:
HttpServer.bindSecure('0.0.0.0', 4047,
And it worked.
Why does 'InternetAddress.ANY_IP_V4' as first argument to bindSecure fail? I don't understand the error message.
Edit: See comments, this was a bug/inconsistency in an old version of the Dart VM!
This looks like a bug. According to the docs bindSecure should take either a String or an InternetAddress, and InternetAddress.ANY_IP_V4 is even given as a sample!
The address can either be a String or an InternetAddress. If address
is a String, bind will perform a lookup and use the first value in the
list. To listen on the loopback adapter, which will allow only
incoming connections from the local host, use the value
[InternetAddress.LOOPBACK_IP_V4] or [InternetAddress.LOOPBACK_IP_V6].
To allow for incoming connection from the network use either one of
the values [InternetAddress.ANY_IP_V4] or [InternetAddress.ANY_IP_V6]
to bind to all interfaces or the IP address of a specific interface.
I've had a look through the source of these files, but I can't figure out what the bug is; the code looks good at first glance :(
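For reference, here is a sketch of the workaround that avoids hard-coding the string: InternetAddress.ANY_IP_V4.address evaluates to '0.0.0.0', so it can be passed where the buggy bindSecure insisted on a String. The certificateName value is illustrative, and the certificate database setup from the tutorial sample is assumed to have been done already:

import 'dart:io';

void main() {
  // Workaround: pass the wildcard address in its String form.
  HttpServer.bindSecure(InternetAddress.ANY_IP_V4.address, 4047,
      certificateName: 'localhost_cert').then((server) {
    server.listen((HttpRequest request) {
      request.response
        ..write('Hello from the secure server')
        ..close();
    });
  });
}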

neo4j batch import cache type issue

I am pretty new to neo4j and am facing the following issue. When executing the batch-import command (Michael Hunger's batch importer) I get the error below about the cache_type setting. It recommends the gcr setting, but that is only available in the enterprise edition.
Help is much appreciated, thanks.
System Info:
win7 32bit 4G RAM (3G usable), jre7, neo4j-community-1.8.2
Data: (very small test data)
nodes.csv (tab-separated) 13 nodes
rels.csv (tab-separated) 16 relations
Execution and Error:
C:\Daten\Studium\LV HU Berlin\SS 2013\Datamanagement and BI\Neuer Ordner>java -server -Xmx1G -jar target\batch-import-jar-with-dependencies.jar target\db nodes.csv rels.csv
Using Existing Configuration File
Exception in thread "main" java.lang.IllegalArgumentException: Bad value 'none' for setting 'cache_type': must be one of [gcr]
at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:788)
at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:708)
at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:215)
at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:189)
at org.neo4j.kernel.configuration.ConfigurationValidator.validate(ConfigurationValidator.java:50)
at org.neo4j.kernel.configuration.Config.applyChanges(Config.java:121)
at org.neo4j.kernel.configuration.Config.<init>(Config.java:89)
at org.neo4j.kernel.configuration.Config.<init>(Config.java:79)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.<init>(BatchInserterImpl.java:83)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.<init>(BatchInserterImpl.java:67)
at org.neo4j.unsafe.batchinsert.BatchInserters.inserter(BatchInserters.java:60)
at org.neo4j.batchimport.Importer.createBatchInserter(Importer.java:40)
at org.neo4j.batchimport.Importer.<init>(Importer.java:26)
at org.neo4j.batchimport.Importer.main(Importer.java:54)
Batch.properties:
dump_configuration=false
cache_type=none
use_memory_mapped_buffers=true
neostore.propertystore.db.index.keys.mapped_memory=5M
neostore.propertystore.db.index.mapped_memory=5M
neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=500M
neostore.propertystore.db.mapped_memory=200M
neostore.propertystore.db.strings.mapped_memory=200M
I ran into the same problem as you, and I changed the line in batch.properties from
cache_type=none to cache_type=gcr, and it worked. I'm not sure how the speed changes because of this, and I'm not sure why the other options (none, soft, weak, strong) are not working.
Maybe Michael can give an answer to this?
I got the answer from the neo4j documentation:
http://docs.neo4j.org/chunked/stable/configuration-caches.html#_object_cache
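For completeness, the corrected batch.properties from the question differs only in that one line:

# batch.properties (excerpt) - only cache_type changes
dump_configuration=false
cache_type=gcr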
