osmdroid in a fragment causes out of memory
I'm trying to develop a map application on Android with Android Studio.
I'm facing an out-of-memory exception after some manipulation...
I have an activity with a drawer that switches between two fragments:
void StartFragment(int position)
{
    FragmentTransaction transaction = getFragmentManager().beginTransaction();
    Fragment newFragment = null;
    switch (position) {
        case 1:
            newFragment = new MissionFragment();
            transaction.replace(R.id.frame_container, newFragment);
            break;
        case 0:
        default:
            newFragment = new MapFragment();
            transaction.replace(R.id.frame_container, newFragment);
            break;
    }
    if (newFragment != null) {
        transaction.addToBackStack(null);
        transaction.commit();
        mCurrentFragment = position;
    }
}
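Note that every call above instantiates a fresh fragment and parks the previous one on the back stack via addToBackStack(null), so each retained MapFragment keeps its own MapView and tile cache alive. For comparison, a minimal sketch of a variant that reuses one instance per position; the mMapFragment and mMissionFragment fields are hypothetical:

// Sketch only: assumes hypothetical Fragment fields mMapFragment and
// mMissionFragment declared on the activity.
void StartFragment(int position) {
    Fragment newFragment;
    if (position == 1) {
        if (mMissionFragment == null) mMissionFragment = new MissionFragment();
        newFragment = mMissionFragment;
    } else {
        if (mMapFragment == null) mMapFragment = new MapFragment();
        newFragment = mMapFragment;
    }
    // No addToBackStack(null): the outgoing fragment's view hierarchy
    // (and with it the MapView's tile cache) can be released on replace.
    getFragmentManager().beginTransaction()
            .replace(R.id.frame_container, newFragment)
            .commit();
    mCurrentFragment = position;
}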
The map fragment is constructed like this:
public View onCreateView(LayoutInflater inflater, ViewGroup container,
                         Bundle savedInstanceState) {
    // Inflate the layout for this fragment
    LinearLayout rl = (LinearLayout) inflater.inflate(R.layout.fragment_map, container, false);
    map = (MapView) rl.findViewById(R.id.map);
    return rl;
}
and this:
public void onActivityCreated(Bundle savedInstanceState) {
    super.onActivityCreated(savedInstanceState);
    mContext = getActivity();
    mMissionDataModel = MissionDataModel.getInstance(mContext);

    map.setTileSource(TileSourceFactory.MAPNIK);
    map.setBuiltInZoomControls(true);
    map.setMultiTouchControls(true);

    GeoPoint startPoint = new GeoPoint(46.5328, 6.6306);
    IMapController mapController = map.getController();
    mapController.setZoom(11);
    mapController.setCenter(startPoint);

    if (myLocationOverlay == null)
        myLocationOverlay = new DirectedLocationOverlay(mContext);
    else
        map.getOverlays().remove(myLocationOverlay);
    map.getOverlays().add(myLocationOverlay);

    // Add scale bar
    ScaleBarOverlay myScaleBarOverlay = new ScaleBarOverlay(mContext);
    map.getOverlays().add(myScaleBarOverlay);
}
Layout of the map fragment:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/mapfragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <RelativeLayout
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" >

        <org.osmdroid.views.MapView
            android:id="@+id/map"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" />

    </RelativeLayout>
</LinearLayout>
When I switch between the fragments the first few times, I get this logcat:
09-04 10:53:09.438 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.v*.MapView﹕ Using tile source: org.osmdroid.tileprovider.tilesource.XYTileSource#429569b8
09-04 10:53:09.443 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileFil*﹕ sdcard state: mounted
09-04 10:53:09.448 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileFil*﹕ sdcard state: mounted
09-04 10:53:09.453 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileFil*﹕ sdcard state: mounted
09-04 10:53:09.563 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.LRUMapTileCac*﹕ Tile cache increased from 9 to 35
09-04 10:53:09.603 18516-18516/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 554K, 5% free 38490K/40135K, paused 40ms, total 40ms
09-04 10:53:09.738 18516-19354/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 108K, 4% free 39643K/41095K, paused 36ms, total 36ms
09-04 10:53:09.853 18516-19363/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 213K, 4% free 41024K/42439K, paused 35ms, total 35ms
09-04 10:53:09.913 18516-19350/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 134K, 4% free 42209K/43719K, paused 34ms, total 35ms
09-04 10:53:10.028 18516-19350/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 217K, 4% free 43493K/44999K, paused 35ms, total 36ms
09-04 10:53:10.118 18516-19361/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 143K, 4% free 44838K/46279K, paused 38ms, total 38ms
09-04 10:53:10.213 18516-19361/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 211K, 4% free 46057K/47559K, paused 36ms, total 36ms
09-04 10:53:10.293 18516-19356/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 143K, 4% free 47305K/48839K, paused 38ms, total 38ms
09-04 10:53:10.328 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/Choreographer﹕ Skipped 46 frames! The application may be doing too much work on its main thread.
09-04 10:53:14.248 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.v*.MapView﹕ Using tile source: org.osmdroid.tileprovider.tilesource.XYTileSource#429569b8
09-04 10:53:14.253 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileFil*﹕ sdcard state: mounted
09-04 10:53:14.258 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileFil*﹕ sdcard state: mounted
09-04 10:53:14.263 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileFil*﹕ sdcard state: mounted
09-04 10:53:14.368 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.LRUMapTileCac*﹕ Tile cache increased from 9 to 35
09-04 10:53:14.478 18516-19529/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 562K, 3% free 48679K/50119K, paused 41ms, total 50ms
09-04 10:53:14.568 18516-19542/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 190K, 3% free 49980K/51399K, paused 35ms, total 35ms
09-04 10:53:14.653 18516-19538/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 174K, 3% free 51230K/52679K, paused 33ms, total 34ms
09-04 10:53:14.768 18516-19536/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 184K, 3% free 52612K/54023K, paused 37ms, total 39ms
09-04 10:53:14.838 18516-19538/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 167K, 3% free 53765K/55303K, paused 37ms, total 37ms
09-04 10:53:14.938 18516-19539/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 183K, 3% free 55109K/56583K, paused 33ms, total 36ms
09-04 10:53:15.018 18516-19529/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 173K, 3% free 56326K/57863K, paused 35ms, total 36ms
09-04 10:53:15.053 18516-18516/ch.coyoteprod.coyotecad.mobilecad I/Choreographer﹕ Skipped 41 frames! The application may be doing too much work on its main thread.
Then I get this out-of-memory error:
09-04 10:53:20.948 18516-19733/ch.coyoteprod.coyotecad.mobilecad E/o*.o*.t*.t*.BitmapTile*﹕ OutOfMemoryError loading bitmap
09-04 10:53:21.003 18516-19718/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 65.028MB to 64.000MB
09-04 10:53:21.003 18516-19718/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_EXPLICIT freed 39K, 3% free 63728K/65287K, paused 13ms+17ms, total 70ms
09-04 10:53:21.003 18516-19708/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ WAIT_FOR_CONCURRENT_GC blocked 415ms
09-04 10:53:21.068 18516-19708/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 65.006MB to 64.000MB
09-04 10:53:21.068 18516-19708/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_EXPLICIT freed 66K, 3% free 63706K/65287K, paused 3ms+17ms, total 64ms
09-04 10:53:21.068 18516-19722/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ WAIT_FOR_CONCURRENT_GC blocked 827ms
09-04 10:53:21.133 18516-18516/ch.coyoteprod.coyotecad.mobilecad E/SpannableStringBuilder﹕ SPAN_EXCLUSIVE_EXCLUSIVE spans cannot have a zero length
09-04 10:53:21.133 18516-18516/ch.coyoteprod.coyotecad.mobilecad E/SpannableStringBuilder﹕ SPAN_EXCLUSIVE_EXCLUSIVE spans cannot have a zero length
09-04 10:53:21.133 18516-19722/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 65.003MB to 64.000MB
09-04 10:53:21.133 18516-19722/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_EXPLICIT freed 31K, 3% free 63703K/65287K, paused 13ms+8ms, total 65ms
09-04 10:53:21.133 18516-19733/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ WAIT_FOR_CONCURRENT_GC blocked 185ms
09-04 10:53:21.183 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.999MB to 64.000MB
09-04 10:53:21.183 18516-19733/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_EXPLICIT freed 5K, 3% free 63698K/65287K, paused 13ms+4ms, total 50ms
09-04 10:53:21.183 18516-19731/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ WAIT_FOR_CONCURRENT_GC blocked 404ms
09-04 10:53:21.208 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.999MB to 64.000MB
09-04 10:53:21.208 18516-19731/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed <1K, 3% free 63698K/65287K, paused 24ms, total 24ms
09-04 10:53:21.208 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Forcing collection of SoftReferences for 262160-byte allocation
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.999MB to 64.000MB
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_BEFORE_OOM freed 0K, 3% free 63698K/65287K, paused 33ms, total 33ms
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad E/dalvikvm-heap﹕ Out of memory on a 262160-byte allocation.
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ "downloader" prio=5 tid=32 RUNNABLE
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ | group="main" sCount=0 dsCount=0 obj=0x437a5ca8 self=0x610d8370
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ | sysTid=19731 nice=0 sched=0/0 cgrp=apps handle=1572382416
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ | schedstat=( 160484495 80061083 1859 ) utm=10 stm=5 core=1
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:652)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:144)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at java.lang.Thread.run(Thread.java:856)
09-04 10:53:21.243 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ [ 09-04 10:53:21.243 18516:19712 D/dalvikvm ]
WAIT_FOR_CONCURRENT_GC blocked 646ms
09-04 10:53:21.248 18516-19731/ch.coyoteprod.coyotecad.mobilecad D/skia﹕ --- decoder->decode returned false
09-04 10:53:21.248 18516-19731/ch.coyoteprod.coyotecad.mobilecad E/o*.o*.t*.t*.BitmapTile*﹕ OutOfMemoryError loading bitmap
09-04 10:53:21.248 18516-19733/ch.coyoteprod.coyotecad.mobilecad W/o*.o*.t*.m*.MapTileDow*﹕ LowMemoryException downloading MapTile: /11/1061/724 : org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
09-04 10:53:21.258 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileMod*﹕ Tile loader can't continue: /11/1061/724
org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$CantContinueException: org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:224)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Caused by: org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:151)
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Caused by: java.lang.OutOfMemoryError
at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:652)
at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:144)
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
09-04 10:53:21.283 18516-19712/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.995MB to 64.000MB
09-04 10:53:21.283 18516-19712/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_EXPLICIT freed 36K, 3% free 63694K/65287K, paused 3ms+5ms, total 38ms
09-04 10:53:21.283 18516-19731/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ WAIT_FOR_CONCURRENT_GC blocked 32ms
09-04 10:53:21.338 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.939MB to 64.000MB
09-04 10:53:21.338 18516-19731/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_EXPLICIT freed 92K, 3% free 63637K/65287K, paused 12ms+6ms, total 59ms
09-04 10:53:21.343 18516-19731/ch.coyoteprod.coyotecad.mobilecad W/o*.o*.t*.m*.MapTileDow*﹕ LowMemoryException downloading MapTile: /11/1062/724 : org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
09-04 10:53:21.353 18516-19731/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileMod*﹕ Tile loader can't continue: /11/1062/724
org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$CantContinueException: org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:224)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Caused by: org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:151)
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Caused by: java.lang.OutOfMemoryError
at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:652)
at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:144)
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
09-04 10:53:21.548 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.910MB to 64.000MB
09-04 10:53:21.548 18516-19733/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_FOR_ALLOC freed 192K, 3% free 63607K/65287K, paused 36ms, total 36ms
09-04 10:53:21.548 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Forcing collection of SoftReferences for 262160-byte allocation
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.910MB to 64.000MB
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_BEFORE_OOM freed 0K, 3% free 63607K/65287K, paused 32ms, total 32ms
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad E/dalvikvm-heap﹕ Out of memory on a 262160-byte allocation.
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ "downloader" prio=5 tid=34 RUNNABLE
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ | group="main" sCount=0 dsCount=0 obj=0x434c1498 self=0x610d4798
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ | sysTid=19733 nice=0 sched=0/0 cgrp=apps handle=1572382512
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ | schedstat=( 371508693 77115930 1961 ) utm=30 stm=6 core=1
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:652)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:144)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ at java.lang.Thread.run(Thread.java:856)
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm﹕ [ 09-04 10:53:21.583 18516:19733 D/skia ]
--- decoder->decode returned false
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad E/o*.o*.t*.t*.BitmapTile*﹕ OutOfMemoryError loading bitmap
09-04 10:53:21.583 18516-19733/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ WAIT_FOR_CONCURRENT_GC blocked 0ms
09-04 10:53:21.618 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/dalvikvm-heap﹕ Clamp target GC heap from 64.894MB to 64.000MB
09-04 10:53:21.618 18516-19733/ch.coyoteprod.coyotecad.mobilecad D/dalvikvm﹕ GC_EXPLICIT freed 16K, 3% free 63590K/65287K, paused 2ms+4ms, total 36ms
09-04 10:53:21.618 18516-19733/ch.coyoteprod.coyotecad.mobilecad W/o*.o*.t*.m*.MapTileDow*﹕ LowMemoryException downloading MapTile: /11/1058/725 : org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
09-04 10:53:21.628 18516-19733/ch.coyoteprod.coyotecad.mobilecad I/o*.o*.t*.m*.MapTileMod*﹕ Tile loader can't continue: /11/1058/725
org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$CantContinueException: org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:224)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Caused by: org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase$LowMemoryException: java.lang.OutOfMemoryError
at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:151)
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Caused by: java.lang.OutOfMemoryError
at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:652)
at org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.getDrawable(BitmapTileSourceBase.java:144)
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:214)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
I don't understand why...
Thanks a lot for your help.
Found a partial "solution" at https://code.google.com/p/osmdroid/issues/detail?id=265
Adding:
map.getTileProvider().clearTileCache();
solves the problem...
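For reference, a minimal sketch of where that call can live in the fragment lifecycle, assuming the map field from the code above (some osmdroid versions also provide MapView#onDetach() for a similar cleanup):

@Override
public void onDestroyView() {
    super.onDestroyView();
    if (map != null) {
        // Drop the in-memory LRU tile cache held by this MapView so a
        // fragment kept on the back stack no longer pins its tile bitmaps.
        map.getTileProvider().clearTileCache();
        map = null;
    }
}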
Related
ESXi: How to view CPU usage
I am trying to get the total CPU usage of servers running VMware ESXi 6.7.0 Update 3 to monitor performance. I was able to view the memory usage using the command:

vsish -e get /memory/comprehensive

However, I can't find a command for CPU usage. There isn't a top command or a /proc/stat file. Is there a way I can get this info? Thanks
I'm using ESXi 6.0.0U2 and after looking through the vsish nodes for a bit, I found some relevant information in the power/pcpu tree.

[root@server:~] vsish -e ls power
policy/
pcpu/
hardwareSupport
hostStats
resetStats
currentPolicy

[root@server:~] vsish -e ls power/pcpu
0/ 1/ 2/ 3/ 4/ 5/ 6/ 7/ 8/ 9/ 10/ 11/ 12/ 13/ 14/ 15/

My server has 8 hyperthreaded CPUs, so 16 virtual cores show up in ESXi. Here's what it shows about the first core:

[root@server:~] vsish -e ls power/pcpu/0
cstate/
tstate/
pstate/
cres
perf
state

[root@server:~] vsish -e get power/pcpu/0/perf
PCPU performance statistics {
   PCPU core busy (current frequency): 40 %
   PCPU core busy (maximum frequency): 40 %
   APERF/MPERF ratio: -1 %
   Halted time: 176111536370 usec
}

Maybe there's a better way to do this, but this shows the info for all the cores:

[root@server:~] vsish -e get $(printf 'power/pcpu/%sperf ' $(vsish -e ls power/pcpu))
PCPU performance statistics { PCPU core busy (current frequency): 100 % PCPU core busy (maximum frequency): 100 % APERF/MPERF ratio: -1 % Halted time: 175721833728 usec }
PCPU performance statistics { PCPU core busy (current frequency): 100 % PCPU core busy (maximum frequency): 100 % APERF/MPERF ratio: -1 % Halted time: 188174562097 usec }
PCPU performance statistics { PCPU core busy (current frequency): 24 % PCPU core busy (maximum frequency): 24 % APERF/MPERF ratio: -1 % Halted time: 174102490208 usec }
PCPU performance statistics { PCPU core busy (current frequency): 20 % PCPU core busy (maximum frequency): 20 % APERF/MPERF ratio: -1 % Halted time: 187428554783 usec }
PCPU performance statistics { PCPU core busy (current frequency): 65 % PCPU core busy (maximum frequency): 65 % APERF/MPERF ratio: -1 % Halted time: 172642150099 usec }
PCPU performance statistics { PCPU core busy (current frequency): 55 % PCPU core busy (maximum frequency): 55 % APERF/MPERF ratio: -1 % Halted time: 185952136508 usec }
PCPU performance statistics { PCPU core busy (current frequency): 100 % PCPU core busy (maximum frequency): 100 % APERF/MPERF ratio: -1 % Halted time: 172496627680 usec }
PCPU performance statistics { PCPU core busy (current frequency): 100 % PCPU core busy (maximum frequency): 100 % APERF/MPERF ratio: -1 % Halted time: 185168370294 usec }
PCPU performance statistics { PCPU core busy (current frequency): 100 % PCPU core busy (maximum frequency): 100 % APERF/MPERF ratio: -1 % Halted time: 174843857000 usec }
PCPU performance statistics { PCPU core busy (current frequency): 100 % PCPU core busy (maximum frequency): 100 % APERF/MPERF ratio: -1 % Halted time: 188132157292 usec }
PCPU performance statistics { PCPU core busy (current frequency): 32 % PCPU core busy (maximum frequency): 32 % APERF/MPERF ratio: -1 % Halted time: 174204368068 usec }
PCPU performance statistics { PCPU core busy (current frequency): 63 % PCPU core busy (maximum frequency): 63 % APERF/MPERF ratio: -1 % Halted time: 185497348425 usec }
PCPU performance statistics { PCPU core busy (current frequency): 43 % PCPU core busy (maximum frequency): 43 % APERF/MPERF ratio: -1 % Halted time: 172301626490 usec }
PCPU performance statistics { PCPU core busy (current frequency): 44 % PCPU core busy (maximum frequency): 44 % APERF/MPERF ratio: -1 % Halted time: 185424645378 usec }
PCPU performance statistics { PCPU core busy (current frequency): 40 % PCPU core busy (maximum frequency): 40 % APERF/MPERF ratio: -1 % Halted time: 172333434508 usec }
PCPU performance statistics { PCPU core busy (current frequency): 41 % PCPU core busy (maximum frequency): 41 % APERF/MPERF ratio: -1 % Halted time: 184481550403 usec }

If you were looking for an average CPU usage percentage across all the cores, I haven't found that directly yet, but you could compute it:

[root@server:~] vsish -e get $(printf 'power/pcpu/%sperf ' $(vsish -e ls power/pcpu)) | awk '/current/ {cpus+=1;total+=$6} END {print total/cpus "%"}'
78.125%

I've been wanting to know this too and I didn't know about the vsish command, so thanks for pointing it out.
However, I can't find a command for CPU usage. There isn't a top command or ...

There is a top command, but it is "esxtop". See https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-D89E8267-C74A-496F-B58E-19672CAB5A53.html for the latest documentation. Note that although this is documentation for ESXi 7.0, esxtop has been around for over a decade, so you should be able to use it in ESXi 6.7.
neo4j-gremlin-plugin StackOverflowError when using the tree() pattern
I'm trying to perform a tree query over a sample graph. However, it always gives me a StackOverflowError. I followed these two docs:
http://tinkerpop.incubator.apache.org/docs/3.0.1-incubating/#tree-step
https://github.com/tinkerpop/gremlin/wiki/Tree-Pattern

To reproduce the error (Neo4j 2.3.2 with the neo4j-gremlin-plugin, on Ubuntu 14.04):

Create the graph:
GET http://localhost:7474/tp/gremlin/execute?script=g.addV("Label1").property("name", "Mark").as("v1").addV("Label2").property( "street", "myStreet").as("v2").addV("Label3").property( "number", 11).as("v3").addE("r1").from("v1").to("v2").addE("r2").from("v2").to("v3")

Perform the query:
GET http://localhost:7474/tp/gremlin/execute?script=g.V().out("r1").out("r2").tree()

Neo4j log:
2016-03-17 22:13:56.651+0100 INFO Remote interface ready and available at http://localhost:7474/
2016-03-17 22:22:58.731+0100 ERROR The exception contained within MappableContainerException could not be mapped to a response, re-throwing to the HTTP container
java.lang.StackOverflowError
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._doFindSuperInterfaceChain(TypeFactory.java:1065)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperInterfaceChain(TypeFactory.java:1060)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._doFindSuperInterfaceChain(TypeFactory.java:1071)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperInterfaceChain(TypeFactory.java:1060)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperTypeChain(TypeFactory.java:1014)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:285)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:275)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromParamType(TypeFactory.java:862)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:390)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:267)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:326)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolve(TypeBindings.java:212)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings.findType(TypeBindings.java:126)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromVariable(TypeFactory.java:902)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:399)
    [... lines omitted ....]
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:303)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:275)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromParamType(TypeFactory.java:862)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:390)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:267)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:326)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolve(TypeBindings.java:212)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings.findType(TypeBindings.java:126)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromVariable(TypeFactory.java:902)
javax.servlet.ServletException: java.lang.StackOverflowError
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:800)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
    at ch.qos.logback.access.servlet.TeeFilter.doFilter(TeeFilter.java:55)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.neo4j.server.rest.dbms.AuthorizationFilter.doFilter(AuthorizationFilter.java:116)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.neo4j.server.rest.web.CollectUserAgentFilter.doFilter(CollectUserAgentFilter.java:69)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:95)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:497)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:620)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:540)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.StackOverflowError
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._doFindSuperInterfaceChain(TypeFactory.java:1065)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperInterfaceChain(TypeFactory.java:1060)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._doFindSuperInterfaceChain(TypeFactory.java:1071)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperInterfaceChain(TypeFactory.java:1060)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperTypeChain(TypeFactory.java:1014)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:285)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:275)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromParamType(TypeFactory.java:862)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:390)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:267)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:326)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolve(TypeBindings.java:212)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings.findType(TypeBindings.java:126)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromVariable(TypeFactory.java:902)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:399)
    [... lines omitted ....]
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:303)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:275)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromParamType(TypeFactory.java:862)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:390)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:267)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:326)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolve(TypeBindings.java:212)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings.findType(TypeBindings.java:126)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromVariable(TypeFactory.java:902)
2016-03-17 22:22:59.106+0100 WARN java.lang.StackOverflowError
javax.servlet.ServletException: java.lang.StackOverflowError
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:800)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
    at ch.qos.logback.access.servlet.TeeFilter.doFilter(TeeFilter.java:55)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.neo4j.server.rest.dbms.AuthorizationFilter.doFilter(AuthorizationFilter.java:116)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.neo4j.server.rest.web.CollectUserAgentFilter.doFilter(CollectUserAgentFilter.java:69)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:95)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:497)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:620)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:540)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.StackOverflowError
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._doFindSuperInterfaceChain(TypeFactory.java:1065)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperInterfaceChain(TypeFactory.java:1060)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._doFindSuperInterfaceChain(TypeFactory.java:1071)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperInterfaceChain(TypeFactory.java:1060)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._findSuperTypeChain(TypeFactory.java:1014)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:285)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:275)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromParamType(TypeFactory.java:862)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:390)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:267)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolveBindings(TypeBindings.java:326)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings._resolve(TypeBindings.java:212)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeBindings.findType(TypeBindings.java:126)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._fromVariable(TypeFactory.java:902)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:399)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:303)
    at org.apache.tinkerpop.shaded.jackson.databind.type.TypeFactory.findTypeParameters(TypeFactory.java:275)
    [... lines omitted ....]

Can you help me dig into this problem? Thanks a lot
This is likely a TinkerPop issue: TINKERPOP-732 (which was actually a longstanding Jackson issue that only recently got fixed in 2.7.x). It should be fixed for release 3.1.2.
Jenkins process high CPU usage (800%)
I have installed Jenkins version 1.614 on Ubuntu 12.04 with this configuration: 32 GB RAM, 2 TB HDD and 8 CPU cores. Currently Jenkins has 594 jobs added. Under normal conditions, when no job is running, CPU usage is 0%, but when I start any job build, CPU usage suddenly reaches 700-800%. Following are the stats for CPU usage:

top - 06:29:55 up 160 days, 17:43, 3 users, load average: 4.27, 2.54, 2.43
Tasks: 123 total, 2 running, 118 sleeping, 0 stopped, 3 zombie
Cpu0 : 96.7%us, 0.3%sy, 0.0%ni, 3.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 95.7%us, 1.0%sy, 0.0%ni, 3.0%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 96.7%us, 0.7%sy, 0.0%ni, 2.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 92.0%us, 0.7%sy, 0.0%ni, 7.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 96.7%us, 0.3%sy, 0.0%ni, 3.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 96.7%us, 0.7%sy, 0.0%ni, 2.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 97.0%us, 0.0%sy, 0.0%ni, 3.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 32809732k total, 26551884k used, 6257848k free, 481144k buffers
Swap: 16768892k total, 8376k used, 16760516k free, 15064772k cached

3852 jenkins  20  0 12.5g 8.1g  21m S 774 26.0 2095:06 java
  14 root     20  0     0    0    0 S   0  0.0    5:05.81 kworker/2:0
  67 root     20  0     0    0    0 S   0  0.0    4:51.63 kworker/3:1
 399 root     20  0     0    0    0 S   0  0.0    4:12.45 jbd2/md2-8
1251 root     20  0     0    0    0 S   0  0.0    6:02.19 flush-9:2
6754 appster  20  0 1052m 152m 2244 S   0  0.5  317:39.39 statsd
   1 root     20  0 24196 1528  832 S   0  0.0    0:18.22 init

I have also deleted job builds older than 30 days, and the currently running job build is a small one; still, CPU usage is very high.
Neo4j server CPU Spike to 90% once the data folder size increases to 50 MB
Neo4j server CPU spikes up to 90% (and higher) as the node size increases to 50 MB. Initially the CPU is well under 15%, then suddenly spikes to 90% once a certain size limit is reached. I have turned off the logs as well.

Here is my server configuration: c4.xlarge instance, 4 vCPU, 7.5 GB memory.

Here is the Neo4j size configuration:

neostore.nodestore.db.mapped_memory=40M
neostore.relationshipstore.db.mapped_memory=40M
neostore.propertystore.db.mapped_memory=150M
neostore.propertystore.db.strings.mapped_memory=70M
neostore.propertystore.db.arrays.mapped_memory=30M
keep_logical_logs=false
keep_logical_logs=3 days

Below is the heap size and JVM configuration:

wrapper.java.initmemory=800
wrapper.java.maxmemory=800
wrapper.java.additional=-XX:+UseConcMarkSweepGC
wrapper.java.additional=-XX:+CMSClassUnloadingEnabled
wrapper.java.additional=-XX:NewRatio=3
wrapper.java.additional=-d64
wrapper.java.additional=-server
wrapper.java.additional=-Xss2048k
wrapper.java.additional=-XX:+UseParNewGC

Here are the contents of messages.log:

12:45:12.943+0000 --- STARTED diagnostics for NEO_STORE_VERSIONS START ---
12:45:12.944+0000 Store versions:
12:45:12.944+0000 Store versions:
12:45:12.944+0000 NeoStore v0.A.1
12:45:12.944+0000 SchemaStore v0.A.1
12:45:12.944+0000 NodeStore v0.A.1
12:45:12.944+0000 RelationshipStore v0.A.1
12:45:12.944+0000 RelationshipTypeStore v0.A.1
12:45:12.944+0000 LabelTokenStore v0.A.1
12:45:12.944+0000 PropertyStore v0.A.1
12:45:12.944+0000 PropertyIndexStore v0.A.1
12:45:12.944+0000 StringPropertyStore v0.A.1
12:45:12.944+0000 ArrayPropertyStore v0.A.1
12:45:12.944+0000 --- STARTED diagnostics for NEO_STORE_VERSIONS END ---
12:45:12.944+0000 --- STARTED diagnostics for NEO_STORE_ID_USAGE START ---
12:45:12.944+0000 Id usage:
12:45:12.944+0000 Id usage:
12:45:12.945+0000 SchemaStore: used=1 high=0
12:45:12.945+0000 NodeStore: used=17545 high=17544
12:45:12.945+0000 RelationshipStore: used=21105 high=21104
12:45:12.945+0000 RelationshipTypeStore: used=1 high=0
12:45:12.945+0000 LabelTokenStore: used=0 high=-1
12:45:12.945+0000 PropertyStore: used=38777 high=38776
12:45:12.945+0000 PropertyIndexStore: used=43 high=42
12:45:12.945+0000 StringPropertyStore: used=106 high=105
12:45:12.945+0000 ArrayPropertyStore: used=1 high=0
12:45:12.946+0000 --- STARTED diagnostics for NEO_STORE_ID_USAGE END ---
12:45:12.946+0000 --- STARTED diagnostics for PERSISTENCE_WINDOW_POOL_STATS START ---
12:45:12.946+0000 --- STARTED diagnostics for PERSISTENCE_WINDOW_POOL_STATS END ---
12:45:12.946+0000 --- STARTED diagnostics for KernelDiagnostics:StoreFiles START ---
12:45:12.946+0000 Disk space on partition (Total / Free / Free %): 8318783488 / 2957176832 / 35
Storage files: (filename : modification date - size)
12:45:12.947+0000 neostore.relationshiptypestore.db.names: 2015-05-09T12:45:12+0000 - 76.00 B
12:45:12.947+0000 active_tx_log: 2015-05-09T12:43:00+0000 - 11.00 B
12:45:12.947+0000 tm_tx_log.1: 2015-05-09T12:43:00+0000 - 32.73 kB
12:45:12.947+0000 neostore.propertystore.db: 2015-05-09T12:45:12+0000 - 1.52 MB
12:45:12.947+0000 neostore.relationshipstore.db.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.948+0000 tm_tx_log.2: 2015-05-09T12:45:12+0000 - 0.00 B
12:45:12.948+0000 neostore.schemastore.db.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.948+0000 neostore.labeltokenstore.db: 2015-05-09T12:45:12+0000 - 0.00 B
12:45:12.948+0000 neostore.nodestore.db.labels: 2015-05-09T12:45:12+0000 - 68.00 B
12:45:12.948+0000 neostore.propertystore.db.index.keys.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.949+0000 index.db: 2015-05-07T22:38:23+0000 - 479.00 B
12:45:12.949+0000 neostore.propertystore.db.index.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.949+0000 schema:
12:45:12.949+0000 label:
12:45:12.949+0000 lucene:
12:45:12.949+0000 write.lock: 2015-05-09T12:45:12+0000 - 0.00 B
12:45:12.950+0000 segments.gen: 2015-05-08T21:07:41+0000 - 20.00 B
12:45:12.950+0000 segments_1: 2015-05-08T21:07:41+0000 - 32.00 B
12:45:12.950+0000 - Total: 2015-05-09T12:45:12+0000 - 52.00 B
12:45:12.950+0000 - Total: 2015-05-07T22:38:10+0000 - 52.00 B
12:45:12.950+0000 - Total: 2015-05-07T22:38:10+0000 - 52.00 B
12:45:12.951+0000 neostore.nodestore.db: 2015-05-09T12:45:12+0000 - 239.87 kB
12:45:12.951+0000 neostore.relationshiptypestore.db.names.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.951+0000 neostore.nodestore.db.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.951+0000 neostore.nodestore.db.labels.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.952+0000 neostore.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.952+0000 neostore.propertystore.db.strings.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.952+0000 neostore.labeltokenstore.db.names: 2015-05-09T12:45:12+0000 - 38.00 B
12:45:12.952+0000 index:
12:45:12.952+0000 lucene:
12:45:12.953+0000 node:
12:45:12.953+0000 postNode:
12:45:12.953+0000 segments.gen: 2015-05-09T12:44:36+0000 - 20.00 B
12:45:12.953+0000 _x74.nrm: 2015-05-09T12:44:36+0000 - 3.24 kB
12:45:12.954+0000 _x74.tis: 2015-05-09T12:44:36+0000 - 178.98 kB
12:45:12.954+0000 segments_3: 2015-05-09T12:44:36+0000 - 299.00 B
12:45:12.954+0000 _x74.fdx: 2015-05-09T12:44:36+0000 - 25.88 kB
12:45:12.954+0000 _x74.fdt: 2015-05-09T12:44:36+0000 - 451.29 kB
12:45:12.954+0000 _x74.tii: 2015-05-09T12:44:36+0000 - 2.34 kB
12:45:12.955+0000 _x74.frq: 2015-05-09T12:44:36+0000 - 68.62 kB
12:45:12.955+0000 _x74.fnm: 2015-05-09T12:44:36+0000 - 322.00 B
12:45:12.955+0000 _x74.prx: 2015-05-09T12:44:36+0000 - 44.24 kB
12:45:12.955+0000 - Total: 2015-05-09T12:44:36+0000 - 775.21 kB
12:45:12.955+0000 eventNode:
12:45:12.956+0000 segments.gen: 2015-05-09T12:36:03+0000 - 20.00 B
12:45:12.956+0000 _d8.cfs: 2015-05-09T12:36:03+0000 - 690.00 B
12:45:12.956+0000 _cr.fdt: 2015-05-09T11:20:27+0000 - 4.19 kB
12:45:12.956+0000 _cr.frq: 2015-05-09T11:20:27+0000 - 96.00 B
12:45:12.956+0000 _cr.fdx: 2015-05-09T11:20:27+0000 - 108.00 B
12:45:12.957+0000 _cz.cfs: 2015-05-09T11:20:27+0000 - 716.00 B
12:45:12.957+0000 _cw.cfs: 2015-05-09T11:20:27+0000 - 716.00 B
12:45:12.957+0000 _cr_1.del: 2015-05-09T12:36:03+0000 - 32.00 B
12:45:12.957+0000 _d2.cfs: 2015-05-09T11:20:27+0000 - 716.00 B
12:45:12.957+0000 _cr.prx: 2015-05-09T11:20:27+0000 - 456.00 B
12:45:12.958+0000 _cr.tis: 2015-05-09T11:20:27+0000 - 738.00 B
12:45:12.958+0000 _d5.cfs: 2015-05-09T11:20:27+0000 - 692.00 B
12:45:12.958+0000 _cr.tii: 2015-05-09T11:20:27+0000 - 35.00 B
12:45:12.958+0000 _ct.cfs: 2015-05-09T11:20:27+0000 - 700.00 B
12:45:12.958+0000 _cr.nrm: 2015-05-09T11:20:27+0000 - 17.00 B
12:45:12.959+0000 _cr.fnm: 2015-05-09T11:20:27+0000 - 65.00 B
12:45:12.959+0000 segments_1: 2015-05-09T12:36:03+0000 - 1.63 kB
12:45:12.959+0000 - Total: 2015-05-09T12:36:03+0000 - 11.48 kB
12:45:12.959+0000 customerNode:
12:45:12.959+0000 segments.gen: 2015-05-09T12:44:36+0000 - 20.00 B
12:45:12.960+0000 segments_2: 2015-05-09T12:44:36+0000 - 1.19 kB
12:45:12.960+0000 _c4y.frq: 2015-05-09T12:43:53+0000 - 66.83 kB
12:45:12.960+0000 _c4y.fdx: 2015-05-09T12:43:53+0000 - 110.61 kB
12:45:12.960+0000 _c4y.tii: 2015-05-09T12:43:53+0000 - 6.34 kB
12:45:12.960+0000 _c4y.fnm: 2015-05-09T12:43:53+0000 - 24.00 B
12:45:12.961+0000 _c4y.prx: 2015-05-09T12:43:53+0000 - 27.65 kB
12:45:12.961+0000 _c4y.tis: 2015-05-09T12:43:53+0000 - 640.17 kB
12:45:12.961+0000 _c4y.nrm: 2015-05-09T12:43:53+0000 - 13.83 kB
12:45:12.961+0000 _c4y.fdt: 2015-05-09T12:43:53+0000 - 657.14 kB
12:45:12.961+0000 _c50.cfs: 2015-05-09T12:44:08+0000 - 319.00 B
12:45:12.962+0000 _c4z.cfs: 2015-05-09T12:44:07+0000 - 319.00 B
12:45:12.962+0000 _c51.cfs: 2015-05-09T12:44:15+0000 - 319.00 B
12:45:12.962+0000 _c52.cfs: 2015-05-09T12:44:36+0000 - 319.00 B
12:45:12.962+0000 - Total: 2015-05-09T12:44:36+0000 - 1.49 MB
12:45:12.962+0000 node_auto_index:
12:45:12.963+0000 segments.gen: 2015-05-09T12:44:36+0000 - 20.00 B
12:45:12.963+0000 segments_2: 2015-05-09T12:44:36+0000 - 990.00 B
12:45:12.963+0000 _16au.prx: 2015-05-09T12:44:09+0000 - 58.00 kB
12:45:12.963+0000 _16au.tis: 2015-05-09T12:44:09+0000 - 755.05 kB
12:45:12.964+0000 _16au_1.del: 2015-05-09T12:44:36+0000 - 39.00 B
12:45:12.964+0000 _16av.cfs: 2015-05-09T12:44:16+0000 - 865.00 B
12:45:12.964+0000 _16au.fdt: 2015-05-09T12:44:09+0000 - 931.55 kB
12:45:12.964+0000 _16ax.cfs: 2015-05-09T12:44:23+0000 - 655.00 B
12:45:12.964+0000 _16au.fdx: 2015-05-09T12:44:09+0000 - 136.88 kB
12:45:12.965+0000 _16au.frq: 2015-05-09T12:44:09+0000 - 129.10 kB
12:45:12.965+0000 _16au.tii: 2015-05-09T12:44:09+0000 - 7.82 kB
12:45:12.965+0000 _16aw.cfs: 2015-05-09T12:44:20+0000 - 826.00 B
12:45:12.965+0000 _16au.nrm: 2015-05-09T12:44:09+0000 - 17.11 kB
12:45:12.965+0000 _16au.fnm: 2015-05-09T12:44:09+0000 - 360.00 B
12:45:12.966+0000 - Total: 2015-05-09T12:44:36+0000 - 1.99 MB
12:45:12.966+0000 movieNode:
12:45:12.966+0000 _wv.tis: 2015-05-09T11:20:25+0000 - 1.01 kB
12:45:12.966+0000 segments.gen: 2015-05-09T12:36:03+0000 - 20.00 B
12:45:12.966+0000 _wx.cfs: 2015-05-09T11:20:25+0000 - 884.00 B
12:45:12.967+0000 _wv_1.del: 2015-05-09T12:36:03+0000 - 35.00 B
12:45:12.967+0000 _wv.fdx: 2015-05-09T11:20:25+0000 - 268.00 B
12:45:12.967+0000 _wv.fnm: 2015-05-09T11:20:25+0000 - 43.00 B
12:45:12.967+0000 _wv.frq: 2015-05-09T11:20:25+0000 - 306.00 B
12:45:12.967+0000 _xi.cfs: 2015-05-09T11:20:31+0000 - 560.00 B
12:45:12.968+0000 _wv.tii: 2015-05-09T11:20:25+0000 - 35.00 B
12:45:12.968+0000 _x9.cfs: 2015-05-09T11:20:25+0000 - 330.00 B
12:45:12.968+0000 _xf.cfs: 2015-05-09T11:20:26+0000 - 824.00 B
12:45:12.968+0000 _wv.nrm: 2015-05-09T11:20:25+0000 - 37.00 B
12:45:12.968+0000 _xc.cfs: 2015-05-09T11:20:25+0000 - 704.00 B
12:45:12.969+0000 _x3.cfs: 2015-05-09T11:20:25+0000 - 608.00 B
12:45:12.969+0000 _wv.prx: 2015-05-09T11:20:25+0000 - 1.12 kB
12:45:12.969+0000 _x0.cfs: 2015-05-09T11:20:25+0000 - 764.00 B
12:45:12.969+0000 _x6.cfs: 2015-05-09T11:20:25+0000 - 678.00 B
12:45:12.969+0000 _wv.fdt: 2015-05-09T11:20:25+0000 - 9.86 kB
12:45:12.969+0000 segments_1: 2015-05-09T12:36:03+0000 - 2.07 kB
12:45:12.970+0000 - Total: 2015-05-09T12:44:36+0000 - 20.02 kB
12:45:12.970+0000 - Total: 2015-05-07T22:38:23+0000 - 4.27 MB
12:45:12.970+0000 - Total: 2015-05-07T22:38:17+0000 - 4.27 MB
12:45:12.970+0000 lucene.log.active: 2015-05-09T12:45:12+0000 - 4.00 B
12:45:12.970+0000 lucene-store.db: 2015-05-09T12:44:36+0000 - 40.00 B
12:45:12.971+0000 lucene.log.1: 2015-05-09T12:45:12+0000 - 16.00 B
12:45:12.971+0000 - Total: 2015-05-09T12:45:12+0000 - 4.27 MB
12:45:12.971+0000 neostore.labeltokenstore.db.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.971+0000 lock: 2015-05-09T12:45:12+0000 - 0.00 B
12:45:12.971+0000 neostore.propertystore.db.index.keys: 2015-05-09T12:45:12+0000 - 1.63 kB
12:45:12.972+0000 neostore.relationshiptypestore.db.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.972+0000 neostore.propertystore.db.index: 2015-05-09T12:45:12+0000 - 387.00 B
12:45:12.972+0000 neostore.propertystore.db.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.972+0000 store_lock: 2015-05-07T22:38:10+0000 - 0.00 B
12:45:12.972+0000 neostore.labeltokenstore.db.names.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.973+0000 messages.log: 2015-05-09T12:45:12+0000 - 159.00 kB
12:45:12.973+0000 nioneo_logical.log.v2: 2015-05-09T12:44:36+0000 - 38.18 kB
12:45:12.973+0000 neostore.propertystore.db.strings: 2015-05-09T12:45:12+0000 - 13.25 kB
12:45:12.973+0000 neostore.relationshipstore.db: 2015-05-09T12:45:12+0000 - 680.14 kB
12:45:12.973+0000 neostore.relationshiptypestore.db: 2015-05-09T12:45:12+0000 - 5.00 B
12:45:12.974+0000 neostore.schemastore.db: 2015-05-09T12:45:12+0000 - 64.00 B
12:45:12.974+0000 nioneo_logical.log.1: 2015-05-09T12:45:12+0000 - 16.00 B
12:45:12.974+0000 neostore.propertystore.db.arrays: 2015-05-09T12:45:12+0000 - 128.00 B
12:45:12.974+0000 neostore: 2015-05-09T12:45:12+0000 - 63.00 B
12:45:12.974+0000 nioneo_logical.log.active: 2015-05-09T12:45:12+0000 - 4.00 B
12:45:12.975+0000 neostore.propertystore.db.arrays.id: 2015-05-09T12:45:12+0000 - 9.00 B
12:45:12.975+0000 --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
12:45:12.996+0000 INFO [o.n.k.EmbeddedGraphDatabase]: GC Monitor started.
12:45:13.007+0000 INFO [o.n.k.EmbeddedGraphDatabase]: Database is now ready
12:45:13.007+0000 --- SERVER STARTED START ---
12:45:13.470+0000 --- STARTED diagnostics for org.neo4j.server.configuration.Configurator START ---
12:45:13.471+0000 Server configuration:
12:45:13.471+0000 org.neo4j.server.database.location = data/graph.db
12:45:13.471+0000 org.neo4j.server.webserver.address = 0.0.0.0
12:45:13.471+0000 org.neo4j.server.webserver.port = 80
12:45:13.472+0000 org.neo4j.server.webserver.https.enabled = true
12:45:13.472+0000 org.neo4j.server.webserver.https.port = 7473
12:45:13.472+0000 org.neo4j.server.webserver.https.cert.location = conf/ssl/snakeoil.cert
12:45:13.472+0000 org.neo4j.server.webserver.https.key.location = conf/ssl/snakeoil.key
12:45:13.472+0000 org.neo4j.server.webserver.https.keystore.location = data/keystore
12:45:13.472+0000 org.neo4j.server.webadmin.rrdb.location = data/rrd
12:45:13.472+0000 org.neo4j.server.db.tuning.properties = conf/neo4j.properties
12:45:13.472+0000 org.neo4j.server.manage.console_engines = shell
12:45:13.472+0000 org.neo4j.server.http.log.enabled = false
12:45:13.472+0000 org.neo4j.server.http.log.config = conf/neo4j-http-logging.xml
12:45:13.472+0000 org.neo4j.server.webadmin.management.uri = /db/manage/
12:45:13.472+0000 org.neo4j.server.webadmin.data.uri = /db/data/
12:45:13.472+0000 --- STARTED diagnostics for org.neo4j.server.configuration.Configurator END ---
12:45:13.475+0000 Mounted discovery module (org.neo4j.server.rest.discovery) at: /
12:45:13.480+0000 Mounted REST API at: /db/data/
12:45:13.481+0000 Mounted management API at: /db/manage/
12:45:13.482+0000 Mounted webadmin at: /webadmin
12:45:13.482+0000 Mounted Neo4j Browser at: /browser
12:45:14.510+0000 Server started on: http://0.0.0.0/
12:45:14.510+0000 --- SERVER STARTED END ---
15:16:33.125+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 108ms [total block time: 0.108s]
16:11:13.761+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 643ms [total block time: 0.751s]
16:50:04.711+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 173ms [total block time: 0.924s]
17:23:31.160+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 192ms [total block time: 1.116s]
19:20:57.115+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 203ms [total block time: 1.319s]
20:48:50.908+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 663ms [total block time: 1.982s]
22:33:28.689+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 105ms [total block time: 2.087s]
23:35:21.103+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 157ms [total block time: 2.244s]
23:35:47.015+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 164ms [total block time: 2.408s]
23:36:11.302+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 536ms [total block time: 2.944s]
23:54:48.796+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 232ms [total block time: 3.176s]
23:55:32.050+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 185ms [total block time: 3.361s]

Would appreciate any guidance on this issue. Thanks!
How to check the footprint of a running task in VxWorks?
I would like to know of any commands or utilities there are to check the runtime footprint of an application in VxWorks (target board). I'd also like to know how to find the CPU usage of the target board.
Not quite sure what you mean by "runtime footprint", but the ti command will show you the stack usage for a particular task:

-> ti tNet0

  NAME        ENTRY         TID      PRI  STATUS      PC        SP       ERRNO  DELAY
  ----------  ------------  -------- ---  ----------  --------  -------- -----  -----
  tNet0       ipcomNetTask  1040fad0  50  PEND        1014c42b  1078ff10     0      0

full task name : tNet0
task entry     : ipcomNetTask
process        : kernel
options        : 0x9007
VX_SUPERVISOR_MODE VX_UNBREAKABLE VX_DEALLOC_STACK VX_DEALLOC_TCB VX_DEALLOC_EXC_STACK

    STACK      BASE      END       SP        SIZE   HIGH   MARGIN
    ---------  --------  --------  --------  -----  -----  ------
    execution  10790000  10780000  1078ff10  65536   2404   63132
    exception  10428fb8  10426030            12168    360   11808

For CPU usage, enable INCLUDE_SPY in your BSP and execute spy to start the display of the CPU usage:

-> spy

NAME          ENTRY         TID         PRI   total % (ticks)  delta % (ticks)
------------  ------------  ----------  ---   ---------------  ---------------
tJobTask      0x10098410    0x103eeb00    0     0% (       0)    0% (       0)
tExcTask      0x10097880    0x101d6560    0     0% (       0)    0% (       0)
tLogTask      logTask       0x103efa58    0     0% (       0)    0% (       0)
tShell0       shellTask     0x1058c5c8    1     0% (       2)    0% (       2)
tWdbTask      0x10141e80    0x104ae950    3     0% (       0)    0% (       0)
tSpyTask      spyComTask    0x1042ecb8    5     0% (       8)    0% (       2)
ipcom_tickd   0x10060090    0x1058fb50   20     0% (       0)    0% (       0)
tVxdbgTask    0x10051810    0x104ae658   25     0% (       0)    0% (       0)
tAioIoTask1   aioIoTask     0x1040df78   50     0% (       0)    0% (       0)
tAioIoTask0   aioIoTask     0x1040e3a0   50     0% (       0)    0% (       0)
tNet0         ipcomNetTask  0x1040fad0   50     0% (       0)    0% (       0)
ipcom_syslog  0x10055190    0x1042e5a8   50     0% (       0)    0% (       0)
tNetConf      0x100887e0    0x1044f8b8   50     0% (       0)    0% (       0)
tAioWait      aioWaitTask   0x1040aa40   51     0% (       0)    0% (       0)
KERNEL                                          0% (       0)    0% (       0)
INTERRUPT                                       0% (       0)    0% (       0)
IDLE                                           99% (    2495)   99% (     498)
TOTAL                                          99% (    2505)   99% (     502)

-> spyStop