How do I read a binary image file in Julia?
imread(file) gives the following error:
ERROR: type: Gray: in T, expected T<:Union(FixedPoint,FloatingPoint), got Type{Bool}
in imread at C:\Users\Harsh\.julia\v0.3\Images\src\io.jl:259
in imread at C:\Users\Harsh\.julia\v0.3\Images\src\io.jl:113
Output of versioninfo(true):
julia> versioninfo(true)
Julia Version 0.3.0
Commit 7681878 (2014-08-20 20:43 UTC)
Platform Info:
System: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i7-3632QM CPU @ 2.20GHz
WORD_SIZE: 64
Microsoft Windows [Version 6.3.9600]
uname: MINGW32_NT-6.2 1.0.12(0.46/3/2) 2012-07-05 14:56 i686 unknown
Memory: 7.948513031005859 GB (4763.9375 MB free)
Uptime: 111087.5328943 sec
Load Avg: 0.0 0.0 0.0
Intel(R) Core(TM) i7-3632QM CPU @ 2.20GHz:
speed user nice sys idle irq ticks
#1 2195 MHz 13078796 0 2444890 22142265 348375 ticks
#2 2195 MHz 10343718 0 1964375 25357578 165765 ticks
#3 2195 MHz 15137796 0 1628015 20899875 89390 ticks
#4 2195 MHz 15702750 0 1140187 20822718 73968 ticks
#5 2195 MHz 12287718 0 1952390 23425562 52781 ticks
#6 2195 MHz 9467671 0 1546562 26651421 49406 ticks
#7 2195 MHz 13431750 0 1668484 22565437 36375 ticks
#8 2195 MHz 12820796 0 1473484 23371359 30500 ticks
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
Environment:
ANT_HOME = C:\PROGRA~1\APACHE~1.2
CUDA_BIN_PATH = C:\CUDA\bin64
CUDA_INC_PATH = C:\CUDA\include
CUDA_LIB_PATH = C:\CUDA\lib64
HOMEDRIVE = C:
HOMEPATH = \Users\Harsh
JAVA_HOME = C:\Progra~1\Java\jre6
PATHEXT = .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.RB;.RBW
Package Directory: C:\Users\Harsh\.julia\v0.3
3 required packages:
- DataFrames 0.5.7
- DecisionTree 0.3.4
- Images 0.4.10
14 additional packages:
- ArrayViews 0.4.6
- BinDeps 0.3.5
- Color 0.3.8
- DataArrays 0.2.1
- FixedPointNumbers 0.0.4
- GZip 0.2.13
- Reexport 0.0.1
- SHA 0.0.3
- SIUnits 0.0.2
- SortingAlgorithms 0.0.1
- StatsBase 0.6.5
- TexExtensions 0.0.2
- URIParser 0.0.2
- Zlib 0.1.7
The file is this binary file:
Image Link
The original RGB image, which was processed to obtain the file above, is this; it does not give the error:
This was a bug, thanks for reporting it. It should be fixed if you do a Pkg.update().
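After the update, reading the image should work; a minimal sketch for Julia 0.3 with Images.jl (the filename is a placeholder):

```julia
# Minimal sketch: pull the fixed Images.jl, then re-read the binary (Bool) image.
# "binary.png" is a placeholder path for the file linked above.
Pkg.update()
using Images
img = imread("binary.png")
```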
We are experiencing performance problems with Stardog requests (about 500,000 ms minimum to get an answer). We followed the Debian-based systems installation described in the Stardog documentation and have a stardog service installed in our Ubuntu VM.
Azure machine: Standard D4s v3 (4 virtual processors, 16 GB memory)
Total amount of memory of the VM = 16 GiB
We tested several JVM memory settings:
-Xms4g -Xmx4g -XX:MaxDirectMemorySize=8g
-Xms8g -Xmx8g -XX:MaxDirectMemorySize=8g
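For reference, Stardog reads these JVM flags from the STARDOG_SERVER_JAVA_ARGS environment variable; a sketch of the second attempt (with a systemd service this would typically go in the unit's environment file):

```shell
# Sketch: pass JVM memory flags to the Stardog server via its documented
# environment variable; sizes mirror the second configuration tried above.
export STARDOG_SERVER_JAVA_ARGS="-Xms8g -Xmx8g -XX:MaxDirectMemorySize=8g"
```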
We also tried upgrading to a larger VM, without success:
Azure: Standard D8s v3 (8 virtual processors, 32 GB memory)
Running systemctl status stardog on the machine with 32 GiB of memory, we get:
stardog.service - Stardog Knowledge Graph
Loaded: loaded (/etc/systemd/system/stardog.service; enabled; vendor prese>
Active: active (running) since Tue 2023-01-17 15:41:40 UTC; 1min 35s ago
Docs: https://www.stardog.com/
Process: 797 ExecStart=/opt/stardog/stardog-server.sh start (code=exited, s>
Main PID: 969 (java)
Tasks: 76 (limit: 38516)
Memory: 1.9G
CGroup: /system.slice/stardog.service
└─969 java -Dstardog.home=/var/opt/stardog/ -Xmx8g -Xms8g XX:MaxD
Output of stardog-admin server status:
Access Log Enabled : true
Access Log Type : text
Audit Log Enabled : true
Audit Log Type : text
Backup Storage Directory : .backup
CPU Load : 1.88 %
Connection Timeout : 10m
Export Storage Directory : .exports
Memory Heap : 305M (Max: 8.0G)
Memory Mode : DEFAULT{Starrocks.block_cache=20, Starrocks.dict_block_cache=10, Native.starrocks=70, Heap.dict_value=50, Starrocks.txn_block_cache=5, Heap.dict_index=50, Starrocks.untracked_memory=20, Starrocks.memtable=40, Starrocks.buffer_pool=5, Native.query=30}
Memory Query Blocks : 0B (Max: 5.7G)
Memory RSS : 4.3G
Named Graph Security : false
Platform Arch : amd64
Platform OS : Linux 5.15.0-1031-azure, Java 1.8.0_352
Query All Graphs : false
Query Timeout : 1h
Security Disabled : false
Stardog Home : /var/opt/stardog
Stardog Version : 8.1.1
Strict Parsing : true
Uptime : 2 hours 18 minutes 51 seconds
Given that only the Stardog server is installed on this VM, with an 8 GB JVM heap and 20 GB of direct memory for Java, is it normal to see 1.9 GB of memory used when idle (no query in progress)
and 4.1 GB while a query is in progress?
"databases.xxxx.queries.latency": {
"count": 7,
"max": 471.44218324400003,
"mean": 0.049260736982859085,
"min": 0.031328932000000004,
"p50": 0.048930366,
"p75": 0.048930366,
"p95": 0.048930366,
"p98": 0.048930366,
"p99": 0.048930366,
"p999": 0.048930366,
"stddev": 0.3961819852037625,
"m15_rate": 0.0016325388459502614,
"m1_rate": 0.0000015369791915358426,
"m5_rate": 0.0006317127755974434,
"mean_rate": 0.0032760240366080024,
"duration_units": "seconds",
"rate_units": "calls/second"
}
Of all your queries, the slowest took about 8 minutes to complete while the others completed very quickly. It is best to identify the slow query and profile it.
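The 8-minute figure can be read straight off the metrics above; a small sketch (the JSON is abbreviated to the relevant fields):

```python
import json

# Abbreviated copy of the latency metrics above: only the relevant fields.
metrics = json.loads("""
{
  "count": 7,
  "max": 471.44218324400003,
  "mean": 0.049260736982859085,
  "duration_units": "seconds"
}
""")

# One of the 7 queries dominates: max latency is ~471 s (~8 minutes),
# while the typical latency stays around 50 ms.
print(f"slowest query: {metrics['max'] / 60:.1f} min")
```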
There are very similar questions to this one, but all of them are solved by disabling some other Wi-Fi source with modprobe and then resetting rfkill.
In my case:
artixlinux:[rail]:/etc/modprobe.d$ rfkill list all
0: phy0: Wireless LAN
Soft blocked: no
Hard blocked: yes
and with nmcli:
wlan0: unavailable
"Qualcomm Atheros AR9485"
wifi (ath9k), 5A:9D:61:C0:BB:F0, sw disabled, hw, mtu 1500
I've already tried modprobe ath9k, but that does nothing.
Hardware:
[System]
OS: Artix Linux 20220123 n/a
Arch: x86_64
Kernel: 5.18.0-zen1-1-zen
Desktop: KDE
Display Server: x11
[CPU]
Vendor: GenuineIntel
Model: Intel(R) Core(TM) i3-3227U CPU @ 1.90GHz
Physical cores: 2
Logical cores: 4
[Memory]
RAM: 3.7 GB
Swap: 0.0 GB
[Graphics]
Vendor: Intel
OpenGL Renderer: Mesa Intel(R) HD Graphics 4000 (IVB GT2)
OpenGL Version: 4.2 (Compatibility Profile) Mesa 22.2.0-devel (git-3e679219a1)
OpenGL Core: 4.2 (Core Profile) Mesa 22.2.0-devel (git-3e679219a1)
OpenGL ES: OpenGL ES 3.0 Mesa 22.2.0-devel (git-3e679219a1)
Vulkan: Supported
See https://askubuntu.com/a/98719
"Hard blocked" cannot be changed by software; look for a Wi-Fi toggle on your keyboard or the edges of the laptop. The device can also be hard blocked if it is disabled in the BIOS.
And:
https://askubuntu.com/questions/98702/how-to-unblock-something-listed-in-rfkill#comment618926_98719
FYI, a hard block also happens when the Wi-Fi is disabled in the BIOS.
My application is a compute-intensive task (i.e. video encoding). When it runs on Linux kernel 4.9 (Ubuntu 16.04), the CPU usage is 3300%. But when it runs on Linux kernel 5.4 (Ubuntu 20.04), the CPU usage is just 2850%. I can confirm the processes do the same job.
So I wonder whether the Linux kernel introduced some CPU-scheduling optimization or related work between 4.9 and 5.4. Could you give any advice on how to investigate the reason?
I am not sure whether the version of glibc has an effect; for your information, the glibc version is 2.23 on Linux kernel 4.9 and 2.31 on Linux kernel 5.4.
CPU Info:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.000
BogoMIPS: 4401.69
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 14080K
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Output of perf stat on Linux Kernel 4.9
Performance counter stats for process id '32504':
3146297.833447 cpu-clock (msec) # 32.906 CPUs utilized
1,718,778 context-switches # 0.546 K/sec
574,717 cpu-migrations # 0.183 K/sec
2,796,706 page-faults # 0.889 K/sec
6,193,409,215,015 cycles # 1.968 GHz (30.76%)
6,948,575,328,419 instructions # 1.12 insn per cycle (38.47%)
540,538,530,660 branches # 171.801 M/sec (38.47%)
33,087,740,169 branch-misses # 6.12% of all branches (38.50%)
1,966,141,393,632 L1-dcache-loads # 624.906 M/sec (38.49%)
184,477,765,497 L1-dcache-load-misses # 9.38% of all L1-dcache hits (38.47%)
8,324,742,443 LLC-loads # 2.646 M/sec (30.78%)
3,835,471,095 LLC-load-misses # 92.15% of all LL-cache hits (30.76%)
<not supported> L1-icache-loads
187,604,831,388 L1-icache-load-misses (30.78%)
1,965,198,121,190 dTLB-loads # 624.607 M/sec (30.81%)
438,496,889 dTLB-load-misses # 0.02% of all dTLB cache hits (30.79%)
7,139,892,384 iTLB-loads # 2.269 M/sec (30.79%)
260,660,265 iTLB-load-misses # 3.65% of all iTLB cache hits (30.77%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses
95.615072142 seconds time elapsed
Output of perf stat on Linux Kernel 5.4
Performance counter stats for process id '3355137':
2,718,192.32 msec cpu-clock # 29.184 CPUs utilized
1,719,910 context-switches # 0.633 K/sec
448,685 cpu-migrations # 0.165 K/sec
3,884,586 page-faults # 0.001 M/sec
5,927,930,305,757 cycles # 2.181 GHz (30.77%)
6,848,723,995,972 instructions # 1.16 insn per cycle (38.47%)
536,856,379,853 branches # 197.505 M/sec (38.47%)
32,245,288,271 branch-misses # 6.01% of all branches (38.48%)
1,935,640,517,821 L1-dcache-loads # 712.106 M/sec (38.47%)
177,978,528,204 L1-dcache-load-misses # 9.19% of all L1-dcache hits (38.49%)
8,119,842,688 LLC-loads # 2.987 M/sec (30.77%)
3,625,986,107 LLC-load-misses # 44.66% of all LL-cache hits (30.75%)
<not supported> L1-icache-loads
184,001,558,310 L1-icache-load-misses (30.76%)
1,934,701,161,746 dTLB-loads # 711.760 M/sec (30.74%)
676,618,636 dTLB-load-misses # 0.03% of all dTLB cache hits (30.76%)
6,275,901,454 iTLB-loads # 2.309 M/sec (30.78%)
391,706,425 iTLB-load-misses # 6.24% of all iTLB cache hits (30.78%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses
93.139551411 seconds time elapsed
UPDATE:
It is confirmed that the performance gain comes from Linux kernel 5.4, because performance on Linux kernel 5.3 is the same as on Linux kernel 4.9.
It is confirmed that the performance gain has no relation to libc, because on Linux kernel 5.10, whose libc is 2.23, performance is the same as on Linux kernel 5.4, whose libc is 2.31.
It seems the performance gain comes from this fix:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=de53fd7aedb100f03e5d2231cfce0e4993282425
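A quick back-of-the-envelope check of the two perf runs is consistent with this: the 5.4 run retires slightly more instructions per cycle and finishes sooner despite using fewer CPUs (all numbers copied from the perf outputs above):

```python
# Instruction counts, cycle counts, and elapsed times copied from the
# two `perf stat` outputs above.
runs = {
    "4.9": {"instructions": 6_948_575_328_419, "cycles": 6_193_409_215_015,
            "elapsed_s": 95.615072142},
    "5.4": {"instructions": 6_848_723_995_972, "cycles": 5_927_930_305_757,
            "elapsed_s": 93.139551411},
}

for kernel, r in runs.items():
    ipc = r["instructions"] / r["cycles"]            # instructions per cycle
    ginsn_s = r["instructions"] / r["elapsed_s"] / 1e9  # effective throughput
    print(f"{kernel}: IPC={ipc:.2f}, {ginsn_s:.1f} Ginsn/s")
```

So lower CPU usage on 5.4 does not mean less work done: the same workload completes slightly faster with fewer busy CPUs.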
I just flashed an ESP32 with a new custom build for Lua files, built from:
https://nodemcu-build.com/ with dev-esp32<>BETA
And it seems to be working: I can connect with ESPlorer, and the firmware is installed, as seen in the startup code.
I'm trying to toggle the BUILTIN_LED, but I get an error when using the GPIO commands.
It worked out of the box with simple Arduino code; did I mess something up, or why is this not working?
Here is the log with the error response from the gpio.mode command:
Communication with MCU..Got answer! Communication with MCU established.
AutoDetect firmware...
Can't autodetect firmware, because proper answer not received (may be unknown firmware).
Please, reset module or continue.
ts Jun 8 2016 00:22:57
rst:0x1 (POWERON_RESET),boot:0x12 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:2
load:0x3fff0018,len:4
load:0x3fff001c,len:6716
load:0x40078000,len:11116
ho 0 tail 12 room 4
load:0x40080400,len:5940
entry 0x40080700
I (69) boot: Chip Revision: 1
I (69) boot_comm: chip revision: 1, min. bootloader chip revision: 0
I (41) boot: ESP-IDF v3.3.1 2nd stage bootloader
I (41) boot: compile time 17:08:58
I (41) boot: Enabling RNG early entropy source...
I (46) boot: SPI Speed : 40MHz
I (50) boot: SPI Mode : DIO
I (54) boot: SPI Flash Size : 4MB
I (58) boot: Partition Table:
I (61) boot: ## Label Usage Type ST Offset Length
I (69) boot: 0 nvs WiFi data 01 02 00009000 00006000
I (76) boot: 1 phy_init RF data 01 01 0000f000 00001000
I (84) boot: 2 factory factory app 00 00 00010000 00180000
I (91) boot: 3 lfs unknown c2 01 00190000 00010000
I (99) boot: 4 nodemcuspiffs unknown c2 00 001a0000 00070000
I (106) boot: End of partition table
I (110) boot_comm: chip revision: 1, min. application chip revision: 0
I (118) esp_image: segment 0: paddr=0x00010020 vaddr=0x3f400020 size=0x1bdf4 (114164) map
I (165) esp_image: segment 1: paddr=0x0002be1c vaddr=0x3ffb0000 size=0x02fcc ( 12236) load
I (170) esp_image: segment 2: paddr=0x0002edf0 vaddr=0x40080000 size=0x00400 ( 1024) load
I (172) esp_image: segment 3: paddr=0x0002f1f8 vaddr=0x40080400 size=0x00e18 ( 3608) load
I (182) esp_image: segment 4: paddr=0x00030018 vaddr=0x400d0018 size=0x84a0c (543244) map
I (373) esp_image: segment 5: paddr=0x000b4a2c vaddr=0x40081218 size=0x0f074 ( 61556) load
I (398) esp_image: segment 6: paddr=0x000c3aa8 vaddr=0x400c0000 size=0x00064 ( 100) load
I (408) boot: Loaded app from partition at offset 0x10000
I (408) boot: Disabling RNG early entropy source...
I (409) cpu_start: Pro cpu up.
I (413) cpu_start: Application information:
I (418) cpu_start: Project name: NodeMCU
I (423) cpu_start: App version: a8b46af-dirty
I (428) cpu_start: Compile time: Apr 15 2020 17:09:01
I (434) cpu_start: ELF file SHA256: af54a39e6c0fb11d...
I (440) cpu_start: ESP-IDF: v3.3.1
I (445) cpu_start: Starting app cpu, entry point is 0x40081244
I (0) cpu_start: App cpu up.
I (456) heap_init: Initializing. RAM available for dynamic allocation:
I (462) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (469) heap_init: At 3FFB9260 len 00026DA0 (155 KiB): DRAM
I (475) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (481) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (488) heap_init: At 4009028C len 0000FD74 (63 KiB): IRAM
I (494) cpu_start: Pro cpu start user code
I (176) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
Mounting flash filesystem...
No LFS image loaded
I (337) wifi: wifi driver task: 3ffc26a8, prio:23, stack:3584, core=0
I (337) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (337) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (367) wifi: wifi firmware version: ac331d7
I (367) wifi: config NVS flash: enabled
I (367) wifi: config nano formating: disabled
I (367) wifi: Init dynamic tx buffer num: 32
I (377) wifi: Init data frame dynamic rx buffer num: 32
I (377) wifi: Init management frame dynamic rx buffer num: 32
I (387) wifi: Init management short buffer num: 32
I (387) wifi: Init static rx buffer size: 1600
I (397) wifi: Init static rx buffer num: 10
I (397) wifi: Init dynamic rx buffer num: 32
NodeMCU ESP32 built on nodemcu-build.com provided by frightanic.com
branch: dev-esp32
commit: a8b46af905b759506e9fd5eabdadbd8beb83e7c2
SSL: false
modules: file,gpio,net,node,tmr,wifi
build 2020-04-15-17-07-08 powered by Lua 5.1.4 on ESP-IDF v3.3.1 on SDK IDF
lua: cannot open init.lua
>
gpio.write(2, LOW)
gpio.write(2, LOW)
stdin:1: bad argument #2 to 'write' (number expected, got nil)
stack traceback:
[C]: in function 'write'
stdin:1: in main chunk
>
As per https://nodemcu.readthedocs.io/en/dev-esp32/modules/gpio/#gpiowrite (and the error message) the second parameter has to be an int; 1 or 0.
Your code
gpio.write(2, LOW)
appears to send nil. Your LOW is not initialized anywhere, is it? What should definitely work fine is this:
LOW = 0
gpio.write(2, LOW)
(or gpio.write(2, 0) of course)
bad argument #2 to 'write' (number expected, got nil)
This tells what the problem is: The second argument to write is nil, i.e. LOW is nil. You can confirm that by typing print(LOW) and it should print nil
The reason is simple: LOW isn't defined anywhere;
as you can read in the documentation, NodeMCU defines gpio.LOW and gpio.HIGH (which, behind the scenes, are just 0 and 1)
As Marcel Stör pointed out, gpio.LOW and gpio.HIGH are only defined for the ESP8266 version of the firmware. For ESP32 you need to use plain numerical 0 and 1 instead (or define your own variables)
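Putting both answers together, a minimal sketch for the dev-esp32 firmware (assumptions: pin 2 is the on-board LED on your board, and this branch's gpio module, which requires configuring the pin direction via gpio.config before writing):

```lua
-- dev-esp32 branch: configure the pin as an output first, then write 0/1.
-- Pin 2 as the built-in LED is an assumption; check your board's schematic.
gpio.config({ gpio = 2, dir = gpio.OUT })
gpio.write(2, 1)   -- LED on (may be inverted depending on wiring)
gpio.write(2, 0)   -- LED off
```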
I would like to run OrientDB on an EC2 micro (free-tier) instance. I am unable to find official OrientDB documentation that gives memory requirements; however, I found this question, which says 512 MB should be fine. I am running an EC2 micro instance, which has 1 GB RAM. However, when I try to run OrientDB, I get the JRE error shown below. My initial thought was that I needed to increase the JRE memory using -Xmx, but I guess it would be the shell script that would do this. Has anyone successfully run OrientDB on an EC2 micro instance or run into this problem?
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007a04a0000, 1431699456, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 1431699456 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/jvm-14728/hs_error.log
Here are the contents of the error log:
OS:Linux
uname:Linux 4.14.47-56.37.amzn1.x86_64 #1 SMP Wed Jun 6 18:49:01 UTC 2018 x86_64
libc:glibc 2.17 NPTL 2.17
rlimit: STACK 8192k, CORE 0k, NPROC 3867, NOFILE 4096, AS infinity
load average:0.00 0.00 0.00
/proc/meminfo:
MemTotal: 1011168 kB
MemFree: 322852 kB
MemAvailable: 822144 kB
Buffers: 83188 kB
Cached: 523056 kB
SwapCached: 0 kB
Active: 254680 kB
Inactive: 369952 kB
Active(anon): 18404 kB
Inactive(anon): 48 kB
Active(file): 236276 kB
Inactive(file): 369904 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 36 kB
Writeback: 0 kB
AnonPages: 18376 kB
Mapped: 31660 kB
Shmem: 56 kB
Slab: 51040 kB
SReclaimable: 41600 kB
SUnreclaim: 9440 kB
KernelStack: 1564 kB
PageTables: 2592 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 505584 kB
Committed_AS: 834340 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 49152 kB
DirectMap2M: 999424 kB
CPU:total 1 (initial active 1) (1 cores per cpu, 1 threads per core) family 6 model 63 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, avx2, aes, erms, tsc
/proc/cpuinfo:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
stepping : 2
microcode : 0x3c
cpu MHz : 2400.043
cache size : 30720 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
bogomips : 4800.05
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
Memory: 4k page, physical 1011168k(322728k free), swap 0k(0k free)
vm_info: OpenJDK 64-Bit Server VM (24.181-b00) for linux-amd64 JRE (1.7.0_181-b00), built on Jun 5 2018 20:36:03 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
time: Mon Aug 20 20:51:08 2018
elapsed time: 0 seconds
OrientDB can easily run in 512 MB, though your performance and throughput will not be as high. In OrientDB 3.0.x you can use the environment variable ORIENTDB_OPTS_MEMORY to set it. On the command line I can, for example, run:
cd $ORIENTDB_HOME/bin
export ORIENTDB_OPTS_MEMORY="-Xmx512m"
./server.sh
(where $ORIENTDB_HOME is where you have OrientDB installed) and I'm running with 512MB of memory.
As an aside, if you look in $ORIENTDB_HOME/bin/server.sh you'll see there is even code to check whether the server is running on a Raspberry Pi, and those range from 256 MB to 1 GB of RAM, so the t2.micro will run just fine.
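The error log lines up with this: the JVM tried to commit about 1.4 GB on a box with well under 1 GB actually available (figures copied from the hs_error.log excerpt above):

```python
# Figures copied from the hs_error.log excerpt above.
requested_bytes = 1_431_699_456   # the failed os::commit_memory request
mem_available_kb = 822_144        # MemAvailable from /proc/meminfo

requested_mb = requested_bytes / 1024 / 1024
available_mb = mem_available_kb / 1024
print(f"requested {requested_mb:.0f} MB, available {available_mb:.0f} MB")
# The default heap sizing asks for more than the t2.micro can provide,
# which is why capping the heap with -Xmx512m lets the server start.
```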