ESP32 GPIO commands - Lua

I just flashed an ESP32 with a new custom build for Lua files, built from:
https://nodemcu-build.com/ with the dev-esp32 branch (BETA).
It seems to be working: I can connect with ESPlorer, and the startup output shows that the firmware is installed.
I'm trying to toggle the BUILTIN_LED, but I get an error when using the GPIO commands.
It worked out of the box with simple Arduino code, so did I mess something up, or why is this not working?
Here is the log with the error response from the gpio.write command:
Communication with MCU..Got answer! Communication with MCU established.
AutoDetect firmware...
Can't autodetect firmware, because proper answer not received (may be unknown firmware).
Please, reset module or continue.
ets Jun 8 2016 00:22:57
rst:0x1 (POWERON_RESET),boot:0x12 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:2
load:0x3fff0018,len:4
load:0x3fff001c,len:6716
load:0x40078000,len:11116
ho 0 tail 12 room 4
load:0x40080400,len:5940
entry 0x40080700
I (69) boot: Chip Revision: 1
I (69) boot_comm: chip revision: 1, min. bootloader chip revision: 0
I (41) boot: ESP-IDF v3.3.1 2nd stage bootloader
I (41) boot: compile time 17:08:58
I (41) boot: Enabling RNG early entropy source...
I (46) boot: SPI Speed : 40MHz
I (50) boot: SPI Mode : DIO
I (54) boot: SPI Flash Size : 4MB
I (58) boot: Partition Table:
I (61) boot: ## Label Usage Type ST Offset Length
I (69) boot: 0 nvs WiFi data 01 02 00009000 00006000
I (76) boot: 1 phy_init RF data 01 01 0000f000 00001000
I (84) boot: 2 factory factory app 00 00 00010000 00180000
I (91) boot: 3 lfs unknown c2 01 00190000 00010000
I (99) boot: 4 nodemcuspiffs unknown c2 00 001a0000 00070000
I (106) boot: End of partition table
I (110) boot_comm: chip revision: 1, min. application chip revision: 0
I (118) esp_image: segment 0: paddr=0x00010020 vaddr=0x3f400020 size=0x1bdf4 (114164) map
I (165) esp_image: segment 1: paddr=0x0002be1c vaddr=0x3ffb0000 size=0x02fcc ( 12236) load
I (170) esp_image: segment 2: paddr=0x0002edf0 vaddr=0x40080000 size=0x00400 ( 1024) load
I (172) esp_image: segment 3: paddr=0x0002f1f8 vaddr=0x40080400 size=0x00e18 ( 3608) load
I (182) esp_image: segment 4: paddr=0x00030018 vaddr=0x400d0018 size=0x84a0c (543244) map
I (373) esp_image: segment 5: paddr=0x000b4a2c vaddr=0x40081218 size=0x0f074 ( 61556) load
I (398) esp_image: segment 6: paddr=0x000c3aa8 vaddr=0x400c0000 size=0x00064 ( 100) load
I (408) boot: Loaded app from partition at offset 0x10000
I (408) boot: Disabling RNG early entropy source...
I (409) cpu_start: Pro cpu up.
I (413) cpu_start: Application information:
I (418) cpu_start: Project name: NodeMCU
I (423) cpu_start: App version: a8b46af-dirty
I (428) cpu_start: Compile time: Apr 15 2020 17:09:01
I (434) cpu_start: ELF file SHA256: af54a39e6c0fb11d...
I (440) cpu_start: ESP-IDF: v3.3.1
I (445) cpu_start: Starting app cpu, entry point is 0x40081244
I (0) cpu_start: App cpu up.
I (456) heap_init: Initializing. RAM available for dynamic allocation:
I (462) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (469) heap_init: At 3FFB9260 len 00026DA0 (155 KiB): DRAM
I (475) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (481) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (488) heap_init: At 4009028C len 0000FD74 (63 KiB): IRAM
I (494) cpu_start: Pro cpu start user code
I (176) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
Mounting flash filesystem...
No LFS image loaded
I (337) wifi: wifi driver task: 3ffc26a8, prio:23, stack:3584, core=0
I (337) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (337) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (367) wifi: wifi firmware version: ac331d7
I (367) wifi: config NVS flash: enabled
I (367) wifi: config nano formating: disabled
I (367) wifi: Init dynamic tx buffer num: 32
I (377) wifi: Init data frame dynamic rx buffer num: 32
I (377) wifi: Init management frame dynamic rx buffer num: 32
I (387) wifi: Init management short buffer num: 32
I (387) wifi: Init static rx buffer size: 1600
I (397) wifi: Init static rx buffer num: 10
I (397) wifi: Init dynamic rx buffer num: 32
NodeMCU ESP32 built on nodemcu-build.com provided by frightanic.com
branch: dev-esp32
commit: a8b46af905b759506e9fd5eabdadbd8beb83e7c2
SSL: false
modules: file,gpio,net,node,tmr,wifi
build 2020-04-15-17-07-08 powered by Lua 5.1.4 on ESP-IDF v3.3.1 on SDK IDF
lua: cannot open init.lua
>
gpio.write(2, LOW)
gpio.write(2, LOW)
stdin:1: bad argument #2 to 'write' (number expected, got nil)
stack traceback:
[C]: in function 'write'
stdin:1: in main chunk
>

As per https://nodemcu.readthedocs.io/en/dev-esp32/modules/gpio/#gpiowrite (and the error message), the second parameter has to be an int: 1 or 0.
Your code
gpio.write(2, LOW)
appears to send nil. Your LOW is not initialized anywhere, is it? What should definitely work fine is this:
LOW = 0
gpio.write(2, LOW)
(or gpio.write(2, 0), of course)

bad argument #2 to 'write' (number expected, got nil)
This tells you what the problem is: the second argument to write is nil, i.e. LOW is nil. You can confirm that by typing print(LOW); it will print nil.
The reason is simple: LOW isn't defined anywhere.
As you can read in the documentation, NodeMCU defines gpio.LOW and gpio.HIGH (which, behind the scenes, are just 0 and 1).
However, as Marcel Stör pointed out, gpio.LOW and gpio.HIGH are only defined in the ESP8266 version of the firmware. For ESP32 you need to use the plain numbers 0 and 1 instead (or define your own variables).
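For completeness, on the dev-esp32 branch the pin also has to be configured as an output before it can be written. A minimal blink sketch putting the pieces together, assuming the LED really is on GPIO 2 (that varies between devkit boards, so check your schematic):

```lua
-- ESP32 NodeMCU: configure the pin direction first, then write 0/1
-- (gpio.LOW/gpio.HIGH only exist on the ESP8266 branch).
local pin = 2  -- assumed LED pin; adjust for your board
gpio.config({ gpio = pin, dir = gpio.OUT })

local state = 0
tmr.create():alarm(1000, tmr.ALARM_AUTO, function()
  state = 1 - state        -- toggle between 0 and 1
  gpio.write(pin, state)   -- plain numbers, not gpio.LOW/gpio.HIGH
end)
```

Note that many devkit LEDs are active-low, so "1" may turn the LED off rather than on.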


PerfKvmCounter::attach failed(1) error running gem5 x86 Full-system mode in docker

I am learning gem5 from the documentation, in a Docker container based on x86_64 Ubuntu 18.04. When I run the x86-ubuntu-run-with-kvm.py example script, these errors come up:
root@76ff3d8f98ef:~/gem5# build/X86/gem5.opt configs/example/gem5_library/x86-ubuntu-run-with-kvm.py
gem5 Simulator System. https://www.gem5.org
gem5 is copyrighted software; use the --copyright option for details.
gem5 version 22.0.0.1
gem5 compiled Jun 20 2022 01:37:09
gem5 started Jun 20 2022 03:28:17
gem5 executing on 76ff3d8f98ef, pid 11037
command line: build/X86/gem5.opt configs/example/gem5_library/x86-ubuntu-run-with-kvm.py
warn: <orphan X86Board>.kvm_vm already has parent not resetting parent.
Note: kvm_vm is not a parameter of X86Board
warn: (Previously declared as <orphan X86Board>.processor.kvm_vm
warn: The simulate package is still in a beta state. The gem5 project does not guarantee the APIs within this package will remain consistent across upcoming releases.
Global frequency set at 1000000000000 ticks per second
build/X86/mem/dram_interface.cc:692: warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (4096 Mbytes)
build/X86/sim/kernel_workload.cc:46: info: kernel located at: /root/.cache/gem5/x86-linux-kernel-5.4.49
build/X86/base/statistics.hh:280: warn: One of the stats is a legacy stat. Legacy stat is a stat that does not belong to any statistics::Group. Legacy stat is deprecated.
0: board.pc.south_bridge.cmos.rtc: Real-time clock set to Sun Jan 1 00:00:00 2012
board.pc.com_1.device: Listening for connections on port 3459
build/X86/dev/intel_8254_timer.cc:128: warn: Reading current count from inactive timer.
0: board.remote_gdb: listening for remote gdb on port 7003
build/X86/cpu/kvm/base.cc:150: info: KVM: Coalesced MMIO disabled by config.
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 2
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 3
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 4
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 5
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 6
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 8
build/X86/cpu/kvm/base.cc:150: info: KVM: Coalesced MMIO disabled by config.
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 2
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 3
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 4
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 5
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 6
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 8
build/X86/sim/simulate.cc:194: info: Entering event queue # 0. Starting simulation...
build/X86/cpu/kvm/perfevent.ccbuild/X86/cpu/kvm/perfevent.cc:183: panic: PerfKvmCounter::attach failed (1)
Memory Usage: 3885212 KBytes
:183: panic: PerfKvmCounter::attach failed (1)
Memory Usage: 3885212 KBytes
Program aborted at tick Aborted (core dumped)
I've checked the md5sum values of the resource files "x86-linux-kernel-5.4.49" and "x86-ubuntu-18.04-img"; they match the values shown in the resources JSON.
BTW, I run the Docker container with
sudo docker run --name m00xxx --device=/dev/kvm --volume /usr2/m00xxx/gem5:/home/gem5 -P -it gem5test:0614
and I pulled the original image from https://www.gem5.org/documentation/general_docs/building.
I got this backtrace from gdb:
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007f2ef4cc47f1 in __GI_abort () at abort.c:79
#2 0x0000562c8e8383ef in gem5::Logger::exit_helper (this=<optimized out>) at build/X86/base/logging.hh:125
#3 0x0000562c8ea46f2d in gem5::PerfKvmCounter::attach (this=this@entry=0x562c920d2588, config=...,
tid=tid@entry=0, group_fd=group_fd@entry=-1) at build/X86/cpu/kvm/perfevent.cc:183
#4 0x0000562c8ea2b6f9 in gem5::PerfKvmCounter::attach (tid=0, config=..., this=0x562c920d2588)
at build/X86/cpu/kvm/perfevent.hh:208
#5 gem5::BaseKvmCPU::setupCounters (this=this@entry=0x562c920d2000) at build/X86/cpu/kvm/base.cc:1295
#6 0x0000562c8ea2e021 in gem5::BaseKvmCPU::restartEqThread (this=0x562c920d2000) at build/X86/cpu/kvm/base.cc:248
#7 0x0000562c8ef8ef5d in std::function<void ()>::operator()() const (this=0x562c958ccab8)
at /usr/include/c++/7/bits/std_function.h:706
#8 gem5::EventFunctionWrapper::process (this=0x562c958cca80) at build/X86/sim/eventq.hh:1141
#9 gem5::EventQueue::serviceOne (this=this@entry=0x562c9128b3c0) at build/X86/sim/eventq.cc:223
#10 0x0000562c8efb70a0 in gem5::doSimLoop (eventq=eventq@entry=0x562c9128b3c0) at build/X86/sim/simulate.cc:308
#11 0x0000562c8efbb0c1 in gem5::SimulatorThreads::thread_main (queue=0x562c9128b3c0, this=0x562c91b765b0)
at build/X86/sim/simulate.cc:157
#12 gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}::operator()(gem5::EventQueue*) const
(eq=0x562c9128b3c0, __closure=<optimized out>) at build/X86/sim/simulate.cc:105
#13 std::__invoke_impl<void, gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*>(std::__invoke_other, gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}&&, gem5::EventQueue*&&) (__f=...) at /usr/include/c++/7/bits/invoke.h:60
#14 std::__invoke<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*>(std::__invoke_result&&, (gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}&&)...) (__fn=...)
at /usr/include/c++/7/bits/invoke.h:95
#15 std::thread::_Invoker<std::tuple<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*> >::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) (this=<optimized out>)
at /usr/include/c++/7/thread:234
#16 std::thread::_Invoker<std::tuple<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*> >::operator()() (this=<optimized out>) at /usr/include/c++/7/thread:243
#17 std::thread::_State_impl<std::thread::_Invoker<std::tuple<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*> > >::_M_run() (this=<optimized out>) at /usr/include/c++/7/thread:186
#18 0x00007f2ef56e86df in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#19 0x00007f2ef6e986db in start_thread (arg=0x7f2e2eb19700) at pthread_create.c:463
#20 0x00007f2ef4da561f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
You should set perf_event_paranoid to -1. On your host machine, run:
echo "-1" | sudo tee /proc/sys/kernel/perf_event_paranoid

understanding docker container cpu usages

docker stats shows the CPU usage to be very high, but the top output shows that 88.3% of the CPU is idle. Inside the container is a Java httpthrift service.
docker stats :
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8a0488xxxx5 540.9% 41.99 GiB / 44 GiB 95.43% 0 B / 0 B 0 B / 35.2 MB 286
top output :
top - 07:56:58 up 2 days, 22:29, 0 users, load average: 2.88, 3.01, 3.05
Tasks: 13 total, 1 running, 12 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.2 us, 2.7 sy, 0.0 ni, 88.3 id, 0.0 wa, 0.0 hi, 0.9 si, 0.0 st
KiB Mem: 65959920 total, 47983628 used, 17976292 free, 357632 buffers
KiB Swap: 7999484 total, 0 used, 7999484 free. 2788868 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8823 root 20 0 58.950g 0.041t 21080 S 540.9 66.5 16716:32 java
How can I reduce the CPU usage and bring it under 100%?
According to the top man page:
When operating in Solaris mode (`I' toggled Off), a task's cpu usage will be divided by the total number of CPUs. After issuing this command, you'll be told the new state of this toggle.
So by pressing the I key while top is running in interactive mode, you switch to Solaris mode, and the CPU usage is divided by the total number of CPUs (or cores).
P.S.: This option is not available on all versions of top.
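In other words, the two tools agree: docker stats reports the Irix-mode figure (summed over all cores), while top's %Cpu(s) line reports the share of total capacity. Dividing by the core count converts one into the other. A quick sanity check of that arithmetic; the core count of 48 is an assumption that happens to match the ~11.7% non-idle CPU top reports (8.2 us + 2.7 sy + 0.9 si), so substitute the output of nproc on your own host:

```shell
# Convert docker stats' summed per-core percentage into a share of
# total host capacity (what top's Solaris mode would show).
cpu_pct=540.9   # CPU % reported by `docker stats`
cores=48        # assumed host core count; use `nproc` for the real value
awk -v p="$cpu_pct" -v n="$cores" 'BEGIN { printf "%.1f%%\n", p / n }'
# prints 11.3%
```

So the process isn't misbehaving; the two numbers just use different denominators. To actually cap the container, look at docker run's --cpus / --cpu-quota options rather than trying to change what the numbers mean.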

What are the memory requirements for OrientDB / Can I run it on an EC2 micro?

I would like to run OrientDB on an EC2 micro (free-tier) instance. I am unable to find official OrientDB documentation that gives memory requirements; however, I found this question, which says 512MB should be fine. I am running an EC2 micro instance, which has 1GB of RAM. However, when I try to run OrientDB I get the JRE error shown below. My initial thought was that I need to limit the JVM memory using -Xmx, but I guess it would be the shell script that sets this. Has anyone successfully run OrientDB on an EC2 micro instance, or run into this problem?
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007a04a0000, 1431699456, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 1431699456 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/jvm-14728/hs_error.log
Here are the contents of the error log:
OS:Linux
uname:Linux 4.14.47-56.37.amzn1.x86_64 #1 SMP Wed Jun 6 18:49:01 UTC 2018 x86_64
libc:glibc 2.17 NPTL 2.17
rlimit: STACK 8192k, CORE 0k, NPROC 3867, NOFILE 4096, AS infinity
load average:0.00 0.00 0.00
/proc/meminfo:
MemTotal: 1011168 kB
MemFree: 322852 kB
MemAvailable: 822144 kB
Buffers: 83188 kB
Cached: 523056 kB
SwapCached: 0 kB
Active: 254680 kB
Inactive: 369952 kB
Active(anon): 18404 kB
Inactive(anon): 48 kB
Active(file): 236276 kB
Inactive(file): 369904 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 36 kB
Writeback: 0 kB
AnonPages: 18376 kB
Mapped: 31660 kB
Shmem: 56 kB
Slab: 51040 kB
SReclaimable: 41600 kB
SUnreclaim: 9440 kB
KernelStack: 1564 kB
PageTables: 2592 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 505584 kB
Committed_AS: 834340 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 49152 kB
DirectMap2M: 999424 kB
CPU:total 1 (initial active 1) (1 cores per cpu, 1 threads per core) family 6 model 63 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, avx2, aes, erms, tsc
/proc/cpuinfo:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
stepping : 2
microcode : 0x3c
cpu MHz : 2400.043
cache size : 30720 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
bogomips : 4800.05
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
Memory: 4k page, physical 1011168k(322728k free), swap 0k(0k free)
vm_info: OpenJDK 64-Bit Server VM (24.181-b00) for linux-amd64 JRE (1.7.0_181-b00), built on Jun 5 2018 20:36:03 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
time: Mon Aug 20 20:51:08 2018
elapsed time: 0 seconds
OrientDB can easily run in 512MB, though your performance and throughput will not be as high. In OrientDB 3.0.x you can use the environment variable ORIENTDB_OPTS_MEMORY to set it. On the command line I can, for example, run:
cd $ORIENTDB_HOME/bin
export ORIENTDB_OPTS_MEMORY="-Xmx512m"
./server.sh
(where $ORIENTDB_HOME is where you have OrientDB installed) and I'm running with 512MB of memory.
As an aside, if you look at $ORIENTDB_HOME/bin/server.sh you'll see there is even code to check whether the server is running on a Raspberry Pi, and those range from 256MB to 1GB of RAM, so a t2.micro will run just fine.

qla2xxx - SNS scan failed

We have two hypervisors (Dell PowerEdge R630, virtualization environment: Proxmox); they are connected to an FC switch, and the FC switch to the storage (Dell Compellent).
One of the hypervisors is connected to the storage via multipath, and it works.
The other hypervisor doesn't find the storage.
The failing hypervisor says:
kernel: qla2xxx [0000:81:00.0]-001d: : Found an ISP2532 irq 93 iobase 0x00000000bf40f73b.
kernel: scsi host13: qla2xxx
kernel: qla2xxx [0000:81:00.0]-500a:13: LOOP UP detected (8 Gbps).
kernel: qla2xxx [0000:81:00.0]-00fb:13: QLogic QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA.
kernel: qla2xxx [0000:81:00.0]-00fc:13: ISP2532: PCIe (5.0GT/s x8) @ 0000:81:00.0 hdma+ host#=13 fw=8.07.00 (90d5).
kernel: qla2xxx [0000:81:00.0]-209e:13: SNS scan failed -- assuming zero-entry result.
The multipath module is loaded into the kernel, and both hypervisors have the same modules loaded:
dm_multipath 28672 1 dm_round_robin
Check the SAN zoning and the LUN security on the array: they must be defined to allow the host connection to the array and to the LUNs on it.
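Once the zoning/LUN security is corrected, the HBA won't necessarily see the new targets until it logs into the fabric again. A sketch of the usual Linux-side follow-up, assuming host13 from the log above (the issue_lip and scan sysfs files are standard for FC HBAs, but the host number will differ per machine):

```shell
host=host13   # qla2xxx SCSI host number from the kernel log; adjust as needed

# Force a new fabric login (LIP), then rescan all channels/targets/LUNs.
# Each step is guarded so the loop is a no-op on machines without the HBA.
for f in /sys/class/fc_host/$host/issue_lip /sys/class/scsi_host/$host/scan; do
    if [ -w "$f" ]; then
        case "$f" in
            *issue_lip) echo 1 > "$f" ;;
            *scan)      echo "- - -" > "$f" ;;
        esac
    else
        echo "skipping $f (not present or not writable on this machine)"
    fi
done
```

Alternatively, a reboot of the failing hypervisor achieves the same rescan, but the sysfs route avoids downtime.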

Job Fails with odd message

I have a job that is failing at the very start with the message:
"#*" and "#N" are reserved sharding specs. Filepattern must not contain any of them.
I have altered the destination location to be something other than the default (an email address, which would include the @ symbol), but I can still see it using temporary destinations within that path, which I am unable to edit.
Has anyone experienced this issue before? I've got a file that is only 65k rows long; I can preview all of the complete data in Dataprep, but when I run the job it fails, which is super tedious: ~3 hrs of cleaning down the drain if this won't run. (I appreciate it's not designed for this, but Excel was being a mare, so it seemed like a good solution!)
Edit - Adding Logs:
2018-03-10 (13:47:34) Value "PTableLoadTransformGCS/Shuffle/GroupByKey/Session" materialized.
2018-03-10 (13:47:34) Executing operation PTableLoadTransformGCS/SumQuoteAndDelimiterCounts/GroupByKey/Read+PTableLoadTran...
2018-03-10 (13:47:38) Executing operation PTableLoadTransformGCS/Shuffle/GroupByKey/Close
2018-03-10 (13:47:38) Executing operation PTableStoreTransformGCS/WriteFiles/GroupUnwritten/Create
2018-03-10 (13:47:39) Value "PTableStoreTransformGCS/WriteFiles/GroupUnwritten/Session" materialized.
2018-03-10 (13:47:39) Executing operation PTableLoadTransformGCS/Shuffle/GroupByKey/Read+PTableLoadTransformGCS/Shuffle/Gr...
2018-03-10 (13:47:39) Executing failure step failure49
2018-03-10 (13:47:39) Workflow failed. Causes: (c759db2a23a80ea): "@*" and "@N" are reserved sharding specs. Filepattern m...
(c759db2a23a8c5b): Workflow failed. Causes: (c759db2a23a80ea): "@*" and "@N" are reserved sharding specs. Filepattern must not contain any of them.
2018-03-10 (13:47:39) Cleaning up.
2018-03-10 (13:47:39) Starting worker pool teardown.
2018-03-10 (13:47:39) Stopping worker pool...
And StackDriver warnings or higher:
W ACPI: RSDP 0x00000000000F23A0 000014 (v00 Google)
W ACPI: RSDT 0x00000000BFFF3430 000038 (v01 Google GOOGRSDT 00000001 GOOG 00000001)
W ACPI: FACP 0x00000000BFFFCF60 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
W ACPI: DSDT 0x00000000BFFF3470 0017B2 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
W ACPI: FACS 0x00000000BFFFCF00 000040
W ACPI: FACS 0x00000000BFFFCF00 000040
W ACPI: SSDT 0x00000000BFFF65F0 00690D (v01 Google GOOGSSDT 00000001 GOOG 00000001)
W ACPI: APIC 0x00000000BFFF5D10 00006E (v01 Google GOOGAPIC 00000001 GOOG 00000001)
W ACPI: WAET 0x00000000BFFF5CE0 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
W ACPI: SRAT 0x00000000BFFF4C30 0000B8 (v01 Google GOOGSRAT 00000001 GOOG 00000001)
W ACPI: 2 ACPI AML tables successfully acquired and loaded
W ACPI: Executed 2 blocks of module-level executable AML code
W acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
W ACPI: Enabled 16 GPEs in block 00 to 0F
W ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
W ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
W i8042: Warning: Keylock active
W GPT:Primary header thinks Alt. header is not at the end of the disk.
W GPT:41943039 != 524287999
W GPT:Alternate GPT header not at the end of the disk.
W GPT:41943039 != 524287999
W GPT: Use GNU Parted to correct GPT errors.
W device-mapper: verity: Argument 0: 'payload=PARTUUID=245B0EEC-6404-8744-AAF2-E8C6BF78D7B2'
W device-mapper: verity: Argument 1: 'hashtree=PARTUUID=245B0EEC-6404-8744-AAF2-E8C6BF78D7B2'
W device-mapper: verity: Argument 2: 'hashstart=2539520'
W device-mapper: verity: Argument 3: 'alg=sha1'
W device-mapper: verity: Argument 4: 'root_hexdigest=244007b512ddbf69792d485fdcbc3440531f1264'
W device-mapper: verity: Argument 5: 'salt=5bacc0df39d2a60191e9b221ffc962c55e251ead18cf1472bf8d3ed84383765b'
E EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
W [/usr/lib/tmpfiles.d/var.conf:12] Duplicate line for path "/var/run", ignoring.
W Could not stat /dev/pstore: No such file or directory
W Kernel does not support crash dumping
W Could not load the device policy file.
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/version.cycle for reading: No such file or directory
W No API client: no api servers specified
W Unable to update cni config: No networks found in /etc/cni/net.d
W unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
W No api server defined - no events will be sent to API server.
W Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
W Unable to update cni config: No networks found in /etc/cni/net.d
E Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
W No api server defined - no node status update will be sent.
E Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
E [ContainerManager]: Fail to get rootfs information unable to find data for container /
W Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
E Could not find capacity information for resource storage.kubernetes.io/scratch
W eviction manager: no observation found for eviction signal allocatableNodeFs.available
W Profiling Agent not found. Profiles will not be available from this worker.
E debconf: delaying package configuration, since apt-utils is not installed
W [WARNING:metrics_daemon.cc(598)] cannot read /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
E % Total % Received % Xferd Average Speed Time Time Time Current
E Dload Upload Total Spent Left Speed
E
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 3698 100 3698 0 0 64248 0 --:--:-- --:--:-- --:--:-- 64877
