I am using a script, and in that script I'm building a Docker image. I pipe the Dockerfile to docker build via stdin, as explained here: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#pipe-dockerfile-through-stdin.
Here is my code:
docker build -t alpine-rsync - <<EOF
FROM alpine
RUN apk update
RUN apk add rsync
EOF
Unfortunately, I now get this strange error message when I execute the script. Can somebody tell me what I did wrong?
free(): invalid pointer
SIGABRT: abort
PC=0x7ff4beae2e97 m=0 sigcode=18446744073709551610
signal arrived during cgo execution
goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x4afd50, 0xc420049cc0, 0xc420049ce8)
/usr/lib/go-1.8/src/runtime/cgocall.go:131 +0xe2 fp=0xc420049c90 sp=0xc420049c50
github.com/docker/docker-credential-helpers/secretservice._Cfunc_free(0x2135270)
github.com/docker/docker-credential-helpers/secretservice/_obj/_cgo_gotypes.go:111 +0x41 fp=0xc420049cc0 sp=0xc420049c90
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List.func5(0x2135270)
/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:96 +0x60 fp=0xc420049cf8 sp=0xc420049cc0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List(0x0, 0x756060, 0xc420012360)
/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:97 +0x217 fp=0xc420049da0 sp=0xc420049cf8
github.com/docker/docker-credential-helpers/secretservice.(*Secretservice).List(0x77e548, 0xc420049e88, 0x410022, 0xc4200122c0)
<autogenerated>:4 +0x46 fp=0xc420049de0 sp=0xc420049da0
github.com/docker/docker-credential-helpers/credentials.List(0x756ba0, 0x77e548, 0x7560e0, 0xc42000e018, 0x0, 0x10)
/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:145 +0x3e fp=0xc420049e68 sp=0xc420049de0
github.com/docker/docker-credential-helpers/credentials.HandleCommand(0x756ba0, 0x77e548, 0x7ffced22f7a4, 0x4, 0x7560a0, 0xc42000e010, 0x7560e0, 0xc42000e018, 0x40e398, 0x4d35c0)
/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:60 +0x16d fp=0xc420049ed8 sp=0xc420049e68
github.com/docker/docker-credential-helpers/credentials.Serve(0x756ba0, 0x77e548)
/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:41 +0x1cb fp=0xc420049f58 sp=0xc420049ed8
main.main()
/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/secretservice/cmd/main_linux.go:9 +0x4f fp=0xc420049f88 sp=0xc420049f58
runtime.main()
/usr/lib/go-1.8/src/runtime/proc.go:185 +0x20a fp=0xc420049fe0 sp=0xc420049f88
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc420049fe8 sp=0xc420049fe0
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_amd64.s:2197 +0x1
rax 0x0
rbx 0x7ffced22dcf0
rcx 0x7ff4beae2e97
rdx 0x0
rdi 0x2
rsi 0x7ffced22da80
rbp 0x7ffced22ddf0
rsp 0x7ffced22da80
r8 0x0
r9 0x7ffced22da80
r10 0x8
r11 0x246
r12 0x7ffced22dcf0
r13 0x1000
r14 0x0
r15 0x30
rip 0x7ff4beae2e97
rflags 0x246
cs 0x33
fs 0x0
gs 0x0
Sending build context to Docker daemon 2.048kB
As can be seen here, this is an issue with docker-credential-helpers and not with the Dockerfile syntax.
According to the issue above, this has been fixed in v0.6.3, which is available for download from the releases page on GitHub.
Alternatively, as suggested by this comment, you can remove docker-credential-helpers with the following command and continue as normal:
dpkg -r --force-depends golang-docker-credential-helpers
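If you go the upgrade route instead, installing the fixed helper can look roughly like this (a sketch; the asset name and version below are assumptions, so check the actual releases page):
# download the fixed secretservice helper; the asset name is an assumption
curl -fsSL -o docker-credential-secretservice.tar.gz \
  https://github.com/docker/docker-credential-helpers/releases/download/v0.6.3/docker-credential-secretservice-v0.6.3-amd64.tar.gz
tar -xzf docker-credential-secretservice.tar.gz
chmod +x docker-credential-secretservice
# put it somewhere on PATH so it is found instead of the packaged version
sudo mv docker-credential-secretservice /usr/local/bin/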
Related
I am learning gem5 from the documentation, in a Docker container based on x86_64 Ubuntu 18.04. When I ran the x86-ubuntu-run-with-kvm.py example script, these errors came up:
root#76ff3d8f98ef:~/gem5# build/X86/gem5.opt configs/example/gem5_library/x86-ubuntu-run-with-kvm.py
gem5 Simulator System. https://www.gem5.org
gem5 is copyrighted software; use the --copyright option for details.
gem5 version 22.0.0.1
gem5 compiled Jun 20 2022 01:37:09
gem5 started Jun 20 2022 03:28:17
gem5 executing on 76ff3d8f98ef, pid 11037
command line: build/X86/gem5.opt configs/example/gem5_library/x86-ubuntu-run-with-kvm.py
warn: <orphan X86Board>.kvm_vm already has parent not resetting parent.
Note: kvm_vm is not a parameter of X86Board
warn: (Previously declared as <orphan X86Board>.processor.kvm_vm
warn: The simulate package is still in a beta state. The gem5 project does not guarantee the APIs within this package will remain consistent across upcoming releases.
Global frequency set at 1000000000000 ticks per second
build/X86/mem/dram_interface.cc:692: warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (4096 Mbytes)
build/X86/sim/kernel_workload.cc:46: info: kernel located at: /root/.cache/gem5/x86-linux-kernel-5.4.49
build/X86/base/statistics.hh:280: warn: One of the stats is a legacy stat. Legacy stat is a stat that does not belong to any statistics::Group. Legacy stat is deprecated.
0: board.pc.south_bridge.cmos.rtc: Real-time clock set to Sun Jan 1 00:00:00 2012
board.pc.com_1.device: Listening for connections on port 3459
build/X86/dev/intel_8254_timer.cc:128: warn: Reading current count from inactive timer.
0: board.remote_gdb: listening for remote gdb on port 7003
build/X86/cpu/kvm/base.cc:150: info: KVM: Coalesced MMIO disabled by config.
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 2
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 3
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 4
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 5
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 6
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 8
build/X86/cpu/kvm/base.cc:150: info: KVM: Coalesced MMIO disabled by config.
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 2
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 3
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 4
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 5
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 6
build/X86/arch/x86/cpuid.cc:181: warn: x86 cpuid family 0x0000: unimplemented function 8
build/X86/sim/simulate.cc:194: info: Entering event queue # 0. Starting simulation...
build/X86/cpu/kvm/perfevent.ccbuild/X86/cpu/kvm/perfevent.cc:183: panic: PerfKvmCounter::attach failed (1)
Memory Usage: 3885212 KBytes
:183: panic: PerfKvmCounter::attach failed (1)
Memory Usage: 3885212 KBytes
Program aborted at tick Aborted (core dumped)
I've checked the md5sum values of the resource files "x86-linux-kernel-5.4.49" and "x86-ubuntu-18.04-img"; they match the values shown in the resources JSON.
BTW, I run the Docker container with
sudo docker run --name m00xxx --device=/dev/kvm --volume /usr2/m00xxx/gem5:/home/gem5 -P -it gem5test:0614
And I pulled the original Docker image from https://www.gem5.org/documentation/general_docs/building.
I got the backtrace from gdb:
(gdb) bt
#0 __GI_raise (sig=sig#entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007f2ef4cc47f1 in __GI_abort () at abort.c:79
#2 0x0000562c8e8383ef in gem5::Logger::exit_helper (this=<optimized out>) at build/X86/base/logging.hh:125
#3 0x0000562c8ea46f2d in gem5::PerfKvmCounter::attach (this=this#entry=0x562c920d2588, config=...,
tid=tid#entry=0, group_fd=group_fd#entry=-1) at build/X86/cpu/kvm/perfevent.cc:183
#4 0x0000562c8ea2b6f9 in gem5::PerfKvmCounter::attach (tid=0, config=..., this=0x562c920d2588)
at build/X86/cpu/kvm/perfevent.hh:208
#5 gem5::BaseKvmCPU::setupCounters (this=this#entry=0x562c920d2000) at build/X86/cpu/kvm/base.cc:1295
#6 0x0000562c8ea2e021 in gem5::BaseKvmCPU::restartEqThread (this=0x562c920d2000) at build/X86/cpu/kvm/base.cc:248
#7 0x0000562c8ef8ef5d in std::function<void ()>::operator()() const (this=0x562c958ccab8)
at /usr/include/c++/7/bits/std_function.h:706
#8 gem5::EventFunctionWrapper::process (this=0x562c958cca80) at build/X86/sim/eventq.hh:1141
#9 gem5::EventQueue::serviceOne (this=this#entry=0x562c9128b3c0) at build/X86/sim/eventq.cc:223
#10 0x0000562c8efb70a0 in gem5::doSimLoop (eventq=eventq#entry=0x562c9128b3c0) at build/X86/sim/simulate.cc:308
#11 0x0000562c8efbb0c1 in gem5::SimulatorThreads::thread_main (queue=0x562c9128b3c0, this=0x562c91b765b0)
at build/X86/sim/simulate.cc:157
#12 gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}::operator()(gem5::EventQueue*) const
(eq=0x562c9128b3c0, __closure=<optimized out>) at build/X86/sim/simulate.cc:105
#13 std::__invoke_impl<void, gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*>(std::__invoke_other, gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}&&, gem5::EventQueue*&&) (__f=...) at /usr/include/c++/7/bits/invoke.h:60
#14 std::__invoke<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*>(std::__invoke_result&&, (gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}&&)...) (__fn=...)
at /usr/include/c++/7/bits/invoke.h:95
#15 std::thread::_Invoker<std::tuple<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*> >::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) (this=<optimized out>)
at /usr/include/c++/7/thread:234
#16 std::thread::_Invoker<std::tuple<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*> >::operator()() (this=<optimized out>) at /usr/include/c++/7/thread:243
#17 std::thread::_State_impl<std::thread::_Invoker<std::tuple<gem5::SimulatorThreads::runUntilLocalExit()::{lambda(gem5::EventQueue*)#1}, gem5::EventQueue*> > >::_M_run() (this=<optimized out>) at /usr/include/c++/7/thread:186
#18 0x00007f2ef56e86df in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#19 0x00007f2ef6e986db in start_thread (arg=0x7f2e2eb19700) at pthread_create.c:463
#20 0x00007f2ef4da561f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
You should set perf_event_paranoid to -1. On your host machine, type:
echo "-1" | sudo tee /proc/sys/kernel/perf_event_paranoid
I am running ESXi 7.0 on a Dell 3930 rack PC. This PC has an NVMe SSD and a 1 TB SATA HDD plugged in. I used the Dell ESXi ISO image during setup.
I can see the NVMe and PCH controllers when I browse storage. The controller shows up as 'Cannon Lake PCH-H AHCI Controller'.
When I go to devices, I can also see the 'Local ATA Disk' there. Despite all attempts, I am not able to create a VMFS datastore and always receive an error saying 'cannot change host configuration'.
I tried clearing the partition from the ESXi web client but wasn't successful either. The vmkernel logs show the following when I try to create a datastore:
2021-05-30T09:48:08.091Z cpu8:1049247)vmw_ahci[00000017]: ExecInternalCommandPolled:FAIL!!: Internal command 2f, 00
2021-05-30T09:48:08.091Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:Fail to get error log for port 0
2021-05-30T09:48:08.092Z cpu8:1049247)vmw_ahci[00000017]: _IssueComReset:Issuing comreset...
2021-05-30T09:48:08.226Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:fail a command on slot 1
2021-05-30T09:48:08.226Z cpu12:1049325)ScsiDeviceIO: 4062: Cmd(0x455a74543900) 0x28, CmdSN 0xe from world 1196852 to dev "t10.ATA_____ST1000LM0492D2GH172__________________________________ZGS23QAV" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44
2021-05-30T09:48:08.226Z cpu12:1049325)0x0.
2021-05-30T09:48:08.264Z cpu8:1048723)vmw_ahci[00000017]: CompletionBottomHalf:strange irq(s), 0x4000000
2021-05-30T09:48:08.264Z cpu8:1048723)vmw_ahci[00000017]: CompletionBottomHalf:PORT_IRQ_IF_NONFATAL exception.
2021-05-30T09:48:08.264Z cpu8:1048723)vmw_ahci[00000017]: LogExceptionSignal:Port 0, Signal: --|--|--|--|--|--|IR|--|--|--|--|-- (0x0040) Curr: --|--|--|--|--|--|--|--|--|--|--|-- (0x0000)
2021-05-30T09:48:08.264Z cpu8:1049247)vmw_ahci[00000017]: LogExceptionProcess:Port 0, Process: --|--|--|--|--|--|IR|--|--|--|--|-- (0x0040) Curr: --|--|--|--|--|--|IR|--|--|--|--|-- (0x0040)
2021-05-30T09:48:08.264Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:Performing device reset due to Port IRQ Error.
2021-05-30T09:48:08.264Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:hardware stop on slot 0x1, activeTags 0x00000002
2021-05-30T09:48:08.286Z cpu8:1049247)vmw_ahci[00000017]: ExecInternalCommandPolled:port status: 0x40000001, tf status: 0x451
2021-05-30T09:48:08.288Z cpu8:1049247)vmw_ahci[00000017]: ExecInternalCommandPolled:FAIL!!: Internal command 2f, 00
2021-05-30T09:48:08.288Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:Fail to get error log for port 0
2021-05-30T09:48:08.289Z cpu8:1049247)vmw_ahci[00000017]: _IssueComReset:Issuing comreset...
2021-05-30T09:48:08.414Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:fail a command on slot 1
2021-05-30T09:48:08.414Z cpu12:1049325)ScsiDeviceIO: 4062: Cmd(0x455a744e2600) 0x28, CmdSN 0x15 from world 1196852 to dev "t10.ATA_____ST1000LM0492D2GH172__________________________________ZGS23QAV" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44
2021-05-30T09:48:08.414Z cpu12:1049325)0x0.
2021-05-30T09:48:08.449Z cpu8:1049830)vmw_ahci[00000017]: CompletionBottomHalf:Error port=0, PxIS=0x08000000, PxTDF=0x40,PxSERR=0x00400100, PxCI=0x00000000, PxSACT=0x00000002, ActiveTags=0x00000002
2021-05-30T09:48:08.449Z cpu8:1049830)vmw_ahci[00000017]: CompletionBottomHalf:SCSI cmd 0x2a on slot 1 lba=0x0, lbc=0x22
2021-05-30T09:48:08.449Z cpu8:1049830)vmw_ahci[00000017]: CompletionBottomHalf:cfis->command= 0x61
2021-05-30T09:48:08.449Z cpu8:1049830)vmw_ahci[00000017]: LogExceptionSignal:Port 0, Signal: --|--|--|--|--|TF|--|--|--|--|--|-- (0x0020) Curr: --|--|--|--|--|--|--|--|--|--|--|-- (0x0000)
2021-05-30T09:48:08.449Z cpu8:1049247)vmw_ahci[00000017]: LogExceptionProcess:Port 0, Process: --|--|--|--|--|TF|--|--|--|--|--|-- (0x0020) Curr: --|--|--|--|--|TF|--|--|--|--|--|-- (0x0020)
2021-05-30T09:48:08.449Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:Performing device reset due to Task File Error.
2021-05-30T09:48:08.449Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:hardware stop on slot 0x1, activeTags 0x00000002
2021-05-30T09:48:08.461Z cpu8:1049247)vmw_ahci[00000017]: ExecInternalCommandPolled:port status: 0x40000008, tf status: 0x84c1
2021-05-30T09:48:08.462Z cpu8:1049247)vmw_ahci[00000017]: _IssueComReset:Issuing comreset...
2021-05-30T09:48:08.618Z cpu8:1049247)vmw_ahci[00000017]: ExecInternalCommandPolled:FAIL!!: Internal command 2f, 00
2021-05-30T09:48:08.618Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:Fail to get error log for port 0
2021-05-30T09:48:08.619Z cpu8:1049247)vmw_ahci[00000017]: _IssueComReset:Issuing comreset...
2021-05-30T09:48:08.661Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:fail a command on slot 1
2021-05-30T09:48:08.661Z cpu12:1049325)ScsiDeviceIO: 4062: Cmd(0x455a744aa700) 0x2a, CmdSN 0x2 from world 1196852 to dev "t10.ATA_____ST1000LM0492D2GH172__________________________________ZGS23QAV" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44
2021-05-30T09:48:08.661Z cpu12:1049325)0x0.
2021-05-30T09:48:12.698Z cpu11:1049281)NMP: nmp_ResetDeviceLogThrottling:3776: last error status from device t10.ATA_____ST1000LM0492D2GH172__________________________________ZGS23QAV repeated 6 times
2021-05-30T09:48:42.644Z cpu7:1049176)INFO (ne1000): false RX hang detected on vmnic0
2021-05-30T09:51:12.698Z cpu3:1049363)DVFilter: 6344: Checking disconnected filters for timeouts
2021-05-30T09:52:20.250Z cpu2:1049176)INFO (ne1000): false RX hang detected on vmnic0
2021-05-30T09:52:32.136Z cpu8:1051618)vmw_ahci[00000017]: CompletionBottomHalf:strange irq(s), 0x4000000
2021-05-30T09:52:32.136Z cpu8:1051618)vmw_ahci[00000017]: CompletionBottomHalf:PORT_IRQ_IF_NONFATAL exception.
2021-05-30T09:52:32.136Z cpu8:1051618)vmw_ahci[00000017]: LogExceptionSignal:Port 0, Signal: --|--|--|--|--|--|IR|--|--|--|--|-- (0x0040) Curr: --|--|--|--|--|--|--|--|--|--|--|-- (0x0000)
2021-05-30T09:52:32.136Z cpu8:1049247)vmw_ahci[00000017]: LogExceptionProcess:Port 0, Process: --|--|--|--|--|--|IR|--|--|--|--|-- (0x0040) Curr: --|--|--|--|--|--|IR|--|--|--|--|-- (0x0040)
2021-05-30T09:52:32.136Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:Performing device reset due to Port IRQ Error.
2021-05-30T09:52:32.137Z cpu8:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:hardware stop on slot 0x1, activeTags 0x00000002
2021-05-30T09:52:32.159Z cpu8:1049247)vmw_ahci[00000017]: ExecInternalCommandPolled:port status: 0x40000001, tf status: 0x451
2021-05-30T09:52:32.161Z cpu2:1049247)vmw_ahci[00000017]: ExecInternalCommandPolled:FAIL!!: Internal command 2f, 00
2021-05-30T09:52:32.161Z cpu2:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:Fail to get error log for port 0
2021-05-30T09:52:32.162Z cpu2:1049247)vmw_ahci[00000017]: _IssueComReset:Issuing comreset...
2021-05-30T09:52:32.283Z cpu2:1049247)vmw_ahci[00000017]: ExceptionHandlerWorld:fail a command on slot 1
2021-05-30T09:52:32.283Z cpu12:1048622)NMP: nmp_ThrottleLogForDevice:3856: Cmd 0x28 (0x455a733c8440, 0) to dev "t10.ATA_____ST1000LM0492D2GH172__________________________________ZGS23QAV" on path "vmhba0:C0:T0:L0" Failed:
2021-05-30T09:52:32.283Z cpu12:1048622)NMP: nmp_ThrottleLogForDevice:3865: H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0. Act:NONE. cmdId.initiator=0x451a20b1a7b8 CmdSN 0x18a60
2021-05-30T09:52:32.283Z cpu12:1048622)ScsiDeviceIO: 4062: Cmd(0x455a733c8440) 0x28, CmdSN 0x18a60 from world 0 to dev "t10.ATA_____ST1000LM0492D2GH172__________________________________ZGS23QAV" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44
2021-05-30T09:52:32.283Z cpu12:1048622)0x0.
I had some doubts about whether the computer's AHCI controller (Cannon Lake PCH-H AHCI Controller) is compatible with ESXi 7, but cannot find any resource that confirms this. I read somewhere that disabling the default AHCI driver with the following SSH command may help:
esxcli system module set --enabled=false --module=vmw_ahci
I tried this, but with the driver disabled the controller won't display at all after a restart, so it had to be re-enabled.
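Re-enabling it is the same command with the flag flipped:
esxcli system module set --enabled=true --module=vmw_ahci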
I also tried clearing out the partition table, as this drive holds no useful information, but any partedUtil command throws an 'input/output error'. It seems any write attempt to this device fails.
When I try the partedUtil getptbl command, the partition format is reported as 'unknown'.
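For reference, the call looked roughly like this (the device path is my reconstruction from the device name in the vmkernel log above):
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____ST1000LM0492D2GH172__________________________________ZGS23QAV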
FYI, before I set up ESXi, the HDD in question was the disk drive for an Ubuntu OS and was accessible.
Any leads that could help fix this issue would be welcome.
I'm building a Linux kernel module on a Debian distribution, using DKMS, and I have a kernel freeze that I need to debug.
I've installed the crash command-line tool along with the kernel debug symbols.
When a freeze happens, I get a crash dump file which I load with the crash command. I can see the filenames and line numbers for each line of the kernel trace, but for my module there is no filename or line number.
Here is an example:
WARNING: active task ffff98d215239e80 on cpu 1 not found in PID hash
KERNEL: /usr/lib/debug/boot/vmlinux-4.19.0-8-amd64
DUMPFILE: /var/crash/202004292108/dump.202004292108 [PARTIAL DUMP]
CPUS: 12
DATE: Wed Apr 29 21:07:00 2020
UPTIME: 00:13:32
LOAD AVERAGE: 0.62, 0.57, 0.37
TASKS: 886
NODENAME: debian
RELEASE: 4.19.0-8-amd64
VERSION: #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27)
MACHINE: x86_64 (3700 Mhz)
MEMORY: 16 GB
PANIC: "BUG: unable to handle kernel NULL pointer dereference at 0000000000000018"
PID: 15775
COMMAND: "douaned"
TASK: ffff98d215239e80 [THREAD_INFO: ffff98d215239e80]
CPU: 1
STATE: EXIT_DEAD (PANIC)
crash> bt -l
PID: 15775 TASK: ffff98d215239e80 CPU: 1 COMMAND: "douaned"
#0 [ffff98d36e843870] machine_kexec at ffffffff87656967
./debian/build/build_amd64_none_amd64/./arch/x86/include/asm/mem_encrypt.h: 75
#1 [ffff98d36e8438c8] __crash_kexec at ffffffff877213ad
./debian/build/build_amd64_none_amd64/./kernel/kexec_core.c: 958
#2 [ffff98d36e843990] crash_kexec at ffffffff877221fd
./debian/build/build_amd64_none_amd64/./include/linux/compiler.h: 219
#3 [ffff98d36e8439a8] oops_end at ffffffff8762a0ad
./debian/build/build_amd64_none_amd64/./arch/x86/kernel/dumpstack.c: 334
#4 [ffff98d36e8439c8] no_context at ffffffff876653ee
./debian/build/build_amd64_none_amd64/./arch/x86/mm/fault.c: 808
#5 [ffff98d36e843a20] __do_page_fault at ffffffff87665ab2
./debian/build/build_amd64_none_amd64/./arch/x86/mm/fault.c: 1323
#6 [ffff98d36e843a90] page_fault at ffffffff87e0114e
/build/linux-AqT5hv/linux-4.19.98/arch/x86/entry/entry_64.S: 1204
[exception RIP: idr_find]
RIP: ffffffff87d0b860 RSP: ffff98d36e843b40 RFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff98d20e848000 RCX: 0000000000000000
RDX: 000000009456dad9 RSI: 0000000000005807 RDI: 0000000000000008
RBP: ffff98d363bc3ae8 R8: 000000000001421d R9: 00000000917bc5e0
R10: ffff98d36e843bae R11: 0000000000000049 R12: 0000000000000004
R13: ffff98d226491480 R14: ffff98d206f28cfc R15: ffff98d206f28d10
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
./debian/build/build_amd64_none_amd64/./lib/idr.c: 177
#7 [ffff98d36e843b40] find_get_pid at ffffffff8769e846
./debian/build/build_amd64_none_amd64/./include/linux/pid.h: 71
#8 [ffff98d36e843b48] netfiler_packet_hook at ffffffffc1962eda [douane]
#9 [ffff98d36e843ce8] nf_hook_slow at ffffffff87c2eea4
./debian/build/build_amd64_none_amd64/./net/netfilter/core.c: 512
#10 [ffff98d36e843d18] __ip_local_out at ffffffff87c3c8a7
./debian/build/build_amd64_none_amd64/./include/linux/netfilter.h: 248
#11 [ffff98d36e843d70] ip_local_out at ffffffff87c3c927
./debian/build/build_amd64_none_amd64/./net/ipv4/ip_output.c: 123
...
Have a look at line #8.
From the DKMS documentation, I found the STRIP[#] config option, which I set to "no" in order not to strip the debug symbols (as the documentation says), but I still don't have any filenames or line numbers.
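For reference, the relevant dkms.conf line looks like this (a sketch; the index refers to the first built module):
# dkms.conf: do not strip debug symbols from the first module
STRIP[0]="no"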
Can anyone tell me how I can get them to show, please?
I finally figured it out!
You first need to know where your kernel module file is installed (douane.ko in my case, installed in /lib/modules/$(uname -r)/updates/dkms/). Then, after loading the crash file as shown in the question, load the module's debug symbols using the mod command:
crash> mod -s douane /lib/modules/4.19.0-8-amd64/updates/dkms/douane.ko
MODULE NAME SIZE OBJECT FILE
ffffffffc17f8040 douane 28672 /lib/modules/4.19.0-8-amd64/updates/dkms/douane.ko
From now on, bt -l will also show the line numbers from your kernel module's source code.
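If mod -s still doesn't help, it can be worth checking first that the .ko actually contains debug sections (a quick sanity check, not strictly part of this workflow):
# list ELF sections and look for .debug_info / .debug_line
readelf -S /lib/modules/4.19.0-8-amd64/updates/dkms/douane.ko | grep debug_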
I wrote a docker-compose.yml which works on my local Ubuntu 18.04 system. I ran the same YAML on my Ubuntu Server 18.04 LTS machine and got the error message in the title. I tried to increase TasksMax for the process, but it didn't work.
I also set DefaultTasksMax=infinity in /etc/systemd/system.conf.
Some outputs:
systemctl show -p TasksMax docker:
TasksMax=infinity
systemctl status docker | grep Tasks:
└─50-TasksMax.conf (this line does not show up on my local machine)
Tasks: 53
This leads me to believe that I still haven't got the configuration right.
Here is the Docker output when executing docker-compose up -d:
ERROR: for wordpress-cert Cannot start service wordpress-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f5cd5db7e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [runnable, locked to thread]:\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Register(0xc0005e3b00, 0x55922583c640, 0xc000760b90, 0x0, 0x0)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:298 +0x1e6\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0005e3b00, 0xc00075d000, 0x1, 0x1)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:403 +0x78\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:152\\\\\\\\ngithub.com/docker/docker/vendor/github.com/docker/go-metrics.Register(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/docker/go-metrics/register.go:8\\\\\\\\ngithub.com/docker/docker/builder/dockerfile.init.0()\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/builder/dockerfile/metrics.go:43 +0x2d3\\\\\\\\n\\\\\\\\ngoroutine 20 [syscall]:\\\\\\\\nos/signal.signal_recv(0x559222e8e556)\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/sigqueue.go:139 +0x9e\\\\\\\\nos/signal.loop()\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:23 +0x24\\\\\\\\ncreated by os/signal.init.0\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:29 +0x43\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f5cd6165840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffe59e1f6c0\\\\\\\\nrbp 0x559224b7603d\\\\\\\\nrsp 0x7ffe59e1f6c0\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7ffe59e1f6c0\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x5592271bd1f0\\\\\\\\nr13 0x11\\\\\\\\nr14 0x559224b1c640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f5cd5db7Creating nextcloud-cert ... error
unknown'
ERROR: for nextcloud-cert Cannot start service nextcloud-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f2376cbce97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...] 00007fff898f4b40 0000564eca7254a3 <runtime.persistentalloc1+515> [...] \\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...] \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x564eca774450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x564eca747cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x564eca776551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f237706a840\\\\\\\\nrcx 0xffffffffffffffff\\\Creating wordpress-it-mueller ... error
rsp 0x7fff898f4ad0\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7fff898f4ad0\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x564ed04ff990\\\\\\\\nr13 0x11\\\\\\\\nr14 0x564ecc404640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f2376cbce97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs Creating wordpress-cert-it-mueller ... done
ERROR: for wordpress-it-mueller Cannot start service wordpress-it-mueller: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f3d844b2e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x55a7e9792450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x55a7e9765cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x55a7e9794551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f3d84860840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffeff965720\\\\\\\\nrbp 0x55a7eb47c03d\\\\\\\\nrsp 0x7ffeff965720\\\\\\\\nr8 0x0\\\\\\\\Creating nextcloud ... done
0x11\\\\\\\\nr14 0x55a7eb422640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f3d844b2e97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: for wordpress-cert Cannot start service wordpress-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f5cd5db7e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...]ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [runnable, locked to thread]:\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Register(0xc0005e3b00, 0x55922583c640, 0xc000760b90, 0x0, 0x0)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:298 +0x1e6\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0005e3b00, 0xc00075d000, 0x1, 0x1)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:403 +0x78\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:152\\\\\\\\ngithub.com/docker/docker/vendor/github.com/docker/go-metrics.Register(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/docker/go-metrics/register.go:8\\\\\\\\ngithub.com/docker/docker/builder/dockerfile.init.0()\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/builder/dockerfile/metrics.go:43 +0x2d3\\\\\\\\n\\\\\\\\ngoroutine 20 [syscall]:\\\\\\\\nos/signal.signal_recv(0x559222e8e556)\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/sigqueue.go:139 +0x9e\\\\\\\\nos/signal.loop()\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:23 +0x24\\\\\\\\ncreated by os/signal.init.0\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:29 +0x43\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f5cd6165840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffe59e1f6c0\\\\\\\\nrbp 0x559224b7603d\\\\\\\\nrsp 0x7ffe59e1f6c0\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7ffe59e1f6c0\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x5592271bd1f0\\\\\\\\nr13 0x11\\\\\\\\nr14 0x559224b1c640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f5cd5db7e97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: for nextcloud-cert Cannot start service nextcloud-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f2376cbce97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...]00007fff898f4ae8 00000000ffffffff \\\\\\\\n00007fff898f4af0: 00007fff898f4b40 0000564eca7254a3 <runtime.persistentalloc1+515> \\\\\\\\n00007fff898f4b00: 0000564ece51d0f0 0000000000000000 [...] \\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...] 00007fff898f4ae8 00000000ffffffff \\\\\\\\n00007fff898f4af0: 00007fff898f4b40 0000564eca7254a3 <runtime.persistentalloc1+515> \\\\\\\\n00007fff898f4b00: 0000564ece51d0f0 0000000000000000 [...] \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x564eca774450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x564eca747cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x564eca776551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f237706a840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7fff898f4ad0\\\\\\\\nrbp 0x564ecc45e03d\\\\\\\\nrsp 0x7fff898f4ad0\\\\\\\\nr8 [...] 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: for wordpress-it-mueller Cannot start service wordpress-it-mueller: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f3d844b2e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620:[...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x55a7e9792450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x55a7e9765cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x55a7e9794551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f3d84860840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffeff965720\\\\\\\\nrbp 0x55a7eb47c03d\\\\\\\\nrsp 0x7ffeff965720\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7ffeff965720\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x55a7ee582990\\\\\\\\nr13 0x11\\\\\\\\nr14 0x55a7eb422640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f3d844b2e97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: Encountered errors while bringing up the project.
Any ideas what I might be doing wrong are very appreciated!
I also had this problem on a V-Server from Strato.
It seems they have since removed the hard limit and now allow changing the task limit:
https://www.strato.de/faq/server/prozesse-vs-threads-bei-linux-v-servern/
To solve the error, I just added the following line to /etc/systemd/system.conf and restarted:
DefaultTasksMax=1000
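If you want to avoid a full reboot, re-executing the systemd manager should also pick up the new value (my assumption; a reboot definitely works):
# re-execute systemd so it rereads /etc/systemd/system.conf
sudo systemctl daemon-reexec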
The problem was that the server provider Strato imposes a hard process limit on their virtual servers. I had configured everything correctly, but I did not expect such a restriction. It cost me so much time... You should not go cheap with your provider; I have learned my lesson.
One solution:
Uncomment and modify UserTasksMax in /etc/systemd/logind.conf.
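For example (the value here is illustrative; pick whatever fits your workload):
# /etc/systemd/logind.conf
UserTasksMax=infinity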
then:
sudo systemctl daemon-reload
sudo systemctl daemon-reexec
sudo service systemd-logind restart
I had the same issue with grafana-agent, and adding the security_opt option fixed it:
version: "3"
services:
agent:
image: grafana/agent:latest
security_opt:
- seccomp:unconfined
volumes:
- ./agent/config:/etc/agent-config
entrypoint:
- /bin/agent
- -config.file=/etc/agent-config/agent.yaml
- -metrics.wal-directory=/tmp/agent/wal
- -enable-features=integrations-next
- -config.expand-env
- -config.enable-read-api
ports:
- "12345:12345"
I am trying to get Docker Desktop for Mac up and running on my Mac under macOS Mojave 10.14.5. After installing, upon running
docker -v
I receive the following error:
failed MSpanList_Insert 0x743000 0x1d03f7990b2f 0x0
fatal error: MSpanList_Insert
I have tried:
Rebooting the system.
Uninstalling and reinstalling docker and rebooting.
Checking to see if I have go installed (I did not). See this article
Installing go (go1.12.6 darwin/amd64) using brew.
Uninstalling and reinstalling go.
More detail from the error is below:
failed MSpanList_Insert 0x743000 0x1d03f7990b2f 0x0
fatal error: MSpanList_Insert
runtime stack:
runtime.throw(0x69460b)
/usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff240 sp=0x7ffeefbff210
runtime.MSpanList_Insert(0x6bd368, 0x743000)
/usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff268 sp=0x7ffeefbff240
MHeap_FreeSpanLocked(0x6b9f60, 0x743000, 0x100)
/usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff2a8 sp=0x7ffeefbff268
MHeap_Grow(0x6b9f60, 0x8, 0x0)
/usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff2e8 sp=0x7ffeefbff2a8
MHeap_AllocSpanLocked(0x6b9f60, 0x1, 0x0)
/usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff328 sp=0x7ffeefbff2e8
mheap_alloc(0x6b9f60, 0x1, 0x12, 0x0)
/usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff350 sp=0x7ffeefbff328
....