When I run cargo install cargo-generate, I get the error: failed to fetch https://github.com/rust-lang/crates.io-index
While the command is running, I first get a warning: spurious network error (2 tries remaining): [28] Timeout was reached (Connection timeout after 30004 ms); class=Net (12)
and then it fails with this error:
cargo install micro-http
Updating crates.io index
warning: spurious network error (2 tries remaining): [28] Timeout was reached (Connection timeout after 30004 ms); class=Net (12)
warning: spurious network error (1 tries remaining): [28] Timeout was reached (Connection timeout after 30001 ms); class=Net (12)
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
network failure seems to have happened
if a proxy or similar is necessary `net.git-fetch-with-cli` may help here
https://doc.rust-lang.org/cargo/reference/config.html#netgit-fetch-with-cli
Caused by:
[28] Timeout was reached (Connection timeout after 30004 ms); class=Net (12)
I can visit github.com in a browser, and I have updated rustup to the newest version, but none of these methods work. Who can help me fix this?
On my M1 Mac I can install cargo-generate without problems, so I would like to know what I am running into here. Thank you very much.
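The net.git-fetch-with-cli hint in the error output refers to a cargo configuration setting; a minimal sketch of what that config looks like (assuming the user-level config file at ~/.cargo/config.toml, per the cargo reference linked in the error):

```toml
# ~/.cargo/config.toml
# Make cargo invoke the system `git` binary for fetches instead of its
# built-in libgit2 client; the git CLI honors git's own proxy settings
# and is often more robust behind proxies.
[net]
git-fetch-with-cli = true
```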
I have solved my problem.
git config --list
Looking through my git config with the command above, I found that I had set a proxy for git, like this:
http.proxy=xxxx.xxx
I ran git config --global --unset http.proxy to remove the proxy,
and that fixed my problem.
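For anyone else debugging the same thing, the check-and-remove steps can be sketched as follows (http.proxy / https.proxy are the usual key names; adjust to whatever your git config --list actually shows):

```shell
# Show any proxy entries git has configured; cargo's git fetch uses these too.
git config --list | grep -i proxy || echo "no proxy configured"

# Remove the global proxy entries if present ("|| true" keeps the commands
# from failing when a key is not set).
git config --global --unset http.proxy  || true
git config --global --unset https.proxy || true

# Verify the proxy is gone.
git config --get http.proxy || echo "http.proxy is unset"
```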
I'm trying to set up TensorFlow to use GPU acceleration with WSL 2 running Ubuntu 20.04. I'm following this tutorial and am running into the error seen here. However, when I follow the solution there and try to start docker with sudo service docker start, I'm told docker is an unrecognized service. Considering I can access the help menu and whatnot, I know docker is installed. While I can get docker to work with the Desktop tool, it doesn't support CUDA (as mentioned in the SO post from earlier), so that's not very helpful. It's not really giving me error logs or anything, so please ask if you need more details.
Edit:
Considering the lack of details, here is a list of solutions I've tried, to no avail: 1 2 3
Update: I used sudo dockerd to get the daemon started and tried running the nvidia benchmark container, only to be met with:
INFO[2020-07-18T21:04:05.875283800-04:00] shim containerd-shim started address=/containerd-shim/021834ef5e5600bdf62a6a9e26dff7ffc1c76dd4ec9dadb9c1fcafb6c88b6e1b.sock debug=false pid=1960
INFO[2020-07-18T21:04:05.899420200-04:00] shim reaped id=70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736
ERRO[2020-07-18T21:04:05.909710600-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:05.909753500-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:06.001006700-04:00] 70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736 cleanup: failed to delete container from containerd: no such container
ERRO[2020-07-18T21:04:06.001045100-04:00] Handler for POST /v1.40/containers/70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled
Update 2: After installing the Windows Insider build and making everything as up to date as possible, I encountered a different error.
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Error: only 0 Devices available, 1 requested. Exiting.
I have a GTX 970, so I'm not sure why no device is found. Running sudo lshw -C display confirmed that my graphics card isn't being detected. I got:
*-display UNCLAIMED
description: 3D controller
product: Microsoft Corporation
vendor: Microsoft Corporation
physical id: 4
bus info: pci@941e:00:00.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: bus_master cap_list
configuration: latency=0
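A few sanity checks that narrow down where the CUDA-on-WSL stack is breaking (the nvidia-smi.exe path below is the usual Windows-side install location and may differ on your machine):

```shell
# /dev/dxg is the GPU paravirtualization device WSL 2 exposes; if it is
# missing, the WSL kernel / Windows build is too old for GPU passthrough.
ls -l /dev/dxg || echo "/dev/dxg missing: WSL kernel or Windows build too old for GPU support"

# The Windows-side NVIDIA driver should be reachable from inside WSL.
/mnt/c/Windows/System32/nvidia-smi.exe || echo "Windows NVIDIA driver not reachable from WSL"

# CUDA on WSL also requires a sufficiently new WSL 2 kernel.
uname -r
```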
Nebula Graph fails to start on CentOS 6.5; the error messages are as follows:
# storage log
Heartbeat failed, status:RPC failure in MetaClient: N6apache6thrift9transport19TTransportExceptionE: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused): Connection refused
# meta log
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0415 22:32:38.944437 15532 AsyncServerSocket.cpp:762] failed to set SO_REUSEPORT on async server socket Protocol not available
E0415 22:32:38.945001 15510 ThriftServer.cpp:440] Got an exception while setting up the server: 92failed to bind to async server socket: [::]:0: Protocol not available
E0415 22:32:38.945057 15510 RaftexService.cpp:90] Setup the Raftex Service failed, error: 92failed to bind to async server socket: [::]:0: Protocol not available
E0415 22:32:38.949586 15463 NebulaStore.cpp:47] Start the raft service failed
E0415 22:32:38.949597 15463 MetaDaemon.cpp:88] Nebula store init failed
E0415 22:32:38.949796 15463 MetaDaemon.cpp:215] Init kv failed!
Nebula service status is as follows:
[root@redhat6 scripts]# ./nebula.service status all
[WARN] The maximum files allowed to open might be too few: 1024
[INFO] nebula-metad: Exited
[INFO] nebula-graphd: Exited
[INFO] nebula-storaged: Running as 15547, Listening on 44500
Reason for the error: the CentOS 6.5 kernel is version 2.6.32, which is older than 3.9, but SO_REUSEPORT is only supported on Linux 3.9 and above.
Upgrading the system to CentOS 7.5 solves the problem.
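The kernel requirement can be checked up front before installing; a small sketch:

```shell
# SO_REUSEPORT was added in Linux 3.9; warn on older kernels.
required=3.9
current=$(uname -r | cut -d- -f1)   # strip suffixes like "-generic"

# sort -V orders version strings; if the required version sorts first
# (or equal), the running kernel is new enough.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current supports SO_REUSEPORT"
else
    echo "kernel $current is older than $required: upgrade the OS (e.g. CentOS 7.x) first"
fi
```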
I wrote a docker-compose.yml which works on my local Ubuntu 18.04 system. I ran the same file on my Ubuntu Server 18.04 LTS machine and got the error message in the title. I tried to increase TasksMax for the process, but it didn't work.
I also set DefaultTasksMax=infinity in /etc/systemd/system.conf.
Some outputs:
systemctl show -p TasksMax docker:
TasksMax=infinity
systemctl status docker | grep Tasks:
└─50-TasksMax.conf (this line does not show up on my local machine)
Tasks: 53
This leads me to believe that I still haven't got the configuration right.
Here is the Docker output when executing docker-compose up -d:
ERROR: for wordpress-cert Cannot start service wordpress-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f5cd5db7e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [runnable, locked to thread]:\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Register(0xc0005e3b00, 0x55922583c640, 0xc000760b90, 0x0, 0x0)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:298 +0x1e6\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0005e3b00, 0xc00075d000, 0x1, 0x1)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:403 
+0x78\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:152\\\\\\\\ngithub.com/docker/docker/vendor/github.com/docker/go-metrics.Register(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/docker/go-metrics/register.go:8\\\\\\\\ngithub.com/docker/docker/builder/dockerfile.init.0()\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/builder/dockerfile/metrics.go:43 +0x2d3\\\\\\\\n\\\\\\\\ngoroutine 20 [syscall]:\\\\\\\\nos/signal.signal_recv(0x559222e8e556)\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/sigqueue.go:139 +0x9e\\\\\\\\nos/signal.loop()\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:23 +0x24\\\\\\\\ncreated by os/signal.init.0\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:29 +0x43\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f5cd6165840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffe59e1f6c0\\\\\\\\nrbp 0x559224b7603d\\\\\\\\nrsp 0x7ffe59e1f6c0\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7ffe59e1f6c0\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x5592271bd1f0\\\\\\\\nr13 0x11\\\\\\\\nr14 0x559224b1c640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f5cd5db7Creating nextcloud-cert ... error
unknown'
ERROR: for nextcloud-cert Cannot start service nextcloud-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f2376cbce97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...] 00007fff898f4b40 0000564eca7254a3 <runtime.persistentalloc1+515> [...] \\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...] \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x564eca774450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x564eca747cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x564eca776551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f237706a840\\\\\\\\nrcx 0xffffffffffffffff\\\Creating wordpress-it-mueller ... error
rsp 0x7fff898f4ad0\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7fff898f4ad0\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x564ed04ff990\\\\\\\\nr13 0x11\\\\\\\\nr14 0x564ecc404640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f2376cbce97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs Creating wordpress-cert-it-mueller ... done
ERROR: for wordpress-it-mueller Cannot start service wordpress-it-mueller: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f3d844b2e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x55a7e9792450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x55a7e9765cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x55a7e9794551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f3d84860840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffeff965720\\\\\\\\nrbp 0x55a7eb47c03d\\\\\\\\nrsp 0x7ffeff965720\\\\\\\\nr8 0x0\\\\\\\\Creating nextcloud ... done
0x11\\\\\\\\nr14 0x55a7eb422640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f3d844b2e97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: for wordpress-cert Cannot start service wordpress-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f5cd5db7e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f5cd5db7e97\\\\\\\\nstack: frame={sp:0x7ffe59e1f6c0, fp:0x0} stack=[0x7ffe59620d50,0x7ffe59e1fd80)\\\\\\\\n00007ffe59e1f5c0: [...]ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [runnable, locked to thread]:\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Register(0xc0005e3b00, 0x55922583c640, 0xc000760b90, 0x0, 0x0)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:298 +0x1e6\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0005e3b00, 0xc00075d000, 0x1, 0x1)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:403 
+0x78\\\\\\\\ngithub.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/prometheus/client_golang/prometheus/registry.go:152\\\\\\\\ngithub.com/docker/docker/vendor/github.com/docker/go-metrics.Register(...)\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/vendor/github.com/docker/go-metrics/register.go:8\\\\\\\\ngithub.com/docker/docker/builder/dockerfile.init.0()\\\\\\\\n\\\\\\\\t/go/src/github.com/docker/docker/builder/dockerfile/metrics.go:43 +0x2d3\\\\\\\\n\\\\\\\\ngoroutine 20 [syscall]:\\\\\\\\nos/signal.signal_recv(0x559222e8e556)\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/sigqueue.go:139 +0x9e\\\\\\\\nos/signal.loop()\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:23 +0x24\\\\\\\\ncreated by os/signal.init.0\\\\\\\\n\\\\\\\\t/usr/local/go/src/os/signal/signal_unix.go:29 +0x43\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f5cd6165840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffe59e1f6c0\\\\\\\\nrbp 0x559224b7603d\\\\\\\\nrsp 0x7ffe59e1f6c0\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7ffe59e1f6c0\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x5592271bd1f0\\\\\\\\nr13 0x11\\\\\\\\nr14 0x559224b1c640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f5cd5db7e97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: for nextcloud-cert Cannot start service nextcloud-cert: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f2376cbce97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...]00007fff898f4ae8 00000000ffffffff \\\\\\\\n00007fff898f4af0: 00007fff898f4b40 0000564eca7254a3 <runtime.persistentalloc1+515> \\\\\\\\n00007fff898f4b00: 0000564ece51d0f0 0000000000000000 [...] \\\\\\\\nruntime: unknown pc 0x7f2376cbce97\\\\\\\\nstack: frame={sp:0x7fff898f4ad0, fp:0x0} stack=[0x7fff890f6070,0x7fff898f50a0)\\\\\\\\n00007fff898f49d0: [...] 00007fff898f4ae8 00000000ffffffff \\\\\\\\n00007fff898f4af0: 00007fff898f4b40 0000564eca7254a3 <runtime.persistentalloc1+515> \\\\\\\\n00007fff898f4b00: 0000564ece51d0f0 0000000000000000 [...] \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x564eca774450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x564eca747cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x564eca776551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f237706a840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7fff898f4ad0\\\\\\\\nrbp 0x564ecc45e03d\\\\\\\\nrsp 0x7fff898f4ad0\\\\\\\\nr8 [...] 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: for wordpress-it-mueller Cannot start service wordpress-it-mueller: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \\"process_linux.go:413: running prestart hook 0 caused \\\\\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable\\\\\\\\nSIGABRT: abort\\\\\\\\nPC=0x7f3d844b2e97 m=0 sigcode=18446744073709551610\\\\\\\\n\\\\\\\\ngoroutine 0 [idle]:\\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620:[...] ffffffffffffffff ffffffffffffffff \\\\\\\\nruntime: unknown pc 0x7f3d844b2e97\\\\\\\\nstack: frame={sp:0x7ffeff965720, fp:0x0} stack=[0x7ffeff166cc0,0x7ffeff965cf0)\\\\\\\\n00007ffeff965620: [...] ffffffffffffffff ffffffffffffffff \\\\\\\\n\\\\\\\\ngoroutine 1 [running]:\\\\\\\\nruntime.systemstack_switch()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00006e798 sp=0xc00006e790 pc=0x55a7e9792450\\\\\\\\nruntime.main()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/proc.go:130 +0x67 fp=0xc00006e7e0 sp=0xc00006e798 pc=0x55a7e9765cd7\\\\\\\\nruntime.goexit()\\\\\\\\n\\\\\\\\t/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x55a7e9794551\\\\\\\\n\\\\\\\\nrax 0x0\\\\\\\\nrbx 0x7f3d84860840\\\\\\\\nrcx 0xffffffffffffffff\\\\\\\\nrdx 0x0\\\\\\\\nrdi 0x2\\\\\\\\nrsi 0x7ffeff965720\\\\\\\\nrbp 0x55a7eb47c03d\\\\\\\\nrsp 0x7ffeff965720\\\\\\\\nr8 0x0\\\\\\\\nr9 0x7ffeff965720\\\\\\\\nr10 0x8\\\\\\\\nr11 0x246\\\\\\\\nr12 0x55a7ee582990\\\\\\\\nr13 0x11\\\\\\\\nr14 0x55a7eb422640\\\\\\\\nr15 0x0\\\\\\\\nrip 0x7f3d844b2e97\\\\\\\\nrflags 0x246\\\\\\\\ncs 0x33\\\\\\\\nfs 0x0\\\\\\\\ngs 0x0\\\\\\\\n\\\\\\"\\"": unknown'
ERROR: Encountered errors while bringing up the project.
Any ideas about what I might be doing wrong are very much appreciated!
I also had this problem on a v-server from Strato.
It seems they have since removed the hard limit and now allow changing the task limit:
https://www.strato.de/faq/server/prozesse-vs-threads-bei-linux-v-servern/
To solve the error I just added the following line to /etc/systemd/system.conf and restarted:
DefaultTasksMax=1000
The problem was that the server provider Strato had a hard process limit on their virtual servers. I had configured everything correctly, but I did not expect such a restriction. It cost me so much time... You should not go cheap with your provider; I have learned my lesson.
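A narrower alternative to the global DefaultTasksMax is a per-unit drop-in that raises the limit only for docker. This sketch writes the drop-in to a temporary path for illustration; on a real host the destination is /etc/systemd/system/docker.service.d/override.conf, followed by sudo systemctl daemon-reload && sudo systemctl restart docker:

```shell
# Illustrative staging location; on a real host write to
# /etc/systemd/system/docker.service.d/override.conf instead.
dropin="$(mktemp -d)/override.conf"

# The drop-in only needs the one directive in a [Service] section.
cat > "$dropin" <<'EOF'
[Service]
TasksMax=infinity
EOF

cat "$dropin"
```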
One solution: uncomment and modify UserTasksMax in /etc/systemd/logind.conf,
then:
sudo systemctl daemon-reload
sudo systemctl daemon-reexec
sudo service systemd-logind restart
I had the same issue with grafana-agent, and I added the security_opt option to fix it:
version: "3"
services:
  agent:
    image: grafana/agent:latest
    security_opt:
      - seccomp:unconfined
    volumes:
      - ./agent/config:/etc/agent-config
    entrypoint:
      - /bin/agent
      - -config.file=/etc/agent-config/agent.yaml
      - -metrics.wal-directory=/tmp/agent/wal
      - -enable-features=integrations-next
      - -config.expand-env
      - -config.enable-read-api
    ports:
      - "12345:12345"
I have a job that is failing at the very start with the message:
"@*" and "@N" are reserved sharding specs. Filepattern must not contain any of them.
I have altered the destination location to be something other than the default (an email address, which would include the @ symbol), but I can see that it is still using temporary destinations within that path, which I am unable to edit.
Did anyone experience this issue before? I've got a file which is only 65k rows long; I can preview all of the complete data in Data Prep, but when I run the job it fails, which is super tedious: ~3 hours of cleaning down the drain if this won't run. (I appreciate it's not designed for this, but Excel was being a mare, so it seemed like a good solution!)
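Since the reserved sharding specs both use the @ character ("@*", "@N"), any @ in the output path, such as one coming from an email-address-based default location, will trip this check. A hypothetical pre-flight helper (names are illustrative, not part of Dataflow) to catch it before submitting the job:

```shell
# Hypothetical helper: reject an output filepattern that contains '@',
# which Dataflow reserves for sharding specs like "@*" and "@N".
check_filepattern() {
    case "$1" in
        *'@'*) echo "rejected: '$1' contains '@' (reserved for sharding specs)"; return 1 ;;
        *)     echo "ok: $1"; return 0 ;;
    esac
}

check_filepattern "gs://my-bucket/exports/output"                    # accepted
check_filepattern "gs://my-bucket/user@example.com/output" || true   # rejected (non-zero return)
```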
Edit - Adding Logs:
2018-03-10 (13:47:34) Value "PTableLoadTransformGCS/Shuffle/GroupByKey/Session" materialized.
2018-03-10 (13:47:34) Executing operation PTableLoadTransformGCS/SumQuoteAndDelimiterCounts/GroupByKey/Read+PTableLoadTran...
2018-03-10 (13:47:38) Executing operation PTableLoadTransformGCS/Shuffle/GroupByKey/Close
2018-03-10 (13:47:38) Executing operation PTableStoreTransformGCS/WriteFiles/GroupUnwritten/Create
2018-03-10 (13:47:39) Value "PTableStoreTransformGCS/WriteFiles/GroupUnwritten/Session" materialized.
2018-03-10 (13:47:39) Executing operation PTableLoadTransformGCS/Shuffle/GroupByKey/Read+PTableLoadTransformGCS/Shuffle/Gr...
2018-03-10 (13:47:39) Executing failure step failure49
2018-03-10 (13:47:39) Workflow failed. Causes: (c759db2a23a80ea): "@*" and "@N" are reserved sharding specs. Filepattern m...
(c759db2a23a8c5b): Workflow failed. Causes: (c759db2a23a80ea): "@*" and "@N" are reserved sharding specs. Filepattern must not contain any of them.
2018-03-10 (13:47:39) Cleaning up.
2018-03-10 (13:47:39) Starting worker pool teardown.
2018-03-10 (13:47:39) Stopping worker pool...
And the Stackdriver entries at warning severity or higher:
W ACPI: RSDP 0x00000000000F23A0 000014 (v00 Google)
W ACPI: RSDT 0x00000000BFFF3430 000038 (v01 Google GOOGRSDT 00000001 GOOG 00000001)
W ACPI: FACP 0x00000000BFFFCF60 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
W ACPI: DSDT 0x00000000BFFF3470 0017B2 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
W ACPI: FACS 0x00000000BFFFCF00 000040
W ACPI: FACS 0x00000000BFFFCF00 000040
W ACPI: SSDT 0x00000000BFFF65F0 00690D (v01 Google GOOGSSDT 00000001 GOOG 00000001)
W ACPI: APIC 0x00000000BFFF5D10 00006E (v01 Google GOOGAPIC 00000001 GOOG 00000001)
W ACPI: WAET 0x00000000BFFF5CE0 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
W ACPI: SRAT 0x00000000BFFF4C30 0000B8 (v01 Google GOOGSRAT 00000001 GOOG 00000001)
W ACPI: 2 ACPI AML tables successfully acquired and loaded
W ACPI: Executed 2 blocks of module-level executable AML code
W acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
W ACPI: Enabled 16 GPEs in block 00 to 0F
W ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
W ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
W i8042: Warning: Keylock active
W GPT:Primary header thinks Alt. header is not at the end of the disk.
W GPT:41943039 != 524287999
W GPT:Alternate GPT header not at the end of the disk.
W GPT:41943039 != 524287999
W GPT: Use GNU Parted to correct GPT errors.
W device-mapper: verity: Argument 0: 'payload=PARTUUID=245B0EEC-6404-8744-AAF2-E8C6BF78D7B2'
W device-mapper: verity: Argument 1: 'hashtree=PARTUUID=245B0EEC-6404-8744-AAF2-E8C6BF78D7B2'
W device-mapper: verity: Argument 2: 'hashstart=2539520'
W device-mapper: verity: Argument 3: 'alg=sha1'
W device-mapper: verity: Argument 4: 'root_hexdigest=244007b512ddbf69792d485fdcbc3440531f1264'
W device-mapper: verity: Argument 5: 'salt=5bacc0df39d2a60191e9b221ffc962c55e251ead18cf1472bf8d3ed84383765b'
E EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
W [/usr/lib/tmpfiles.d/var.conf:12] Duplicate line for path "/var/run", ignoring.
W Could not stat /dev/pstore: No such file or directory
W Kernel does not support crash dumping
W Could not load the device policy file.
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/version.cycle for reading: No such file or directory
W No API client: no api servers specified
W Unable to update cni config: No networks found in /etc/cni/net.d
W unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
W No api server defined - no events will be sent to API server.
W Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
W Unable to update cni config: No networks found in /etc/cni/net.d
E Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
W No api server defined - no node status update will be sent.
E Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
E [ContainerManager]: Fail to get rootfs information unable to find data for container /
W Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
E Could not find capacity information for resource storage.kubernetes.io/scratch
W eviction manager: no observation found for eviction signal allocatableNodeFs.available
W Profiling Agent not found. Profiles will not be available from this worker.
E debconf: delaying package configuration, since apt-utils is not installed
W [WARNING:metrics_daemon.cc(598)] cannot read /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
E % Total % Received % Xferd Average Speed Time Time Time Current
E Dload Upload Total Spent Left Speed
E
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 3698 100 3698 0 0 64248 0 --:--:-- --:--:-- --:--:-- 64877