I was trying to practice AIO via the kernel AIO API. Here is some code:
#define _GNU_SOURCE /* syscall() not POSIX */
#define ALIGN_SIZE 4096
#define RD_WR_SIZE 1024
/* ... */
/* Align the buffer to the page size, as required for direct I/O */
posix_memalign(&buf, ALIGN_SIZE, RD_WR_SIZE);
fd = open("aio_test_file", O_RDWR | O_CREAT | O_DIRECT, 0644);
if (fd == -1) {
perror("open");
return -1;
}
/* ... */
But my open() call still fails:
open: Invalid argument
I have searched for that error. Some answers say that you must align the buffer when using direct I/O. Using the command:
$ sudo dumpe2fs /dev/sda1 | grep -i "block size"
I found that the block size is 4096. So why does the open() call still fail?
From the man page:
O_DIRECT support was added under Linux in kernel version 2.4.10. Older
Linux kernels simply ignore this flag. Some file systems may not
implement the flag and open() will fail with EINVAL if it is used.
That would be my first guess.
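One quick way to test that guess is to retry the open() without O_DIRECT when it fails with EINVAL: if the plain open() succeeds, it is the flag (i.e. the filesystem not supporting O_DIRECT, as with tmpfs) rather than your buffer alignment that is being rejected, since the alignment rules only apply to the later read()/write() calls. A minimal sketch, reusing the file name and flags from the question:
#define _GNU_SOURCE             /* for O_DIRECT */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* First try with O_DIRECT, exactly as in the question. */
    int fd = open("aio_test_file", O_RDWR | O_CREAT | O_DIRECT, 0644);

    if (fd == -1 && errno == EINVAL) {
        /* EINVAL at open() time usually means the filesystem does not
         * implement O_DIRECT; retry without the flag to confirm. */
        fd = open("aio_test_file", O_RDWR | O_CREAT, 0644);
        if (fd != -1)
            fprintf(stderr, "open() works without O_DIRECT; "
                            "the filesystem likely does not support it\n");
    }

    if (fd == -1) {
        perror("open");
        return 1;
    }
    /* ... continue with the AIO setup ... */
    return 0;
}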
Currently I'm trying to build a Yocto image for an SC20 from Quectel and I'm getting this error.
Yocto error
The file msm8990.inc has RECOVERYFS_SIZE_EXT4='268435456'.
How can I handle this type of issue?
Here is the complete log:
ERROR: machine-recovery-image-1.0-r0 do_makesystem: Function failed: do_makesystem (log file is located at /opt/poky/build/tmp-glibc/work/msm8909-oe-linux-gnueabi/machine-recovery-image/1.0-r0/temp/log.do_makesystem.3590122)
ERROR: Logfile of failure stored in: /opt/poky/build/tmp-glibc/work/msm8909-oe-linux-gnueabi/machine-recovery-image/1.0-r0/temp/log.do_makesystem.3590122
Log data follows:
| DEBUG: Executing shell function do_makesystem
| error: ext4_allocate_best_fit_partial: failed to allocate 74 blocks, out of space?
| fs_config_file: (none)
| No fs_config applied as no fsconfig.conf file passed using -C option.
| Creating filesystem with parameters:
| Size: 268435456
| Block size: 4096
| Blocks per group: 32768
| Inodes per group: 8192
| Inode size: 256
| Journal blocks: 1024
| Label:
| Blocks: 65536
| Block groups: 2
| Reserved block group size: 15
| WARNING: /opt/poky/build/tmp-glibc/work/msm8909-oe-linux-gnueabi/machine-recovery-image/1.0-r0/temp/run.do_makesystem.3590122:1 exit 1 from 'make_ext4fs -s -a / -S /opt/poky/build/tmp-glibc/work/msm8909-oe-linux-gnueabi/machine-recovery-image/1.0-r0/rootfs/etc/selinux/mls/contexts/files/file_contexts -l 268435456 /opt/poky/build/tmp-glibc/deploy/images/msm8909-quec-smart/msm8909-recoveryfs.ext4 /opt/poky/build/tmp-glibc/work/msm8909-oe-linux-gnueabi/machine-recovery-image/1.0-r0/rootfs'
| ERROR: Function failed: do_makesystem (log file is located at /opt/poky/build/tmp-glibc/work/msm8909-oe-linux-gnueabi/machine-recovery-image/1.0-r0/temp/log.do_makesystem.3590122)
ERROR: Task (/opt/poky/meta-qti-bsp/recipes-products/images/machine-recovery-image.bb:do_makesystem) failed with exit code '1'
NOTE: Tasks Summary: Attempted 7550 tasks of which 7547 didn't need to be rerun and 1 failed.
NOTE: Writing buildhistory
Summary: 1 task failed:
/opt/poky/meta-qti-bsp/recipes-products/images/machine-recovery-image.bb:do_makesystem
Summary: There were 5 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
/opt/poky/build
error: ext4_allocate_best_fit_partial: failed to allocate 74 blocks,
out of space?
The error is related to the size of the image: your root filesystem tree is bigger than the image size.
To solve the problem, you need to increase the root filesystem partition size:
make menuconfig -> Target Images -> Root filesystem partition size (in MB)
Just for fun, I'm attempting to build irixxxx's qemu-irix as a Docker image.
It builds successfully under Debian Buster (using a GCC 8 container). However, it fails to build under Alpine 3.10 (only when I select the IRIX targets), with the following errors (at the make stage):
CC util/oslib-posix.o
In file included from util/oslib-posix.c:39:
/usr/include/sys/signal.h:1:2: warning: #warning redirecting incorrect #include <sys/signal.h> to <signal.h> [-Wcpp]
#warning redirecting incorrect #include <sys/signal.h> to <signal.h>
^~~~~~~
CC util/qemu-openpty.o
util/qemu-openpty.c: In function 'qemu_openpty_raw':
util/qemu-openpty.c:123:9: warning: implicit declaration of function 'openpty'; did you mean 'openat'? [-Wimplicit-function-declaration]
if (openpty(&amaster, aslave, pty_buf, NULL, NULL) < 0) {
^~~~~~~
openat
util/qemu-openpty.c:123:9: warning: nested extern declaration of 'openpty' [-Wnested-externs]
(...)
CC irix-linux-user/linux-user/syscall.o
/qemu-irix/linux-user/syscall.c:6784:22: error: 'F_EXLCK' undeclared here (not in a function); did you mean 'F_RDLCK'?
TRANSTBL_CONVERT(F_EXLCK),
^~~~~~~
/qemu-irix/linux-user/syscall.c:6779:51: note: in definition of macro 'TRANSTBL_CONVERT'
#define TRANSTBL_CONVERT(a) { -1, TARGET_##a, -1, a }
^
/qemu-irix/linux-user/syscall.c:6785:22: error: 'F_SHLCK' undeclared here (not in a function); did you mean 'F_RDLCK'?
TRANSTBL_CONVERT(F_SHLCK),
^~~~~~~
/qemu-irix/linux-user/syscall.c:6779:51: note: in definition of macro 'TRANSTBL_CONVERT'
#define TRANSTBL_CONVERT(a) { -1, TARGET_##a, -1, a }
^
In file included from /qemu-irix/linux-user/syscall.c:121:
/qemu-irix/linux-user/syscall.c: In function 'target_to_host_sigevent':
/qemu-irix/linux-user/syscall.c:7422:27: error: 'struct sigevent' has no member named '_sigev_un'; did you mean 'sigev_value'?
__get_user(host_sevp->_sigev_un._tid, &target_sevp->_sigev_un._tid);
^~~~~~~~~
/qemu-irix/linux-user/qemu.h:501:5: note: in definition of macro '__get_user_e'
((x) = (typeof(*hptr))( \
^
/qemu-irix/linux-user/syscall.c:7422:5: note: in expansion of macro '__get_user'
__get_user(host_sevp->_sigev_un._tid, &target_sevp->_sigev_un._tid);
^~~~~~~~~~
/qemu-irix/linux-user/syscall.c:7422:36: error: '(const bitmask_transtbl *)&<erroneous-expression>' is a pointer; did you mean to use '->'?
__get_user(host_sevp->_sigev_un._tid, &target_sevp->_sigev_un._tid);
^
/qemu-irix/linux-user/qemu.h:501:5: note: in definition of macro '__get_user_e'
((x) = (typeof(*hptr))( \
^
/qemu-irix/linux-user/syscall.c:7422:5: note: in expansion of macro '__get_user'
__get_user(host_sevp->_sigev_un._tid, &target_sevp->_sigev_un._tid);
^~~~~~~~~~
/qemu-irix/linux-user/qemu.h:506:13: warning: left-hand operand of comma expression has no effect [-Wunused-value]
(hptr)), (void)0)
^
/qemu-irix/linux-user/qemu.h:510:31: note: in expansion of macro '__get_user_e'
# define __get_user(x, hptr) __get_user_e(x, hptr, be)
^~~~~~~~~~~~
/qemu-irix/linux-user/syscall.c:7422:5: note: in expansion of macro '__get_user'
__get_user(host_sevp->_sigev_un._tid, &target_sevp->_sigev_un._tid);
^~~~~~~~~~
/qemu-irix/linux-user/syscall.c: In function 'do_syscall':
/qemu-irix/linux-user/syscall.c:13545:25: warning: implicit declaration of function 'sethostid'; did you mean 'gethostid'? [-Wimplicit-function-declaration]
ret = get_errno(sethostid(arg1));
^~~~~~~~~
gethostid
/qemu-irix/linux-user/syscall.c:13545:25: warning: nested extern declaration of 'sethostid' [-Wnested-externs]
make[1]: *** [/qemu-irix/rules.mak:66: linux-user/syscall.o] Error 1
make: *** [Makefile:472: subdir-irix-linux-user] Error 2
The command '/bin/sh -c make && DESTDIR=/tmp/qemu make install' returned a non-zero code: 2
I researched the differences between Debian's and Alpine's GCC (in fact, glibc vs musl) for similar failures, but I haven't found any information about how to solve this without modifying the code.
So, where can I find more information about this issue, and which packages might solve it? Thanks.
Dockerfiles and full log here. Docker container image (Debian Buster) here.
The difference seems to be caused by the underlying libc implementations: Debian uses glibc, while Alpine uses musl libc.
While GNU libc is the de facto standard libc implementation on Linux, musl libc is used by a handful of distributions, such as Alpine Linux and Void Linux. musl is a minimalist, strictly POSIX libc implementation and is generally not compatible with glibc. Usually, software projects have to be ported to musl libc to be supported on Alpine, especially non-trivial applications.
The compilation of syscall.c breaks in several places, the first being:
/qemu-irix/linux-user/syscall.c:6784:22: error: 'F_EXLCK' undeclared here (not in a function); did you mean 'F_RDLCK'?
TRANSTBL_CONVERT(F_EXLCK)
The F_EXLCK macro is not defined in musl libc's fcntl.h. However, it can easily be patched by defining it manually, for example with make CFLAGS='"-DF_EXLCK=4"'. This is how qemu is patched for musl for non-IRIX targets (patch link).
However, there are more undefined macros down the road, such as __SIGRTMIN and __SIGRTMAX, the macro TRANSTBL_CONVERT, and probably others. Patching them ad hoc may not be enough, so it appears that the qemu-irix project would have to be properly ported to Alpine and musl libc.
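As an illustration of the ad-hoc approach (a sketch only, not a proper port), the two missing lock constants could be supplied from a small compatibility header passed to the compiler, e.g. with -include; the values below are the ones glibc uses, and the header name is just an example:
/* glibc-compat.h - example shim when building against musl
 * (e.g. add -include glibc-compat.h to CFLAGS) */
#ifndef GLIBC_COMPAT_H
#define GLIBC_COMPAT_H

#include <fcntl.h>

/* Lock constants that glibc defines but musl does not. */
#ifndef F_EXLCK
#define F_EXLCK 4
#endif
#ifndef F_SHLCK
#define F_SHLCK 8
#endif

#endif /* GLIBC_COMPAT_H */
As noted above, this only papers over the fcntl errors; the later failures (__SIGRTMIN, the struct sigevent internals) need real source changes rather than one-line defines.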
If you're willing, you could try following existing qemu musl patches, and attempt to patch it yourself for qemu-irix:
- https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04773.html
- https://github.com/NixOS/nixpkgs/pull/46449/files
Off-topic comment: use make -j to build in parallel; it will finish much faster.
I have used spiffsimg to create a single image file containing multiple Lua files:
# ./spiffsimg -f lua.img -c 262144 -r lua.script
f 4227 init.lua
f 413 cfg.lua
f 2233 setupWifi.lua
f 7498 configServer.lua
f 558 cfgForm.htm
f 4255 setupConfig.lua
f 14192 main.lua
#
I then use esptool.py to flash the NodeMCU firmware and the image containing the Lua files to the ESP8266 (NodeMCU dev kit):
c:\esptool-master>c:\Python27\python esptool.py -p COM7 write_flash -fs 32m -fm dio 0x00000 nodemcu-dev-9-modules-2016-07-18-12-06-36-integer.bin 0x78000 lua.img
esptool.py v1.0.2-dev
Connecting...
Running Cesanta flasher stub...
Flash params set to 0x0240
Writing 446464 @ 0x0... 446464 (100 %)
Wrote 446464 bytes at 0x0 in 38.9 seconds (91.9 kbit/s)...
Writing 262144 @ 0x78000... 262144 (100 %)
Wrote 262144 bytes at 0x78000 in 22.8 seconds (91.9 kbit/s)...
Leaving...
I then run ESPlorer to check the status and get:
PORT OPEN 115200
Communication with MCU..Got answer! AutoDetect firmware...
Can't autodetect firmware, because proper answer not received.
NodeMCU custom build by frightanic.com
branch: dev
commit: b21b3e08aad633ccfd5fd29066400a06bb699ae2
SSL: true
modules: file,gpio,http,net,node,rtctime,tmr,uart,wifi
build built on: 2016-07-18 12:05
powered by Lua 5.1.4 on SDK 1.5.4(baaeaebb)
lua: cannot open init.lua
>
----------------------------
No files found.
----------------------------
>
Total : 3455015 bytes
Used : 0 bytes
Remain: 3455015 bytes
The NodeMCU firmware flashed correctly, but the Lua files can't be located.
I have tried flashing to other locations (0x84000, 0x7c000), but I am just guessing at these locations based on reading threads on GitHub.
I used the NodeMCU file.fscfg() routine to get the flash address and size. If I only flash the NodeMCU firmware, I get the following:
print (file.fscfg())
524288 3653632
524288 is 0x80000, so I tried flashing only the spiffsimg file (lua.img) to 0x80000, then ran the same print statement and got:
print (file.fscfg())
786432 3391488
The flash address increased by exactly the number of bytes in lua.img, which I don't understand: why would the flash address change? Is the first number returned by file.fscfg not the starting flash address, but the ending flash address?
What is the correct address for flashing an image file, containing Lua files, that was created by spiffsimg?
The version of spiffsimg found here will provide the correct address for flashing an image file that contains Lua files.
Do not use this version of spiffsimg, as it is out of date.
To install the spiffsimg utility, you need to download the entire nodemcu-firmware package (into a Linux environment) and build it with make. Note: make on my Debian Linux box generated an error, but I was able to go to the ../tools/spiffsimg subdirectory and run make on the Makefile found in that directory to create the utility.
The spiffsimg instructions found here are quite clear, with one exception: the file name you specify with the -f parameter needs to include the characters %x. The %x will be replaced with the address at which the image file should be flashed.
For example, the command
spiffsimg -f %x-luaFiles.img -S 4MB -U 465783 -r lua.script
will create a file in the local directory with a name like 80000-luaFiles.img, which means you should flash that image file at address 0x80000 on the ESP8266.
I've never done that myself but I'm reasonably confident the correct answer can be extracted from the docs.
-f specifies the filename for the disk image. '%x' will be replaced
by the calculated offset of the file system.
And a bit further down:
The disk image file is placed into the bin directory and it is named
0x<offset>-<size>.bin where the offset is the location where it
should be flashed, and the size is the size of the flash part.
However, there's a slight mismatch between the two statements; we may have a bug in the docs. If "'%x' will be replaced..." holds, then I'd expect the final name not to contain 0x anymore.
Furthermore, it is possible to define a fixed SPIFFS location when you build the firmware:
#define SPIFFS_FIXED_LOCATION 0x100000
This specifies that the SPIFFS filesystem starts at 1Mb from the start of the flash. Unless
otherwise specified, it will run to the end of the flash (excluding
the 16k of space reserved by the SDK).
Here are my specs:
GTX 1070
Driver 367 (installed from .run)
Ubuntu 16.04
CUDA 8.0 (installed from .run)
Cudnn 5
Bazel 0.3.0 (potential problem?)
gcc 4.9.3
Tensorflow installed from source
To verify versions:
volcart@volcart-Precision-Tower-7910:~/$ nvidia-smi
Fri Aug 5 15:03:32 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.35 Driver Version: 367.35 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 0000:03:00.0 On | N/A |
| 0% 38C P8 11W / 185W | 495MiB / 8113MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 20303 G /usr/lib/xorg/Xorg 280MiB |
| 0 20909 G compiz 114MiB |
| 0 21562 G ...s-passed-by-fd --v8-snapshot-passed-by-fd 98MiB |
+-----------------------------------------------------------------------------+
volcart@volcart-Precision-Tower-7910:~/$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Wed_May__4_21:01:56_CDT_2016
Cuda compilation tools, release 8.0, V8.0.26
volcart@volcart-Precision-Tower-7910:~/$ bazel version
Build label: 0.3.0
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Fri Jun 10 11:38:23 2016 (1465558703)
Build timestamp: 1465558703
Build timestamp as int: 1465558703
volcart@volcart-Precision-Tower-7910:~/$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.9/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.9.3-13ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-4.9/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.9 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.9 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.9-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.9.3 (Ubuntu 4.9.3-13ubuntu2)
I did switch bazel versions, so I executed bazel clean successfully.
I can verify CUDA is functional via ~/NVIDIA_CUDA-8.0_Samples/1_Utilities/deviceQuery:
volcart@volcart-Precision-Tower-7910:~/NVIDIA_CUDA-8.0_Samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1070"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8113 MBytes (8507162624 bytes)
(15) Multiprocessors, (128) CUDA Cores/MP: 1920 CUDA Cores
GPU Max Clock rate: 1797 MHz (1.80 GHz)
Memory Clock rate: 4004 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 3 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1070
Result = PASS
When I run ./configure I accept all the defaults.
The current errors:
When I build the training example I get this:
volcart@volcart-Precision-Tower-7910:/usr/local/lib/python2.7/dist-packages/tensorflow$ sudo bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
Sending SIGTERM to previous Bazel server (pid=7108)... done.
.
INFO: Found 1 target...
...
./tensorflow/core/platform/default/logging.h: In instantiation of 'std::string* tensorflow::internal::Check_LTImpl(const T1&, const T2&, const char*) [with T1 = int; T2 = long unsigned int; std::string = std::basic_string<char>]':
tensorflow/core/common_runtime/gpu/gpu_device.cc:567:5: required from here
./tensorflow/core/platform/default/logging.h:197:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
TF_DEFINE_CHECK_OP_IMPL(Check_LT, < )
^
./tensorflow/core/platform/macros.h:54:29: note: in definition of macro 'TF_PREDICT_TRUE'
#define TF_PREDICT_TRUE(x) (x)
^
./tensorflow/core/platform/default/logging.h:197:1: note: in expansion of macro 'TF_DEFINE_CHECK_OP_IMPL'
TF_DEFINE_CHECK_OP_IMPL(Check_LT, < )
^
ERROR: /usr/local/lib/python2.7/dist-packages/tensorflow/tensorflow/cc/BUILD:199:1: Linking of rule '//tensorflow/cc:tutorials_example_trainer' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -o bazel-out/local_linux-opt/bin/tensorflow/cc/tutorials_example_trainer ... (remaining 805 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
bazel-out/local_linux-opt/bin/tensorflow/cc/_objs/tutorials_example_trainer/tensorflow/cc/tutorials/example_trainer.o: In function `tensorflow::example::ConcurrentSteps(tensorflow::example::Options const*, int)':
example_trainer.cc:(.text._ZN10tensorflow7example15ConcurrentStepsEPKNS0_7OptionsEi+0x517): undefined reference to `google::protobuf::internal::empty_string_'
bazel-out/local_linux-opt/bin/tensorflow/core/kernels/libidentity_reader_op.lo(identity_reader_op.o): In function `tensorflow::IdentityReader::SerializeStateLocked(std::string*)':
identity_reader_op.cc:(.text._ZN10tensorflow14IdentityReader20SerializeStateLockedEPSs[_ZN10tensorflow14IdentityReader20SerializeStateLockedEPSs]+0x36): undefined reference to `google::protobuf::MessageLite::SerializeToString(std::string*) const'
bazel-out/local_linux-opt/bin/tensorflow/core/kernels/libwhole_file_read_ops.lo(whole_file_read_ops.o): In function `tensorflow::WholeFileReader::SerializeStateLocked(std::string*)':
And when I try to build the pip package I get this:
volcart@volcart-Precision-Tower-7910:/usr/local/lib/python2.7/dist-packages/tensorflow$ sudo bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
WARNING: /usr/local/lib/python2.7/dist-packages/tensorflow/util/python/BUILD:11:16: in includes attribute of cc_library rule //util/python:python_headers: 'python_include' resolves to 'util/python/python_include' not in 'third_party'. This will be an error in the future.
WARNING: /home/volcart/.cache/bazel/_bazel_root/109ad80a732aaece8a87d1e3693889e7/external/gemmlowp/BUILD:102:12: in hdrs attribute of cc_library rule @gemmlowp//:eight_bit_int_gemm: Artifact 'external/gemmlowp/public/bit_depth.h' is duplicated (through '@gemmlowp//:eight_bit_int_gemm_public_headers' and '@gemmlowp//:gemmlowp_headers').
WARNING: /home/volcart/.cache/bazel/_bazel_root/109ad80a732aaece8a87d1e3693889e7/external/gemmlowp/BUILD:102:12: in hdrs attribute of cc_library rule @gemmlowp//:eight_bit_int_gemm: Artifact 'external/gemmlowp/public/gemmlowp.h' is duplicated (through '@gemmlowp//:eight_bit_int_gemm_public_headers' and '@gemmlowp//:gemmlowp_headers').
WARNING: /home/volcart/.cache/bazel/_bazel_root/109ad80a732aaece8a87d1e3693889e7/external/gemmlowp/BUILD:102:12: in hdrs attribute of cc_library rule @gemmlowp//:eight_bit_int_gemm: Artifact 'external/gemmlowp/public/map.h' is duplicated (through '@gemmlowp//:eight_bit_int_gemm_public_headers' and '@gemmlowp//:gemmlowp_headers').
WARNING: /home/volcart/.cache/bazel/_bazel_root/109ad80a732aaece8a87d1e3693889e7/external/gemmlowp/BUILD:102:12: in hdrs attribute of cc_library rule @gemmlowp//:eight_bit_int_gemm: Artifact 'external/gemmlowp/public/output_stages.h' is duplicated (through '@gemmlowp//:eight_bit_int_gemm_public_headers' and '@gemmlowp//:gemmlowp_headers').
WARNING: /home/volcart/.cache/bazel/_bazel_root/109ad80a732aaece8a87d1e3693889e7/external/gemmlowp/BUILD:102:12: in hdrs attribute of cc_library rule @gemmlowp//:eight_bit_int_gemm: Artifact 'external/gemmlowp/profiling/instrumentation.h' is duplicated (through '@gemmlowp//:eight_bit_int_gemm_public_headers' and '@gemmlowp//:gemmlowp_headers').
WARNING: /home/volcart/.cache/bazel/_bazel_root/109ad80a732aaece8a87d1e3693889e7/external/gemmlowp/BUILD:102:12: in hdrs attribute of cc_library rule @gemmlowp//:eight_bit_int_gemm: Artifact 'external/gemmlowp/profiling/profiler.h' is duplicated (through '@gemmlowp//:eight_bit_int_gemm_public_headers' and '@gemmlowp//:gemmlowp_headers').
INFO: Found 1 target...
INFO: From Compiling external/protobuf/src/google/protobuf/util/internal/utility.cc [for host]:
...
INFO: From Compiling tensorflow/core/distributed_runtime/tensor_coding.cc:
tensorflow/core/distributed_runtime/tensor_coding.cc: In member function 'bool tensorflow::TensorResponse::ParseTensorSubmessage(google::protobuf::io::CodedInputStream*, tensorflow::TensorProto*)':
tensorflow/core/distributed_runtime/tensor_coding.cc:123:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (num_bytes != buf.size()) return false;
^
ERROR: /usr/local/lib/python2.7/dist-packages/tensorflow/tensorflow/core/kernels/BUILD:1498:1: undeclared inclusion(s) in rule '//tensorflow/core/kernels:batchtospace_op_gpu':
this rule is missing dependency declarations for the following files included by 'tensorflow/core/kernels/batchtospace_op_gpu.cu.cc':
'/usr/local/cuda-8.0/include/cuda_runtime.h'
'/usr/local/cuda-8.0/include/host_config.h'
'/usr/local/cuda-8.0/include/builtin_types.h'
'/usr/local/cuda-8.0/include/device_types.h'
'/usr/local/cuda-8.0/include/host_defines.h'
'/usr/local/cuda-8.0/include/driver_types.h'
'/usr/local/cuda-8.0/include/surface_types.h'
'/usr/local/cuda-8.0/include/texture_types.h'
'/usr/local/cuda-8.0/include/vector_types.h'
'/usr/local/cuda-8.0/include/library_types.h'
'/usr/local/cuda-8.0/include/channel_descriptor.h'
'/usr/local/cuda-8.0/include/cuda_runtime_api.h'
'/usr/local/cuda-8.0/include/cuda_device_runtime_api.h'
'/usr/local/cuda-8.0/include/driver_functions.h'
'/usr/local/cuda-8.0/include/vector_functions.h'
'/usr/local/cuda-8.0/include/vector_functions.hpp'
'/usr/local/cuda-8.0/include/common_functions.h'
'/usr/local/cuda-8.0/include/math_functions.h'
'/usr/local/cuda-8.0/include/math_functions.hpp'
'/usr/local/cuda-8.0/include/math_functions_dbl_ptx3.h'
'/usr/local/cuda-8.0/include/math_functions_dbl_ptx3.hpp'
'/usr/local/cuda-8.0/include/cuda_surface_types.h'
'/usr/local/cuda-8.0/include/cuda_texture_types.h'
'/usr/local/cuda-8.0/include/device_functions.h'
'/usr/local/cuda-8.0/include/device_functions.hpp'
'/usr/local/cuda-8.0/include/device_atomic_functions.h'
'/usr/local/cuda-8.0/include/device_atomic_functions.hpp'
'/usr/local/cuda-8.0/include/device_double_functions.h'
'/usr/local/cuda-8.0/include/device_double_functions.hpp'
'/usr/local/cuda-8.0/include/sm_20_atomic_functions.h'
'/usr/local/cuda-8.0/include/sm_20_atomic_functions.hpp'
'/usr/local/cuda-8.0/include/sm_32_atomic_functions.h'
'/usr/local/cuda-8.0/include/sm_32_atomic_functions.hpp'
'/usr/local/cuda-8.0/include/sm_35_atomic_functions.h'
'/usr/local/cuda-8.0/include/sm_60_atomic_functions.h'
'/usr/local/cuda-8.0/include/sm_60_atomic_functions.hpp'
'/usr/local/cuda-8.0/include/sm_20_intrinsics.h'
'/usr/local/cuda-8.0/include/sm_20_intrinsics.hpp'
'/usr/local/cuda-8.0/include/sm_30_intrinsics.h'
'/usr/local/cuda-8.0/include/sm_30_intrinsics.hpp'
'/usr/local/cuda-8.0/include/sm_32_intrinsics.h'
'/usr/local/cuda-8.0/include/sm_32_intrinsics.hpp'
'/usr/local/cuda-8.0/include/sm_35_intrinsics.h'
'/usr/local/cuda-8.0/include/surface_functions.h'
'/usr/local/cuda-8.0/include/texture_fetch_functions.h'
'/usr/local/cuda-8.0/include/texture_indirect_functions.h'
'/usr/local/cuda-8.0/include/surface_indirect_functions.h'
'/usr/local/cuda-8.0/include/device_launch_parameters.h'
'/usr/local/cuda-8.0/include/cuda_fp16.h'
'/usr/local/cuda-8.0/include/math_constants.h'
'/usr/local/cuda-8.0/include/curand_kernel.h'
'/usr/local/cuda-8.0/include/curand.h'
'/usr/local/cuda-8.0/include/curand_discrete.h'
'/usr/local/cuda-8.0/include/curand_precalc.h'
'/usr/local/cuda-8.0/include/curand_mrg32k3a.h'
'/usr/local/cuda-8.0/include/curand_mtgp32_kernel.h'
'/usr/local/cuda-8.0/include/cuda.h'
'/usr/local/cuda-8.0/include/curand_mtgp32.h'
'/usr/local/cuda-8.0/include/curand_philox4x32_x.h'
'/usr/local/cuda-8.0/include/curand_globals.h'
'/usr/local/cuda-8.0/include/curand_uniform.h'
'/usr/local/cuda-8.0/include/curand_normal.h'
'/usr/local/cuda-8.0/include/curand_normal_static.h'
'/usr/local/cuda-8.0/include/curand_lognormal.h'
'/usr/local/cuda-8.0/include/curand_poisson.h'
'/usr/local/cuda-8.0/include/curand_discrete2.h'.
nvcc warning : option '--relaxed-constexpr' has been deprecated and replaced by option '--expt-relaxed-constexpr'.
nvcc warning : option '--relaxed-constexpr' has been deprecated and replaced by option '--expt-relaxed-constexpr'.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 138.913s, Critical Path: 102.63s
I saw some people complaining about Bazel 0.3.1; maybe you need to downgrade to 0.3.0. The error you gave is not very informative: it's just the parent script saying that the child script failed; there should be more info on the console with the actual error.
I went through the setup steps two days ago for a GTX 1080 and it worked with this config:
Ubuntu 16.04
Nvidia Driver: nvidia-367.35 (installed from .run file)
Bazel 0.3.0
gcc: 4.9.3 (default with 16.04)
CUDA 8.0.27 (installed from .run file into default dirs)
compute capability: (use default values for config)
I have a systemd service. I want to implement a watchdog for it.
It's something like:
[Unit]
Description=Watchdog example service
[Service]
Type=notify
Environment=NOTIFY_SOCKET=/run/%p.sock
ExecStartPre=-/usr/bin/docker kill %p
ExecStartPre=-/usr/bin/docker rm %p
ExecStart=/usr/libexec/sdnotify-proxy /run/%p.sock /usr/bin/docker run \
--env=NOTIFY_SOCKET=/run/%p.sock \
--name %p pranav93/test_watchdogged python hello.py
ExecStop=/usr/bin/docker stop %p
Restart=on-success
WatchdogSec=30s
RestartSec=30s
[Install]
WantedBy=multi-user.target
According to the docs, I have to call sd_notify(0, "WATCHDOG=1") at least once every half of the specified interval (in this case, every 15 s). But I have no idea how to call that function from a service. Help would be highly appreciated.
I had to install the systemd development library:
sudo apt-get install libsystemd-dev
And compile the program, passing the library to the linker:
gcc testWatchDogProcess.c -o testWatchDogProcess -lsystemd
I've made some changes to @rameshrgtvl's code to make it run directly without any warnings or errors.
#include <systemd/sd-daemon.h>
#include <stdlib.h>   /* getenv, atoi */
#include <unistd.h>   /* sleep */
#include <fcntl.h>
#include <time.h>
/* This should be sent once you are done with your initialization */
/* Until you call this, systemd will keep your service in the "activating" status */
/* Once you have called it, systemd will change the status of your service to "active" */
#define true 1
int main ()
{
sd_notify (0, "READY=1");
/* Way to get the WatchdogSec value from service file */
char * env;
int interval=0;
int isRun = true;
env = getenv("WATCHDOG_USEC");
if (env)
{
interval = atoi(env)/(2*1000000);
}
/* Ping systemd once you are done with init */
sd_notify (0, "WATCHDOG=1");
/* Now go for periodic notification */
while(isRun == true)
{
sleep(interval);
/* Way to ping systemd */
sd_notify (0, "WATCHDOG=1");
}
return 0;
}
Sample Service File
[Unit]
Description=Test watchdog Demo process
DefaultDependencies=false
Requires=basic.target
[Service]
Type=notify
WatchdogSec=10s
ExecStart=/usr/bin/TestWatchDogProcess
StartLimitInterval=5min
StartLimitBurst=5
StartLimitAction=reboot
Restart=always
Sample Code For testWatchDogProcess.c:
#include "systemd/sd-daemon.h"
#include <fcntl.h>
#include <time.h>
/* This should be sent once you are done with your initialization */
/* Until you call this systemd will keep your service as activating status */
/* Once you called, systemd will change the status of ur service to active */
sd_notify (0, "READY=1");
/* Way to get the WatchdogSec value from service file */
env = getenv("WATCHDOG_USEC");
if(env != NULL)
int interval = atoi(env)/(2*1000000);
/* Ping systsemd once you are done with Init */
sd_notify (0, "WATCHDOG=1");
/* Now go for periodic notification */
while(isRun == true)
{
sleep(interval);
/* Way to ping systemd */
sd_notify (0, "WATCHDOG=1");
}
return 0;
}
Note: depending on your systemd version, take care to include the proper header and library during compilation.
sd_notify(0, "WATCHDOG=1") is an API for notifying systemd that your process is working fine.
Since Type=notify is used, sd_notify(0, "WATCHDOG=1") should be called from your application, not from the service file, and it must be called at a regular interval (within every 30 seconds, since WatchdogSec=30s is set in your service file) so that systemd gets notified. Otherwise systemd will consider the service failed, kill it, and restart it.