I set up a Command Line phase in a TFS build to execute a robocopy, and it returns exit code 1 although there are no errors during the robocopy execution.
If I run the robocopy command directly in cmd it works, and the job log shows that robocopy works properly until the end:
2019-02-27T10:21:58.3234459Z                Total    Copied   Skipped  Mismatch    FAILED    Extras
2019-02-27T10:21:58.3234459Z     Dirs :     1688         0      1688         0         0         0
2019-02-27T10:21:58.3234459Z    Files :     6107         6      6101         0         0         0
2019-02-27T10:21:58.3234459Z    Bytes :  246.01 m   299.2 k  245.71 m         0         0         0
2019-02-27T10:21:58.3234459Z    Times :   0:00:17   0:00:00                       0:00:00   0:00:17
2019-02-27T10:21:58.3234459Z
2019-02-27T10:21:58.3234459Z
2019-02-27T10:21:58.3234459Z    Speed :             3879329 Bytes/sec.
2019-02-27T10:21:58.3234459Z    Speed :             221.976 MegaBytes/min.
2019-02-27T10:21:58.3234459Z
2019-02-27T10:21:58.3234459Z    Ended : Wed Feb 27 11:21:58 2019
2019-02-27T10:21:58.3702460Z ##[error]Process completed with exit code 1.
Here is an image of the build configuration:
Robocopy uses exit codes greater than 0 even on success.
In your example, exit code 1 means "One or more files were copied successfully" (that is, new files have arrived).
To fix this you could create a PowerShell script which executes the copy and then overrides the exit code, like this:
param( [String] $sourcesDirectory, [String] $destinationDirectory, [String] $attributes )

# Robocopy exit codes below 8 mean success (e.g. 1 = files were copied);
# 8 and above mean at least one copy operation failed.
robocopy $sourcesDirectory $destinationDirectory $attributes
if( $LASTEXITCODE -ge 8 )
{
    throw ("An error occurred while copying. [RoboCopyCode: $($LASTEXITCODE)]")
}
else
{
    # Reset the exit code so the build agent treats this step as successful.
    $global:LASTEXITCODE = 0
}
exit 0
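You would then call the script from a PowerShell task instead of the Command Line task. For example (the script name and argument values below are placeholders, adjust them to your build):
.\Invoke-Robocopy.ps1 -sourcesDirectory "$(Build.SourcesDirectory)\out" -destinationDirectory "\\server\share\drop" -attributes "/E"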
Robocopy uses its exit codes differently: exit code 1 is not a real error, it just says that one or more files were copied successfully.
TFS, however, treats any non-zero exit code as a real error and fails the build.
To solve that you need to convert the robocopy exit code:
(robocopy c:\dirA c:\dirB *.*) ^& IF %ERRORLEVEL% LEQ 1 exit 0
The ^& IF %ERRORLEVEL% LEQ 1 exit 0 part converts exit codes of 1 or lower to 0, so the TFS build will not fail.
I've been following this tutorial in order to deploy my first dApp:
https://docs.terra.money/docs/develop/dapp/quick-start/using-terrain-localterra.html
I ran this to install LocalTerra and start Docker:
git clone --branch v0.5.2 --depth 1 https://github.com/terra-money/localterra
cd localterra
docker-compose up
Second, I scaffolded my dApp:
terrain new my-terra-dapp
cd my-terra-dapp
npm install
I encountered problems at the third step, the deploy.
When running
terrain deploy counter --signer validator
I get the output below. How can I solve this error? I'd really appreciate an answer, thank you!
Finished release [optimized] target(s) in 0.09s
Running script 'optimize': 'docker run --rm -v "$(pwd)":/code --mount type=volume,source="$(basename "$(pwd)")_cache",target=/code/target --mount type=volume,source=registry_cache,target=/usr/local/cargo/registry cosmwasm/rust-optimizer:0.12.3'
Info: RUSTC_WRAPPER=sccache
Info: sccache stats before build
Compile requests 0
Compile requests executed 0
Cache hits 0
Cache misses 0
Cache timeouts 0
Cache read errors 0
Forced recaches 0
Cache write errors 0
Compilation failures 0
Cache errors 0
Non-cacheable compilations 0
Non-cacheable calls 0
Non-compilation calls 0
Unsupported compiler calls 0
Average cache write 0.000 s
Average cache read miss 0.000 s
Average cache read hit 0.000 s
Failed distributed compilations 0
Cache location Local disk: "/root/.cache/sccache"
Cache size 0 bytes
Max cache size 10 GiB
Building contract in /code ...
Updating crates.io index
Killed
Finished, status of exit status: 137
Error: ENOENT: no such file or directory, open 'artifacts/counter.wasm'
Code: ENOENT
It seems Docker ran out of memory; the tutorial you linked mentions this potential problem, and suggests increasing the amount of memory available to Docker.
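For reference, exit status 137 is 128 + 9, i.e. the process was killed with SIGKILL, which is what happens when a container hits its memory limit. A quick way to check the out-of-memory suspicion (the container ID is a placeholder):
docker info --format 'Total memory: {{.MemTotal}} bytes'
docker inspect --format 'OOM killed: {{.State.OOMKilled}}' <container-id>
If OOMKilled is true, raising the memory available to Docker (for example in Docker Desktop's resource settings) should let the rust-optimizer build finish.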
My documentation tests are silently not executed in my Docker environment while everything works on both Windows and Ubuntu/Debian hosts.
I created a minimal GitHub repository to demonstrate the issue. I tried two different versions of Rust nightly and Rust stable, debug/release, all without success. See my Dockerfile and the complete build output.
Example code:
/// Adds two integers.
/// # Examples
///
/// ```
/// assert_eq!(cargo_test_doc_docker::add(1, 2), 3);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
a + b
}
Result when executing on Debian:
arturh@host:~/projects/cargo-test-doc-docker$ cargo test
Compiling cargo-test-doc-docker v0.1.0 (/home/arturh/projects/cargo-test-doc-docker)
Finished test [unoptimized + debuginfo] target(s) in 2.39s
Running target/debug/deps/cargo_test_doc_docker-9d5ae146cd4c3628
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/cargo_test_doc_docker-2a696d2579128ce1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests cargo-test-doc-docker
running 1 test
test src/lib.rs - add (line 4) ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
The problem occurs when executing the build on Docker. This is a minimal Dockerfile that reproduces the problem:
FROM ekidd/rust-musl-builder:nightly-2020-01-26-openssl11 as build
COPY --chown=rust:rust . .
RUN cargo test; echo $?
Result for every Rust toolchain I tried:
Step 6/17 : RUN cargo test; echo $?
---> Running in b266fc72f3c1
Compiling cargo-test-doc-docker v0.1.0 (/home/rust/src)
Finished test [unoptimized + debuginfo] target(s) in 0.32s
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-7b40e7e5b47f49eb
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-0bfec9752a7bec14
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
0
It does not even try to execute any doc tests and exits with zero so it's not easily noticed. I guess it must be something the Docker base image does, but what could that be?
Cross Compilation
This is a logical, if surprising, outcome of cross-compilation.
To understand why, imagine that you:
Compile on a Linux x64 machine (Host).
Target a Windows ARM machine.
The generated code cannot be executed on the current host (Linux x64): it is prepared for a different CPU (instruction set) and OS (system calls).
Since the tests -- unit tests, integration tests, and documentation tests -- are also generated for the target architecture, they cannot be executed on the host either.
What to do with the tests?
If your code has no platform-specific dependencies, you can content yourself with compiling the tests for the host and running them there.
Otherwise, you will need access to a machine that can actually run the cross-compiled binaries. You can still use cross-compilation to speed up building those binaries, and then upload them to either a physical or virtual machine to run them.
AFAIK Cargo does not help with the latter, so you'll need your own scripts.
Shepmaster was right: when I target x86_64-unknown-linux-musl it also does not work locally on Debian:
arturh@host:~/projects/cargo-test-doc-docker$ cargo test --target=x86_64-unknown-linux-musl; echo $?
Compiling cargo-test-doc-docker v0.1.0 (/home/arturh/projects/cargo-test-doc-docker)
Finished test [unoptimized + debuginfo] target(s) in 0.28s
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-8dfff5631875d404
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-eb877250b708174b
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
0
I guess I need a separate build step that runs the doc tests with the target x86_64-unknown-linux-gnu.
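A minimal sketch of that split, assuming both targets are installed via rustup:
# Run all tests, including the doc tests, against the host (gnu) target
cargo test --target x86_64-unknown-linux-gnu
# Build the shippable binary for the musl target used in the Docker image
cargo build --release --target x86_64-unknown-linux-musl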
I have a Jenkins server which builds docker images (using the docker maven plugin).
These builds normally take about 40 seconds, but sometimes they take up to 1.5 hours.
Now I am wondering why, and also how I can debug the situation.
The output of the build in the Jenkins console just tells me that it hangs during the Maven docker build. Example output of where it hangs:
[INFO] ------------------------------------------------------------------------
[INFO] Building MY Docker Image MY Image 0.5.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> docker-maven-plugin:0.15.14:build (default-cli) > initialize @ docker-image-my-image >>>
[INFO]
[INFO] <<< docker-maven-plugin:0.15.14:build (default-cli) < initialize @ docker-image-my-image <<<
[INFO]
[INFO] --- docker-maven-plugin:0.15.14:build (default-cli) @ docker-image-my-image ---
[INFO] Building tar: /usr/home/jenkinshome/workspace/JOB_NAME/source-repo/docker/image-my-image/docker-build.tar
[INFO] DOCKER> docker-build.tar: Created [my-image] in 69 milliseconds
And now it hangs.
Can I somehow see at which operation Docker is hanging?
UPDATE
The build also hangs on the host machine when started directly via docker build.
This is the end of the output of doing this with strace:
futex(0xc820028908, FUTEX_WAKE, 1) = 1
clock_gettime(CLOCK_REALTIME, {1486567620, 329675667}) = 0
clock_gettime(CLOCK_REALTIME, {1486567620, 329756643}) = 0
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(3, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, 23) = 0
clock_gettime(CLOCK_REALTIME, {1486567620, 330059768}) = 0
epoll_create1(EPOLL_CLOEXEC) = 4
epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=3892185696, u64=139659048782432}}) = 0
getsockname(3, {sa_family=AF_LOCAL, NULL}, [2]) = 0
getpeername(3, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, [23]) = 0
futex(0xc820028908, FUTEX_WAKE, 1) = 1
read(3, 0xc820349000, 4096) = -1 EAGAIN (Resource temporarily unavailable)
write(3, "POST /v1.24/build?buildargs=%7B%"..., 349) = 349
futex(0xc820028d08, FUTEX_WAKE, 1) = 1
write(3, "7ff\r\nockerfile\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 2054) = 2054
ioctl(0, TIOCGWINSZ, {ws_row=57, ws_col=105, ws_xpixel=0, ws_ypixel=0}) = 0
ioctl(0, TIOCGWINSZ, {ws_row=57, ws_col=105, ws_xpixel=0, ws_ypixel=0}) = 0
write(1, "Sending build context to Docker "..., 48) = 48
Sending build context to Docker daemon 2.048 kB
write(1, "\r\n", 2) = 2
write(3, "0\r\n\r\n", 5) = 5
futex(0xc820028d08, FUTEX_WAKE, 1) = 1
futex(0x12d3a48, FUTEX_WAIT, 0, NULL
Step 1 : FROM jboss-db
---> 74a0020a9922
Step 2 : MAINTAINER ***
---> Using cache
---> 5d38cbd5501b
Step 3 : USER root:root
---> Running in 64b14554d8be
And at this point it hangs
You have a bunch of options here, and you'll have to feel out which one works best for you. You can:
strace the process on the docker host.
Use something like sysdig on the host, which is easier to use and more detailed.
do printf debugging or some sort of logging that indicates your build is still alive on stdout/stderr, and use docker logs -f <build container> to see whether it's the build that's stuck or the Jenkins slave/master processes.
Any more detailed advice will probably need more information about your setup, what you mean by "hang", and which component exactly is hung.
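For example, starting on the Docker host (the PID and container ID are placeholders):
# Find the PID of the hanging docker build
pgrep -af 'docker build'
# Attach strace to it, following forked children
strace -f -p <pid>
# Or, if the build step already created a container, follow its output
docker ps
docker logs -f <container-id>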
I am trying to use the Yocto tools for the first time, for my BeagleBone Black.
First I ran this bash script to set up Yocto:
#!/bin/bash
WKDIR=/work
mkdir -p $WKDIR/beaglebone-black/yocto/sources
mkdir -p $WKDIR/beaglebone-black/yocto/builds
cd $WKDIR/beaglebone-black/yocto/sources
git clone -b morty git://git.yoctoproject.org/poky.git poky-morty
cd $WKDIR/beaglebone-black/yocto/
source sources/poky-morty/oe-init-build-env builds/build-bbb-morty
Then I edited the file local.conf in the "build-bbb-morty/conf" directory:
MACHINE ?= "beaglebone"
and added
DL_DIR ?= "${TOPDIR}/../dl"
IMAGE_INSTALL_append = " kernel-modules kernel-devicetree"
Then I ran bitbake.
After about 8 hours on my fifth-generation Core i7 I got this result in my terminal output, and I have no idea what I need to do to fix it:
bitbake core-image-minimal
Parsing recipes: 100% |########################################################################################################| Time: 0:02:55
Parsing of 864 .bb files complete (0 cached, 864 parsed). 1318 targets, 67 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.32.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "Ubuntu-16.04"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "beaglebone"
DISTRO = "poky"
DISTRO_VERSION = "2.2.1"
TUNE_FEATURES = "arm armv7a vfp neon callconvention-hard cortexa8"
TARGET_FPU = "hard"
meta
meta-poky
meta-yocto-bsp = "morty:a3fa5ce87619e81d7acfa43340dd18d8f2b2d7dc"
NOTE: Fetching uninative binary shim from http://downloads.yoctoproject.org/releases/uninative/1.4/x86_64-nativesdk-libc.tar.bz2;sha256sum=101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca
--2017-01-18 15:51:09-- http://downloads.yoctoproject.org/releases/uninative/1.4/x86_64-nativesdk-libc.tar.bz2
Resolving downloads.yoctoproject.org (downloads.yoctoproject.org)... 198.145.20.127
Connecting to downloads.yoctoproject.org (downloads.yoctoproject.org)|198.145.20.127|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2473216 (2.4M) [application/octet-stream]
Saving to: ‘/work/beaglebone-black/yocto/builds/build-bbb-morty/../dl/uninative/101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca/x86_64-nativesdk-libc.tar.bz2’
2017-01-18 15:51:18 (297 KB/s) - ‘/work/beaglebone-black/yocto/builds/build-bbb-morty/../dl/uninative/101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca/x86_64-nativesdk-libc.tar.bz2’ saved [2473216/2473216]
Initialising tasks: 100% |#####################################################################################################| Time: 0:00:14
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
WARNING: attr-native-2.4.47-r0 do_fetch: Failed to fetch URL http://download.savannah.gnu.org/releases/attr/attr-2.4.47.src.tar.gz, attempting MIRRORS if available
WARNING: libpng-native-1.6.24-r0 do_fetch: Failed to fetch URL http://distfiles.gentoo.org/distfiles/libpng-1.6.24.tar.xz, attempting MIRRORS if available
ERROR: core-image-minimal-1.0-r0 do_image_wic: Function failed: do_image_wic (log file is located at /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788)
ERROR: Logfile of failure stored in: /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788
Log data follows:
| DEBUG: Executing python function set_image_size
| DEBUG: Python function set_image_size finished
| DEBUG: Executing shell function do_image_wic
| Checking basic build environment...
| Done.
|
| Build artifacts not found, exiting.
| (Please check that the build artifacts for the machine
| selected in local.conf actually exist and that they
| are the correct artifacts for the image (.wks file))
|
| The artifact that couldn't be found was kernel-dir:
| /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/deploy/images/beaglebone
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_image_wic (log file is located at /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788)
ERROR: Task (/work/beaglebone-black/yocto/sources/poky-morty/meta/recipes-core/images/core-image-minimal.bb:do_image_wic) failed with exit code '1'
NOTE: Tasks Summary: Attempted 1771 tasks of which 6 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/work/beaglebone-black/yocto/sources/poky-morty/meta/recipes-core/images/core-image-minimal.bb:do_image_wic
Summary: There were 2 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
While I am not sure this is the cause of the problem, the preferred method to add packages to the image in the local.conf context is using the CORE_IMAGE_EXTRA_INSTALL variable.
Therefore change:
IMAGE_INSTALL_append = " kernel-modules kernel-devicetree"
to
CORE_IMAGE_EXTRA_INSTALL += "kernel-modules kernel-devicetree"
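The relevant part of local.conf would then look something like this (a sketch that just combines the values from the question):
MACHINE ?= "beaglebone"
DL_DIR ?= "${TOPDIR}/../dl"
# preferred way to add extra packages to the image from local.conf
CORE_IMAGE_EXTRA_INSTALL += "kernel-modules kernel-devicetree"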
I think there is no problem with your working method.
It seems to be a build environment problem; the error log should confirm it.
Your log is located at /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788
Your error log indicates that fetching from the URLs failed.
You can try tunneling through a proxy, or simply run bitbake again, since fetches can sometimes fail due to network conditions.
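For example (a sketch; cleanall just forces the failed download to be redone for one recipe):
# simply re-run the build; bitbake resumes and retries the failed fetches
bitbake core-image-minimal
# or force a clean re-fetch of a recipe whose download failed
bitbake -c cleanall attr-native
bitbake core-image-minimal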
I have made a .tcl file to simulate Gossiping-based Ad Hoc Routing. When I try to run it using NS 2.35 I am getting the following error:
xgridsz=, ygridsz=
Sim.: utime/stime: 67 10 stack/heap: -4192 4591616
num_nodes is set 0
warning: Please use -channel as shown in tcl/ex/wireless-mitf.tcl
couldn't read file "": no such file or directory
while executing
"source.orig {}"
("uplevel" body line 1)
invoked from within
"uplevel source.orig [list $fileName]"
invoked from within
"if [$instance_ is_http_url $fileName] {
set buffer [$instance_ read_url $fileName]
uplevel eval $buffer
} else {
uplevel source.orig [list $fileName]
..."
(procedure "source" line 8)
invoked from within
"source $val(cp)"
(file "simAODVG.tcl" line 155)
The link to my tcl file is given below:
https://drive.google.com/file/d/0B1OEPsG0ILELSHhMQzZQdW5STW8/view?usp=sharing
How can I solve this?
Line 155 is the source command that fails with:
couldn't read file "": no such file or directory
Answer: the empty "" comes from line 18: set val(cp) [lindex $argv 1]
...and argv index 1 does not seem to be supplied anywhere, so $val(cp) ends up empty.
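A minimal sketch of the usual fix, assuming the script expects a scenario file and a connection pattern file on the command line (the file names below are hypothetical):
# line 18 of simAODVG.tcl: the connection pattern file is the 2nd argument
set val(cp) [lindex $argv 1]
# guard before the "source $val(cp)" at line 155
if { $val(cp) == "" } {
    puts "usage: ns simAODVG.tcl <scenario-file> <connection-file>"
    exit 1
}
source $val(cp)
You would then invoke it as, for example: ns simAODVG.tcl scen-3-test cbr-3-test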