How to variabilize sshTransfer in a Jenkinsfile? - jenkins
I'm currently using a simple pipeline to delete logs older than 7 days on my servers.
I have 4 servers where the logs live in the same place, and thus the same sshTransfer command repeated 4 times.
Is there any way to variabilize those 4 redundant sshTransfer calls?
Here is my code for more clarity :)
stage('Cleanup logs') {
    steps {
        script {
            echo "1. Connection to server 1"
            sshPublisher(
                publishers: [
                    sshPublisherDesc(
                        configName: server1,
                        transfers: [
                            sshTransfer(
                                execCommand: "find /var/opt/application/log/* -mtime +6 -type f -delete",
                                execTimeout: 20000
                            )
                        ]
                    )
                ]
            )
            echo "2. Connection to server 2"
            sshPublisher(
                publishers: [
                    sshPublisherDesc(
                        configName: server2,
                        transfers: [
                            sshTransfer(
                                execCommand: "find /var/opt/application/log/* -mtime +6 -type f -delete",
                                execTimeout: 20000
                            )
                        ]
                    )
                ]
            )
            echo "3. Connection to server 3"
            sshPublisher(
                publishers: [
                    sshPublisherDesc(
                        configName: server3,
                        transfers: [
                            sshTransfer(
                                execCommand: "find /var/opt/application/log/* -mtime +6 -type f -delete",
                                execTimeout: 20000
                            )
                        ]
                    )
                ]
            )
            echo "4. Connection to server 4"
            sshPublisher(
                publishers: [
                    sshPublisherDesc(
                        configName: server4,
                        transfers: [
                            sshTransfer(
                                execCommand: "find /var/opt/application/log/* -mtime +6 -type f -delete",
                                execTimeout: 20000
                            )
                        ]
                    )
                ]
            )
        }
    }
}
Thank you!
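One way to factor this out, sketched under the assumption that the sshPublisher/sshPublisherDesc/sshTransfer steps behave exactly as in the snippet above and that server1..server4 hold the SSH config names: wrap the publisher call in a helper and loop over the servers. The helper name cleanupLogs is made up for this sketch.

```groovy
// Hypothetical helper wrapping the repeated publisher call (sketch, not verified on a live Jenkins).
def cleanupLogs(server) {
    sshPublisher(
        publishers: [
            sshPublisherDesc(
                configName: server,
                transfers: [
                    sshTransfer(
                        execCommand: "find /var/opt/application/log/* -mtime +6 -type f -delete",
                        execTimeout: 20000
                    )
                ]
            )
        ]
    )
}

stage('Cleanup logs') {
    steps {
        script {
            // server1..server4 are the existing variables holding the SSH config names.
            [server1, server2, server3, server4].eachWithIndex { server, i ->
                echo "${i + 1}. Connection to server ${i + 1}"
                cleanupLogs(server)
            }
        }
    }
}
```

In a declarative pipeline the helper would need to live outside the pipeline {} block (or in a shared library); inside a scripted pipeline it can sit alongside the stages as shown.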
Related
pwm-backlight driver not being probed in u-boot
I'm trying to get my PWM working on a custom am33x board (same BeagleBone Black target). For some reason I don't see the pwm-backlight driver being probed, and thus no PWM as indicated on my scope. Here are my relevant source files.

dts snippet:

/dts-v1/;
#include "am33xx.dtsi"
#include <dt-bindings/interrupt-controller/irq.h>

/ {
    model = "test";
    compatible = "ti,am33xx";

    chosen {
        stdout-path = &uart0;
    };

    backlight: backlight {
        status = "okay";
        compatible = "pwm-backlight";
        pwms = <&ehrpwm1 0 10000 0>;
        brightness-levels = <0 10 20 30 40 50 60 70 80 90 99>;
        default-brightness-level = <6>;
    };
};

&am33xx_pinmux {
    ehrpwm1_pins: pinmux-ehrpwm1-pins {
        pinctrl-single,pins = <
            AM33XX_IOPAD(0x848, PIN_OUTPUT_PULLDOWN | MUX_MODE6) /* gpmc_a2.ehrpwm1a */
        >;
    };
};

&ehrpwm1 {
    u-boot,dm-spl;
    status = "okay";
    pinctrl-names = "default";
    pinctrl-0 = <&ehrpwm1_pins>;
};

defconfig:

CONFIG_DM=y
CONFIG_CMD_DM=y
CONFIG_DM_VIDEO=y
CONFIG_DM_PWM=y
CONFIG_BACKLIGHT_PWM=y

pwm-backlight driver Kconfig info:

config BACKLIGHT_PWM
	bool "Generic PWM based Backlight Driver"
	depends on DM_VIDEO && DM_PWM
	default y
	help
	  If you have a LCD backlight adjustable by PWM, say Y to enable
	  this driver. This driver can be use with "simple-panel" and it
	  understands the standard device tree (leds/backlight/pwm-backlight.txt)

(Linux version: https://github.com/torvalds/linux/blob/master/Documentation/devicetree/bindings/leds/backlight/pwm-backlight.txt)

And when I interrupt U-Boot and use dm tree you can see that it's not probed. Why?
=> dm tree
 Class       Index  Probed  Driver                Name
-----------------------------------------------------------
 root        0      [ + ]   root_driver           root_driver
 simple_bus  0      [ + ]   generic_simple_bus    |-- ocp
 simple_bus  1      [   ]   generic_simple_bus    |   |-- l4_wkup@44c00000
 simple_bus  2      [   ]   generic_simple_bus    |   |   |-- prcm@200000
 simple_bus  3      [   ]   generic_simple_bus    |   |   `-- scm@210000
 syscon      0      [   ]   syscon                |   |       `-- scm_conf@0
 gpio        0      [   ]   gpio_omap             |   |-- gpio@44e07000
 gpio        1      [   ]   gpio_omap             |   |-- gpio@4804c000
 gpio        2      [   ]   gpio_omap             |   |-- gpio@481ac000
 gpio        3      [   ]   gpio_omap             |   |-- gpio@481ae000
 serial      0      [ + ]   omap_serial           |   |-- serial@44e09000
 mmc         0      [ + ]   omap_hsmmc            |   |-- mmc@481d8000
 timer       0      [ + ]   omap_timer            |   |-- timer@48040000
 timer       1      [   ]   omap_timer            |   |-- timer@48042000
 timer       2      [   ]   omap_timer            |   |-- timer@48044000
 timer       3      [   ]   omap_timer            |   |-- timer@48046000
 timer       4      [   ]   omap_timer            |   |-- timer@48048000
 timer       5      [   ]   omap_timer            |   |-- timer@4804a000
 misc        0      [ + ]   ti-musb-wrapper       |   `-- usb@47400000
 usb         0      [ + ]   ti-musb-peripheral    |       `-- usb@47401000
 eth         0      [ + ]   usb_ether             |           `-- usb_ether
 backlight   0      [   ]   pwm_backlight         `-- backlight
File is auto-created; the script should auto-rename and move it, but it doesn't move
Long-time listener, first-time caller. I need to create a PowerShell script (version 2 or lower) that:

- continually monitors one specific directory for new/changed files
- logs the file that was created with a date/time stamp, in a log file that's created on a daily basis with the name "log Date/Time.txt"
- renames the file, appending the date/time
- logs that it renamed it
- maps a drive with a specific username/password combo
- moves it from dirA to dirB (the mapped drive is dirB)
- logs that it moved it
- unmaps the drive
- if for whatever reason it stopped running and we start it back up, it renames, maps the drive, moves, unmaps the drive, and logs all files from dirA to dirB

In its current form, I've stripped out the drive mapping to troubleshoot moving the file on a local drive, just to avoid troubleshooting a network drive at the same time. I've been staring at this for over a week and am tired of hitting my head against the desk. Can someone PLEASE put me out of my misery and let me know what I've done wrong? THANK YOU in advance! I've honestly tried SO many combos and different routines, I don't even know what to put here. Below is the main part of the script that isn't working correctly: it renames the files as they come in, but doesn't move them.
$rename = $_.Name.Split(".")[0] + "_" +
          ($_.CreationTime | Get-Date -Format MM.dd.yyy) + "_" +
          ($_.CreationTime | Get-Date -Format hh.mm.ss) + ".log"
Write-Output "File: '$name' exists at: $source - renaming existing file first" >> $scriptlog
Rename-Item $_ -NewName $rename
Wait-Event -Timeout 3
Move-Item "$_($_.Directory)$name" -destination $destination
Write-Output "File: '$name' moved to $destination on $date" >> $scriptlog

Whole code available below:

#Log Rename/Move script
$userName = "copyuser"
$newpass = Read-Host -Prompt 'Type the new Password'
$password = ConvertTo-SecureString -String $newpass -AsPlainText -Force
$PathToMonitor = "C:\Users\Administrator\Desktop\FolderA"
$destination = "C:\Users\Administrator\Desktop\FolderB"
$scriptlog = "C:\Users\Administrator\Desktop\ScriptLogs\" + [datetime]::Today.ToString('MM-dd-yyy') + "_TransferLog.txt"

$FileSystemWatcher = New-Object System.IO.FileSystemWatcher
$FileSystemWatcher.Path = $PathToMonitor
$FileSystemWatcher.IncludeSubdirectories = $false
$FileSystemWatcher.EnableRaisingEvents = $true

$dateTime = [datetime]::Today.ToString('MM-dd-yyy') + " " + [datetime]::Now.ToString('HH:mm:ss')
Write-Output "*******************************************************************************************" >> $scriptLog
Write-Output "*********************Starting Log Move Script $dateTime**********************" >> $scriptLog
Write-Output "*******************************************************************************************" >> $scriptLog

$Action = {
    $details = $event.SourceEventArgs
    $Name = $details.Name
    $FullPath = $details.FullPath
    $OldFullPath = $details.OldFullPath
    $OldName = $details.OldName
    $ChangeType = $details.ChangeType
    $Timestamp = $event.TimeGenerated
    $text = "{0} was {1} at {2}" -f $FullPath, $ChangeType, $Timestamp
    Write-Output "" >> $scriptlog
    Write-Output $text >> $scriptlog
    switch ($ChangeType) {
        'Changed' {
            "CHANGE"
            Get-ChildItem -path $FullPath -Include *.log | % {
                $rename = $_.Name.Split(".")[0] + "_" + ($_.CreationTime | Get-Date -Format MM.dd.yyy) + "_" + ($_.CreationTime | Get-Date -Format hh.mm.ss) + ".log"
                Write-Output "File: '$name' exists at: $source - renaming existing file first" >> $scriptlog
                Rename-Item $_ -NewName $rename
                Wait-Event -Timeout 3
                Move-Item "$_($_.Directory)$name" -destination $destination
                Write-Output "File: '$name' moved to $destination on $date" >> $scriptlog
            }
        }
        'Created' {
            "CREATED"
            Get-ChildItem -path $FullPath -Include *.log | % {
                $rename = $_.Name.Split(".")[0] + "_" + ($_.CreationTime | Get-Date -Format MM.dd.yyy) + "_" + ($_.CreationTime | Get-Date -Format hh.mm.ss) + ".log"
                Write-Output "File: '$name' exists at: $source - renaming existing file first" >> $scriptlog
                Rename-Item $_ -NewName $rename
                Wait-Event -Timeout 3
                Move-Item "$($_.Directory)$rename" -Destination $destination
                Write-Output "File: '$name' moved to $destination on $date" >> $scriptlog
            }
        }
        'Deleted' { "DELETED" }
        'Renamed' { }
        default { Write-Output $_ >> $scriptlog }
    }
}

$handlers = . {
    Register-ObjectEvent -InputObject $FileSystemWatcher -EventName Changed -Action $Action -SourceIdentifier FSChange
    Register-ObjectEvent -InputObject $FileSystemWatcher -EventName Created -Action $Action -SourceIdentifier FSCreate
}

Write-Output "Watching for changes to $PathToMonitor" >> $scriptlog

try {
    do {
        Wait-Event -Timeout 1
        Write-Host "." -NoNewline
    } while ($true)
}
finally {
    # EndScript
    Unregister-Event -SourceIdentifier FSChange
    Unregister-Event -SourceIdentifier FSCreate
    $handlers | Remove-Job
    $FileSystemWatcher.EnableRaisingEvents = $false
    $FileSystemWatcher.Dispose()
    Write-Output "Event Handler disabled." >> $scriptlog
}
Well, I found my issue. A dumb one if I look at it and think about it: it was how I was building the path to the directory containing the file(s) to move after renaming.

Move-Item "$_($_.Directory)$name" -destination $destination

The issue in the above code is the "$_($_.Directory)" part. It needs to be:

Move-Item -Path $PathToMonitor$name -Destination $destination

Other things may work, and may be better, but at least that fixes the moving-after-renaming issue I was having. Now on to adding the drive mapping and the other pieces needed to fully finish what I need. If I think about it, I'll post my entire code when done for anyone else it can help in the future.
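As a side note, a sketch of an arguably more robust way to build that source path: Join-Path avoids the string-interpolation pitfall entirely, and since Rename-Item has already run, the file's current name is $rename, not $name (both variables are the ones defined in the script above).

```powershell
# Sketch: build the full source path explicitly instead of interpolating
# "$_($_.Directory)$name". After Rename-Item the file is called $rename.
$sourcePath = Join-Path -Path $PathToMonitor -ChildPath $rename
Move-Item -Path $sourcePath -Destination $destination
```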
Message exchange between Paho java & javascript client not happening
How can I send a message from a Paho Java client to a Paho JavaScript client?

String topic = "chatWith/904";
String content = "Message from MqttPublishSample";
int qos = 0;
String broker = "tcp://127.0.0.1:1883";
String clientId = "JavaSample";
MemoryPersistence persistence = new MemoryPersistence();

try {
    MqttClient sampleClient = new MqttClient(broker, clientId, persistence);
    MqttConnectOptions connOpts = new MqttConnectOptions();
    connOpts.setCleanSession(true);
    System.out.println("Connecting to broker: " + broker);
    sampleClient.connect(connOpts);
    System.out.println("Connected");
    System.out.println("Publishing message: " + content);
    MqttMessage message = new MqttMessage(content.getBytes());
    message.setQos(qos);
    sampleClient.publish(topic, message);
    System.out.println("Message published");
    sampleClient.disconnect();
    System.out.println("Disconnected");
    System.exit(0);
} catch (MqttException me) {
    me.printStackTrace();
}

Message exchange between two JS clients is happening. But between the JavaScript and Java clients, message exchange is not happening.
Here is the JS code:

client = new Messaging.Client(host, Number(port), clientId);
client.onConnect = onConnect;
client.onMessageArrived = onMessageArrived;
client.onMessageDelivered = onMessageDelivered;
client.onConnectionLost = onConnectionLost;

var willMessage = createByteMessage({
    id: '',
    'msgType': 'USER_STATUS',
    'status': 'Offline',
    'senderId': myUserId,
    'senderName': myName
}, statusTopic);
willMessage.retained = true;

client.connect({
    userName: user,
    password: password,
    onSuccess: onConnect,
    onFailure: onFailure,
    'willMessage': willMessage
});

Here the port is 8083 and the MQTT broker is emqtt.

clients.config:

testclientid0
testclientid1 127.0.0.1
testclientid2 192.168.0.1/24

emqttd.config:

% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*-
%% ex: ft=erlang ts=4 sw=4 et
[{kernel, [
     {start_timer, true},
     {start_pg2, true}
 ]},
 {sasl, [
     {sasl_error_logger, {file, "log/emqttd_sasl.log"}}
 ]},
 {ssl, [
     %{versions, ['tlsv1.2', 'tlsv1.1']}
 ]},
 {lager, [
     {colored, true},
     {async_threshold, 5000},
     {error_logger_redirect, false},
     {crash_log, "log/emqttd_crash.log"},
     {handlers, [
         %%{lager_console_backend, info},
         %%NOTICE: Level >= error
         %%{lager_emqtt_backend, error},
         {lager_file_backend, [
             {formatter_config, [time, " ", pid, " [",severity,"] ", message, "\n"]},
             {file, "log/emqttd_error.log"},
             {level, error},
             {size, 104857600},
             {date, "$D0"},
             {count, 30}
         ]}
     ]}
 ]},
 {esockd, [
     {logger, {lager, error}}
 ]},
 {emqttd, [
     %% Authentication and Authorization
     {access, [
         %% Authetication. Anonymous Default
         {auth, [
             %% Authentication with username, password
             %{username, []},
             %% Authentication with clientid
             %{clientid, [{password, no}, {file, "etc/clients.config"}]},
             %% Authentication with LDAP
             % {ldap, [
             %     {servers, ["localhost"]},
             %     {port, 389},
             %     {timeout, 30},
             %     {user_dn, "uid=$u,ou=People,dc=example,dc=com"},
             %     {ssl, fasle},
             %     {sslopts, [
             %         {"certfile", "ssl.crt"},
             %         {"keyfile", "ssl.key"}]}
             % ]},
             %% Allow all
             {anonymous, []}
         ]},
         %% ACL config
         {acl, [
             %% Internal ACL module
             {internal, [{file, "etc/acl.config"}, {nomatch, allow}]}
         ]}
     ]},
     %% MQTT Protocol Options
     {mqtt, [
         %% Packet
         {packet, [
             %% Max ClientId Length Allowed
             {max_clientid_len, 512},
             %% Max Packet Size Allowed, 64K default
             {max_packet_size, 65536}
         ]},
         %% Client
         {client, [
             %% Socket is connected, but no 'CONNECT' packet received
             {idle_timeout, 30} %% seconds
         ]},
         %% Session
         {session, [
             %% Max number of QoS 1 and 2 messages that can be "in flight" at one time.
             %% 0 means no limit
             {max_inflight, 100},
             %% Retry interval for redelivering QoS1/2 messages.
             {unack_retry_interval, 60},
             %% Awaiting PUBREL Timeout
             {await_rel_timeout, 20},
             %% Max Packets that Awaiting PUBREL, 0 means no limit
             {max_awaiting_rel, 0},
             %% Statistics Collection Interval(seconds)
             {collect_interval, 0},
             %% Expired after 2 day (unit: minute)
             {expired_after, 2880}
         ]},
         %% Queue
         {queue, [
             %% simple | priority
             {type, simple},
             %% Topic Priority: 0~255, Default is 0
             %% {priority, [{"topic/1", 10}, {"topic/2", 8}]},
             %% Max queue length. Enqueued messages when persistent client disconnected,
             %% or inflight window is full.
             {max_length, infinity},
             %% Low-water mark of queued messages
             {low_watermark, 0.2},
             %% High-water mark of queued messages
             {high_watermark, 0.6},
             %% Queue Qos0 messages?
             {queue_qos0, true}
         ]}
     ]},
     %% Broker Options
     {broker, [
         %% System interval of publishing broker $SYS messages
         {sys_interval, 60},
         %% Retained messages
         {retained, [
             %% Expired after seconds, never expired if 0
             {expired_after, 0},
             %% Max number of retained messages
             {max_message_num, 100000},
             %% Max Payload Size of retained message
             {max_playload_size, 65536}
         ]},
         %% PubSub and Router
         {pubsub, [
             %% Default should be scheduler numbers
             {pool_size, 8},
             %% Store Subscription: true | false
             {subscription, true},
             %% Route aging time(seconds)
             {route_aging, 5}
         ]},
         %% Bridge
         {bridge, [
             %%TODO: bridge queue size
             {max_queue_len, 10000},
             %% Ping Interval of bridge node
             {ping_down_interval, 1} %seconds
         ]}
     ]},
     %% Modules
     {modules, [
         %% Client presence management module.
         %% Publish messages when client connected or disconnected
         {presence, [{qos, 0}]},
         %% Subscribe topics automatically when client connected
         {subscription, [
             %% $c will be replaced by clientid
             %% {"$queue/clients/$c", 1},
             %% Static subscriptions from backend
             backend
         ]}
         %% Rewrite rules
         %% {rewrite, [{file, "etc/rewrite.config"}]}
     ]},
     %% Plugins
     {plugins, [
         %% Plugin App Library Dir
         {plugins_dir, "./plugins"},
         %% File to store loaded plugin names.
         {loaded_file, "./data/loaded_plugins"}
     ]},
     %% Listeners
     {listeners, [
         {mqtt, 1883, [
             %% Size of acceptor pool
             {acceptors, 16},
             %% Maximum number of concurrent clients
             {max_clients, 512},
             %% Socket Access Control
             {access, [{allow, all}]},
             %% Connection Options
             {connopts, [
                 %% Rate Limit. Format is 'burst, rate', Unit is KB/Sec
                 %% {rate_limit, "100,10"} %% 100K burst, 10K rate
             ]},
             %% Socket Options
             {sockopts, [
                 %Set buffer if hight thoughtput
                 %{recbuf, 4096},
                 %{sndbuf, 4096},
                 %{buffer, 4096},
                 %{nodelay, true},
                 {backlog, 1024}
             ]}
         ]},
         {mqtts, 8883, [
             %% Size of acceptor pool
             {acceptors, 4},
             %% Maximum number of concurrent clients
             {max_clients, 512},
             %% Socket Access Control
             {access, [{allow, all}]},
             %% SSL certificate and key files
             {ssl, [{certfile, "etc/ssl/ssl.crt"},
                    {keyfile, "etc/ssl/ssl.key"}]},
             %% Socket Options
             {sockopts, [
                 {backlog, 1024}
                 %{buffer, 4096},
             ]}
         ]},
         %% WebSocket over HTTPS Listener
         %% {https, 8083, [
         %%     %% Size of acceptor pool
         %%     {acceptors, 4},
         %%     %% Maximum number of concurrent clients
         %%     {max_clients, 512},
         %%     %% Socket Access Control
         %%     {access, [{allow, all}]},
         %%     %% SSL certificate and key files
         %%     {ssl, [{certfile, "etc/ssl/ssl.crt"},
         %%            {keyfile, "etc/ssl/ssl.key"}]},
         %%     %% Socket Options
         %%     {sockopts, [
         %%         %{buffer, 4096},
         %%         {backlog, 1024}
         %%     ]}
         %% ]},
         %% HTTP and WebSocket Listener
         {http, 8083, [
             %% Size of acceptor pool
             {acceptors, 4},
             %% Maximum number of concurrent clients
             {max_clients, 64},
             %% Socket Access Control
             {access, [{allow, all}]},
             %% Socket Options
             {sockopts, [
                 {backlog, 1024}
                 %{buffer, 4096},
             ]}
         ]}
     ]},
     %% Erlang System Monitor
     {sysmon, [
         %% Long GC, don't monitor in production mode for:
         %% https://github.com/erlang/otp/blob/feb45017da36be78d4c5784d758ede619fa7bfd3/erts/emulator/beam/erl_gc.c#L421
         {long_gc, false},
         %% Long Schedule(ms)
         {long_schedule, 240},
         %% 8M words. 32MB on 32-bit VM, 64MB on 64-bit VM.
         %% 8 * 1024 * 1024
         {large_heap, 8388608},
         %% Busy Port
         {busy_port, false},
         %% Busy Dist Port
         {busy_dist_port, true}
     ]}
 ]}
].
Virtuoso R2RML rr:IRI generating
I have a problem with generating rr:termType rr:IRI in Virtuoso. I don't know if I'm doing it wrong, but I followed the W3C specification. My mapping looks like this. When I generate triples with a CONSTRUCT statement I still get the "URL" as a plain value but not as an IRI => <url> (the OWNER_LINK and BRAND_LINK columns). Is it something Virtuoso doesn't support, or am I coding it the wrong way?

DB.DBA.TTLP (
'
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix gr: <http://purl.org/goodrelations/v1#> .
@prefix s: <http://schema.org/> .
@prefix pod: <http://linked.opendata.cz/ontology/product-open-data.org#> .

<#TriplesMap3>
    a rr:TriplesMap ;
    rr:logicalTable [
        rr:tableSchema "POD" ;
        rr:tableOwner "DBA" ;
        rr:tableName "BRAND_OWNER_BSIN"
    ];
    rr:subjectMap [
        rr:template "http://linked.opendata.cz/resource/brand-owner-bsin/{BSIN}" ;
        rr:class gr:BusinessEntity ;
        rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
    ];
    rr:predicateObjectMap [
        rr:predicate gr:hasBrand ;
        rr:objectMap [
            rr:parentTriplesMap <#TriplesMap4> ;
            rr:joinCondition [
                rr:child "OWNER_CD" ;
                rr:parent "OWNER_CD" ;
            ];
        ];
    ];
.

<#TriplesMap4>
    a rr:TriplesMap ;
    rr:logicalTable [
        rr:tableSchema "POD" ;
        rr:tableOwner "DBA" ;
        rr:tableName "BRAND_OWNER"
    ];
    rr:subjectMap [
        rr:template "http://linked.opendata.cz/resource/brand-owner/{OWNER_CD}" ;
        rr:class gr:BusinessEntity ;
        rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
    ];
    rr:predicateObjectMap [
        rr:predicate gr:legalName ;
        rr:objectMap [ rr:column "OWNER_NM" ];
    ];
    rr:predicateObjectMap [
        rr:predicate s:url ;
        rr:objectMap [
            rr:termType rr:IRI ;
            rr:column {OWNER_LINK} ;
        ];
    ];
    rr:predicateObjectMap [
        rr:predicate gr:hasBrand ;
        rr:objectMap [
            rr:parentTriplesMap <#TriplesMap3> ;
            rr:joinCondition [
                rr:child "OWNER_CD" ;
                rr:parent "OWNER_CD" ;
            ];
        ];
    ];
.

<#TriplesMap2>
    a rr:TriplesMap;
    rr:logicalTable [
        rr:tableSchema "POD";
        rr:tableOwner "DBA";
        rr:tableName "BRAND_TYPE"
    ];
    rr:subjectMap [
        rr:template "http://linked.opendata.cz/resource/brand-type/{BRAND_TYPE_CD}" ;
        rr:class gr:BusinessEntityType ;
        rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
    ];
    rr:predicateObjectMap [
        rr:predicate gr:name ;
        rr:objectMap [ rr:column "BRAND_TYPE_NM" ];
    ];
.

<#TriplesMap1>
    a rr:TriplesMap;
    rr:logicalTable [
        rr:tableSchema "POD" ;
        rr:tableOwner "DBA" ;
        rr:tableName "BRAND"
    ];
    rr:subjectMap [
        rr:template "http://linked.opendata.cz/resource/brand/{BSIN}" ;
        rr:class gr:Brand ;
        rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
    ];
    rr:predicateObjectMap [
        rr:predicate pod:bsin ;
        rr:objectMap [ rr:column "BSIN" ] ;
    ];
    rr:predicateObjectMap [
        rr:predicate gr:name ;
        rr:objectMap [ rr:column "BRAND_NM" ] ;
    ];
    rr:predicateObjectMap [
        rr:predicate s:url ;
        rr:objectMap [
            rr:termType rr:IRI ;
            rr:column "BRAND_LINK" ;
        ];
    ];
    rr:predicateObjectMap [
        rr:predicate gr:BusinessEntityType ;
        rr:objectMap [
            rr:parentTriplesMap <#TriplesMap2> ;
            rr:joinCondition [
                rr:child "BRAND_TYPE_CD" ;
                rr:parent "BRAND_TYPE_CD" ;
            ];
        ];
    ];
.
',
'http://product-open-data.org/temp',
'http://product-open-data.org/temp'
);

exec ( 'sparql ' || DB.DBA.R2RML_MAKE_QM_FROM_G ('http://product-open-data.org/temp') );
So I figured out my code was wrong. It should be like this:

rr:predicateObjectMap [
    rr:predicateMap [ rr:constant s:url ];
    rr:objectMap [
        rr:termType rr:IRI ;
        rr:template "{BRAND_LINK}" ;
    ];
];

and it's working. Thank you.
To be clear — are you saying the R2RML mapping is loading successfully, but when running a SPARQL CONSTRUCT query, the rr:termType rr:IRI mapping is not being displayed in the result set? As the docs indicate, only rr:sqlQuery is not currently supported ...
EUnit tests containing calls to os:cmd fail on Mac OS X 10.6
I have just 'inherited' an OTP project in my workplace and I am having the following problem: a set of tests includes a call to a function which uses os:cmd, and it seems that's the reason the tests fail; it's the only difference I have located between failing and successful tests. I am running the tests on Mac OS X 10.6.8 and Erlang R14A.

These are the failing functions:

execute_shell_command(Config, Event, Params, output) ->
    execute_with_output(Config, Event, Params);
execute_shell_command(Config, Event, Params, status) ->
    execute_with_status(Config, Event, Params).

execute_with_output(Config, Event, Params) ->
    Command = executable(Config, Event),
    String = string:join([Command|Params], " "),
    error_logger:info_msg("Command to be invoked:~n~p", [String]),
    os:cmd(String).

And below are the failing tests:

execute_with_output_test() ->
    Config = [{click, [{log_path, "fixtures"}, {executable, "echo"}]},
              {processing_stat_path, "fixtures"}],
    ?assertMatch("OK", execute_with_output(Config, click, ["-n", "OK"])).

execute_with_output_success_test() ->
    Config = [{click, [{log_path, "fixtures"}, {executable, "true"}]},
              {processing_stat_path, "fixtures"}],
    ?assertMatch("success", execute_with_output(Config, click, [status_as_output_suffix()])).

execute_with_output_failure_test() ->
    Config = [{click, [{log_path, "fixtures"}, {executable, "false"}]},
              {processing_stat_path, "fixtures"}],
    ?assertMatch("error", execute_with_output(Config, click, [status_as_output_suffix()])).

execute_with_status_success_test() ->
    Config = [{click, [{log_path, "fixtures"}, {executable, "true"}]},
              {processing_stat_path, "fixtures"}],
    ?assertMatch(ok, execute_with_status(Config, click, [])).

execute_with_status_failure_test() ->
    Config = [{click, [{log_path, "fixtures"}, {executable, "false"}]},
              {processing_stat_path, "fixtures"}],
    ?assertMatch(error, execute_with_status(Config, click, [])).

hdfs_put_command_parameters_echoed_test() ->
    Config = [{click, [{log_path, "fixtures"},
                       {executable, "echo -n"},
                       {delete_after_copy, false},
                       {hdfs_url, "hdfs://localhost/unprocessed/clicks"}]},
              {processing_stat_path, "fixtures"}],
    {ok, Hostname} = inet:gethostname(),
    Expected = "fs -put test_file hdfs://localhost/unprocessed/clicks/" ++ Hostname ++ "-test_file",
    ?assertMatch(Expected, hdfs_put_command(Config, click, "test_file", output)).

click_upload_success_test() ->
    Config = [{click, [{log_path, "fixtures"},
                       {executable, "fixtures/hadoop_mock_success"},
                       {delete_after_copy, false},
                       {hdfs_url, "hdfs://testhost"}]},
              {processing_state_path, "fixtures"}],
    unconsult(configuration_file(test), Config),
    Callback = click_upload(test),
    ?assertMatch([ok, ok], Callback(click, test_message)),
    write_last_processed(Config, click, "click-2009-04-10-17-00").

The output of the test suite:

erl -noshell -setcookie colonel_hathi -sname test_units_talkative_client \
    -eval 'error_logger:tty(false).' -pa ./ebin \
    -eval 'case lists:member({test,0}, hdfs_loader:module_info(exports)) of true -> hdfs_loader:test(); false -> io:format(" [No test in ~p]~n", [hdfs_loader]) end' \
    -eval 'case lists:member({test,0}, talkative_client_app:module_info(exports)) of true -> talkative_client_app:test(); false -> io:format(" [No test in ~p]~n", [talkative_client_app]) end' \
    -eval 'case lists:member({test,0}, talkative_client_connection:module_info(exports)) of true -> talkative_client_connection:test(); false -> io:format(" [No test in ~p]~n", [talkative_client_connection]) end' \
    -eval 'case lists:member({test,0}, talkative_client_control:module_info(exports)) of true -> talkative_client_control:test(); false -> io:format(" [No test in ~p]~n", [talkative_client_control]) end' \
    -eval 'case lists:member({test,0}, talkative_client_sup:module_info(exports)) of true -> talkative_client_sup:test(); false -> io:format(" [No test in ~p]~n", [talkative_client_sup]) end' \
    -s init stop

hdfs_loader:
execute_with_output_test...*failed*
::error:{assertMatch_failed,
         [{module,hdfs_loader},
          {line,310},
          {expression, "execute_with_output ( Config , click , [ \"-n\" , \"OK\" ] )"},
          {expected,"\"OK\""},
          {value,"-n OK\n"}]}
  in function hdfs_loader:'-execute_with_output_test/0-fun-0-'/0

hdfs_loader: execute_with_output_success_test...*failed*
::error:{assertMatch_failed,
         [{module,hdfs_loader},
          {line,315},
          {expression, "execute_with_output ( Config , click , [ status_as_output_suffix ( ) ] )"},
          {expected,"\"success\""},
          {value,"-n success\n"}]}
  in function hdfs_loader:'-execute_with_output_success_test/0-fun-0-'/0

hdfs_loader: execute_with_output_failure_test...*failed*
::error:{assertMatch_failed,
         [{module,hdfs_loader},
          {line,321},
          {expression, "execute_with_output ( Config , click , [ status_as_output_suffix ( ) ] )"},
          {expected,"\"error\""},
          {value,"-n error\n"}]}
  in function hdfs_loader:'-execute_with_output_failure_test/0-fun-0-'/0

hdfs_loader: execute_with_status_success_test...*failed*
::error:{assertMatch_failed,
         [{module,hdfs_loader},
          {line,327},
          {expression,"execute_with_status ( Config , click , [ ] )"},
          {expected,"ok"},
          {value,"-n success\n"}]}
  in function hdfs_loader:'-execute_with_status_success_test/0-fun-0-'/0

hdfs_loader: execute_with_status_failure_test...*failed*
::error:{assertMatch_failed,
         [{module,hdfs_loader},
          {line,332},
          {expression,"execute_with_status ( Config , click , [ ] )"},
          {expected,"error"},
          {value,"-n error\n"}]}
  in function hdfs_loader:'-execute_with_status_failure_test/0-fun-0-'/0

hdfs_loader: hdfs_put_command_parameters_echoed_test...*failed*
::error:{assertMatch_failed,
         [{module,hdfs_loader},
          {line,342},
          {expression, "hdfs_put_command ( Config , click , \"test_file\" , output )"},
          {expected,"Expected"},
          {value, "-n fs -put test_file hdfs://localhost/unprocessed/clicks/ws-dhcp_3_156-test_file\n"}]}
  in function hdfs_loader:'-hdfs_put_command_parameters_echoed_test/0-fun-0-'/1

hdfs_loader: click_upload_success_test...*failed*
::error:{assertMatch_failed,[{module,hdfs_loader},
                             {line,352},
                             {expression,"Callback ( click , test_message )"},
                             {expected,"[ ok , ok ]"},
                             {value,[error,error]}]}
  in function hdfs_loader:'-click_upload_success_test/0-fun-0-'/1
  in call from hdfs_loader:click_upload_success_test/0

=======================================================
  Failed: 7.  Skipped: 0.  Passed: 17.
All 5 tests passed.
All 46 tests passed.
  [No test in talkative_client_control]
rake aborted!
Tests failed

Any help appreciated!
On a Linux box:

1> os:cmd(["echo -n test"]).
"test"
2> os:cmd(["echo test"]).
"test\n"

echo behaves differently on Mac. Specifically, -n doesn't work by default. Here might be a possible solution: http://hints.macworld.com/article.php?story=20071106192548833

Another solution is to stop using -n in the tests and either strip or match on the newline character.
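For example, the stripping option can be sketched like this (string:strip/3 is in the stdlib string module of the Erlang release mentioned, though later releases deprecate it in favour of string:trim/2):

```erlang
%% Strip the trailing newline from os:cmd output instead of relying on `echo -n`.
Output = string:strip(os:cmd("echo OK"), right, $\n).
%% Output is now "OK" regardless of whether the platform's echo honours -n.
```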