Exp error in Unix - readline

Simply put, I get an 'exp error when trying to start a different shell (q) from my PuTTY session.
This is my .bashrc file:
export QHOME=/opt/q
export PATH=$PATH:$QHOME
alias q='rlwrap -c q'
The paths are correct, and I can't find any information on the error.
[myname@dev-unixtrain ~]$ q
xxx+ 2.7 2011.08.16 Copyright (C) xxxx-xxxx Name Systems
l64/ 1()core 992MB myname dev-unixtrain.company.com 10.29.4.56 2014.03.16 company.com INTERNAL #45486
'exp
So it loads up and automatically goes back to the Unix shell.
Any ideas on what the cause might be?
TIA

This is a licence error: 'exp means you're trying to use q with an expired licence, which is why q prints the error and drops you straight back to the shell.

docker -w dir prefixed with another dir [duplicate]

Earlier today, I was trying to generate a certificate with a DNSName entry in the SubjectAltName extension:
$ openssl req -new -subj "/C=GB/CN=foo" -addext "subjectAltName = DNS:foo.co.uk" \
-addext "certificatePolicies = 1.2.3.4" -key ./private-key.pem -out ~/req.pem
This command led to the following error message:
name is expected to be in the format /type0=value0/type1=value1/type2=... where characters may be escaped by \. This name is not in that format: 'C:/Program Files/Git/C=GB/CN=foo'
problems making Certificate Request
How can I stop Git Bash from treating this string parameter as a filepath, or at least stop it from making this alteration?
The release notes to the Git Bash 2.21.0 update today mentioned this as a known issue. Fortunately, they also described two solutions to the problem:
If you specify command-line options starting with a slash, POSIX-to-Windows path conversion will kick in converting e.g. "/usr/bin/bash.exe" to "C:\Program Files\Git\usr\bin\bash.exe". When that is not desired -- e.g. "--upload-pack=/opt/git/bin/git-upload-pack" or "-L/regex/" -- you need to set the environment variable MSYS_NO_PATHCONV temporarily, like so:
MSYS_NO_PATHCONV=1 git blame -L/pathconv/ msys2_path_conv.cc
Alternatively, you can double the first slash to avoid POSIX-to-Windows path conversion, e.g. "//usr/bin/bash.exe".
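Applied to the openssl command from the question, the first workaround would look something like this:
MSYS_NO_PATHCONV=1 openssl req -new -subj "/C=GB/CN=foo" -addext "subjectAltName = DNS:foo.co.uk" \
-addext "certificatePolicies = 1.2.3.4" -key ./private-key.pem -out ~/req.pem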
Using MSYS_NO_PATHCONV=1 can be problematic if your script accesses files.
Prefixing with a double forward slash doesn't work for the specific case of OpenSSL, as it causes the first DN segment key to be read as "/C" instead of "C", so OpenSSL drops it, outputting:
req: Skipping unknown attribute "/C"
Instead, I used a function that detects if running on bash for Windows, and prefixes with a "dummy" segment if so:
# If running on bash for Windows, any argument starting with a forward slash is automatically
# interpreted as a drive path. To stop that, you can prefix with 2 forward slashes instead
# of 1 - but in the specific case of openssl, that causes the first DN segment key to be read as
# "/C" instead of "C", so it is skipped. We work around that by prefixing with a spurious segment,
# which will be skipped by openssl.
function fixup_cn_subject() {
    local result="${1}"
    case $OSTYPE in
        msys|win32) result="//XX=x${result}" ;;
    esac
    echo "$result"
}
# Usage example
MY_SUBJECT=$(fixup_cn_subject "/C=GB/CN=foo")
Found a workaround by passing a dummy value as the first attribute, for example: -subj '//SKIP=skip/C=gb/CN=foo'
I had the same issue using bash, but running the exact same command in PowerShell worked for me. Hopefully this will help someone.

How to check the blockchain height in hyperledger-fabric

I am playing with hyperledger-fabric v1.0 - actually a newbie. How can I check the chain height? Is there a command or something that I can use to "ask" about the blockchain height? Thanks in advance.
Well, you have a few options for how to do it:
You can leverage the peer cli command line tool to obtain the latest available block by running:
peer channel fetch newest -o ordererIP:7050 -c mychannel last.block
Next you can leverage configtxlator to decode the content of the block as follows:
curl -X POST --data-binary @last.block http://localhost:7059/protolator/decode/common.Block
(note that you need to start configtxlator first)
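If you have jq available, you can pull the block number straight out of the decoded JSON - a sketch, assuming the standard common.Block layout, where the newest block's number is height - 1:
curl -s -X POST --data-binary @last.block http://localhost:7059/protolator/decode/common.Block | jq '.header.number'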
An alternative path assumes you are going to use one of the available SDKs to invoke QSCC (Query System ChainCode) with the GetChainInfo command. This will return the following structure:
type BlockchainInfo struct {
    Height            uint64 `protobuf:"varint,1,opt,name=height" json:"height,omitempty"`
    CurrentBlockHash  []byte `protobuf:"bytes,2,opt,name=currentBlockHash,proto3" json:"currentBlockHash,omitempty"`
    PreviousBlockHash []byte `protobuf:"bytes,3,opt,name=previousBlockHash,proto3" json:"previousBlockHash,omitempty"`
}
This has information about the current ledger height.
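You don't strictly need an SDK for this, either; QSCC can also be invoked from the peer cli - a sketch, assuming a channel named mychannel:
peer chaincode query -C mychannel -n qscc -c '{"Args":["GetChainInfo","mychannel"]}'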
Another alternative:
Using the peer cli command line (for example, docker exec -it cli bash) you can do:
peer channel getinfo -c mychannel
It seems that I found something - maybe cumbersome, but better than nothing:
Command:
docker logs -f peer0.org1.example.com 2>&1 | grep blockNo
Check for the "latest" line in the output, something like:
2017-07-18 19:40:39.586 UTC [historyleveldb] Commit -> DEBU b75b Channel [mychannel]: Updates committed to history database for blockNo [34]
So, if I am not wrong, in this case the block height is: 34
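If you don't want to follow the log, the same idea works as a one-off; this just prints the most recent blockNo line:
docker logs peer0.org1.example.com 2>&1 | grep blockNo | tail -n 1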
Thanks
You can use blockchain-explorer (a UI tool):
https://github.com/hyperledger/blockchain-explorer
You should also be able to use the fabric CORE API (JSON/REST).
See the docs for the Blockchain GET /chain operation at:
https://github.com/hyperledger-archives/fabric/blob/master/docs/API/CoreAPI.md#rest-api

roslaunch failed: cannot launch node

I have downloaded and compiled some ROS nodes from here (just to have more info). I am trying to launch the five ROS nodes with parameters using a launch file that is taken from that repo.
After executing source catkin_ws/devel_isolated/setup.bash and then roslaunch crab.launch (the launch file from the link above), the following error appears:
root@beaglebone:~# roslaunch crab.launch
... logging to /root/.ros/log/4f6332fe-dbe2-11e3-86a8-7ec70b079d59/roslaunch-beaglebone-2067.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://beaglebone:58881/
SUMMARY
========
PARAMETERS
* /clearance
* /duration_ripple
* /duration_tripod
* /joint_lower_limit
* /joint_upper_limit
* /port_name
* /robot_description
* /rosdistro
* /rosversion
* /trapezoid_h
* /trapezoid_high_radius
* /trapezoid_low_radius
NODES
/
crab_body_kinematics (crab_body_kinematics/body_kinematics)
crab_gait (crab_gait/gait_kinematics)
crab_imu (crab_imu/imu_control)
crab_leg_kinematics (crab_leg_kinematics/leg_ik_service)
crab_maestro_controller (crab_maestro_controller/controller_sub)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
ERROR: cannot launch node of type [crab_leg_kinematics/leg_ik_service]: can't locate node [leg_ik_service] in package [crab_leg_kinematics]
ERROR: cannot launch node of type [crab_maestro_controller/controller_sub]: can't locate node [controller_sub] in package [crab_maestro_controller]
ERROR: cannot launch node of type [crab_body_kinematics/body_kinematics]: can't locate node [body_kinematics] in package [crab_body_kinematics]
ERROR: cannot launch node of type [crab_gait/gait_kinematics]: can't locate node [gait_kinematics] in package [crab_gait]
ERROR: cannot launch node of type [crab_imu/imu_control]: can't locate node [imu_control] in package [crab_imu]
I have reinstalled the packages as suggested in some other threads about similar problems.
I also have noticed that:
1º - if I move all the executables of the nodes to the folder src/<package>/, I'm able to execute roslaunch crab.launch. But I don't want to leave it like that; it's not a proper way to work ;)
Additional info:
2º - If I execute, for example, source devel_isolated/<package>/setup.bash and then roslaunch crab.launch, the package which I have just sourced works and executes... (while the others still don't)
3º - So I have sourced all the devel_isolated/<package>/setup.bash files and tried again: none of them worked this time.
This leads me to think that the problem is due to the ROS environment variables: if I run export | grep ROS after 2º, I can see that the package's path appears in $ROS_PACKAGE_PATH while the others are not there:
root@beaglebone:~# export | grep ROS
declare -x ROS_DISTRO="hydro"
declare -x ROS_ETC_DIR="/opt/ros/hydro/etc/ros"
declare -x ROS_MASTER_URI="http://localhost:11311"
declare -x ROS_PACKAGE_PATH="/root/catkin_ws/src/crab_msgs:/root/catkin_ws/src/joy:/root/catkin_ws
/src/ps3joy:/root/catkin_ws/src/xacro:/root/catkin_ws/src/roslint:/root/catkin_ws/src/kdl_parser:/root/catkin_ws
/src/urdf:/root/catkin_ws/src/urdf_parser_plugin:/root/catkin_ws/src:/opt/ros/hydro/share:/opt/ros/hydro
/stacks:/root/ros_catkin_ws/install_isolated/share:/root/ros_catkin_ws/install_isolated/stacks"
declare -x ROS_ROOT="/opt/ros/hydro/share/ros"
declare -x ROS_TEST_RESULTS_DIR="/root/catkin_ws/build_isolated/crab_msgs/test_results"
root@beaglebone:~# source catkin_ws/devel_isolated/crab_imu/setup.bash
declare -x ROS_PACKAGE_PATH="/root/catkin_ws/src/crab_imu:/root/catkin_ws/src/crab_msgs:/root/catkin_ws
/src/joy:/root/catkin_ws/src/ps3joy:/root/catkin_ws/src/xacro:/root/catkin_ws/src/roslint:/root/catkin_ws
/src/kdl_parser:/root/catkin_ws/src/urdf:/root/catkin_ws/src/urdf_parser_plugin:/root/catkin_ws/src:/opt
/ros/hydro/share:/opt/ros/hydro/stacks:/root/ros_catkin_ws/install_isolated/share:/root/ros_catkin_ws
/install_isolated/stacks"
declare -x ROS_TEST_RESULTS_DIR="/root/catkin_ws/build_isolated/crab_imu/test_results"
Seems that 3º overwrites the source executed before..., meaning that ROS_PACKAGE_PATH does not contain all the packages it should.
I also have tried to force ROS_PACKAGE_PATH using the export command, but it didn't work. So I would have to change more environment variables apart from that, but I don't know which ones...
So, I don't know if my diagnosis is correct and, if so, what I should do to correct this... Hope I have gathered enough info.
Thanks in advance!!
Iñigo
Set the executable bit for the files; most probably you need to set executable permissions for your node executables:
chmod +x filename
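To cover every node in the workspace at once, something along these lines should work (a sketch; the devel_isolated path is an assumption based on the question, so adjust it to wherever your node executables actually live):
# hypothetical location; adjust to your workspace layout
find ~/catkin_ws/devel_isolated -path '*/lib/*' -type f -exec chmod +x {} +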

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and have used yum to install all needed dependencies into this root, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had almost exactly the same problem I'm having with check_url, I decided to open up a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made in the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com; echo $?
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
command_name check_url
command_line $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
host_name {my-shared-web-server}
service_description URL: somedomain.com
check_command check_url!somedomain.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
I was making things WAY too complicated.
The built-in, installed-by-default plugin, check_http, can accomplish what I wanted and more. Here's how I did it:
My Service Definition:
define service{
host_name myers
service_description URL: my-url.com
check_command check_http_url!http://my-url.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
My Command Definition:
define command{
command_name check_http_url
command_line $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
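To sanity-check this outside of Nagios, you can expand the macros by hand and run the plugin directly (hypothetical host address and URL):
/usr/local/nagios/libexec/check_http -I <host-address> -u http://my-url.com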
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is due to the fact that you don't have the Perl package utils; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can make a script plugin. It is easy; you only have to check the URL with something like:
curl -Is $URL -k | grep HTTP | cut -d ' ' -f2
$URL is what you pass to the script command by parameter.
Then check the result: if you get a code greater than 399 you have a problem; otherwise everything is OK! Then exit with the right exit code and message for Nagios.
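A minimal sketch of such a plugin, following the Nagios convention of exit code 0 for OK and 2 for CRITICAL (the tr -d '\r' is there because curl's header lines end in CRLF):
#!/bin/sh
# Hypothetical plugin: check_url_code <url>
URL=$1
# Grab the HTTP status code; -k skips certificate verification, as above
CODE=$(curl -Is -k "$URL" | grep HTTP | cut -d ' ' -f2 | tr -d '\r')
if [ -n "$CODE" ] && [ "$CODE" -lt 400 ]; then
    echo "OK - $URL returned $CODE"
    exit 0
else
    echo "CRITICAL - $URL returned ${CODE:-no response}"
    exit 2
fi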

How to know sqlplus version from inside sqlplus

I am offered (by some framework) the ability to run commands in sqlplus, but I am not launching it myself.
I'd like to know the version of the sqlplus that is running.
Within SQL*Plus, there are some preDEFINEd substitution variables:
SQL> define
DEFINE _DATE = "23-NOV-13" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "" (CHAR)
DEFINE _USER = "" (CHAR)
DEFINE _PRIVILEGE = "" (CHAR)
DEFINE _SQLPLUS_RELEASE = "1102000100" (CHAR)
DEFINE _EDITOR = "Notepad" (CHAR)
Notice the _SQLPLUS_RELEASE variable. You can reference this in SQL*Plus.
For example, you can do something like:
sqlplus -S /nolog<<EOF
prompt &_SQLPLUS_RELEASE
quit
EOF
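From a shell script you could capture that into a variable - a sketch along the same lines:
SQLPLUS_VER=$(sqlplus -S /nolog <<EOF
prompt &_SQLPLUS_RELEASE
quit
EOF
)
echo "$SQLPLUS_VER"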
You can just use the command:
sqlplus -V
And you should get:
SQL*Plus: Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
Or:
SQL*Plus: Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
I don't think you can with an actual query. You may be able to get it with this:
SELECT program, module
FROM v$session s
ORDER BY s.sid;
The MODULE column may contain the version number, or it might not; it depends on the program. If memory serves correctly, sqlplus does not give this. For example, TOAD gives "TOAD Freeware 11.0.0.116".
> select &_sqlplus_release from dual;
old 1: select &_sqlplus_release from dual
new 1: select 1803000000 from dual
1803000000
----------
1803000000
You can also just connect to sqlplus through the command line. On Linux you can do the following:
[orafresh@ljsrv1123 ~]$ sqlplus / as sysdba
Which will return:
SQL*Plus: Release 11.1.0.7.0 - Production on Fri Jul 14 12:47:36 2017
Copyright (c) 1982, 2008, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
