Rails port of testing environment - ruby-on-rails

I'd like to test the HTTP API of our Rails app using Faraday and RSpec. Faraday needs the host URL plus port. Unfortunately, the port of the testing environment changes on every run. How do I access the current port programmatically in the spec?

If you're using Capybara, you can set the port in spec_helper.rb like so:
Capybara.server_port = 1234
See also: https://github.com/jnicklas/capybara/pull/123
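With a fixed port set that way, you can point Faraday at the same value so host and port never drift apart. A rough sketch (the /api/status path is just a placeholder for whatever endpoint your app exposes, and the test server must already be booted for the request to succeed):
# spec_helper.rb
Capybara.server_port = 1234

# in a spec
conn = Faraday.new(url: "http://127.0.0.1:#{Capybara.server_port}")
response = conn.get("/api/status") # placeholder endpoint
expect(response.status).to eq(200)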

There's probably more than one way to do this, but this is working for me right now:
The command
port = `lsof -p #{Process.pid} -ai TCP -as TCP:LISTEN -Fn | grep ^n | cut -c 4- | uniq`.strip
Note that you'll have to do this at some point after the app has loaded - i.e., you can't use this in your environment.rb or application.rb file.
The explanation
Basically what this command does is as follows:
lsof is the Unix command for LiSt Open Files
-p #{Process.pid} limits it to the current process (i.e., your test web server instance)
-ai TCP limits it to "files" of type TCP (i.e., open TCP ports). (Note: the -a ANDs this condition with the pid filter; the default, using just -i, would OR them.)
-as TCP:LISTEN limits to just TCP ports that are being listened on (as opposed to any open port - like your app's connection to Postgres for example)
-Fn tells it to only output the "name" column, which in this case will be the IP/port that is being listened on
The output of that part by itself will be something like this:
p12345
n*:5001
n*:5001
The first line, starting with p is the process ID. There's no way to suppress this.
The next 2 lines (not sure why it can output multiples, but we'll take care of it in a minute) are the "name" column (hence n), followed by the IP + port. In our case (and I imagine yours as well, in a test environment), the web server listens on all available local IPs, thus *. Then it tells us that the port is, in this case 5001.
Finally, we pipe it through...
* grep ^n to eliminate the first line (the process id)
* cut to say "cut from columns 4 on" - i.e., remove the n*: to return just the port, and
* uniq to just get the one instance
(It will also have a trailing newline, thus the strip call.)
The usage
In my case, I'm using this in my Cucumber env.rb thusly, to reconfigure the URL options for ActionMailer so my email links get generated properly as working links in test:
port = `lsof -p #{Process.pid} -ai TCP -as TCP:LISTEN -Fn | grep ^n | cut -c 4- | uniq`.strip
MyApp::Application.configure do
  config.action_mailer.default_url_options[:host] = "0.0.0.0:#{port}"
end
No doubt you could do the same thing in a helper/config for RSpec as well.
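For example, a rough RSpec equivalent might look like this (untested sketch: it reuses the lsof one-liner and the MyApp::Application name from above, and the lsof call only finds a port once the test server is actually listening, so you may need to move it into a later hook depending on your setup):
# spec/spec_helper.rb
RSpec.configure do |rspec|
  rspec.before(:suite) do
    # grab the port the test server is listening on (see the lsof explanation above)
    port = `lsof -p #{Process.pid} -ai TCP -as TCP:LISTEN -Fn | grep ^n | cut -c 4- | uniq`.strip
    MyApp::Application.configure do
      config.action_mailer.default_url_options[:host] = "0.0.0.0:#{port}"
    end
  end
end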

Related

Prevent default redirection from port 80 to 5000 on Synology NAS (DSM 5)

I would like to use an nginx front server on my Synology NAS for reverse-proxying purposes. The goal is to provide a facade for the non-standard port numbers used by the diverse webservers hosted on the NAS. nginx should be listening on port 80, otherwise all this wouldn't make any sense.
However, DSM comes out of the box with an Apache server that is already listening on port 80. What it does is really silly: it simply redirects to port 5000, which is the entry point to the NAS web manager (DSM).
What I would like to do is disable this functionality, making port 80 available for my nginx server. How can I do this?
Since Google also sends people here for recent Synology DSM versions, here is an answer for DSM 6 (based on http://tonylawrence.com/posts/unix/synology/freeing-port-80/).
As of DSM 6, nginx is used as the HTTP server and handles the redirection. The following commands will leave nginx in place, but run it on port 8880 instead of 80.
ssh into your Synology
sudo -s
cd /usr/syno/share/nginx
Make a backup of server.mustache, DSM.mustache, WWWService.mustache
cp server.mustache server.mustache.bak
cp DSM.mustache DSM.mustache.bak
cp WWWService.mustache WWWService.mustache.bak
sed -i "s/80/8880/g" server.mustache
sed -i "s/80/8880/g" DSM.mustache
sed -i "s/80/8880/g" WWWService.mustache
Optionally, you can also move 443 to 8881:
sed -i "s/443/8881/g" server.mustache
sed -i "s/443/8881/g" DSM.mustache
sed -i "s/443/8881/g" WWWService.mustache
Quit the shell (e.g., via Ctrl+D)
Go to the Control Panel and change any setting (e.g. the Application portal -> Reverse Proxy to forward http://YOURSYNOLOGYHOSTNAME:80 to http://localhost:8181 - 8181 is the port suggested by the pi-hole on DSM tutorial).
tl;dr Edit /usr/syno/etc/synoservice.d/httpd-user.cfg to look like:
{
"init_job_map":{"upstart":["httpd-user"]},
"user_controllable":"no",
"mtu_sensitive":"yes",
"auto_start":"no"
}
Then edit the stop on runlevel to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
... then reboot.
Background information
The answer given by Backslash36 is not the easiest solution, and it may also be more difficult to maintain. Here, I give a solution that also doesn't involve starting webstation, which most other solutions demand. Note, for updated documentation see here, which gives a lot of info in general about the Synology systems.
It is important to note that the newer DSM versions (> 5.x) use upstart now, so much of the previous documentation is not correct. There are two httpd jobs which run by default on Synology machines:
httpd-sys : serves the administration page(s) and is located on 5000/5001 by default.
httpd-user : this, somewhat confusingly, always runs even if the webstation program is not enabled.
If webstation:
is enabled: then this program serves the user webpages.
is not enabled: then this program sets /usr/syno/synoman/phpsrc/web as its DocumentRoot (/usr/syno/synoman/phpsrc/web/index.cgi -> /usr/syno/synoman/webman/index.cgi), meaning that a call to http://address.of.my.dsm will call the index.cgi file. This cgi file is what drives the redirect to 5000 (or whatever you have set the admin_port to be).
From the command line, you can check what the [secure_]admin_port is set to:
Syno-Server> get_key_value /etc/synoinfo.conf admin_port
5184
Syno-Server> get_key_value /etc/synoinfo.conf secure_admin_port
5185
where I have set mine differently.
OK, now to the solution. The best solution is simply to stop the httpd-user daemon from starting. This is presumably what you want anyway (e.g. to start another server like nginx in a Docker container). To do this, edit the relevant upstart configuration file:
Syno-Server> cat /usr/syno/etc/synoservice.d/httpd-user.cfg
{
"init_job_map":{"upstart":["httpd-user"]},
"user_controllable":"no",
"mtu_sensitive":"yes",
"auto_start":"no"
}
so that the "auto_start" entry is "no" (as it is above). It will presumably be "yes" on your machine by default. Then edit the stop on runlevel to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
This last step is to ensure that the httpd-user service does actually start, but then automatically stops. This is because there are otherwise a number of services that depend upon it actually starting. Reboot your machine and you will now see that nothing is listening (or forwarding) on Port 80.
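A quick way to verify after the reboot (a sketch; the exact flags may vary with the BusyBox netstat shipped on DSM):
Syno-Server> netstat -tln | grep ':80 '
No output means nothing is listening on port 80 any more.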
Done! It was tricky, but now I have it working just fine. Here is how I did it.
What follows requires connecting to the NAS with ssh, and may not be recommended if you want to keep the warranty on your product (even though it's completely safe IMHO).
TL;DR: In the following files, replace all occurrences of port 80 with a non-standard port (for example, 8080). This will free up port 80 and make it available for whatever you want (a command sketch follows the file list).
/etc/httpd/conf/httpd.conf
/etc/httpd/conf/httpd.conf-user
/etc/httpd/conf/httpd.conf-sys
/etc.defaults/httpd/conf/httpd.conf-user
/etc.defaults/httpd/conf/httpd.conf-sys
Note that modifying a subset of these files is probably sufficient (I could observe that the first one is actually computed from several others). I guess modifying the files in /etc.defaults/ would be enough, but if not, worst-case scenario is to modify all those files and you will be just fine.
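As a command sketch, the same blanket replacement the DSM 6 answer uses can be applied to the files above (make backups first, and review the result afterwards, since replacing every "80" can also touch unrelated occurrences):
for f in /etc/httpd/conf/httpd.conf \
         /etc/httpd/conf/httpd.conf-user \
         /etc/httpd/conf/httpd.conf-sys \
         /etc.defaults/httpd/conf/httpd.conf-user \
         /etc.defaults/httpd/conf/httpd.conf-sys
do
    cp "$f" "$f.bak"           # keep a backup of the original
    sed -i "s/80/8080/g" "$f"  # swap port 80 for 8080, as described above
done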
Once this is done, don't forget to restart your NAS!
For those interested in how I found out
I'm not that familiar with the Linux filesystem, and even less with Apache configuration. But I knew that scripts dealing with startup processes are located in /etc/init. The Apache server that was performing the redirection would certainly be launched from there.
This is where I had to get my hands dirty. I performed some cat <filename> | grep 80 on the files in that directory I considered relevant, hoping to find a configuration line that would set a port number to 80.
That intuition paid off: /etc/init/httpd-user.conf contained the line echo "DocumentRoot \"/usr/syno/synoman/phpsrc/web\"" >> "${HttpdConf}" #port 80 to 5000. Bingo!
Looking at the top of the file, I discovered that the HttpdConf variable was referring to /etc/httpd/conf/httpd.conf. This is where the actual configuration was taking place.
From there it is relatively straightforward, even for those Jon Snows out there who know nothing about Apache configuration. The trick was to notice that httpd.conf was instantiated from some template at startup (and changing that file alone was therefore not enough). Performing a find / -name "*httpd.conf*", combined with some grep 80, gave me the list of files to modify.
When you look back all this looks obvious of course.
However I wish Synology gave us more flexibility, so we don't have to perform dirty hacks like that...

Connecting to a Progress Openedge database from ABL

This code works fine if I run it in the Progress Editor. If I save it as a .p file and right-click "RUN", it gives me an error that the database doesn't exist. I understand that maybe I should insert some code to connect to a database.
Does anybody know what statement I should use?
DEF STREAM st1.
OUTPUT STREAM st1 TO c:\temp\teste.csv.
FOR EACH bdName.table NO-LOCK:
PUT STREAM st1 UNFORMATTED bdName.Table.attr ";" SKIP.
END.
OUTPUT STREAM st1 CLOSE.
Exactly as you say, you need to connect to your database. This can be done in a couple of different ways.
Connect by CONNECT statement
You can connect a database using the CONNECT statement. Basically:
CONNECT <database name> [options]
Here's a simple statement that connects to a database named "database" running locally on port 43210.
CONNECT database.db -H localhost -S 43210.
-H specifies the host running the database. This can be a name or an IP address. -S specifies the port (or service) that the database uses for connections. This can be a number or a service name (in that case it must be specified in /etc/services or similar).
However, you cannot connect to a database and work with its tables in the same program. Instead, you will need to connect in one program and then run the logic in a second program:
/* runProgram.p */
CONNECT database -H dbserver -S 29000.
RUN program.p.
DISCONNECT database.
/* program.p */
FOR EACH exampletable NO-LOCK:
DISPLAY exampletable.
END.
Connect by command line parameters
You can simply add parameters to your startup command so that the new session connects to one or more databases from the start.
Windows:
prowin32.exe -db mydatabase -H localhost -S 7777
Look at the option below (parameter file) before doing this
Connect by command line parameter (using a parameter file)
Another option is to use a parameter file, normally with the extension .pf.
Then you will have to modify how you start your session, so instead of just running prowin32.exe (if you're on Windows) you add the -pf parameter:
prowin32.exe -pf myparameterfile.pf
The parameter file will then contain all your connection parameters:
# myparameterfile.pf
-db database -H localhost -S 12345
Hashtag (#) is used for comments in parameter files.
On Linux/Unix you would run:
pro -pf myparameterfile.pf
You can also mix the different approaches for different databases used in the same session.
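For example (a sketch: seconddb, otherhost and port 20000 are placeholders), the first database comes from the parameter file and the second is added straight on the command line:
prowin32.exe -pf myparameterfile.pf -db seconddb -H otherhost -S 20000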

Reading netstat Output to Install Printer

Long story behind it, but here's what I'm trying to do:
I am working on a remote virtual machine implementation where, depending on the client device's location, the appropriate network printer will be installed via batch file (no VBS or PowerShell).
So, my idea is this:
1. Run netstat -an -p tcp to find the line containing port 49404.
2. Filter that output to grab the second IP address that will be returned.
3. Replace the last octet of that IP with "250" (the printer IP for each network).
4. Run nslookup on the newly calculated IP to obtain the name of that printer.
5. Install the printer by name.
Here's what I do have so far, pieced together from older posts around the web (I haven't gotten to steps 4 or 5 yet):
@echo off
netstat -p tcp -an | FIND "49404" > %temp%\TEMPIP.txt
FOR /F "tokens=2 delims=:" %%a in (%temp%\TEMPIP.txt) do set IP=%%a
del %temp%\TEMPIP.txt
set IP=%IP:~9%
set "ip=%IP%"
for /f "tokens=1-4 delims=. " %%a in ("%ip%") do (
set octetA=%%a
set octetB=%%b
set octetC=%%c
set octetD=232
)
I'm sure there are cleaner or more efficient ways to perform this task, so I'm hoping you all can point me in the right direction. Thanks!

lost logout functionality for grails app using spring security

I have a Grails app that moved to a new subnet with a change to the DNS. As a result, the logout functionality stopped working. When I inspect the network using Chrome, I get this message under request headers: CAUTION: Provisional headers are shown.
This means the request to retrieve that resource was never made, so the headers being shown are not the real thing.
The logout function is executing this action
package edu.example.performanceevaluations
import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils
class LogoutController {
    def index = {
        // Put any pre-logout code here
        redirect uri: SpringSecurityUtils.securityConfig.logout.filterProcessesUrl // '/j_spring_security_logout'
    }
}
Would greatly appreciate a direction to look towards.
As suggested by that link, run chrome://net-internals and see if you get anywhere.
If you are still lost, I would suggest two-way debugging. If you have Linux, find something related to your traffic and run either something like tcpdump or, if that's too complex, install and run ngrep -W byline -d any port 8080 -q, and look for the pattern to see what is going on.
With ngrep/tcpdump, look for that old IP or subnet across the entire traffic and see if anything is still trying to get through (this is all best done on the Grails app server, of course).
(Unsure: possibly port 8080 or any other clear-text port that your app may be running on.)
Look for your IP in the Apache logs: does it hit the actual server when you log out, etc.?
Has the application been restarted since the subnet change? It could have cached the old endpoint in the running Java process; check with:
pgrep java|awk '{print "netstat -plant | grep "$1}'|/bin/sh
or
pgrep java|awk '{print " lsof -p "$1" |grep -i listen"}'|/bin/sh
I personally think something somewhere needs to be restarted, since it's holding on to a cache of something.
Also check the hosts files of any end machines involved to ensure nothing still has the previous subnet configured in there.

RAILS, CUCUMBER: Getting the testing server address

While running a cucumber test, I need to know the local testing server address. It will be something like "localhost:47632". I've searched the ENV but it isn't in there, and I can't seem to find any other variables that might have it. Ideas?
I believe that the port is dynamically generated on test runs. You can use OS-level tools to inspect what connections are opened by the process and glean the port that way. I do this on my Ubuntu system infrequently, so I can't tell you off the top of my head which tool does it. Netstat maybe? I always have to go out and google for it, so consider this more of a hint than a complete answer.
Ah, to be more clear... I put a debug breakpoint in, and when it breaks, THEN I use the OS-level tools to see what port the test server is running on at that moment in time. How to discover it predictively? No idea, sorry.
here's what I use:
netstat -an | grep LISTEN
(Answering my own question just so that the code formatting will be correct)...
Using jaydel's idea to use netstat, here's the code. I extract the line from netstat that has the current pid. (Probably not the most elegant way to do this, but it works)
value = %x( netstat -l -p --tcp )   # list listening TCP sockets along with the owning pid
pid = $$.to_s                       # pid of the current (test) process
local_port = ""
value.split( "\n" ).each do |i|
  if i.include?( pid )
    m = i.match( /\*:(\d+)/ )       # e.g. "*:47632" -> capture the port number
    local_port = m[1].to_s
  end
end
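If your suite is driving the app through Capybara anyway, the server object may expose the port directly, which avoids parsing netstat output. A sketch, assuming Capybara has already booted its test server for the current session:
local_port = Capybara.current_session.server.port
host_with_port = "localhost:#{local_port}"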
