Lost logout functionality for Grails app using Spring Security

I have a Grails app that moved to a new subnet with a change to the DNS. As a result, the logout functionality stopped working. When I inspect the network using Chrome, I get this message under the request headers: CAUTION: Provisional headers are shown.
This means the request to retrieve that resource was never made, so the headers being shown are not the real thing.
The logout function executes this action:
package edu.example.performanceevaluations

import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils

class LogoutController {
    def index = {
        // Put any pre-logout code here
        redirect uri: SpringSecurityUtils.securityConfig.logout.filterProcessesUrl // '/j_spring_security_logout'
    }
}
I would greatly appreciate a pointer in the right direction.

As suggested by that link, run chrome://net-internals and see if you get anywhere.
If you are still lost, I would suggest debugging from both ends. If you have Linux, capture the traffic related to your app and run either something like tcpdump or, if that's too complex, install and run ngrep -q -W byline -d any port 8080, then look for the pattern and see what is going on.
With ngrep/tcpdump, look for that old IP or subnet in the entire traffic and see if anything is still trying to get through (this is all best done on the Grails app server, of course, on port 8080 or whatever other clear-text port your app may be running on).
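For example, a minimal capture along those lines (the subnet 10.0.1.0/24, the 'old-host-or-ip' pattern and port 8080 are placeholders, substitute your own):

sudo tcpdump -nn -i any net 10.0.1.0/24                        # anything still talking to the old subnet?
sudo ngrep -q -W byline -d any 'old-host-or-ip' port 8080      # grep the clear-text traffic on the app port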
Look for your IP in the Apache logs: does the request hit the actual server when you log out?
Has the application been restarted since the subnet change? It could have cached the old endpoint in the running Java process; check with:
pgrep java | awk '{print "netstat -plant | grep "$1}' | /bin/sh
or
pgrep java | awk '{print "lsof -p "$1" | grep -i listen"}' | /bin/sh
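A minimal sketch of the same check written out more plainly (assumes one or more java processes; look for old-subnet addresses in the output):

for p in $(pgrep java); do
    sudo netstat -plant | grep "$p/"      # sockets owned by that PID
    sudo lsof -p "$p" | grep -i listen    # listening sockets of that PID
done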
I personally think something somewhere needs to be restarted, since it is holding on to something cached.
Also check the hosts files of any end machines involved to ensure nothing still has the previous subnet configured in there.

Related

socat struggle to create serial ports

For testing purposes I want to use socat to create virtual serial ports to use in my Python program.
I have limited success, but struggle again and again with the many options in socat. I use this command in Ubuntu Linux:
sudo socat -d -d pty,b9600,raw,echo=0,link=/dev/ttyS90 pty,b9600,raw,echo=0,link=/dev/ttyS91
As it should, it creates the virtual ports like /dev/pts/2 and /dev/pts/4, and links them to /dev/ttyS90 and *91. It does not work without sudo (it fails with "unable to unlink" for the *90 and *91 ports, although the regular user is in the dialout group).
But as you see, the permissions 'lrwxrwxrwx' look like read/write for everybody. However, this is NOT true: I CANNOT use these devices unless I am root. The file manager (Nemo) gives this result:
The permissions are significantly different. Huh?
After issuing sudo chmod 777 /dev/ttyS90 (and the same for *91), nothing changes in the terminal output, because it already, though incorrectly, shows 777 permissions, but the Nemo output changes to
And now I can use the ports as a regular user! How come? Am I doing something wrong?
And one more socat problem: the above socat command gives an 8-bit, no-parity connection, but I really need a 7-bit, even-parity connection. My attempts to implement this by juggling some of the many options all failed. I am lost; any insight?
Try changing the permissions on /dev/pts/2 and /dev/pts/4 instead of on the links.
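For example (the pts numbers come from socat's -d -d output and will differ per run, so resolve them from the links):

ls -l /dev/ttyS90 /dev/ttyS91                          # see which /dev/pts/N each link points to
sudo chmod 666 "$(readlink -f /dev/ttyS90)" "$(readlink -f /dev/ttyS91)"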

How to debug an Elixir application in production?

This is not particularly about my current problem, but more like in general. Sometimes I have a problem that only happens in the production configuration, and I'd like to debug it there. What is the best way to approach that in Elixir? Production runs without a graphical environment (Docker).
In dev I can use IEx.pry, but since mix is unavailable in production, that does not seem to be an option.
For Erlang https://stackoverflow.com/a/21413344/1561489 mentions dbg and redbug, but even if they can be used, I would need help on applying them to Elixir code.
First, start a local node running iex on your dev machine using iex -S mix. If you don't want the application that's running locally to cause breakpoints to be activated, you need to disable the app from starting locally. To do this, you can simply comment out the application function in mix.exs or run iex -S mix run --no-start.
Next, you need to connect to the remote node running on Docker from iex on your dev node using Node.connect(:"remote@hostname"). In order to do this, you have to make sure both the epmd and the node ports on the remote machine are reachable from your local node.
Finally, once your nodes are connected, from the local iex, run :debugger.start() which opens the debugger with the GUI. Now in the local iex, run :int.ni(<Module you want to debug>) and it will make the module visible to the debugger and you can go ahead and add breakpoints and start debugging.
You can find a tutorial with steps and screenshots here.
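A rough sketch of the plumbing for the steps above, with made-up node names, cookie and host (adjust to your release):

epmd -names                                        # on the remote host: which ports epmd and the node are using
nc -vz remote-host 4369                            # from the dev machine: is the epmd port reachable?
iex --name me@my.dev.host --cookie my_cookie -S mix run --no-start   # local node, without starting your own app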
If you are running your production on AWS, then you should first and foremost leverage CloudWatch to your advantage.
In your Elixir code, configure your logger like this:
config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true,
  metadata: [:application, :module, :function, :file, :line]

config :logger,
  backends: [
    {LoggerFileBackend, :shared_error}
  ]

config :logger, :shared_error,
  path: "#{logging_dir}/verbose-error.log",
  level: :error
Inside your Dockerfile, configure an environment variable for where exactly erl_crash.dump gets written to, such as:
ERL_CRASH_DUMP=/opt/log/erl_crash.dump
Then configure awslogs inside a .config file under .ebextensions as follows:
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [erl_crash.dump]
      log_group_name=/aws/elasticbeanstalk/your_app/erl_crash.dump
      log_stream_name={instance_id}
      file=/var/log/erl_crash.dump
      [verbose-error.log]
      log_group_name=/aws/elasticbeanstalk/your_app/verbose-error.log
      log_stream_name={instance_id}
      file=/var/log/verbose-error.log
And ensure that you set a volume for your Docker container under Dockerrun.aws.json:
"Logging": "/var/log",
"Volumes": [
{
"HostDirectory": "/var/log",
"ContainerDirectory": "/opt/log"
}
],
After that, you can inspect your error messages under CloudWatch.
Now, if you are using Elastic Beanstalk (which my example above implicitly implies) with a Docker deployment, as opposed to AWS ECS, then the stdout/stderr logs are redirected by default to /var/log/eb-docker/containers/eb-current-app/stdouterr.log inside CloudWatch.
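If you prefer the command line to the CloudWatch console, a hedged sketch using AWS CLI v2 (the log group name is the one from the awslogs config above):

aws logs tail /aws/elasticbeanstalk/your_app/verbose-error.log --follow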
The main purpose of erl_crash.dump is to at least know when your application crashed and took the container down. AWS EB will normally restart the container, which can keep you ignorant of the restart. This information can also be obtained from other Docker-related logs, and you can configure alarms to listen for them and be notified accordingly when your Docker container had to restart. But another advantage of logging erl_crash.dump to CloudWatch is that, if need be, you can always export it later to S3, download the file and load it into :observer to analyse what went wrong.
If, after consulting the logs, you still require more intimate interaction with your production application, then you need to remsh into your node. If you use Distillery, you would configure the cookie and the node name of your production application in your release like this:
Inside rel/config.exs, set the cookie:
environment :prod do
  set include_erts: false
  set include_src: false
  set cookie: :"my_cookie"
end
and under rel/templates/vm.args.eex you set variables:
-name <%= node_name %>
-setcookie <%= release.profile.cookie %>
and inside rel/config.exs, you set release like this:
release :my_app do
  set version: "0.1.0"
  set overlays: [
    {:template, "rel/templates/vm.args.eex", "releases/<%= release_version %>/vm.args"}
  ]
  set overlay_vars: [
    node_name: "p@127.0.0.1"
  ]
end
Then you can directly connect to your production node running inside Docker by first SSH-ing into the EC2 instance that houses the Docker container and running the following:
CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker exec -it $CONTAINER_ID bash -c "iex --name q@127.0.0.1 --cookie my_cookie"
Once inside, you can poke around or, if need be and at your own peril, dynamically inject modified code for the module you would like to inspect. An easy way to do that would be to create a file inside the container and invoke Node.spawn_link(target_node, fn -> Code.eval_file(file_name, path) end).
If your production node is already running and you do not know the cookie, you can go inside your running container, run ps aux > t.log and cat t.log to figure out what random cookie has been applied, and use it accordingly.
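For example, a quick way to pull it out of the process list (this assumes the cookie shows up as -setcookie on the beam command line, as the step above implies):

ps aux | grep -o -- '-setcookie [^ ]*' | head -1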
Docker gets in the way of how epmd communicates with other nodes. The best option would therefore be to create your own AWS AMI image using Packer and do bare-metal deployments instead.
Amazon has recently released a new feature for AWS ECS, AWS VPC Networking Mode, which may facilitate inter-container epmd communication and thus allow connecting to your node directly. I have not tried it out yet, so I may be wrong.
If you are running on a provider other than AWS, then figuring out how to get easy access to your remote logs with some SSM agent or other service is a must.
I would recommend using some sort of exception-tracking tool; so far I have had a great experience with Sentry.

Prevent default redirection from port 80 to 5000 on Synology NAS (DSM 5)

I would like to use an nginx front server on my Synology NAS for reverse-proxying purposes. The goal is to provide a facade for the non-standard port numbers used by the various webservers hosted on the NAS. nginx should be listening on port 80, otherwise all this wouldn't make any sense.
However, DSM comes out of the box with an Apache server that is already listening on port 80. What it does is really silly: it simply redirects to port 5000, which is the entry point to the NAS web manager (DSM).
What I would like to do is disable this functionality, making port 80 available for my nginx server. How can I do this?
Since Google also leads here for recent Synology DSM versions, I'll answer for DSM 6 (based on http://tonylawrence.com/posts/unix/synology/freeing-port-80/).
Since DSM 6, nginx is used as the HTTP server and handles the redirection. The following commands will leave nginx in place, but run it on port 8880 instead of 80.
ssh into your Synology
sudo -s
cd /usr/syno/share/nginx
Make a backup of server.mustache, DSM.mustache, WWWService.mustache
cp server.mustache server.mustache.bak
cp DSM.mustache DSM.mustache.bak
cp WWWService.mustache WWWService.mustache.bak
sed -i "s/80/8880/g" server.mustache
sed -i "s/80/8880/g" DSM.mustache
sed -i "s/80/8880/g" WWWService.mustache
Optionally, you can also move 443 to 8881:
sed -i "s/443/8881/g" server.mustache
sed -i "s/443/8881/g" DSM.mustache
sed -i "s/443/8881/g" WWWService.mustache
Quit the shell (e.g., via Ctrl+D)
Go to the Control Panel and change any setting (e.g. the Application portal -> Reverse Proxy to forward http://YOURSYNOLOGYHOSTNAME:80 to http://localhost:8181 - 8181 is the port suggested by the pi-hole on DSM tutorial).
tl;dr Edit /usr/syno/etc/synoservice.d/httpd-user.cfg to look like:
{
  "init_job_map":{"upstart":["httpd-user"]},
  "user_controllable":"no",
  "mtu_sensitive":"yes",
  "auto_start":"no"
}
Then edit the stop on runlevel to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
... then reboot.
Background information
The answer given by Backslash36 is not the easiest solution, and it may also be more difficult to maintain. Here I give a solution that also doesn't involve starting webstation, which most other solutions require. Note: for updated documentation see here, which gives a lot of general info about Synology systems.
It is important to note that newer DSM versions (> 5.x) use upstart now, so much of the previous documentation is no longer correct. There are two httpd jobs which run by default on Synology machines:
httpd-sys : serves the administration page(s) and is located on 5000/5001 by default.
httpd-user : this, somewhat confusingly, always runs even if the webstation program is not enabled.
If webstation:
is enabled: then this program serves the user webpages.
is not enabled: then this program sets /usr/syno/synoman/phpsrc/web as its DocumentRoot (/usr/syno/synoman/phpsrc/web/index.cgi -> /usr/syno/synoman/webman/index.cgi), meaning that a call to http://address.of.my.dsm will call the index.cgi file. This cgi file is what drives the redirect to 5000 (or whatever you have set the admin_port to be).
From the command line, you can check what the [secure_]admin_port is set to:
Syno-Server> get_key_value /etc/synoinfo.conf admin_port
5184
Syno-Server> get_key_value /etc/synoinfo.conf secure_admin_port
5185
where I have set mine differently.
OK, now to the solution. The best solution is simply to stop the httpd-user daemon from starting. This is presumably what you want anyway (e.g. to start another server like nginx in a Docker container). To do this, edit the relevant upstart configuration file:
Syno-Server> cat /usr/syno/etc/synoservice.d/httpd-user.cfg
{
  "init_job_map":{"upstart":["httpd-user"]},
  "user_controllable":"no",
  "mtu_sensitive":"yes",
  "auto_start":"no"
}
so that the "auto_start" entry is "no" (as it is above). It will presumably be "yes" on your machine and by default. Then edit the stop on runlevel to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
This last step ensures that the httpd-user service does actually start, but then automatically stops. This is needed because a number of other services depend on it starting. Reboot your machine and you will now see that nothing is listening (or forwarding) on port 80.
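If you prefer to script the two edits above, a rough sketch (it assumes auto_start is currently "yes" and that you are root):

sed -i 's/"auto_start":"yes"/"auto_start":"no"/' /usr/syno/etc/synoservice.d/httpd-user.cfg
sed -i 's/^stop on runlevel .*/stop on runlevel [0123456]/' /etc/init/httpd-user.conf
reboot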
Done! It was tricky, but now I have it working just fine. Here is how I did it.
What follows requires connecting to the NAS over SSH, and may not be recommended if you want to keep the warranty on your product (even though it's completely safe IMHO).
TL;DR: In the following files, replace all occurrences of port 80 with a non-standard port (for example, 8080). This releases port 80 and makes it available for whatever you want; a scripted sketch follows after the list.
/etc/httpd/conf/httpd.conf
/etc/httpd/conf/httpd.conf-user
/etc/httpd/conf/httpd.conf-sys
/etc.defaults/httpd/conf/httpd.conf-user
/etc.defaults/httpd/conf/httpd.conf-sys
Note that modifying a subset of these files is probably sufficient (I could observe that the first one is actually computed from several others). I guess modifying the files in /etc.defaults/ would be enough, but if not, worst-case scenario is to modify all those files and you will be just fine.
Once this is done, don't forget to restart your NAS!
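A hedged sketch of that bulk replacement (it assumes GNU sed and that port 80 only appears as a standalone token in these files; backups are made first):

for f in /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf-user /etc/httpd/conf/httpd.conf-sys \
         /etc.defaults/httpd/conf/httpd.conf-user /etc.defaults/httpd/conf/httpd.conf-sys; do
    cp "$f" "$f.bak"               # keep a backup of each file
    sed -i 's/\b80\b/8080/g' "$f"  # replace standalone "80" with "8080"
done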
For those interested in how I found out
I'm not that familiar with the Linux filesystem, and even less with Apache configuration. But I knew that scripts dealing with startup processes are located in /etc/init. The Apache server that was performing the redirection would certainly be launched from there.
This is where I had to get my hands dirty. I performed some cat <filename> | grep 80 for the files in that directory I considered relevant, hoping to find a configuration line that would set a port number to 80.
That intuition paid off: /etc/init/httpd-user.conf contained the line echo "DocumentRoot \"/usr/syno/synoman/phpsrc/web\"" >> "${HttpdConf}" #port 80 to 5000. Bingo!
Looking at the top of the file, I discovered that the HttpdConf variable was referring to /etc/httpd/conf/httpd.conf. This is where the actual configuration was taking place.
From there it is relatively straightforward, even for those Jon Snows out there who know nothing about Apache configuration. The trick was to notice that httpd.conf was instantiated from some template at startup (so changing this file alone was not enough). Performing a find / -name "*httpd.conf*", combined with some grep 80, gave me the list of files to modify.
Looking back, all of this seems obvious, of course.
However, I wish Synology gave us more flexibility, so we wouldn't have to perform dirty hacks like that...

Icinga check_jboss "NRPE: unable to read output"

I'm using Icinga to monitor some servers and services. Most of them run fine. But now I'd like to monitor a JBoss AS on one server via NRPE. For that I'm using the check_jboss plugin from MonitoringExchange. However, each time I try running a test command from my Icinga server via NRPE, I get a NRPE: unable to read output error. When I execute the command directly on the monitored server, it runs fine. Strangely, the execution on the monitored server takes around 5 seconds to return an acceptable result, but the NRPE execution returns the error immediately. Increasing the NRPE timeout didn't solve the problem. I also checked the permissions of the check_jboss plugin and set them to 777, so there should be no permission error.
I don't think there's a general issue with NRPE, because there are also some other checks (e.g. check_load, check_disk, ...) via NRPE and they are all running fine. The permissions of those plugins are analogous to my check_jboss plugin.
Here is a sample execution on the monitored server, which runs fine:
/usr/lib64/nagios/plugins/check_jboss.pl -T ServerInfo -J jboss.system -a MaxMemory -w 3000: -c 2000: -f
JBOSS OK - MaxMemory is 4049076224 | MaxMemory=4049076224
Here are two command executions via NRPE from my Icinga server. Both commands are defined correctly:
./check_nrpe -H xxx.xxx.xxx.xxx -c check_hda1
DISK OK - free space: / 47452 MB (76% inode=97%);| /=14505MB;52218;58745;0;65273
./check_nrpe -H xxx.xxx.xxx.xxx -c jboss_MaxMemory
NRPE: Unable to read output
Does anyone have a hint for me? If further config information is needed, please ask :)
Try to rule out SELinux either by disabling it globally or by changing the SELinux type to nagios_unconfined_plugin_exec_t.
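A hedged sketch of both approaches (the plugin path is taken from the question; a chcon label change is not persistent across a full relabel):

getenforce                        # is SELinux enforcing?
setenforce 0                      # temporarily rule it out (revert with: setenforce 1)
chcon -t nagios_unconfined_plugin_exec_t /usr/lib64/nagios/plugins/check_jboss.pl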

RAILS, CUCUMBER: Getting the testing server address

While running a cucumber test, I need to know the local testing server address. It will be something like "localhost:47632". I've searched the ENV but it isn't in there, and I can't seem to find any other variables that might have it. Ideas?
I believe that the port is dynamically generated on test runs. You can use OS-level tools to inspect what connections are opened by the process and glean the port that way. I do this on my Ubuntu system infrequently, so I can't tell you off the top of my head what tool does that. netstat maybe? I always have to go out and Google for it, so consider this more of a hint than a complete answer.
Ah, to be clearer... I put a debug breakpoint in, and when it breaks, THEN I use the OS-level tools to see what port the test server is running on at that moment in time. How to discover it predictively? No idea, sorry.
here's what I use:
netstat -an | grep LISTEN
(Answering my own question just so that the code formatting will be correct)...
Using jaydel's idea to use netstat, here's the code. I extract the line from netstat that has the current pid. (Probably not the most elegant way to do this, but it works.)

# List listening TCP sockets with their owning process, then pick the line for this pid
value = %x( netstat -l -p --tcp )
pid = $$.to_s
local_port = ""
value.split( "\n" ).each do |i|
  if i.include?( pid )
    m = i.match( /\*:(\d+)/ )      # the listening address looks like "*:<port>"
    local_port = m[1].to_s
  end
end
