Grails withForm and AWS interactions

The environment for this issue:
AWS
PostgreSQL
Grails 2.3.3
Redis
On our account creation page, we are seeing some REALLY obscure behavior with the Grails withForm{}.invalidToken{} closure.
Upon hitting the page for the first time, everything works fine. You can post back to the server fine as long as you do not leave this page.
Upon leaving this page, either through navigation links or by logging off, and then returning to it (again through navigation links, or by logging on and heading there), we can no longer submit: it hits the invalidToken closure every time.
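For reference, the submit in question goes through the standard Grails withForm pattern. A minimal sketch of the kind of controller action involved (controller and action names here are illustrative, not our actual code):

class AccountController {
    def save() {
        withForm {
            // Token in the request matched the one stored in the session:
            // proceed with creating the account.
            render "account created"
        }.invalidToken {
            // Token missing, stale, or already consumed. After navigating away
            // from the page and coming back, every submit ends up here.
            render(status: 403, text: "invalid form token")
        }
    }
}

The corresponding GSP form is rendered with <g:form useToken="true" ...>, which is what puts the token into the session in the first place.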
I know AWS is involved because we took the project and deployed it to local machines, both with IntelliJ and with standalone Tomcat, and both work fine. The issue only occurs when the WAR is deployed to AWS. (This happens with both local builds and automated builds; they work locally but not on AWS.)
We have spent almost a week on this issue trying to figure out why it is occurring, and all we have to show for it is that we know AWS is somehow involved, but that's as far as we have gotten.
Does anyone have any insight into what would be causing our session to act like this?

After a LOT of searching about this issue, my team and I finally figured it out. Taken directly from our JIRA:
"This issue is caused by the implementation of tomcat-redis-session-manager used on AWS. As per their documentation (https://github.com/jcoleman/tomcat-redis-session-manager#session-change-tracking), there are "unintended consequence of hiding writes if you implicitly change a key in the session or if the object's equality does not change even though the key is updated." Specifically, the "useToken" implementation is Grails 2.3.8 is: "String generateToken(String url) { final UUID uuid = UUID.randomUUID() getTokens(url).add(uuid) return uuid }" The combination of these native implementations are there for incompatible.
The tomcat-redis-session-manager does support a manual dirty tracking mode by setting: RedisSession.setManualDirtyTrackingSupportEnabled(true); but this would require a forked build of SynchronizerTokensHolder in grails-core."
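As a rough sketch of one possible workaround that avoids forking grails-core (untested; the filter class name is made up, and the "__changed__" flag key is taken from the tomcat-redis-session-manager README, so verify it against your version): with manual dirty tracking enabled, a Grails filter can flag the session dirty after every request so that implicit writes like the withForm token are always persisted, at the cost of the manager's write-avoidance optimization.

// Enable manual dirty tracking once at startup, wherever the session manager
// is wired up (assumption: a static call as documented in the README):
//   RedisSession.setManualDirtyTrackingSupportEnabled(true)

class RedisSessionDirtyFilters {
    def filters = {
        markSessionDirty(controller: '*', action: '*') {
            after = { model ->
                // Only touch an existing session; don't create one just for this.
                if (request.getSession(false) != null) {
                    // "__changed__" is the manual dirty-tracking flag documented
                    // in the session manager's README; setting it forces the
                    // session to be written back to Redis for this request.
                    session.setAttribute("__changed__", "")
                }
            }
        }
    }
}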

Related

Umbraco: Restoring content. Could not get parent with Id

I'm working on an Umbraco Cloud project. I pulled the website from the Git repositories and built it. The first thing to do when you run the site is to restore the content from the development environment into the local project so we can create new features. Yet Umbraco fails to do so, with the following error:
The source environment has thrown a Umbraco.Deploy.Exceptions.ProcessArtifactException
with message: Process pass #3 failed for artifact
umb://document/xxthexguidxofxsomexpagexxxxxxxxx. It might have been
caused by an inner Umbraco.Deploy.Exceptions.EnvironmentException with
message: Could not get parent with id xxthexxx-guid-xofx-xthe-xxhomepagexx.
The following artifacts might be involved:
umb://document/xxthexxxguidxofxxthexxhomepagexx
The technical details may contain more information.
I've noticed that some strange errors occur if not everything is deployed in the development site in the cloud, so I made sure everything is published. Still errors, though. I'm kinda lost here.
Has anyone come across similar issues? And how did you fix it?
Thanks in advance!
This can happen for a number of reasons, so it's a bit hard to say what exactly the problem is in your case.
Most of the time this happens because a circular reference of some sort causes a state that can't really be restored. For example, a datatype might have a dependency on a content node, but that node doesn't exist yet in a blank new environment. The content restore then refuses to start until the structural data (datatypes, content types and such) is completely in sync, but the datatypes can never be in sync until the content node exists. It's a sort of catch-22 situation that might need to be resolved manually.
I would suggest you contact support through the Cloud portal and they will assist you in getting your problem resolved.

Xcode Server CI Bot Integrate error (Swift)

I am trying to set up a CI server on my MacBook. I have followed the documentation on the Apple website up to the point of creating a bot and integrating my build. When I attempt to integrate the build, I repeatedly get the following error:
Bot Issue: error. Build Service Error.
Issue: '/Library/Developer/XcodeServer/Integrations/Caches/14a8ea2a72904f1abcecd38b1c02196b' exists and is not an empty directory (-4).
Integration Number: 13.
Integration URL: https://DavidMcQueens-MacBook-Pro-2.local/xcode/bots/BF817C9/integrations
Description: '/Library/Developer/XcodeServer/Integrations/Caches/14a8ea2a72904f1abcecd38b1c02196b' exists and is not an empty directory (-4).
I have manually deleted the folders in this location, as well as changed the permissions in case the server was having issues writing. Each time I run the integration I receive the same error, even after I have deleted the folder so it is empty before the integration.
Does anyone have any ideas on how to solve this issue? I have built my iOS application in Swift (which I believe should still work with the CI server).
I am running OS X Server 4.0 and the latest version of Xcode.
I followed Apple's documentation for creating bots
Thanks,
EDIT:
After some experimenting and trying different things to see what the issue was, I disabled 2-factor authentication on my GitHub account. This appeared to solve the issue, despite the fact that I was already generating an application-specific key to get around 2-factor auth. It solved the issue for a short while, and I managed to successfully get the bot to integrate a few times. However, it appears to have gone back to its old tricks.
If anyone has any other knowledge on this, or has managed to get it working on their own machines it will be good to know.
So I believe I have solved this issue; the GitHub 2-factor authentication issue looks to have been a red herring.
When setting up the bot, there is a section that says "Checkout the repository". I did not do this step because I already had the repository on my local machine and presumed that it would simply create the repository in another location and serve no other purpose.
However, after some investigation, this step is very necessary. From what I understand, checking out the repository does create it again in another location, but this is necessary because this new repository is where the bots pull changes and build in order to perform the tests. I was trying to use the same repo for development and for the bots, which Xcode Server did not like.
Creating a clean checkout of the project (on the server) and configuring the bots against that checkout then allowed me to progress and get everything set up correctly. It comes down to user error. In hindsight it makes perfect sense to have a separate repo for the bots (this is my first CI server setup), but the error messages were not helpful and I can't remember seeing this emphasised in the setup guide.

Rails public files only showing with cookie on Heroku

Recently I've run into an issue where the public files of a Rails application only load if a cookie is present. I originally noticed this because Google reported that it couldn't find our robots.txt file. Later I realized that it seems to apply to all of our public files for some reason.
For instance, upon visiting this site, the content is blank. http://80000hours.org/robots.txt
(If it's not blank, remove the cookies from the website).
However, when I load the main page at http://80000hours.org/, and then go back to /robots.txt, the page loads correctly.
I'm quite confused about what could cause this issue and how to go about debugging it. Looking back at my commits, it doesn't seem like I changed anything substantial during the period when it broke. The Memcache add-on for the website shut down around a week before this happened; I never set up a replacement, but I wouldn't have thought that would cause the issue.
The issue also does not exist locally, only on the production and staging Heroku instances. The full codebase is here; the issue occurred around November 14th.
Any advice is much appreciated.
Sure enough, it was Memcache. I added the new Heroku memcache add-on, MemCachier, and it worked fine without the cookie. I'll check tomorrow whether Google successfully finds the /robots.txt file, but I'm assuming it will.

DNS issue with Azure - gives error when using DNS

I am facing a strange issue with Azure.
After uploading a new version of my app about an hour ago, the public-facing URL is throwing a runtime error. However, the app works fine when I access it through the Azure internal domain, app.cloudapp.net.
Before upgrading it was working fine.
I have rechecked the CNAME records with my hosting provider (Bluehost) but cannot find any problem. Otherwise, the problem looks like an ASP.NET issue (judging by the typical error page rendered), but that just does not make sense.
Anyone has any ideas as to what I can do?
EDIT: This started working just as mysteriously as it had stopped. I have no clue whether it was due to DNS propagation delay (although in that case it should not have thrown an error page like the one described above). However, if someone knows why this might happen, I will still appreciate it.
There is a DNS service available for Windows Azure called "DNS Azure" that tracks changes to your cloudapp's IP address automatically. See dnsazure.com.
This solution avoids the need for a CNAME record because the A record is kept up to date.

WSS caches old Workflow version

I'm currently developing three workflows that are supposed to handle the status of items in different lists.
Each Workflow is attached to a separate list.
When I'm deploying and debugging in my development Environment, everything works fine.
Except for the case when an item is created via incoming mail.
I already figured out that I have to restart some services and then it will work, but I'm still not sure which of the services is caching the workflow.
Afterwards I build a .wsp file, which I deploy on a server.
Each time I deploy the solution, I do a retract and delete of the old solution first.
After deployment I recreate the workflows on the lists.
This seems to have no effect: an older version of the workflow is still triggered if I create a new item in the list.
I already restarted the whole server and still no result.
Does anyone have an idea what else I could try in order to get this working?
Thanks in advance.
If the Timer Service is the one that calls your code, then restart the Windows SharePoint Services Timer service (OWSTIMER.EXE).
When a workflow waits on something, it gets serialized (dehydrated). When the event happens, OWSTIMER.EXE deserializes (rehydrates) it and continues workflow execution.
So the timer is the one that wakes the workflow up.
So this problem kind of resolved itself.
I was reading an article on Kirk Evans' blog about an issue with the development of workflows in VS2008 for WSS.
I had not realized that I still had an illegal reference in my project properties.
I removed the reference. The second thing I tried was deploying with -upgradesolution rather than doing a retract-delete-add-deploy...
I don't know which of the two did the trick, but I can finally see the new workflows kicking in.
Thanks for your help.
