How to append to the URL path in Grails

I'm using Grails 1.3.7 and I'm trying to figure out how to append a state (region) abbreviation (NM, AZ, UT) to the path of my URLs, so they look like this:
localhost/NM/action/
or
localhost/UT/action2/3/
This has to be done for all URLs on the site. I also don't want to change the URLMappings.groovy file; would a filter help with this?
I created this filter definition:
def filters = {
    ...
    regionAppender(uri: '/**') {
        before = {
            if (request.region) {
                if (!request.forwardURI.contains(request.region)) {
                    String[] split = request.forwardURI.split("/")
                    String redirectUri = split[0] + "/" + request.region
                    if (split.length > 1) {
                        for (int i = 1; i < split.length; i++) {
                            redirectUri = redirectUri + "/" + split[i]
                        }
                    }
                    redirect(uri: redirectUri)
                    return false
                }
                return true
            }
            return true
        }
    }
}
But it doesn't seem to be working. Is there a way to do this, or do I have to change all my URLMappings?

I haven't tested this, but maybe it has to do with your use of request... try params.region instead.
Take a look here
def filters = {
    ....
    before = {
        if(params?.id) {
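So a rough sketch of the whole filter along those lines, using params.region and request.forwardURI, might be (untested; you may also need to allow for the application context path in forwardURI):

class RegionFilters {

    def filters = {
        regionAppender(uri: '/**') {
            before = {
                String region = params.region
                // only redirect when a region is known and the URI doesn't already start with it
                if (region && !request.forwardURI.startsWith("/${region}/")) {
                    redirect(uri: "/${region}${request.forwardURI}")
                    return false // cancel normal processing so only the redirect happens
                }
                return true
            }
        }
    }
}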

Related

How to replace Servlet Filters already defined (replacing legacy doWithWebDescriptor with doWithSpring)

I'm trying to upgrade a Grails plugin from version 2.3.4 to 4.0.11. It uses a syntax that is no longer supported to replace the filters named sitemesh and urlMapping with its own filters.
The code below uses an XML DSL: it replaces XML nodes in the final generated web.xml.
def doWithWebDescriptor = { xml ->
    def pageFilter = xml.filter.find { it.'filter-name'.text() == 'sitemesh' }
    def urlMappingFilter = xml.filter.find { it.'filter-name'.text() == 'urlMapping' }

    def grailsVersion = GrailsUtil.grailsVersion

    // Grails 1.3.x & Grails 2.0.x
    def pageFilterClass = "org.zkoss.zk.grails.web.ZKGrailsPageFilter"
    def urlMappingFilterClass = "org.zkoss.zk.grails.web.ZULUrlMappingsFilter"

    if (grailsVersion.startsWith("2")) {
        pageFilter.'filter-class'.replaceNode {
            'filter-class'(pageFilterClass)
        }
        urlMappingFilter.'filter-class'.replaceNode {
            'filter-class'(urlMappingFilterClass)
        }
        //
        // Require a legacy config for servlet version
        //
        if (application.metadata.getServletVersion() >= '3.0') {
            pageFilter.'filter-class' + {
                'async-supported'('true')
            }
            urlMappingFilter.'filter-class' + {
                'async-supported'('true')
            }
        }
    } else {
        pageFilter.'filter-class'.replaceBody(pageFilterClass)
        urlMappingFilter.'filter-class'.replaceBody(urlMappingFilterClass)
    }
}
What I tried so far
The code below uses the Grails plugin configuration to register the filters with Spring's FilterRegistrationBean. I'm following the official Grails documentation.
Closure doWithSpring() { { ->
    boolean supportsAsync = this.grailsApplication.metadata.getServletVersion() >= "3.0"

    pageFilter(FilterRegistrationBean) {
        name = "sitemesh"
        filter = bean(org.zkoss.zk.grails.web.ZKGrailsPageFilter)
        urlPatterns = ["/*"]
        order = Ordered.HIGHEST_PRECEDENCE
        asyncSupported = supportsAsync
    }
    urlMappingFilter(FilterRegistrationBean) {
        name = "urlMapping"
        filter = bean(org.zkoss.zk.grails.web.ZULUrlMappingsFilter)
        urlPatterns = ["/*"]
        order = Ordered.HIGHEST_PRECEDENCE
        asyncSupported = supportsAsync
    }
}}
How can I replicate the legacy code with RegistrationBeans?
Also, I don't know whether any of these filters have been deprecated by Grails; I would like to know if there are other replacements, if possible.
Here's the project in case anyone wants more context.
Debugging the older version of the plugin, I came up with the following:
pageFilter(FilterRegistrationBean) {
    name = "sitemesh"
    filter = bean(ZKGrailsPageFilter)
    urlPatterns = ["/*"]
    order = OrderedFilter.REQUEST_WRAPPER_FILTER_MAX_ORDER + 50
    asyncSupported = supportsAsync
    dispatcherTypes = EnumSet.of(DispatcherType.REQUEST, DispatcherType.ERROR)
}
urlMappingFilter(FilterRegistrationBean) {
    name = "urlMapping"
    filter = bean(ZULUrlMappingsFilter)
    urlPatterns = ["/*"]
    order = OrderedFilter.REQUEST_WRAPPER_FILTER_MAX_ORDER + 60
    asyncSupported = supportsAsync
    dispatcherTypes = EnumSet.of(DispatcherType.REQUEST, DispatcherType.FORWARD)
}
I added the dispatcherTypes and changed the order on the assumption that these should be among the last filters in the chain, with the pageFilter placed before the urlMappingFilter.
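For reference, here is roughly what the whole doWithSpring closure ends up looking like with the imports it needs (assembled from the snippets above; the ZK filter packages come from the legacy code, the rest is standard Spring Boot / Servlet API):

import javax.servlet.DispatcherType

import org.springframework.boot.web.servlet.FilterRegistrationBean
import org.springframework.boot.web.servlet.filter.OrderedFilter

import org.zkoss.zk.grails.web.ZKGrailsPageFilter
import org.zkoss.zk.grails.web.ZULUrlMappingsFilter

Closure doWithSpring() { { ->
    boolean supportsAsync = grailsApplication.metadata.getServletVersion() >= "3.0"

    pageFilter(FilterRegistrationBean) {
        name = "sitemesh"
        filter = bean(ZKGrailsPageFilter)
        urlPatterns = ["/*"]
        order = OrderedFilter.REQUEST_WRAPPER_FILTER_MAX_ORDER + 50
        asyncSupported = supportsAsync
        dispatcherTypes = EnumSet.of(DispatcherType.REQUEST, DispatcherType.ERROR)
    }
    urlMappingFilter(FilterRegistrationBean) {
        name = "urlMapping"
        filter = bean(ZULUrlMappingsFilter)
        urlPatterns = ["/*"]
        order = OrderedFilter.REQUEST_WRAPPER_FILTER_MAX_ORDER + 60
        asyncSupported = supportsAsync
        dispatcherTypes = EnumSet.of(DispatcherType.REQUEST, DispatcherType.FORWARD)
    }
}}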

Crawler4j With Grails App

I am building a crawler application with Grails. I am using Crawler4j and following this tutorial.
I created a new Grails project.
I put the BasicCrawlController.groovy file in a package under the controllers directory.
I did not create any view, because I expected that on running run-app my crawled data would appear in the crawlStorageFolder (please correct me if my understanding is flawed).
After that I just ran the application with run-app, but I didn't see any crawl data anywhere.
Am I right in expecting some file to be created at the crawlStorageFolder location that I have given as C:/crawl/crawler4jStorage?
Do I need to create any view for this?
If I want to invoke this crawler controller from some other view on click of a submit button of a form, can I just write <g:form name="submitWebsite" url="[controller:'BasicCrawlController ']">?
I ask this because I do not have any action method in this controller, so is this the right way to invoke it?
My code is as follows:
//All necessary imports
public class BasicCrawlController {

    static main(args) throws Exception {
        String crawlStorageFolder = "C:/crawl/crawler4jStorage";
        int numberOfCrawlers = 1;
        //int maxDepthOfCrawling = -1; default

        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);
        config.setPolitenessDelay(1000);
        config.setMaxPagesToFetch(100);
        config.setResumableCrawling(false);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        controller.addSeed("http://en.wikipedia.org/wiki/Web_crawler")
        controller.start(BasicCrawler.class, 1);
    }
}
class BasicCrawler extends WebCrawler {

    final static Pattern FILTERS = Pattern
        .compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4" +
                 "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))\$")

    /**
     * You should implement this function to specify whether the given url
     * should be crawled or not (based on your crawling logic).
     */
    @Override
    boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase()
        !FILTERS.matcher(href).matches() && href.startsWith("http://en.wikipedia.org/wiki/Web_crawler/")
    }

    /**
     * This function is called when a page is fetched and ready to be processed
     * by your program.
     */
    @Override
    void visit(Page page) {
        int docid = page.getWebURL().getDocid()
        String url = page.getWebURL().getURL()
        String domain = page.getWebURL().getDomain()
        String path = page.getWebURL().getPath()
        String subDomain = page.getWebURL().getSubDomain()
        String parentUrl = page.getWebURL().getParentUrl()
        String anchor = page.getWebURL().getAnchor()

        println("Docid: ${docid} ")
        println("URL: ${url} ")
        println("Domain: '${domain}'")
        println("Sub-domain: ' ${subDomain}'")
        println("Path: '${path}'")
        println("Parent page:${parentUrl} ")
        println("Anchor text: ${anchor} ")

        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData()
            String text = htmlParseData.getText()
            String html = htmlParseData.getHtml()
            List<WebURL> links = htmlParseData.getOutgoingUrls()

            println("Text length: " + text.length())
            println("Html length: " + html.length())
            println("Number of outgoing links: " + links.size())
        }

        Header[] responseHeaders = page.getFetchResponseHeaders()
        if (responseHeaders != null) {
            println("Response headers:")
            for (Header header : responseHeaders) {
                println("\t ${header.getName()} : ${header.getValue()}")
            }
        }
        println("=============")
    }
}
I'll try to translate your code into standard Grails.
Put this under grails-app/controllers:
class BasicCrawlController {

    def index() {
        String crawlStorageFolder = "C:/crawl/crawler4jStorage";
        int numberOfCrawlers = 1;
        //int maxDepthOfCrawling = -1; default

        CrawlConfig crawlConfig = new CrawlConfig();
        crawlConfig.setCrawlStorageFolder(crawlStorageFolder);
        crawlConfig.setPolitenessDelay(1000);
        crawlConfig.setMaxPagesToFetch(100);
        crawlConfig.setResumableCrawling(false);

        PageFetcher pageFetcher = new PageFetcher(crawlConfig);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(crawlConfig, pageFetcher, robotstxtServer);

        controller.addSeed("http://en.wikipedia.org/wiki/Web_crawler")
        controller.start(BasicCrawler.class, 1);

        render "done crawling"
    }
}
Put this under src/groovy:
class BasicCrawler extends WebCrawler {

    final static Pattern FILTERS = Pattern
        .compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4" +
                 "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))\$")

    /**
     * You should implement this function to specify whether the given url
     * should be crawled or not (based on your crawling logic).
     */
    @Override
    boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase()
        !FILTERS.matcher(href).matches() && href.startsWith("http://en.wikipedia.org/wiki/Web_crawler/")
    }

    /**
     * This function is called when a page is fetched and ready to be processed
     * by your program.
     */
    @Override
    void visit(Page page) {
        int docid = page.getWebURL().getDocid()
        String url = page.getWebURL().getURL()
        String domain = page.getWebURL().getDomain()
        String path = page.getWebURL().getPath()
        String subDomain = page.getWebURL().getSubDomain()
        String parentUrl = page.getWebURL().getParentUrl()
        String anchor = page.getWebURL().getAnchor()

        println("Docid: ${docid} ")
        println("URL: ${url} ")
        println("Domain: '${domain}'")
        println("Sub-domain: ' ${subDomain}'")
        println("Path: '${path}'")
        println("Parent page:${parentUrl} ")
        println("Anchor text: ${anchor} ")

        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData()
            String text = htmlParseData.getText()
            String html = htmlParseData.getHtml()
            List<WebURL> links = htmlParseData.getOutgoingUrls()

            println("Text length: " + text.length())
            println("Html length: " + html.length())
            println("Number of outgoing links: " + links.size())
        }

        Header[] responseHeaders = page.getFetchResponseHeaders()
        if (responseHeaders != null) {
            println("Response headers:")
            for (Header header : responseHeaders) {
                println("\t ${header.getName()} : ${header.getValue()}")
            }
        }
        println("=============")
    }
}
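As for invoking it from another view: with the controller above you can point a form (or a g:link) at the controller and action directly instead of the url map syntax in your question. A minimal sketch, assuming the class is named BasicCrawlController and the action is index:

<g:form name="submitWebsite" controller="basicCrawl" action="index">
    <g:submitButton name="crawl" value="Start crawl"/>
</g:form>

Note that controller.start(...) blocks, so the request only returns "done crawling" once the whole crawl has finished.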

Redirect to external site does not terminate current execution flow

I'm using Grails 1.3.7.
I have the following filter setup:
class MyFilters {

    def userService
    def springSecurityService

    def filters = {
        all(controller: '*', action: '*') {
            before = {
                String userAgent = request.getHeader('User-Agent')
                int buildVersion = 0
                // Match "app/{version}" or "app-{suffix}/{version}", where {version} is the build number
                def matcher = userAgent =~ "(?i)app(?:-\\w+)?\\/(\\d+)"
                if (matcher.getCount() > 0)
                {
                    buildVersion = Integer.parseInt(matcher[0][1])
                    log.info("User agent is from a mobile with build version = " + buildVersion)
                    log.info("User agent = " + userAgent)
                    String redirectUrl = "https://anotherdomain.com"
                    if (buildVersion > 12)
                    {
                        if (request.queryString != null)
                        {
                            log.info("Redirecting request to anotherdomain with query string")
                            redirect(url: "${redirectUrl}${request.forwardURI}?${request.queryString}", params: params)
                        }
                        return
                    }
                }
            }
            after = { model ->
                if (model) {
                    model['currentUser'] = userService.currentUser
                }
            }
            afterView = {
            }
        }
    }
}
The problem is that the redirect does not happen at the point I would have expected.
I want all execution to stop and redirect to the exact URL I have given at this point.
When I debug to the "redirect" line, execution continues past that line, executing other lines and jumping to another controller.
In order to prevent the normal processing flow from continuing, you need to return false from your before filter:
if (buildVersion > 12)
{
    if (request.queryString != null)
    {
        log.info("Redirecting request to anotherdomain with query string")
        redirect(url: "${redirectUrl}${request.forwardURI}?${request.queryString}", params: params)
        return false
    }
}
This is mentioned in passing at the very end of section 6.6.2 of the user guide, but it isn't particularly prominent:
Note how returning false ensures that the action itself is not executed.

Where is the declaration of messageSource located in Grails?

Background
We have some legacy internationalization for field labels stored in the database, so I tried to make a "merged" messageSource: if the code exists in the database, return it; if not, use PluginAwareResourceBundleMessageSource to look it up in the i18n files.
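For context, the idea is roughly this (a simplified sketch, not our production code; Message is a stand-in domain class, and in this sketch the fallback relies on Spring's parentMessageSource rather than the inner-bean delegate shown below):

import java.text.MessageFormat

import org.springframework.context.support.AbstractMessageSource

class DatabaseMessageSource extends AbstractMessageSource {

    @Override
    protected MessageFormat resolveCode(String code, Locale locale) {
        // Message is a hypothetical domain class with code/locale/text columns
        def entry = Message.findByCodeAndLocale(code, locale.toString())
        if (entry) {
            return new MessageFormat(entry.text, locale)
        }
        // returning null lets AbstractMessageSource fall back to its parentMessageSource
        // (e.g. the PluginAwareResourceBundleMessageSource)
        return null
    }
}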
Problem
For some reason the cachedMergedPluginProperties is caching the wrong file for the locale. For example, if I search for en_US, I receive pt_BR messages (the key of the Map is en_US, but the properties are pt_BR).
I declared my messageSource as follows:
messageSource(DatabaseMessageSource) {
    messageBundleMessageSource = { org.codehaus.groovy.grails.context.support.PluginAwareResourceBundleMessageSource m ->
        basenames = "WEB-INF/grails-app/i18n/messages"
    }
}
The inner bean is because Grails won't let me have two beans of type MessageSource.
Am I declaring PluginAwareResourceBundleMessageSource differently from the Grails default? In which Grails file can I see this bean declaration?
I found the declaration inside I18nGrailsPlugin, and it's a bit more detailed than mine:
String baseDir = "grails-app/i18n"
String version = GrailsUtil.getGrailsVersion()
String watchedResources = "file:./${baseDir}/**/*.properties".toString()
...
Set baseNames = []

def messageResources
if (application.warDeployed) {
    messageResources = parentCtx?.getResources("**/WEB-INF/${baseDir}/**/*.properties")?.toList()
}
else {
    messageResources = plugin.watchedResources
}

if (messageResources) {
    for (resource in messageResources) {
        // Extract the file path of the file's parent directory
        // that comes after "grails-app/i18n".
        String path
        if (resource instanceof ContextResource) {
            path = StringUtils.substringAfter(resource.pathWithinContext, baseDir)
        }
        else {
            path = StringUtils.substringAfter(resource.path, baseDir)
        }

        // look for an underscore in the file name (not the full path)
        String fileName = resource.filename
        int firstUnderscore = fileName.indexOf('_')

        if (firstUnderscore > 0) {
            // grab everything up to but not including
            // the first underscore in the file name
            int numberOfCharsToRemove = fileName.length() - firstUnderscore
            int lastCharacterToRetain = -1 * (numberOfCharsToRemove + 1)
            path = path[0..lastCharacterToRetain]
        }
        else {
            // Lop off the extension - the "basenames" property in the
            // message source cannot have entries with an extension.
            path -= ".properties"
        }
        baseNames << "WEB-INF/" + baseDir + path
    }
}

LOG.debug "Creating messageSource with basenames: $baseNames"

messageSource(PluginAwareResourceBundleMessageSource) {
    basenames = baseNames.toArray()
    fallbackToSystemLocale = false
    pluginManager = manager

    if (Environment.current.isReloadEnabled() || GrailsConfigUtils.isConfigTrue(application, GroovyPagesTemplateEngine.CONFIG_PROPERTY_GSP_ENABLE_RELOAD)) {
        def cacheSecondsSetting = application?.flatConfig?.get('grails.i18n.cache.seconds')
        if (cacheSecondsSetting != null) {
            cacheSeconds = cacheSecondsSetting as Integer
        } else {
            cacheSeconds = 5
        }
    }
}
Since Grails doesn't let you have two beans of type MessageSource, I had to copy this code and adapt it to my "merged" messageSource.
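For anyone doing the same, the adapted wiring in resources.groovy can look roughly like this (a simplified sketch: the dynamic baseNames and cacheSeconds logic from the plugin is left out, and the bundleMessageSource bean name is my own):

import org.codehaus.groovy.grails.context.support.PluginAwareResourceBundleMessageSource

beans = {
    // the standard bundle-backed source, registered under a different name
    bundleMessageSource(PluginAwareResourceBundleMessageSource) {
        basenames = ["WEB-INF/grails-app/i18n/messages"] as String[]
        fallbackToSystemLocale = false
        pluginManager = ref('pluginManager')
    }
    // the database-backed source takes the messageSource name and falls back to the bundles
    messageSource(DatabaseMessageSource) {
        parentMessageSource = ref('bundleMessageSource')
    }
}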

Displaying errors in Grails without refreshing the page

I have a page with dynamic list boxes (selecting a value in the first list populates the values in the second list box).
The validation errors for the list boxes work fine, but when the error messages are displayed the page gets refreshed and the selected values are reset to their initial state (the values need to be selected again in the list boxes).
The page is designed to add any number of list boxes using ajax calls, so adding and selecting the values again would be a rework.
Could you help me display the validation errors while keeping the selected values as they are? (Previously I faced a similar situation, which was resolved by replacing local variables of preProcess and postProcess with a global variable; this time I had no luck with that approach.)
Any hints/help would be great.
static constraints = {
    deviceMapping(
        validator: { val, obj ->
            Properties dm = (Properties) val;
            def deviceCheck = [:];
            if (obj.customErrorMessage == null) {
                for (def device : dm) {
                    if (device.key == null || "null".equalsIgnoreCase(device.key)) {
                        return ["notSelected"];
                    }
                    deviceCheck.put(device.key, "");
                }
                if (deviceCheck.size() != obj.properties["numberOfDevices"]) {
                    return ["multipleDevicesError"];
                }
            }
        }
    )
    customErrorMessage(
        validator: {
            if ("sameDeviceMultipleTimes".equals(it)) {
                return ['sameDeviceMultipleTimes']
            }
        }
    )
}

public LinkedHashMap<String, Object> preProcess(sessionObject, params, request) {
    Submission submission = (Submission) sessionObject;
    def selectedFileName = sessionObject.fileName;
    logger.debug("submission.deviceMapping :" + submission.deviceMapping)
    try {
        Customer customer = Customer.get(submission.customerId);
        OperatingSystem operatingSystem = OperatingSystem.get(submission.operatingSystemId)
        def ftpClientService = new FtpClientService();
        def files = ftpClientService.listFilesInZip(customer.ftpUser, customer.ftpPassword, customer.ftpHost, customer.ftpToPackageDirectory, selectedFileName, operatingSystem, customer.ftpCustomerTempDirectory);
        def terminalService = new TerminalService();
        OperatingSystem os = OperatingSystem.get(submission.getOperatingSystemId());
        def manufacturers = terminalService.getAllDeviceManufacturersForType(os.getType());
        logger.debug("manufacturers after os type :" + manufacturers)
        logger.debug("files in preprocess :" + files)

        def devicesForFiles = [:]
        files.each { file ->
            def devicesForThisFile = [];
            submission.deviceMapping.each { device ->
                if (device.value == file.fileName) {
                    String manufacturer = terminalService.getManufacturerFromDevice("${device.key}");
                    def devicesForManufacturer = terminalService.getDevicesForManufacturerAndType(manufacturer, os.getType());
                    devicesForThisFile.push([device: device.key, manufacturer: manufacturer, devicesForManufacturer: devicesForManufacturer]);
                }
            }
            devicesForFiles.put(file.fileName, devicesForThisFile);
        }
        logger.debug("devicesForFiles :" + devicesForFiles)
        return [command: this, devicesForFiles: devicesForFiles, files: files, manufacturers: manufacturers];
    } catch (Exception e) {
        logger.warn("FTP threw exception");
        logger.error("Exception", e);
        this.errors.reject("mapGameToDeviceCommand.ftp.connectionTimeOut", "A temporary FTP error occurred");
        return [command: this];
    }
}

public LinkedHashMap<String, Object> postProcess(sessionObject, params, request) {
    Submission submission = (Submission) sessionObject;
    Properties devices = params.devices;
    Properties files = params.files;

    mapping = devices.inject([:]) { map, dev ->
        // Get the first part of the version (up to the first dot)
        def v = dev.key.split(/\./)[0]
        map << [(dev.value): files[v]]
    }

    deviceMapping = new Properties();
    params.files.eachWithIndex { file, i ->
        def device = devices["${file.key}"];
        if (deviceMapping.containsKey("${device}")) {
            this.errors.reject("You cannot use the same device more than once");
            return [];
            //customErrorMessage = "sameDeviceMultipleTimes";
        }
        deviceMapping.put("${device}", "${file.value}");
    }

    if (params.devices != null) {
        this.numberOfDevices = params.devices.size(); //Used for the custom validator later on
    } else {
        this.numberOfDevices = 0;
    }
    //logger.debug("device mapping :"+deviceMapping);
    submission.deviceMapping = mapping;
    return [command: this, deviceMapping: mapping, devicesForFiles: devicesForFiles];
}
}
The problem is in your GSP page. Be sure that all fields are initialised with a value:
<g:text value="${objectInstance.fieldname}" ... />
Also, the way it selects values is through the id, so be sure to set that as well:
<g:text value="${objectInstance.fieldname}" id="${device.manufacturer.id}" ... />
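For example, a select rendered after a failed validation can bind its value back to whatever was just submitted, falling back to the model value on the first render (a sketch only; the parameter and variable names here are illustrative):

<g:select name="device"
          from="${devicesForManufacturer}"
          optionKey="id"
          value="${params.device ?: selectedDevice?.id}" />

That way, when the validation error re-renders the page, the previous selections are still applied.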
