I am entering values (URLs) in the text box, and when I click the Next button, validation runs to check whether the entered URL values are valid or not.
If the URLs are not valid, I display the error as below:
if(valid == false){
this.errors.reject("Incorrect URL is entered for "+countries.get(countryCode)+" - please ensure to use a correct URL for More Games.")
return [];
}
Once the error is shown, the entered values are cleared and I need to enter them again. How can I stop the values from being cleared (I think the page is getting loaded again), or stop the page from reloading?
Here is the command:
class DeliveryMoreGamesURLCommand implements Command {
private static final LOG = LogFactory.getLog(DeliveryController.class)
def wrapperService = ApplicationHolder.application.getMainContext().getBean("wrapperService");
def customerService = ApplicationHolder.application.getMainContext().getBean("customerService");
def countryService = ApplicationHolder.application.getMainContext().getBean("countryService");
def helperService = ApplicationHolder.application.getMainContext().getBean("helperService");
def ascService = ApplicationHolder.application.getMainContext().getBean("ascService");
Hashtable<String, String> countries;
public LinkedHashMap<String, Object> preProcess(sessionObject, params, request) {
Delivery delivery = (Delivery) sessionObject;
def customer = Customer.get(delivery.customerId);
def contentProvider = ContentProvider.get(delivery.contentProviderId);
def deployment = Deployment.get(delivery.deploymentId);
def operatingSystem = OperatingSystem.get(delivery.operatingSystemId);
countries = wrapperService.getCountries(deployment, operatingSystem);
def sortedCountries = countries.sort { a, b -> a.value <=> b.value };
def urls = ascService.getMoreGamesUrlsPerTemplates(deployment, operatingSystem);
def moreGamesUrls = new Hashtable<String,String>();
countries.each { country ->
String countryCode = countryService.getTwoLetterCountryAbbreviation(country.key);
String url = customerService.getMoreGamesUrl(customer, contentProvider, countryCode);
if ("".equals(url)) {
url = urls.get(country.key);
if (url == null) {
url = "";
}
}
moreGamesUrls.put(country.key, url); // We need to use the existing country code if the channels are deployed with three letter country codes
}
return [command: this, countries: sortedCountries, moreGamesUrls: moreGamesUrls]
}
public LinkedHashMap<String, Object> postProcess(sessionObject, params, request) {
Delivery delivery = (Delivery) sessionObject;
def urls = params.gamesUrls;
LOG.debug("urls from gsp :"+urls)
try{
urls.eachWithIndex { u, i ->
String countryCode = u.key;
String url = urls["${u.key}"]
if(url != ""){
Boolean valid = helperService.isURLValid(url)
if(valid == false){
this.errors.reject("Incorrect URL is entered for "+countries.get(countryCode)+" - please ensure to use a correct URL for More Games.")
return [];
}
}
}
}catch (Exception ex) {
LOG.warn("Incorrect URL is entered", ex)
return [];
}
def moreGamesUrls = new Hashtable<String, String>();
urls.eachWithIndex { u, i ->
String countryCode = u.key;
String url = urls["${u.key}"]
moreGamesUrls.put(countryCode, url);
}
delivery.countries = countries;
delivery.moreGamesUrls = moreGamesUrls;
LOG.debug("moreGamesUrls after edit=${delivery.moreGamesUrls}");
return null;
}
}
From the command, preProcess renders the data; after clicking the Next button, postProcess is invoked and the URL validation runs.
Resolved this issue by implementing moreGamesUrls as a global variable (an instance field on the command), which keeps the entered values even after the error is thrown.
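For reference, a minimal sketch of that fix, assuming the framework re-renders whatever map preProcess returns after a validation error; loadUrlsFromServices() is just a placeholder for the lookup code already shown in preProcess above:

class DeliveryMoreGamesURLCommand implements Command {
    // Instance field: keeps the submitted values on the command object,
    // so a failed validation can show what the user already typed
    Hashtable<String, String> enteredMoreGamesUrls = new Hashtable<String, String>()

    public LinkedHashMap<String, Object> preProcess(sessionObject, params, request) {
        // ... build sortedCountries exactly as before ...
        // Fall back to the service lookup only when nothing has been entered yet
        def moreGamesUrls = enteredMoreGamesUrls.isEmpty() ? loadUrlsFromServices() : enteredMoreGamesUrls
        return [command: this, countries: sortedCountries, moreGamesUrls: moreGamesUrls]
    }

    public LinkedHashMap<String, Object> postProcess(sessionObject, params, request) {
        // Remember the submitted values before validating them
        params.gamesUrls.each { u -> enteredMoreGamesUrls.put(u.key, u.value) }
        // ... URL validation as before; on error return [] and the re-rendered page
        // shows enteredMoreGamesUrls instead of the values reloaded from the services ...
    }
}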
I want to 'Create copy of work item', which is available via the UI, ideally via the API.
I know how to create a new work item, but the UI feature that carries over all current parent links / related links and all other details is quite useful.
Creating a work item via the REST API is documented here: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work%20items/create?view=azure-devops-rest-5.1
Any help would be greatly appreciated.
We cannot just copy a work item, because it contains system fields that we should skip. Additionally, your process may have rules that block some fields on the creation step. Here is a small example of cloning a work item through the REST API with https://www.nuget.org/packages/Microsoft.TeamFoundationServer.Client:
class Program
{
static string[] systemFields = { "System.IterationId", "System.ExternalLinkCount", "System.HyperLinkCount", "System.AttachedFileCount", "System.NodeName",
"System.RevisedDate", "System.ChangedDate", "System.Id", "System.AreaId", "System.AuthorizedAs", "System.State", "System.AuthorizedDate", "System.Watermark",
"System.Rev", "System.ChangedBy", "System.Reason", "System.WorkItemType", "System.CreatedDate", "System.CreatedBy", "System.History", "System.RelatedLinkCount",
"System.BoardColumn", "System.BoardColumnDone", "System.BoardLane", "System.CommentCount", "System.TeamProject"}; //system fields to skip
static string[] customFields = { "Microsoft.VSTS.Common.ActivatedDate", "Microsoft.VSTS.Common.ActivatedBy", "Microsoft.VSTS.Common.ResolvedDate",
"Microsoft.VSTS.Common.ResolvedBy", "Microsoft.VSTS.Common.ResolvedReason", "Microsoft.VSTS.Common.ClosedDate", "Microsoft.VSTS.Common.ClosedBy",
"Microsoft.VSTS.Common.StateChangeDate"}; //unneeded fields to skip
const string ChildRefStr = "System.LinkTypes.Hierarchy-Forward"; //should be only one parent
static void Main(string[] args)
{
string pat = "<pat>"; //https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate
string orgUrl = "https://dev.azure.com/<org>";
string newProjectName = "";
int wiIdToClone = 0;
VssConnection connection = new VssConnection(new Uri(orgUrl), new VssBasicCredential(string.Empty, pat));
var witClient = connection.GetClient<WorkItemTrackingHttpClient>();
CloneWorkItem(witClient, wiIdToClone, newProjectName, true);
}
private static void CloneWorkItem(WorkItemTrackingHttpClient witClient, int wiIdToClone, string NewTeamProject = "", bool CopyLink = false)
{
WorkItem wiToClone = (CopyLink) ? witClient.GetWorkItemAsync(wiIdToClone, expand: WorkItemExpand.Relations).Result
: witClient.GetWorkItemAsync(wiIdToClone).Result;
string teamProjectName = (NewTeamProject != "") ? NewTeamProject : wiToClone.Fields["System.TeamProject"].ToString();
string wiType = wiToClone.Fields["System.WorkItemType"].ToString();
JsonPatchDocument patchDocument = new JsonPatchDocument();
foreach (var key in wiToClone.Fields.Keys) //copy fields
if (!systemFields.Contains(key) && !customFields.Contains(key))
if (NewTeamProject == "" ||
(NewTeamProject != "" && key != "System.AreaPath" && key != "System.IterationPath")) //do not copy area and iteration into another project
patchDocument.Add(new JsonPatchOperation()
{
Operation = Operation.Add,
Path = "/fields/" + key,
Value = wiToClone.Fields[key]
});
if (CopyLink) //copy links
foreach (var link in wiToClone.Relations)
{
if (link.Rel != ChildRefStr)
{
patchDocument.Add(new JsonPatchOperation()
{
Operation = Operation.Add,
Path = "/relations/-",
Value = new
{
rel = link.Rel,
url = link.Url
}
});
}
}
WorkItem clonedWi = witClient.CreateWorkItemAsync(patchDocument, teamProjectName, wiType).Result;
Console.WriteLine("New work item: " + clonedWi.Id);
}
}
Link to full project: https://github.com/ashamrai/AzureDevOpsExtensions/tree/master/CustomNetTasks/CloneWorkItem
I am trying to implement a "change my account email address" functionality.
I want to keep a backup of all user email addresses in the R_EmailAddressHistory table.
Here is some of my project's code:
public bool ChangeEmailAddress(string username, string newEmailAddress, string callbackUrl)
{
DateTime currentUtcTime = DateTime.UtcNow;
R_User currentUser = UserRepo.GetSingle(whereCondition: w=>w.Username == username);
currentUser.UpdateDate = currentUtcTime;
if (currentUser.HasPendingNewEmail)
{
R_EmailAddressHistory currentPendingRequest = EmailHistoRepo.GetSingle(whereCondition: w => w.StatusID == (int)Reno.Common.Enums.RecordStatus.Pending && w.R_User.GId == currentUser.GId);
currentPendingRequest.NewEmail = newEmailAddress;
currentPendingRequest.UpdateDate = currentUtcTime;
EmailHistoRepo.Update(currentPendingRequest);
}
else
{
currentUser.HasPendingNewEmail = true;
R_EmailAddressHistory newEmail = new R_EmailAddressHistory();
newEmail.UserId = currentUser.GId;
newEmail.R_User = currentUser;
newEmail.NewEmail = newEmailAddress;
newEmail.InsertDate = currentUtcTime;
newEmail.StatusID = (int) Reno.Common.Enums.RecordStatus.Pending;
currentUser.R_EmailAddressHistory.Add(newEmail);
}
IdentityResult idtResult = UserRepo.Update(currentUser);
if(idtResult == IdentityResult.Succeeded)
{
//Send notification to current email address for validation before proceeding change email process
bool sendResult = Communication.EmailService.SendChangeEmailValidation(username,currentUser.Email, newEmailAddress, callbackUrl);
return sendResult;
}
else
{
return false;
}
}
The previous method is used to change an email address. Each of my tables (R_User and R_EmailAddressHistory) has a repository (UserRepo and EmailHistoRepo). They implement the same IRepositoryBase class; here is the Update method:
public IdentityResult Update(T entity)
{
try
{
if (_currentContext.DbContext.Entry(entity).State == EntityState.Detached)
{
_currentContext.DbContext.Set<T>().Attach(entity);
}
_currentContext.DbContext.Entry(entity).State = EntityState.Modified;
return IdentityResult.Succeeded;
}
catch
{
return IdentityResult.Failed;
}
}
When a user already has a non-validated new email address and requests to change his current email address, I show him the pending new email address and he can change it. In this case I want to update my history table instead of creating a new record, because only one pending new email address is allowed. In that case, my code fails on the line EmailHistoRepo.Update(currentPendingRequest), throwing the error: An entity object cannot be referenced by multiple instances of IEntityChangeTracker.
Can anyone help me?
Thanks
EDIT
I am using MVC (4) with a unit of work. My UOW is initialized in the controller the first time the DB is queried, and the commit is done in Global.asax in Application_EndRequest (see below).
protected void Application_EndRequest(Object sender, EventArgs e)
{
CommitChanges();
}
private void CommitChanges()
{
Reno.BLL.Services.Singleton.UnitOfWork unitOfWork = Reno.BLL.Services.Singleton.UnitOfWork.GetCurrentInstance(false);
if (unitOfWork != null)
{
unitOfWork.Commit();
unitOfWork.Dispose();
}
}
Your currentUser is modified before updating the email address. Save the changes to currentUser first.
Something like this:
R_User currentUser = UserRepo.GetSingle(whereCondition: w => w.Username == username);
currentUser.UpdateDate = currentUtcTime;
bool pendingNewEmail = currentUser.HasPendingNewEmail;
UserRepo.Update(currentUser);
if (pendingNewEmail)
{
    R_EmailAddressHistory currentPendingRequest = EmailHistoRepo.GetSingle(whereCondition: w => w.StatusID == (int)Reno.Common.Enums.RecordStatus.Pending && w.R_User.GId == currentUser.GId);
    currentPendingRequest.NewEmail = newEmailAddress;
    currentPendingRequest.UpdateDate = currentUtcTime;
    EmailHistoRepo.Update(currentPendingRequest);
}
else
{
    // ... create and add the new R_EmailAddressHistory as in the original else branch ...
}
I finally found the answer. The problem was that when I first get the user in the line
R_User currentUser = UserRepo.GetSingle(whereCondition: w => w.Username == username);
the currentUser variable holds a reference to all of its R_EmailAddressHistory entries.
Then I queried the DB a second time to get the pending email change request (of type R_EmailAddressHistory), in order to modify its new email and its update date, in the lines
R_EmailAddressHistory currentPendingRequest = EmailHistoRepo.GetSingle(whereCondition: w => w.StatusID == (int)Reno.Common.Enums.RecordStatus.Pending && w.R_User.GId == currentUser.GId);
currentPendingRequest.NewEmail = newEmailAddress;
currentPendingRequest.UpdateDate = currentUtcTime;
But that code updates only currentPendingRequest, while another reference to the same object, the one inside currentUser.R_EmailAddressHistory, is not updated and was already tracked by the context. Therefore, calling an update on the new instance (EmailHistoRepo.Update(currentPendingRequest)) fails: the same object is referenced in two places.
So the solution was (the only thing I modified):
R_User currentUser = UserRepo.GetSingle(whereCondition: w => w.Username == username);
currentUser.UpdateDate = currentUtcTime;
if (currentUser.HasPendingNewEmail)
{
    R_EmailAddressHistory currentPendingRequest = currentUser.R_EmailAddressHistory.Where(h => h.StatusID == (int)Reno.Common.Enums.RecordStatus.Pending).First(); // EmailHistoRepo.GetSingle(whereCondition: w => w.StatusID == (int)Reno.Common.Enums.RecordStatus.Pending && w.R_User.GId == currentUser.GId);
    currentPendingRequest.NewEmail = newEmailAddress;
    currentPendingRequest.UpdateDate = currentUtcTime;
}
I decided to modify the instance already held in the currentUser variable.
I am making a crawler application in Groovy on Grails. I am using Crawler4j and following this tutorial.
I created a new Grails project.
Put the BasicCrawlController.groovy file in the controllers package.
Did not create any view, because I expected that on doing run-app my crawled data would appear in my crawlStorageFolder (please correct me if my understanding is flawed).
After that I just ran the application with run-app, but I didn't see any crawled data anywhere.
Am I right in expecting some file to be created at the crawlStorageFolder location that I have given as C:/crawl/crawler4jStorage?
Do I need to create any view for this?
If I want to invoke this crawler controller from some other view on click of a submit button of a form, can I just write <g:form name="submitWebsite" url="[controller:'BasicCrawlController ']">?
I am asking this because I do not have any action method in this controller, so is that the right way to invoke it?
My code is as follows:
//All necessary imports
public class BasicCrawlController {
static main(args) throws Exception {
String crawlStorageFolder = "C:/crawl/crawler4jStorage";
int numberOfCrawlers = 1;
//int maxDepthOfCrawling = -1; default
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder(crawlStorageFolder);
config.setPolitenessDelay(1000);
config.setMaxPagesToFetch(100);
config.setResumableCrawling(false);
PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
controller.addSeed("http://en.wikipedia.org/wiki/Web_crawler")
controller.start(BasicCrawler.class, 1);
}
}
class BasicCrawler extends WebCrawler {
final static Pattern FILTERS = Pattern
.compile(".*(\\.(css|js|bmp|gif|jpe?g"+ "|png|tiff?|mid|mp2|mp3|mp4" +
"|wav|avi|mov|mpeg|ram|m4v|pdf" +"|rm|smil|wmv|swf|wma|zip|rar|gz))\$")
/**
* You should implement this function to specify whether the given url
* should be crawled or not (based on your crawling logic).
*/
@Override
boolean shouldVisit(WebURL url) {
String href = url.getURL().toLowerCase()
!FILTERS.matcher(href).matches() && href.startsWith("http://en.wikipedia.org/wiki/Web_crawler/")
}
/**
* This function is called when a page is fetched and ready to be processed
* by your program.
*/
@Override
void visit(Page page) {
int docid = page.getWebURL().getDocid()
String url = page.getWebURL().getURL()
String domain = page.getWebURL().getDomain()
String path = page.getWebURL().getPath()
String subDomain = page.getWebURL().getSubDomain()
String parentUrl = page.getWebURL().getParentUrl()
String anchor = page.getWebURL().getAnchor()
println("Docid: ${docid} ")
println("URL: ${url} ")
println("Domain: '${domain}'")
println("Sub-domain: ' ${subDomain}'")
println("Path: '${path}'")
println("Parent page:${parentUrl} ")
println("Anchor text: ${anchor} " )
if (page.getParseData() instanceof HtmlParseData) {
HtmlParseData htmlParseData = (HtmlParseData) page.getParseData()
String text = htmlParseData.getText()
String html = htmlParseData.getHtml()
List<WebURL> links = htmlParseData.getOutgoingUrls()
println("Text length: " + text.length())
println("Html length: " + html.length())
println("Number of outgoing links: " + links.size())
}
Header[] responseHeaders = page.getFetchResponseHeaders()
if (responseHeaders != null) {
println("Response headers:")
for (Header header : responseHeaders) {
println("\t ${header.getName()} : ${header.getValue()}")
}
}
println("=============")
}
}
I'll try to translate your code into the standard Grails structure.
Use this under grails-app/controllers:
class BasicCrawlController {
def index() {
String crawlStorageFolder = "C:/crawl/crawler4jStorage";
int numberOfCrawlers = 1;
//int maxDepthOfCrawling = -1; default
CrawlConfig crawlConfig = new CrawlConfig();
crawlConfig.setCrawlStorageFolder(crawlStorageFolder);
crawlConfig.setPolitenessDelay(1000);
crawlConfig.setMaxPagesToFetch(100);
crawlConfig.setResumableCrawling(false);
PageFetcher pageFetcher = new PageFetcher(crawlConfig);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
CrawlController controller = new CrawlController(crawlConfig, pageFetcher, robotstxtServer);
controller.addSeed("http://en.wikipedia.org/wiki/Web_crawler")
controller.start(BasicCrawler.class, 1);
render "done crawling"
}
}
Use this under src/groovy
class BasicCrawler extends WebCrawler {
final static Pattern FILTERS = Pattern
.compile(".*(\\.(css|js|bmp|gif|jpe?g"+ "|png|tiff?|mid|mp2|mp3|mp4" +
"|wav|avi|mov|mpeg|ram|m4v|pdf" +"|rm|smil|wmv|swf|wma|zip|rar|gz))\$")
/**
* You should implement this function to specify whether the given url
* should be crawled or not (based on your crawling logic).
*/
@Override
boolean shouldVisit(WebURL url) {
String href = url.getURL().toLowerCase()
!FILTERS.matcher(href).matches() && href.startsWith("http://en.wikipedia.org/wiki/Web_crawler/")
}
/**
* This function is called when a page is fetched and ready to be processed
* by your program.
*/
@Override
void visit(Page page) {
int docid = page.getWebURL().getDocid()
String url = page.getWebURL().getURL()
String domain = page.getWebURL().getDomain()
String path = page.getWebURL().getPath()
String subDomain = page.getWebURL().getSubDomain()
String parentUrl = page.getWebURL().getParentUrl()
String anchor = page.getWebURL().getAnchor()
println("Docid: ${docid} ")
println("URL: ${url} ")
println("Domain: '${domain}'")
println("Sub-domain: ' ${subDomain}'")
println("Path: '${path}'")
println("Parent page:${parentUrl} ")
println("Anchor text: ${anchor} " )
if (page.getParseData() instanceof HtmlParseData) {
HtmlParseData htmlParseData = (HtmlParseData) page.getParseData()
String text = htmlParseData.getText()
String html = htmlParseData.getHtml()
List<WebURL> links = htmlParseData.getOutgoingUrls()
println("Text length: " + text.length())
println("Html length: " + html.length())
println("Number of outgoing links: " + links.size())
}
Header[] responseHeaders = page.getFetchResponseHeaders()
if (responseHeaders != null) {
println("Response headers:")
for (Header header : responseHeaders) {
println("\t ${header.getName()} : ${header.getValue()}")
}
}
println("=============")
}
}
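To invoke this from another view, as asked in the question, a plain form or link pointing at the controller's index action should be enough. A sketch (note that the controller is referenced by its logical name basicCrawl, not the class name BasicCrawlController):

<g:form controller="basicCrawl" action="index">
    <g:submitButton name="submitWebsite" value="Start crawl" />
</g:form>

A <g:link controller="basicCrawl" action="index">start crawl</g:link> works just as well, since the action only triggers the crawl and renders a short message.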
I have a page with dynamic list boxes (selecting a value in the first list populates the values in the second list box).
The validation errors for the list boxes work fine, but while displaying the error messages the page gets refreshed and the selected values are reset to their initial state (the values need to be selected again in the list boxes).
The page is designed to add any number of list boxes using Ajax calls, so adding and selecting the values again is a lot of rework.
Could you help me display the validation errors while keeping the selected values as they are? (Previously I faced a similar situation, which was resolved by replacing local variables of preProcess and postProcess with a global variable; this time I had no luck with that approach.)
Any hints/help would be great.
static constraints = {
deviceMapping(
validator: {val, obj ->
Properties dm = (Properties) val;
def deviceCheck = [:];
if (obj.customErrorMessage == null) {
for (def device : dm) {
if (device.key == null || "null".equalsIgnoreCase(device.key)) {
return ["notSelected"];
}
deviceCheck.put(device.key, "");
}
if (deviceCheck.size() != obj.properties["numberOfDevices"]) {
return ["multipleDevicesError"];
}
}
}
)
customErrorMessage (
validator: {
if ("sameDeviceMultipleTimes".equals(it)) {
return ['sameDeviceMultipleTimes']
}
}
)
}
public LinkedHashMap<String, Object> preProcess(sessionObject, params, request) {
Submission submission = (Submission) sessionObject;
def selectedFileName = sessionObject.fileName;
logger.debug("submission.deviceMapping :"+submission.deviceMapping)
try {
Customer customer = Customer.get(submission.customerId);
OperatingSystem operatingSystem = OperatingSystem.get(submission.operatingSystemId)
def ftpClientService = new FtpClientService();
def files = ftpClientService.listFilesInZip(customer.ftpUser, customer.ftpPassword, customer.ftpHost, customer.ftpToPackageDirectory, selectedFileName, operatingSystem, customer.ftpCustomerTempDirectory);
def terminalService = new TerminalService();
OperatingSystem os = OperatingSystem.get(submission.getOperatingSystemId());
def manufacturers = terminalService.getAllDeviceManufacturersForType(os.getType());
logger.debug("manufacturers after os type :"+manufacturers)
logger.debug("files in preprocess :"+files)
def devicesForFiles = [:]
files.each { file ->
def devicesForThisFile = [];
submission.deviceMapping.each { device ->
if (device.value == file.fileName) {
String manufacturer = terminalService.getManufacturerFromDevice("${device.key}");
def devicesForManufacturer = terminalService.getDevicesForManufacturerAndType(manufacturer, os.getType());
devicesForThisFile.push([device:device.key, manufacturer: manufacturer, devicesForManufacturer: devicesForManufacturer]);
}
}
devicesForFiles.put(file.fileName,devicesForThisFile);
}
logger.debug("devicesForFiles :"+devicesForFiles)
return [command: this, devicesForFiles: devicesForFiles, files: files, manufacturers: manufacturers];
} catch (Exception e) {
logger.warn("FTP threw exception");
logger.error("Exception", e);
this.errors.reject("mapGameToDeviceCommand.ftp.connectionTimeOut","A temporary FTP error occurred");
return [command: this];
}
}
public LinkedHashMap<String, Object> postProcess(sessionObject, params, request) {
Submission submission = (Submission) sessionObject;
Properties devices = params.devices;
Properties files = params.files;
mapping = devices.inject( [:] ) { map, dev ->
// Get the first part of the version (up to the first dot)
def v = dev.key.split( /\./ )[ 0 ]
map << [ (dev.value): files[ v ] ]
}
deviceMapping = new Properties();
params.files.eachWithIndex { file, i ->
def device = devices["${file.key}"];
if (deviceMapping.containsKey("${device}")) {
this.errors.reject("You cannot use the same device more than once");
return [];
//customErrorMessage = "sameDeviceMultipleTimes";
}
deviceMapping.put("${device}", "${file.value}");
}
if (params.devices != null) {
this.numberOfDevices = params.devices.size(); //Used for the custom validator later on
} else {
this.numberOfDevices = 0;
}
//logger.debug("device mapping :"+deviceMapping);
submission.deviceMapping = mapping;
return [command: this, deviceMapping: mapping, devicesForFiles: devicesForFiles ];
}
}
The problem is in your GSP page. Be sure that all fields are initialised with a value:
<g:text value="${objectInstance.fieldname}" ... />
Also, the values are re-selected by id, so be sure to set it as well:
<g:text value="${objectInstance.fieldname}" id=${device.manufacturer.id} ... />
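For the list boxes themselves the same idea applies with g:select: re-render them with a value attribute so the previous selection is kept. A rough sketch; the variable names are only placeholders for whatever your page already iterates over (for example the devicesForFiles map returned by preProcess):

<%-- 'file' and 'entry' are placeholders; 'entry.device' is the previously selected device --%>
<g:select name="devices.${file.key}"
          from="${entry.devicesForManufacturer}"
          value="${entry.device}" />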
I have what I think is a simple problem but have been unable to solve...
For some reason I have a controller that uses removeFrom*.save() which throws no errors but does not do anything.
Running
Grails 1.2
Linux/Ubuntu
The following application is stripped down to reproduce the problem...
I have two domain objects via create-domain-class
- Job (which has many notes)
- Note (which belongs to Job)
I have 3 controllers via create-controller
- JobController (running scaffold)
- NoteController (running scaffold)
- JSONNoteController
JSONNoteController has one primary method deleteItem which aims to remove/delete a note.
It does the following
some request validation
removes the note from the job - jobInstance.removeFromNotes(noteInstance).save()
deletes the note - noteInstance.delete()
return a status and remaining data set as a json response.
When I run this request, I get no errors, but it appears that jobInstance.removeFromNotes(noteInstance).save() does nothing and does not throw any exception.
How can I track down why??
I've attached a sample application that adds some data via BootStrap.groovy.
Just run it - you can view the data via the default scaffold views.
If you run Linux, from a command line you can run the following:
GET "http://localhost:8080/gespm/JSONNote/deleteItem?job.id=1&note.id=2"
You can run it over and over again and nothing different happens. You could also paste the URL into your web browser if you're running Windows.
Please help - I'm stuck!!!
Code is here link text
Note Domain
package beachit
class Note
{
Date dateCreated
Date lastUpdated
String note
static belongsTo = Job
static constraints =
{
}
String toString()
{
return note
}
}
Job Domain
package beachit
class Job
{
Date dateCreated
Date lastUpdated
Date createDate
Date startDate
Date completionDate
List notes
static hasMany = [notes : Note]
static constraints =
{
}
String toString()
{
return createDate.toString() + " " + startDate.toString();
}
}
JSONNoteController
package beachit
import grails.converters.*
import java.text.*
class JSONNoteController
{
def test = { render "foobar test" }
def index = { redirect(action:listAll,params:params) }
// the delete, save and update actions only accept POST requests
//static allowedMethods = [delete:'POST', save:'POST', update:'POST']
def getListService =
{
def message
def status
def all = Note.list()
return all
}
def getListByJobService(jobId)
{
def message
def status
def jobInstance = Job.get(jobId)
def all
if(jobInstance)
{
all = jobInstance.notes
}
else
{
log.debug("getListByJobService job not found for jobId " + jobId)
}
return all
}
def listAll =
{
def message
def status
def listView
listView = getListService()
message = "Done"
status = 0
def response = ['message': message, 'status':status, 'list': listView]
render response as JSON
}
def deleteItem =
{
def jobInstance
def noteInstance
def message
def status
def jobId = 0
def noteId = 0
def instance
def listView
def response
try
{
jobId = Integer.parseInt(params.job?.id)
}
catch (NumberFormatException ex)
{
log.debug("deleteItem error in jobId " + params.job?.id)
log.debug(ex.getMessage())
}
if (jobId && jobId > 0 )
{
jobInstance = Job.get(jobId)
if(jobInstance)
{
if (jobInstance.notes)
{
try
{
noteId = Integer.parseInt(params.note?.id)
}
catch (NumberFormatException ex)
{
log.debug("deleteItem error in noteId " + params.note?.id)
log.debug(ex.getMessage())
}
log.debug("note id =" + params.note.id)
if (noteId && noteId > 0 )
{
noteInstance = Note.get(noteId)
if (noteInstance)
{
try
{
jobInstance.removeFromNotes(noteInstance).save()
noteInstance.delete()
message = "note ${noteId} deleted"
status = 0
}
catch(org.springframework.dao.DataIntegrityViolationException e)
{
message = "Note ${noteId} could not be deleted - references to it exist"
status = 1
}
/*
catch(Exception e)
{
message = "Some New Error!!!"
status = 10
}
*/
}
else
{
message = "Note not found with id ${noteId}"
status = 2
}
}
else
{
message = "Couldn't recognise Note id : ${params.note?.id}"
status = 3
}
}
else
{
message = "No Notes found for Job : ${jobId}"
status = 4
}
}
else
{
message = "Job not found with id ${jobId}"
status = 5
}
listView = getListByJobService(jobId)
} // if (jobId)
else
{
message = "Couldn't recognise Job id : ${params.job?.id}"
status = 6
}
response = ['message': message, 'status':status, 'list' : listView]
render response as JSON
} // deleteNote
}
I got it working... though I cannot explain why.
I replaced the following line in deleteItem
noteInstance = Note.get(noteId)
with the following
noteInstance = jobInstance.notes.find { it.id == noteId }
For some reason jobInstance.removeFromNotes works with the object returned by that expression, but not with the one returned by .get().
What makes it stranger is that all the other GORM functions (not sure about the dynamic ones, actually) work fine against the instance returned by Note.get(noteId).
At least it's working though!!
See this thread: http://grails.1312388.n4.nabble.com/GORM-doesn-t-inject-hashCode-and-equals-td1370512.html
I would recommend using a base class for your domain objects like this:
abstract class BaseDomain {

    @Override
    boolean equals(o) {
        if (this.is(o)) return true
        if (o == null) return false
        // hibernate creates dynamic subclasses, so
        // checking o.class == class would fail most of the time
        if (!o.getClass().isAssignableFrom(getClass()) &&
            !getClass().isAssignableFrom(o.getClass())) return false
        if (ident() != null) {
            ident() == o.ident()
        } else {
            false
        }
    }

    @Override
    int hashCode() {
        ident()?.hashCode() ?: 0
    }
}
That way, any two objects with the same non-null database id will be considered equal.
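Each domain class whose identity should be based on the database id then just extends it; for example, with the Note domain from the question:

class Note extends BaseDomain {
    Date dateCreated
    Date lastUpdated
    String note
    static belongsTo = Job
}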
I just had this same issue come up. The removeFrom function succeeded, the save succeeded but the physical record in the database wasn't deleted. Here's what worked for me:
class BasicProfile {
    static hasMany = [
        post: Post
    ]
}

class Post {
    static belongsTo = [basicProfile: BasicProfile]
}

class BasicProfileController {
    ...
    def someFunction() {
        ...
        BasicProfile profile = BasicProfile.findByUser(user)
        Post post = profile.post?.find { it.postType == command.postType && it.postStatus == command.postStatus }
        if (post) {
            profile.removeFromPost(post)
            post.delete()
        }
        profile.save()
    }
}
So it was the combination of the removeFrom, followed by a delete on the associated domain, and then a save on the domain object.