I use ArcGIS Server to serve a map of points from a database. When I create and publish the mxd as a WMS service in AGS, everything looks fine. But after a while - usually by the next day or so - the map shows nothing: every request to the WMSServer for that layer comes back empty. Opening the mxd in ArcMap shows the correct data as expected; only the WMS calls are faulty.
What could be the problem?
Details:
I create an mxd file, and add data to it from a non-spatial database. To create the layers I right-click on the data source and select "Display XY data..." and select the X and Y columns from the data.
In AGS Manager I select "Add new service" and point to that mxd file, accepting all the default settings. I have also tried the simpler "Publish GIS resource" and got the same results.
It appears that the way I set up the data connections in the mxd file caused the problem. ArcGIS Server uses a system account to run all services ("ArcGISWS" in our instance), and that account didn't have access to all the data I referenced in the mxd. After switching to an mxd that was set up using the ArcGISWS account, everything works as expected. My advice for anyone doing this is to log in to the ArcGIS Server machine with the intended account (ArcGISWS) and create the mxd there; that way, any data-access problems are already obvious in ArcMap and can be solved before publishing the service.
At least, that is what I'll recommend. :-)
The reason the map worked at first must have been a cached connection or something similar: when AGS recycled its connections or pools during the night, that cached connection was removed, leaving the ArcGISWS account to open the connection itself, which it couldn't do due to lack of permissions.
Hope this attempt at a solution helps someone.
I have an application in which the user opens many windows. I added copy/paste using XA_PRIMARY.
That works fine within my application. It also works fine when copying from other applications (pluma, Firefox, MATE Terminal) into a window of my application.
When I call XSetSelectionOwner() with the timestamp etc. as explained in the documentation, the server acknowledges the new owner. That is, XGetSelectionOwner() returns the owner I just set.
However, when copying from my app to other applications, I never receive a SelectionRequest event.
From what I see, the server only sets the owner for the Display used in the XSetSelectionOwner() call.
Is this how it is supposed to work? If so, is there something else I need to do so that the server sets the owner for all apps?
Given the behavior of the server, I had to assume that other clients are not making their requests for "PRIMARY". So I added "CLIPBOARD" as well, and now everything is working great.
The documentation appears to say that every client will use PRIMARY. Further reading indicates that there is a difference between making a "selection" and "copying text". The docs present this distinction as a useful feature; I found it nothing but confusing. Anyway, there really was no bug in my app. I think the documentation should include a line saying: you must implement both PRIMARY and CLIPBOARD. That was the problem.
I have been extensively using a custom protocol on all our internal apps to open any type of document (CAD, CAM, PDF, etc.), to open File Explorer and select a specific file, and to run other applications.
Years ago I defined a myprotocol protocol that executes C:\Windows\System32\wscript.exe, passing the name of my VBScript and whatever arguments each request has. The first argument passed to the script describes the type of action (OpenDocument, ShowFileInFileExplorer, ExportBOM, etc.); the following arguments are passed to the action.
Everything worked well until last year, when wscript.exe stopped working (see here for details). I fixed that problem by copying it to wscript2.exe. Creating that copy is now a step in the standard configuration of all our computers, and using wscript2.exe is now the official configuration of our custom protocol. (Our anti-virus vendor's support couldn't find anything that interacts with wscript.exe.)
Today, after building a new computer, we found out that:
Firefox doesn't see wscript2.exe. If I click on a custom protocol link, then click on the browse button and open the folder, I only see a small subset of the .exe files, which includes wscript.exe but not wscript2.exe (I don't know how recent this problem is because I don't personally use Firefox).
Firefox sees wscript.exe, but it still doesn't work (same behavior as described in my previous post linked above).
Chrome works with wscript2.exe, but now it always asks for confirmation. According to this article this seems to be the new approach, and things could change again soon. Clicking a confirmation box every time is a big no-no for my users: it would slow down many workflows that require quickly clicking hundreds of links on a page and, for example, watching a CAD application zoom to one geometry in a large drawing.
I already fixed one problem last year, I am dealing with another one now, and reading that article scares me and makes me think that more problems will arise soon.
So here is the question: is there an alternative to using custom protocols?
I am not working on a web app for public consumption. My custom protocol requires the VBScript file, the applications that the script uses and tons of network shared folders. They are only used in our internal network and the computers that use them are manually configured.
First of all, that's super risky even if it's on an internal network only. Unless computers/users/browsers are locked out of the internet, it is possible that someone guesses or finds out your protocol's name, sends a link to someone in your company, and causes a lot of trouble (and possibly losses too).
Anyway...
Since you control the software on all of the computers, you could add a mini-server on every machine, listening on localhost only, that simply calls your script. Then define a host like secret.myprotocol that points to that server, e.g., localhost:1234.
Just to lessen potential problems a bit, the local server would use HTTPS only, with a proper certificate and with HSTS and HPKP set to a very long duration (since you control the software, you can refresh those when needed). The last two are there just in case someone tries to set up the same domain and, for whatever reason, the host override doesn't work and the user ends up calling a hostile server.
So, links would have to change from myprotocol://whatever to https://secret.myprotocol/whatever.
It does introduce a new attack surface (the "mini-server"), but it should be easy enough to implement in a way that keeps that surface small :). The "mini-server" does not even have to be a real web server; a simple script that can listen on a socket and call wscript.exe would do (unless you need to pass more information to it).
A real server has more code that can contain bugs, but it also lets you add more things, for example a "pass-through" page that shows "Opening document X in 3 seconds..." and a "cancel" button.
It could also require a session login of some kind (just to be sure it's the user who requests the action, and not something else).
The title of this blog post says it all: Browser Architecture: Web-to-App Communication Overview.
It describes a list of Web-to-App Communication techniques and links to dedicated posts for some of them.
The first in the list is Application Protocols, which I have been using for years and which started to crumble in the last year or so (hence my question).
The fifth is Local Web Server, which is the one described by ahwayakchih.
UPDATE (this follows the update to the blog post mentioned above)
Apparently I wasn't the only one who thought this change in behavior was a regression, so a workaround has been issued: the old behavior (showing a checkbox that lets the user remember the answer) can be restored by adding these keys to the registry:
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Chromium]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
I created two transport requests (TRs) for the same project while making changes to CDS views. After that, a duplicate resource error with HTTP code 400 appears and I'm unable to get any data in my UI5 table.
I transferred the changes that were locked in the new TR to the old TR, but it still gives the same error.
HTTP request failed400,Bad Request,{"error":{"code":"/IWBEP/CM_MGW_RT/030","message":{"lang":"en","value":"Duplicate resource"},"innererror":{"application":...
First of all: double-check that there is actually no duplicate key by reading the underlying SQL view (annotated in the CDS definition via #AbapCatalog.sqlViewName) using transaction se16 (or se16n/se16h).
If there really are no duplicates in the SQL view, the error can be caused by various bugs in the ABAP CDS framework. These bugs mostly occur after you have changed a CDS source/definition. Here are a few things to try:
Open transaction segw and refresh the entity structure by right-clicking and choosing "Refresh all".
Afterwards, click on the red-and-white beachball icon to regenerate the MPC/DPC classes.
What the red-and-white beachball actually does is merge the changed structure into the existing classes. Right-click on the project and choose "Generate runtime" to really re-generate all of the runtime objects.
Sometimes there's a clean-up button in the entities overview. Click it.
In transaction /iwfnd/gw_client, choose Metadata → Cleanup Cache → On both systems.
Cleaning the cache works quite well for OData views that have been manually created from ABAP types in segw, but Core Data Services might still be cached. In case none of the above helped:
log out and log in again
restart the transaction
wait for an hour or two
Try to manually test the failing OData request directly in /iwfnd/gw_client. You can activate logging in /iwfnd/traces to double-check what the requests from your client actually look like.
Check your OData client: does it perhaps cache the $metadata internally?
Check that the transport request was successfully processed, using e.g. transaction se10. Transports/imports to another system might be blocked by long-running SADL queries; kill them using sm50 if necessary.
I am working with the ArcGIS Runtime 100 SDK, and I have some layer URLs provided by a client. For now I'm using these URLs to create AGSLayers and add them to the map's operational layers to show them on screen.
It has been working great so far.
Now I want to save these layers and their data, so that the user can access the map offline.
I went through the ArcGIS guides, but I'm not sure I understand everything there, and I didn't find an appropriate solution for this.
Please help me out.
You can follow this example: GenerateOfflineMapViewController.swift
//instantiate offline map task
self.offlineMapTask = AGSOfflineMapTask(portalItem: self.portalItem)
Use the AGSOfflineMapTask to take maps offline. The sample creates a
portal item object using a web map’s ID. This portal item is used to
initialize an AGSOfflineMapTask object.
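To make that more concrete, here is a minimal sketch of how the rest of that sample roughly continues, assuming the ArcGIS Runtime 100.x Swift API; the portal item ID, extent and download directory are placeholders you would replace with your own (and in a real view controller you would keep the task and job in properties so they are not deallocated mid-download):

```swift
import ArcGIS

// Placeholder portal item and area of interest.
let portal = AGSPortal(url: URL(string: "https://www.arcgis.com")!, loginRequired: false)
let portalItem = AGSPortalItem(portal: portal, itemID: "<your web map id>")
let offlineMapTask = AGSOfflineMapTask(portalItem: portalItem)

// Area of interest to take offline (placeholder Web Mercator coordinates).
let areaOfInterest = AGSEnvelope(xMin: -9813416, yMin: 5126112,
                                 xMax: -9812296, yMax: 5127101,
                                 spatialReference: AGSSpatialReference.webMercator())

// Ask the task for default parameters, then run the job that downloads the offline map.
offlineMapTask.defaultGenerateOfflineMapParameters(withAreaOfInterest: areaOfInterest) { parameters, error in
    guard let parameters = parameters else { print(error ?? "no parameters"); return }

    let downloadDirectory = FileManager.default.temporaryDirectory.appendingPathComponent("offlineMap")
    let job = offlineMapTask.generateOfflineMapJob(with: parameters, downloadDirectory: downloadDirectory)

    job.start(statusHandler: nil) { result, error in
        if let offlineMap = result?.offlineMap {
            // Show the offline copy instead of the online map, e.g. mapView.map = offlineMap
            print("Offline map ready with \(offlineMap.operationalLayers.count) layers")
        } else {
            print(error ?? "offline map generation failed")
        }
    }
}
```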
Can you specify what you mean by "Now I want to save these layers and their data"?
Do you also need to edit the map? If yes, this is called redlining.
A feature collection provides a way of grouping logically-related
feature collection tables. Redlining information (called "Map Notes"
in ArcGIS Online), for example, may contain points, lines, polygons,
and associated text to describe things in the map.
Also remember that your services should be enabled for offline use. On the client side you will need to create a geodatabase.
Obtain a job to generate and download the geodatabase by passing the
AGSGenerateGeodatabaseParameters to the generateJob method on the
AGSGeodatabaseSyncTask. Run the job to generate and download the
geodatabase to the device.
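A rough sketch of those two steps, assuming the ArcGIS Runtime 100.x Swift API and using a placeholder sync-enabled feature service URL and extent, could look like this:

```swift
import ArcGIS

// Generate and download a geodatabase from a sync-enabled feature service.
// The service URL and extent below are placeholders.
let featureServiceURL = URL(string: "https://services.example.com/arcgis/rest/services/MyLayers/FeatureServer")!
let syncTask = AGSGeodatabaseSyncTask(url: featureServiceURL)

let extent = AGSEnvelope(xMin: -9813416, yMin: 5126112,
                         xMax: -9812296, yMax: 5127101,
                         spatialReference: AGSSpatialReference.webMercator())

syncTask.defaultGenerateGeodatabaseParameters(withExtent: extent) { parameters, error in
    guard let parameters = parameters else { print(error ?? "no parameters"); return }
    parameters.returnAttachments = false // keep the download small

    // Where the .geodatabase file will be written on the device.
    let downloadFileURL = FileManager.default.temporaryDirectory.appendingPathComponent("layers.geodatabase")

    // Obtain the job, then run it to generate and download the geodatabase.
    let job = syncTask.generateJob(with: parameters, downloadFileURL: downloadFileURL)
    job.start(statusHandler: nil) { geodatabase, error in
        guard let geodatabase = geodatabase else { print(error ?? "generate failed"); return }
        geodatabase.load { _ in
            // Add the offline tables to the map as feature layers,
            // e.g. map.operationalLayers.add(AGSFeatureLayer(featureTable: table))
            for table in geodatabase.geodatabaseFeatureTables {
                print("Downloaded table: \(table.tableName)")
            }
        }
    }
}
```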
I have an application which uses data from a web server. When you first launch the app, it downloads the data and then works with it. But what if the data on the website has changed? How can the application know that the data was changed, and if so, which data should it download?
My first idea was, each time the application runs, to compare the number of entries in the local database on the phone with the number of entries on the web server, and if they are not equal, delete all the data in the local database and download everything again. But I think that would take more time than if the application just loaded the 5-10 records it needs instead of all the data.
The second idea was that when the information on the site changes, the website somehow informs the application to load certain records. But I don't know whether that is possible.
Another idea was to compare the id of the last entry in the application's database with the last id on the website, and if they are not equal, download the records starting from the next id.
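For example, something like this is what I have in mind for that last-id check (the endpoints and response format here are made up, just to illustrate the idea):

```swift
import Foundation

// Illustrative only: the endpoints and response format are made up.
let localLastID = 120 // in a real app, read this from the local database

// 1. Ask the server for the id of its newest record.
let lastIDURL = URL(string: "https://example.com/api/last-id")!
URLSession.shared.dataTask(with: lastIDURL) { data, _, error in
    guard let data = data,
          let serverLastID = Int(String(decoding: data, as: UTF8.self)
                                    .trimmingCharacters(in: .whitespacesAndNewlines)) else {
        print(error ?? "could not read the server's last id")
        return
    }

    // 2. Only download the records the app doesn't have yet.
    if serverLastID > localLastID {
        let deltaURL = URL(string: "https://example.com/api/items?after_id=\(localLastID)")!
        URLSession.shared.dataTask(with: deltaURL) { deltaData, _, _ in
            // 3. Parse the few new records and insert them into the local database.
        }.resume()
    }
}.resume()
```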
Are there any suggestions on how I can accomplish this?
I am not sure whether you have a database or web services, but my suggestion is to parse the data from the web as JSON or XML.
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSXMLParser_Class/
This class reference should make things clear for you.
Also, in my opinion, if you are new to Swift and want an easy way to handle this, look into iOS package managers.
If you want to use a package manager for your project, e.g. CocoaPods,
https://cocoapods.org/pods/Alamofire
would be a good starting point.
Alamofire is an HTTP networking library written in Swift.
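For example, a basic GET request with Alamofire could look like the sketch below (this uses the newer Alamofire 5 `AF.request` syntax; older versions use `Alamofire.request(...)` instead, and the URL is just a placeholder):

```swift
import Alamofire

// Placeholder URL; replace it with your own web service.
AF.request("https://example.com/api/items")
    .validate()
    .responseData { response in
        switch response.result {
        case .success(let data):
            // Hand the data to your JSON or XML parser (e.g. JSONDecoder or XMLParser).
            print("Received \(data.count) bytes")
        case .failure(let error):
            print("Request failed: \(error)")
        }
    }
```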
Hope this helps you.