Geocoding error in ArcMap - OS Open Names Locator - mapping

I am trying to geocode a set of 723 UK postcodes using OS_Codepointopen_postcode_locator.
My database is in the following format:
ID | SAON | PAON | Street | Locality | Town | Postcode
I am attempting to geocode just the postcodes, as precise locations are not necessary.
The process I have used so far goes like this:
Import table.xlsx > Open table attributes (in the Table of Contents) > Right-click the file and select Geocode > Select OS_Codepointopen_postcode_locator > Select single field and point it at the 'postcode' column.
After starting the geocode process the program immediately presents the error message "Create Feature Class: There was an error trying to process this table."
I then tried exporting the .xlsx table in ArcMap to give the table an OID column. This presented the same error. After this I converted the export to a dBASE file, but this presented the same error again.
I have also tried a smaller snippet of the database, using only 20 records and removing all other fields so that just the postcode and an ID column remain, since some suggestions state that it may be a processing bottleneck. However, this presents exactly the same error.
Has anyone else encountered this issue and found a solution?

After weeks of trying to find information online, and merely minutes after posting this question, I have found a solution.
Posting here so that others in the same situation have an answer.
After importing your .xlsx table into ArcMap, open the table attributes > Export to a new table by following the instructions > Open ArcToolbox > Expand the Geocoding toolbox > Geocode Addresses > Use OS_open_names_locator > Follow the instructions, ensuring that the Multiple Fields option is selected and that 'none' is selected for any fields you do not have in your database.
Running this initially matched only around half the addresses because the locator was still looking for full_address; setting that to 'none' as well, so the tool was only looking at the relevant fields, yielded 100% matches.
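For anyone who would rather script these steps than click through the dialogs, roughly the same workflow can be run with ArcPy's Geocode Addresses tool. This is only a sketch under assumptions: the paths are placeholders, and 'Postcode' is a guess at the locator's input field name, which you should check against your own locator.

import arcpy

# All paths below are placeholders - point them at your own data and locator.
in_table = r"C:\data\postcodes_export.dbf"      # the table exported from the .xlsx
locator = r"C:\locators\OS_open_names_locator"  # the locator used above
out_fc = r"C:\data\geocoded_postcodes.shp"

# Map only the fields you actually have; leaving a locator field out is the
# scripted equivalent of selecting 'none' in the dialog. The pair below maps
# the locator's assumed 'Postcode' input to the table's 'postcode' column.
address_fields = "Postcode postcode"

arcpy.GeocodeAddresses_geocoding(in_table, locator, address_fields, out_fc, "STATIC")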
I hope this helps, will happily run anyone through the process if necessary.
Have a lovely day!

Related

Automatic Lookup Across Tables in AppSheet

Hi, I'm having a problem with looking up data in another table with AppSheet.
Basically, when I scan in a barcode, I want the product name to be looked up, as it would be in this SQL:
SELECT productName
FROM productLookupTable
WHERE productLookupTable.barcode = the_scanned_barcode
However, I can't seem to do this in AppSheet. I tried
LOOKUP([barcode],'productLookupTable','barcode','productName')
as the formula for the product name field in the add view, and that didn't work either: scanning a barcode either didn't trigger a lookup, or it said match not found. (Two different outcomes from the many combinations of things I tried.)
Any help would be appreciated.
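One thing worth checking (an assumption about the cause, not a confirmed fix): AppSheet's LOOKUP() takes the value to find first, then the table, search column, and return column names as text in double quotes, so the documented form of the expression would be closer to:

LOOKUP([_THISROW].[barcode], "productLookupTable", "barcode", "productName")

The [_THISROW].[barcode] dereference makes the lookup use the barcode value from the row currently being added.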

Collecting crowd-sourced data in tabular or spreadsheet format

Full disclosure: I originally posted this to the SE/Web application site but garnered zero comments amidst 15 views. Hoping for a better outcome here.
I'm involved in a citizen-science project polling recreational anglers about their preferred ocean fishing locations (lat-lon), a few characteristics about the location (depth, what species they catch, etc.), and some voluntary contact information. In spreadsheet form with each row being a unique location, there would be about 10 columns (each column being the response to a question).
I did a trial run with a small number of respondents using a Google Form that compiles all the responses to a Google Sheet, but due to limitations in Google Forms, respondents must submit a new form response for each fishing location they wish to provide. Every respondent said it was tedious and would prefer entering the data directly into a spreadsheet versus scrolling through 10 questions and submitting multiple forms to provide multiple locations.
Is there a process where I can distribute a link to potentially hundreds of people (who can in turn share that with whomever they wish) where the respondent is presented with an empty spreadsheet they populate with their responses? It would require that the field headers can't be edited and no one can see anyone else's responses. The spreadsheet would just look empty to each respondent. On the back end, the responses would be compiled into a single spreadsheet, much like how a Google Forms/Sheets works now. Google Forms is close - if they would just allow users to embed a Google Sheet in the form itself, I'd be set, but that's not possible at this time.
Edit - this is what the spreadsheet would include. Sorry, I don't know how to properly embed or format this in tabular form. What each respondent would see is these column headers in a completely clean spreadsheet. They'd enter their data and submit, and on the back end, I'd have a master version of this that would append each new location row-wise as they are submitted.
RowID | Latitude | Longitude | Target species 1 | Target species 2 | Target species 3 | Habitat type | Home port | Name | Email address
click on this: https://docs.google.com/spreadsheets/create
copy-paste this formula in A1:
=SPLIT("RowID|Latitude|Longitude|Target species 1|Target species 2|Target species 3|Habitat type|Home port|Name|Email address", "|")
copy the URL and change edit#gid=0 at the end to copy (this turns the link into a 'make a copy' link)
send that URL to your buddies and ask them to send you back the URL of their copy of the sheet, with sharing enabled
or you can create those sheets for them and give each of your people one spreadsheet
then create a new spreadsheet (master sheet) and use this in A2:
=QUERY({
IMPORTRANGE("url1"; "A2:J");
IMPORTRANGE("url2"; "A2:J");
IMPORTRANGE("url3"; "A2:J")}; "where Col1 is not null"; )

How do I trace multiple XML elements with the same name & without any id?

I am trying to scrape a website for financials of Indian companies as a side project & put them in Google Sheets using XPath.
Link: https://ticker.finology.in/company/AFFLE
I am able to extract data from elements that have a specific id, like cash, net debt, etc.; however, I am stuck on extracting data for labels like Sales Growth.
I tried copying the full XPath from the console, //*[@id="mainContent_updAddRatios"]/div[13]/p/span - this works; however, I am reliant on the index of the div (13), and that may change for different companies, hence I am unable to automate it.
Please assist with a scalable solution.
PS: I am a Product Manager with basic coding expertise, as I was a developer a few years ago.
At some point you need to "hardcode" something unless you have some other means of mapping the content of the page to your spreadsheet. In your example you appear to be targeting the "Sales Growth" percentage. If you are not comfortable hardcoding the index of the div (13), you could identify it by the id of the "Sales Growth" label, which is mainContent_lblSalesGrowthorCasa.
For example, change your
//*[#id="mainContent_updAddRatios"]/div[13]/p/span
to:
//*[#id = "mainContent_updAddRatios"]/div[.//span/#id = "mainContent_lblSalesGrowthorCasa"]/p/span
which selects the div based on it containing a span with id="mainContent_lblSalesGrowthorCasa". Ultimately, whether you "hardcode" the exact index of the div or "hardcode" the ids of the nodes, you are still embedding assumptions about the structure of the page.
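Assuming the value is being pulled into the sheet with IMPORTXML (the question doesn't say which function is used, so this is an assumption), the corrected expression would slot in like this, using single quotes inside the XPath so they don't clash with the formula's double quotes:

=IMPORTXML("https://ticker.finology.in/company/AFFLE", "//*[@id='mainContent_updAddRatios']/div[.//span/@id='mainContent_lblSalesGrowthorCasa']/p/span")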
Thanks @david, that helped.
Two questions
What if the structure of the page changes? For example, if the website decided to remove the p tag, would my sheet fail? How do we avoid failure in such cases?
Also, since every id is unique, the probability of an id changing is lower than that of the index changing. Correct me if I am wrong.
What do we do when the elements don't have an id, like Profit Growth, RoE, RoCE, etc.?

Wikipedia pageviews analysis

I've been challenged with Wikipedia pageviews analysis. For me this is the first project with such an amount of data, and I'm a bit lost. When I download the file from the link and unpack it, I can see that it has a table-like structure, with rows looking like this:
1 | 2 | 3 | 4
en.m The_Beatles_in_the_United_States 2 0
I struggle with finding out what exactly can be found in each column. My guesses:
language version and additional info (.m = mobile?)
name of the article
My biggest concern is with the last two columns. The last one has only "0" values in it, and I have no idea what it represents. I'd assume then that the third one shows the number of views, but I'm not sure.
I'd be grateful if someone could help me to understand what exactly can be found in each column or recommend some reading on this subject. Thanks!
After more time spent on this, I've finally found a solution. I'm posting this in case someone has the same problem in the future. Wikipedia explains what can be found in the database. These explanations were painful to find, but you can access them here and here.
Based on that you can see that rows have the following structure:
domain code
page_title
count_views
total_response_size (no longer maintained)
Some explanations for each column:
Column 1:
Domain name of the request, abbreviated. (...) Domain_code now can also be an abbreviation for mobile and zero domain names, in which case .m or .zero is inserted as second part of the domain name (just like with full domain name). E.g. 'en.m.v' stands for 'en.m.wikiversity.org'.
Column 2:
For page-level files, it holds the title of the unnormalized part after /wiki/ in the request URL (e.g. Main_Page, Berlin). For project-level files, it is '-'.
Column 3:
The number of times this page has been viewed in the respective hour.
Column 4:
The total response size caused by the requests for this page in the respective hour. If I understand it correctly, response size is discontinued due to low accuracy; that's why there are only 0s. The pagecounts and projectcounts files also include total response byte sizes at their respective aggregation level, but this was dropped from the pageviews and projectviews files because it wasn't very accurate.
Hope someone finds it useful.
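As a quick illustration of that four-column format, here is a minimal parsing sketch in Python; the filename is a placeholder, and lines are assumed to be space-separated as in the example row above:

# Parse a pageviews dump: domain_code page_title count_views total_response_size
with open("pageviews-20230101-000000", encoding="utf-8") as f:  # hypothetical filename
    for line in f:
        parts = line.rstrip("\n").split(" ")
        if len(parts) != 4:
            continue  # skip malformed lines
        domain_code, page_title, count_views, response_size = parts
        # response_size is always 0 in current dumps (no longer maintained)
        print(domain_code, page_title, int(count_views))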
Line format:
wiki code (subproject.project)
article title
monthly total (with interpolation when data is missing)
hourly counts
(From pagecounts-ez, which is the same dataset, just with less filtering.)
Apparently buggy though; it takes the first two parts of the domain name for wiki code, which does not work for mobile domains (which are in the form <language>.m.<project>.org).

Append Query That Also Selects A Lookup Table Value Based On Text Parsing?

I've posted a demo Access db at http://www.derekbeck.com/Database0.accdb. I'm using Access 2007.
I am importing an Excel spreadsheet, which my organization gets weekly, into Access. It gets imported into the table [imported Task list]. From there, an append query reformats it and appends it to my [Master Task List] table.
Previously, we had a form where we would manually go through the newest imports and select whether our department was the primary POC for a tasking. I want to automate this.
What syntax do I require, such that the append query will parse the text from [imported Task list].[Department], searching for the divisions listed in the [OurDepartments] table (those parts of our company for which we are tracking these tasks), and then select the appropriate Lookup field (connected to the [OurDepartments] table) in our [Master Task List] table?
I know that's a mouthful... Put another way, I want the append query to update [Master Task List].[OurDepartments], which is a lookup, based on parsing the text of [imported Task list].[Department].
Note the tricky element: we have to parse the text for "BA" as well as "BAD", "BAC", etc. The shorter "BA" might be an interesting issue for this query.
Hoping for a Non-VBA solution.
Thanks for taking a look!
Derek
PS: Would be very helpful if anyone might be able to respond within the work week. Thx!
The answer is here: http://www.utteraccess.com/forum/Append-Query-Selects-L-t1984607.html
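The linked thread has the full discussion; as a rough non-VBA sketch of the idea, an append query can pair each imported row with a department by substring matching. All field names here ([TaskText], [DeptID], [DeptCode]) are assumptions for illustration, not the demo db's actual schema:

INSERT INTO [Master Task List] ( [TaskText], [OurDepartments] )
SELECT t.[TaskText], d.[DeptID]
FROM [imported Task list] AS t, [OurDepartments] AS d
WHERE InStr(t.[Department], d.[DeptCode]) > 0;

As written, a Department containing 'BAD' would match both 'BAD' and the shorter 'BA' and be appended twice, which is exactly the tricky element the question calls out, so the shorter codes need extra handling.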
