During the GatheringNodeData event, I want to run some code and, based on the result, change the order of nodes in the Umbraco back office.
Is this possible? I'm using V6.
In code there is a SortOrder property on Document:
Document doc = new Document(<nodeId>); // legacy API wrapper for the node
doc.SortOrder = <int>;                 // writes straight through to the database
This manipulates the database tables, including cmsContentXml and umbracoNode.
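If you want to wire that up from the indexing pipeline, here is a minimal sketch that subscribes to Examine's GatheringNodeData event and re-sorts a node based on what is being indexed. It assumes Umbraco 6.1+'s ApplicationEventHandler; the indexer name "InternalIndexer" and the "featured" field are placeholders I made up. Also note that changing content from inside an index event can itself trigger a re-index, so guard against loops:

using Examine;
using Umbraco.Core;
using umbraco.cms.businesslogic.web;

public class SortOnIndexEvents : ApplicationEventHandler
{
    protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
    {
        var indexer = ExamineManager.Instance.IndexProviderCollection["InternalIndexer"];
        indexer.GatheringNodeData += (sender, e) =>
        {
            // e.Fields holds the values about to be indexed for node e.NodeId.
            if (e.Fields.ContainsKey("featured") && e.Fields["featured"] == "1")
            {
                var doc = new Document(e.NodeId);
                if (doc.SortOrder != 0)
                    doc.SortOrder = 0; // promote to the top of its siblings
            }
        };
    }
}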
I am trying to extract a large amount of details out of our Eloqua system using its API, and I got this API to work perfectly for single IDs: https://docs.oracle.com/en/cloud/saas/marketing/eloqua-rest-api/op-api-rest-1.0-data-contact-id-get.html
The problem is that I need to run this for a large number of IDs, and it would take a lot of calls to cover the entire population. Are there any bulk APIs that can extract all of the following details out of Eloqua/Contact for the entire population? I don't see anything under the Bulk section of that page's documentation that meets this need.
contactid, company, employees, company_revenue, business_phone, email_address, web_domain, date_created, date_modified, address_1, address_2, city, state_or_province, zip_or_postal_code, mobile_phone, first_name, last_name, title
It's a multi-step process with the Bulk API, typically in the following fashion (a rough code sketch follows the list):
1. Get a list of the current internal field names - useful for creating your export definition.
2. Create an export definition and POST it here. There is a useful example on that page, and you do not need filter criteria. Store the export ID somewhere.
3. Using your export definition ID, create a sync. It will gather the data in the background and prepare it for you. Take note of the sync ID in the initial response.
4. Check on the sync status with your sync ID here. It should only take a couple of minutes - and there is a callback URL option in the previous step as well, if you don't want to keep polling.
5. Once your data is ready, use that sync ID to request the data. Depending on how many rows were retrieved, you might need to paginate through the results using the offset query param. By default it will give you JSON, but I usually choose CSV (specify it in the header).
6. If you need updated data, create a new sync using the same export definition ID; you do not need to create a new export definition each time.
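For reference, a rough C# sketch of that round trip, assuming the Bulk 2.0 endpoints and basic authentication; the pod URL, credentials, field statements, and page size are all placeholders, not values from your instance:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class EloquaBulkExport
{
    static async Task Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("https://secure.p01.eloqua.com/api/bulk/2.0/") };
        var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes(@"SiteName\UserName:Password"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

        // 1. Create the export definition; the field statements use the
        //    internal names returned by GET contacts/fields.
        var exportDef = @"{
            ""name"": ""Contact export"",
            ""fields"": {
                ""ContactId"": ""{{Contact.Id}}"",
                ""EmailAddress"": ""{{Contact.Field(C_EmailAddress)}}"",
                ""FirstName"": ""{{Contact.Field(C_FirstName)}}""
            }
        }";
        var exportResp = await client.PostAsync("contacts/exports",
            new StringContent(exportDef, Encoding.UTF8, "application/json"));
        var exportUri = JsonDocument.Parse(await exportResp.Content.ReadAsStringAsync())
            .RootElement.GetProperty("uri").GetString(); // e.g. "/contacts/exports/123"

        // 2. Create a sync pointing at the export definition; this kicks off
        //    the background job that stages the data.
        var syncResp = await client.PostAsync("syncs",
            new StringContent($"{{ \"syncedInstanceUri\": \"{exportUri}\" }}", Encoding.UTF8, "application/json"));
        var syncUri = JsonDocument.Parse(await syncResp.Content.ReadAsStringAsync())
            .RootElement.GetProperty("uri").GetString(); // e.g. "/syncs/456"

        // 3. Poll until the sync finishes (a callback URL is the alternative).
        string status;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(10));
            var poll = await client.GetStringAsync(syncUri.TrimStart('/'));
            status = JsonDocument.Parse(poll).RootElement.GetProperty("status").GetString();
        } while (status == "pending" || status == "active");

        // 4. Page through the staged rows with the offset/limit query params.
        for (var offset = 0; ; offset += 50000)
        {
            var page = await client.GetStringAsync($"{syncUri.TrimStart('/')}/data?offset={offset}&limit=50000");
            var root = JsonDocument.Parse(page).RootElement;
            // ... process root.GetProperty("items") here ...
            if (!root.TryGetProperty("hasMore", out var more) || !more.GetBoolean())
                break;
        }
    }
}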
Good Day!
I am pretty new to ADF and need some guidance on the best way to run the ADF Web Activity for each record in a query/view. I have a system where we need to add new users daily. I have built a query that returns which users are new, and I would like to call a REST API to add their accounts in other systems. Today, I accomplish this in a Java program where we get a result set, iterate through each row, and call the API.
I am attempting to replicate this in ADF and running into trouble understanding how to accomplish it. The ForEach activity does not appear to be able to connect to a dataset or query. I have seen other examples of using the ForEach when you are building a parameter list and understand how that would work (for example: https://learn.microsoft.com/en-us/azure/data-factory/tutorial-bulk-copy-portal).
Can anyone give me some direction on how you run an activity for each row in a dataset?
I was able to find a way to do this using the Lookup activity and then passing the output of the Lookup to the ForEach activity.
When passing the data from the Lookup to the ForEach loop, you want to set the ForEach items to "@activity('Lookup1').output.value". If you use the 'add dynamic content' selection and pick the output dataset, ADF sets the value to "@activity('Lookup1').output", which will throw a weird error about the length function when you run it.
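For reference, the ForEach fragment in the pipeline JSON ends up looking roughly like this; the activity names, the URL, and passing the whole row as the body are just illustrative assumptions:

{
  "name": "ForEachNewUser",
  "type": "ForEach",
  "dependsOn": [
    { "activity": "Lookup1", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "items": {
      "value": "@activity('Lookup1').output.value",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "CreateUser",
        "type": "WebActivity",
        "typeProperties": {
          "url": "https://example.com/api/users",
          "method": "POST",
          "body": {
            "value": "@item()",
            "type": "Expression"
          }
        }
      }
    ]
  }
}

Inside the loop, @item() is the current row, so individual columns from your query are available as @item().ColumnName.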
In my research I found this demo to be very helpful: https://www.youtube.com/watch?v=ROq5mVrZPY0
I work with Wonderware software. One of the objects used to perform communication between Wonderware and the PLC is called Suitelink. In it, I have a table defined that has the name of one of my application fields on the left side and the name of the PLC tag providing its value on the right.
Once this is saved and activated (deployed), the PLC tags feed values into the field attributes in Wonderware.
Does anyone know where this list is saved in the system?
I am working on a web page and want to retrieve this list dynamically, so the page can update based on the current live value of the PLC tags being used.
I have looked in the database but could not find it.
C:\ProgramData\Wonderware\DAServer
Within there you'll have several subfolders, one for each of your DA Servers. Open the subfolder to find a *.AAcfg file; its contents are in what looks like an XML format. You'll be hunting for all the <DeviceItem> tags.
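If you want the web page to read that list dynamically, a minimal sketch along these lines should work, assuming the *.AAcfg files are well-formed XML; the exact attributes vary by DA Server, so this just dumps whatever each <DeviceItem> element carries:

using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class ReadDeviceItems
{
    static void Main()
    {
        var folder = @"C:\ProgramData\Wonderware\DAServer";
        foreach (var cfg in Directory.EnumerateFiles(folder, "*.AAcfg", SearchOption.AllDirectories))
        {
            var doc = XDocument.Load(cfg);
            foreach (var item in doc.Descendants("DeviceItem"))
            {
                // Print every attribute of each DeviceItem, since the schema may vary.
                Console.WriteLine($"{cfg}: {string.Join(", ", item.Attributes().Select(a => $"{a.Name}={a.Value}"))}");
            }
        }
    }
}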
Does anybody know if it is possible to configure the queries view to show more than the current iteration?
Yes, it is possible. You can do the following:
Create a new folder for queries.
Edit an existing query.
Save that query to the new destination.
You can also use group clauses (see the sketch below).
You can find additional information here: https://learn.microsoft.com/en-us/vsts/work/track/using-queries
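For example, editing the query's underlying WIQL lets you group iteration clauses so the results span more than the current iteration. A sketch, where the project and iteration paths are placeholders:

SELECT [System.Id], [System.Title], [System.IterationPath]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND ( [System.IterationPath] UNDER 'MyProject\Sprint 1'
     OR [System.IterationPath] UNDER 'MyProject\Sprint 2' )
ORDER BY [System.IterationPath]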
I'm using Delphi 10 with FireDAC, and I have two tables in a master-detail configuration: A (Document) and B (DocDetail). In my form I have two DBGrids, linked to each DataSource respectively. I want a configuration, if possible, that deletes all records in table DocDetail when I delete the corresponding master record in the master DBGrid. Is there any configuration in the FDTable component that does this? Or is there any other way to do delete-master/delete-detail on the Delphi side? (I know it is possible to do it on the database side through a cascading delete constraint.) Thanks for your help.
The properties you are looking for on the TFDQuery component are FetchOptions.DetailCascade and/or FetchOptions.DetailServerCascade.
From the help on DetailServerCascade:
When DetailServerCascade is False, then FireDAC posts client-side cascading changes to the database. The client-side cascading changes are performed when DetailCascade is True. So DetailServerCascade should be used together with DetailCascade.
If you are using CachedUpdates you may also need a TFDSchemaAdapter component. The CentralizedCachedUpdates sample page lists all of the steps needed to set up the components when using cached updates.
I am using this on one form with good results. It was a little bit picky to get it all set up correctly. Basically, every dataset involved in the update needs to point to one common TFDSchemaAdapter component, and any master datasets need to have their FetchOptions.DetailCascade set to True to ensure that rows are correctly deleted from the child datasets. The wiring ends up looking roughly like the sketch below.
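A minimal sketch, assuming two TFDQuery components already linked master-detail via MasterSource/MasterFields; the component names are placeholders, and the SchemaAdapter hookup is only needed when you use cached updates:

procedure TForm1.FormCreate(Sender: TObject);
begin
  // Master side: propagate deletes (and key changes) to the detail rows.
  qryDocument.FetchOptions.DetailCascade := True;
  // Optional: leave the cascade to a database foreign key instead; per the
  // help quoted above, it is used together with DetailCascade.
  // qryDocument.FetchOptions.DetailServerCascade := True;

  // Cached updates only: route both datasets through one TFDSchemaAdapter
  // so changes are applied to the database in the right order.
  qryDocument.CachedUpdates := True;
  qryDocDetail.CachedUpdates := True;
  qryDocument.SchemaAdapter := FDSchemaAdapter1;
  qryDocDetail.SchemaAdapter := FDSchemaAdapter1;
end;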