Walmart Orders API: No matter what limit I set, I always get back 40 results

I successfully managed to authenticate and get results from the API using the following endpoint and parameters (orders created after 2019-01-01 with a limit of 200 results per page):
https://marketplace.walmartapis.com/v3/orders?createdStartDate=2019-01-01&limit=200&shipNodeType=SellerFulfilled
Here is the API documentation: https://developer.walmart.com/#/apicenter/marketPlace/latest#getAllOrders
The response metadata indicated a total of 303 orders with a page size (limit) of 200; however, only 40 results were listed (and not 200):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns3:list xmlns:ns2="http://walmart.com/mp/orders" xmlns:ns3="http://walmart.com/mp/v3/orders" xmlns:ns4="com.walmart.services.common.model.money" xmlns:ns5="com.walmart.services.common.model.name" xmlns:ns6="com.walmart.services.common.model.address" xmlns:ns7="com.walmart.services.common.model.address.validation" xmlns:ns8="http://walmart.com/">
<ns3:meta>
<ns3:totalCount>303</ns3:totalCount>
<ns3:limit>200</ns3:limit>
</ns3:meta>
<ns3:elements>
<ns3:order>
Any ideas on what I am doing wrong, and what I can do to get the rest of the results?
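For context, here is a minimal sketch in Python of issuing the same request and comparing the metadata against the number of orders actually returned. The authentication header is a placeholder (the real header names and values depend on your Walmart credentials and API version), so treat this as an illustration of the parameters rather than a working client.

import requests
import xml.etree.ElementTree as ET

# Placeholder auth header: the real header names/values depend on your
# Walmart Marketplace credentials and API version (assumption, not verified).
HEADERS = {
    "Accept": "application/xml",
    "Authorization": "<signature-or-token-here>",
}

params = {
    "createdStartDate": "2019-01-01",
    "limit": 200,                      # requested page size
    "shipNodeType": "SellerFulfilled",
}

resp = requests.get("https://marketplace.walmartapis.com/v3/orders",
                    headers=HEADERS, params=params)
resp.raise_for_status()

ns = {"ns3": "http://walmart.com/mp/v3/orders"}
root = ET.fromstring(resp.content)
total = root.findtext("ns3:meta/ns3:totalCount", namespaces=ns)
limit = root.findtext("ns3:meta/ns3:limit", namespaces=ns)
orders = root.findall("ns3:elements/ns3:order", namespaces=ns)

# With the response shown above this would print totalCount=303, limit=200,
# but only 40 order elements; that is the mismatch described in the question.
print(f"totalCount={total}, limit={limit}, orders returned={len(orders)}")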

Related

Bing Ads API Reporting Service returns error code 2015 "No Dimension Selected"

I'm trying to pull some data from the Bing Ads API but I keep getting error code 2015. I'm using Savon with Ruby on Rails. Here is the request:
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tns="https://bingads.microsoft.com/Reporting/v13" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns="https://bingads.microsoft.com/Reporting/v13">
<env:Header>
<AuthenticationToken>blahblahblah</AuthenticationToken>
<CustomerAccountId>blahblahblah</CustomerAccountId>
<CustomerId>blahblahblah</CustomerId>
<DeveloperToken>blahblahblah</DeveloperToken>
</env:Header>
<env:Body>
<tns:SubmitGenerateReportRequest>
<ReportRequest xsi:nil="false" xsi:type="AccountPerformanceReportRequest">
<ExcludeColumnHeaders>true</ExcludeColumnHeaders>
<ExcludeReportFooter>true</ExcludeReportFooter>
<ExcludeReportHeader>true</ExcludeReportHeader>
<Format>Csv</Format>
<Language>English</Language>
<ReportName>AccountPerformanceReportRequest</ReportName>
<ReturnOnlyCompleteData>false</ReturnOnlyCompleteData>
<Aggregation>Summary</Aggregation>
<Columns>
<AccountPerformanceReportColumn>Spend</AccountPerformanceReportColumn>
<AccountPerformanceReportColumn>Clicks</AccountPerformanceReportColumn>
<AccountPerformanceReportColumn>Conversions</AccountPerformanceReportColumn>
<AccountPerformanceReportColumn>Revenue</AccountPerformanceReportColumn>
</Columns>
<Filter xsi:nil="true"/>
<Scope>
<AccountIds xmlns:a1="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
<a1:long>blahblahblah</a1:long>
</AccountIds>
</Scope>
<Time>
<CustomDateRangeEnd>
<Day>02</Day>
<Month>04</Month>
<Year>2019</Year>
</CustomDateRangeEnd>
<CustomDateRangeStart>
<Day>01</Day>
<Month>04</Month>
<Year>2019</Year>
</CustomDateRangeStart>
<PredefinedTime xsi:nil="true"/>
<ReportTimeZone>EasternTimeUSCanada</ReportTimeZone>
</Time>
</ReportRequest>
</tns:SubmitGenerateReportRequest>
</env:Body>
</env:Envelope>
As you can see, I'm pulling the report with 'Summary' as the aggregation type. If I use 'Monthly' as the aggregation type and include a 'TimePeriod' column, it works perfectly, but then the data returned covers the whole month of April rather than the date range I selected (2019-04-01 to 2019-04-02).
Here is the response:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<s:Fault>
<faultcode>s:Server</faultcode>
<faultstring xml:lang="en-US">Invalid client data. Check the SOAP fault details for more information</faultstring>
<detail>
<ApiFaultDetail xmlns="https://bingads.microsoft.com/Reporting/v13" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<TrackingId xmlns="https://adapi.microsoft.com">52665fe5-3eb8-42e0-ad8e-942e848297ce</TrackingId>
<BatchErrors/>
<OperationErrors>
<OperationError>
<Code>2015</Code>
<Details>No Dimension selected.</Details>
<ErrorCode>RequiredColumnsNotSelected</ErrorCode>
<Message>The specified report request does not specify all the required columns for this report type. Please submit a report request with the required columns for this report type, and optionally additional columns that are to be included in the report.</Message>
</OperationError>
</OperationErrors>
</ApiFaultDetail>
</detail>
</s:Fault>
</s:Body>
</s:Envelope>
The error code returned would lead me to believe that a TimePeriod column is required even for Summary, but that contradicts what is documented here.
All other aggregation types work with this setup. Any help would be greatly appreciated.
Good catch. We will update the documentation to clarify that at least one attribute column (a non-performance stat), e.g. AccountId, must be included. Most of the other reports have specific attributes that must be included, whereas the account report does not. I hope this helps!
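To make that fix concrete, here is a small Python sketch of the column set with the required attribute added; in the asker's Savon setup these strings would become the AccountPerformanceReportColumn values inside the Columns element (the SOAP plumbing itself is left out).

# Per the answer above: with Summary aggregation, at least one attribute
# (non-performance-stat) column such as AccountId must accompany the metrics.
attribute_columns = ["AccountId"]                                # the missing dimension
metric_columns = ["Spend", "Clicks", "Conversions", "Revenue"]   # columns from the question

# Each string becomes one <AccountPerformanceReportColumn> element in the
# <Columns> block of the SubmitGenerateReport SOAP request.
report_columns = attribute_columns + metric_columns
print(report_columns)   # ['AccountId', 'Spend', 'Clicks', 'Conversions', 'Revenue']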

Microsoft Graph NextLink Not Working

I'm having problems using server-side paging, i.e. following the #odata.nextlink to fetch the next page of data from Microsoft Graph, based on the information on this page. I'm using raw GETs with the authorization token set in the header (i.e. I'm not using a client library; I'm trying this from PowerShell using curl). I've scrubbed sensitive data from the following snippets, replacing it with x's, but hopefully the problematic info comes across.
For the first GET, I query with
https://graph.microsoft.com/beta/drives/b!Gxxxxx-xxxxxxge/root:/ReallyBigFolder:/children?top=200
and I get a response with 200 items, as expected. The #odata.nextlink field in this response is
https://graph.microsoft.com/beta/drives/b!Gxxxxx-xxxxxxge/root/children?top=200&$skiptoken=Paged%3dTRUE%26p_SortBehavior%3d0%26p_FileLeafRef%3d279%252ezip%26p_ID%3d208%26p_FileDirRef%3dMaintenance%2520Department%252fReallyBigFolder%26RootFolder%3dMaintenance%2520Department%252fReallyBigFolder
In the examples in the Microsoft Graph documentation linked above, the $skiptoken=... part contains random-looking numbers, but mine has $skiptoken=Paged=TRUE&etc. Perhaps the API response has changed since the documentation was written, or mine is completely incorrect.
My understanding from the documentation is that I should be able to use this URL as an opaque value, and GET it from the Graph API (with auth token of course) without modification. However, when I do this, the response is
{"#odata.context":"https://graph.microsoft.com/beta/$metadata#drives('b%21Gxxxxx-xxxxxxge')/root/children","value":[]}
Where I'm expecting another 200 files to be listed, no files are returned at all, and the path appears to be gone, pointing to the root rather than the subfolder it should point to.
I've also tried this in Graph Explorer with both the /beta and /v1.0 endpoints, and it fails in the same way there as well.
Where am I going wrong?
Edit with details for debugging: Graph Explorer doesn't seem to display the Date response header, so I'm using the Postman Chrome plugin for these values.
First GET request is to
beta/drives/b!xxx-xxxge/root:/Really%20Big%20Folder/ReallyBigFolder:/children
With response headers
Cache-Control →private
Content-Encoding →gzip
Content-Type →application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8
Date →Fri, 26 May 2017 19:07:54 GMT
Duration →2033.3889
OData-Version →4.0
Transfer-Encoding →chunked
Vary →Accept-Encoding
client-request-id →6faf5d1d-a291-410a-b269-f4667187d7cb
request-id →6faf5d1d-a291-410a-b269-f4667187d7cb
x-ms-ags-diagnostic →{"ServerInfo":{"DataCenter":"North Central US","Slice":"SliceB","ScaleUnit":"002","Host":"AGSFE_IN_11","ADSiteName":"CHI"}}
and the nextLink (obfuscated slightly for security):
https://graph.microsoft.com/beta/drives/b!xxx-xxxge/root/children?$skiptoken=Paged%3dTRUE%26p_SortBehavior%3d0%26p_FileLeafRef%3d279%252ezip%26p_ID%3d208%26p_FileDirRef%3dGSH%2520Test%252fMaintenance%2520Department%252fReally%2520Big%2520Folder%252fReallyBigFolder%26RootFolder%3d%252fGSH%2520Test%252fMaintenance%2520Department%252fReally%2520Big%2520Folder%252fReallyBigFolder
Following the nextLink produces headers (unchanged headers omitted):
Date →Fri, 26 May 2017 19:15:17 GMT
Duration →512.9537
client-request-id →6ba61712-a423-4bc8-9376-cc62bf854329
request-id →6ba61712-a423-4bc8-9376-cc62bf854329
x-ms-ags-diagnostic →{"ServerInfo":{"DataCenter":"North Central US","Slice":"SliceA","ScaleUnit":"001","Host":"AGSFE_IN_7","ADSiteName":"CHI"}}
and resulting body:
{
"#odata.context": "https://graph.microsoft.com/beta/$metadata#drives('b%21xxxx-xxxxge')/root/children",
"value": []
}
You are correct that the nextLink should be an opaque URL that returns the next set of results. The format of that string may change over time, so you should not try to parse or otherwise interpret it, but the usage should be the same.
The response you are getting back is consistent with an empty result set, meaning that there are no additional files to list.
How many results do you have in ReallyBigFolder? What happens if you set top to a different value (say, 5 or 1000)?
Note that the #odata.context describes the result, but is not necessarily the same as the request URL. Is the #odata.context you get back from the nextLink different from the one you got back from the initial request? It should be the same...
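For comparison, the usual pagination pattern is simply to keep GETting whatever nextLink comes back, unmodified, until the response no longer contains one. Here is a minimal Python sketch, assuming a valid bearer token is already available; the drive id and folder path are placeholders, and the property is spelled @odata.nextLink in standard Graph JSON.

import requests

TOKEN = "<access-token>"                       # assumed to be acquired elsewhere
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Same shape of request as in the question; drive id and folder are placeholders.
url = ("https://graph.microsoft.com/v1.0/drives/<drive-id>"
       "/root:/ReallyBigFolder:/children?$top=200")

items = []
while url:
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    page = resp.json()
    items.extend(page.get("value", []))
    # Treat the nextLink as opaque: follow it exactly as returned,
    # and stop when the service no longer includes one.
    url = page.get("@odata.nextLink")

print(f"{len(items)} items fetched across all pages")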

Error return from Google AdWords API

I'm trying to upgrade my Google AdWords API process to the latest version (from v201506 to v201603), and I'm getting an error from Google:
Invalid ReportDefinition Xml: cvc-complex-type.2.4.d: Invalid content was found starting with element 'includeZeroImpressions'. No child element is expected at this point
My XML is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<reportDefinition>
<selector>
<fields>AccountDescriptiveName</fields>
<fields>Date</fields>
<fields>CampaignName</fields>
<fields>AdGroupName</fields>
<fields>Clicks</fields>
<fields>CampaignId</fields>
<fields>AdGroupId</fields>
</selector>
<reportName>AdWord-Performance-Report-#570e9612587f9</reportName>
<reportType>ADGROUP_PERFORMANCE_REPORT</reportType>
<dateRangeType>TODAY</dateRangeType>
<downloadFormat>TSV</downloadFormat>
<includeZeroImpressions>true</includeZeroImpressions>
</reportDefinition>
I couldn't find any references in the Google AdWords API blogs to changes in includeZeroImpressions... any ideas?
Damn... never mind. I see it in the migration guide now:
The includeZeroImpressions field in ReportDefinition is removed. Use the HTTP header to include zero impressions in your report results instead.
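As a rough illustration of that note, the flag moves out of the report definition XML and into an HTTP header on the report download request. A hedged Python sketch follows; the includeZeroImpressions header name and the __rdxml form field are how I recall the AdHoc report download interface, so double-check them against the migration guide before relying on this.

import requests

# Placeholder credentials; header and endpoint names reflect my recollection of
# the v201603 AdHoc report download interface and should be verified.
HEADERS = {
    "Authorization": "Bearer <oauth2-access-token>",
    "developerToken": "<developer-token>",
    "clientCustomerId": "<client-customer-id>",
    # Replacement for the removed <includeZeroImpressions> element:
    # it is now sent as an HTTP header on the download request.
    "includeZeroImpressions": "true",
}

# The report definition XML from the question, minus <includeZeroImpressions>.
with open("report_definition.xml") as f:
    report_definition_xml = f.read()

resp = requests.post(
    "https://adwords.google.com/api/adwords/reportdownload/v201603",
    headers=HEADERS,
    data={"__rdxml": report_definition_xml},
)
print(resp.status_code)
print(resp.text[:500])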

Request to server returns a timeout

I am trying to run the following YQL query:
select * from xml where url='LinkToMyServer/PerformSomeOperationAndGetXml'
However, I am getting the following result:
<?xml version="1.0" encoding="UTF-8"?>
<query xmlns:yahoo="http://www.yahooapis.com/v1/base.rng"
yahoo:count="0" yahoo:created="2013-01-03T23:17:06Z" yahoo:lang="en-US">
<diagnostics>
<publiclyCallable>true</publiclyCallable>
<url execution-start-time="1" execution-stop-time="4555"
execution-time="4554" proxy="DEFAULT"><![CDATA[LinkToMyServer/PerformSomeOperationAndGetXml]]></url>
<user-time>4555</user-time>
<service-time>4554</service-time>
<build-version>32943</build-version>
</diagnostics>
<results/>
</query>
Is there any way to increase the timeout somehow?
Thanks!
I don't think there's a way to increase the YQL request timeout. The only documentation I found related to this, Paging and Table Limits, mentions a 30-second overall time limit for a YQL statement, but doesn't specifically mention the request time.
For a comparison test, I tried select * from xml where url='http://blackhole.webpagetest.org' and got results similar to yours: a YQL timeout just under 5 seconds with an empty result set.
If you can't get your server response time down to under 5 seconds, you may need to find a different solution.
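If it isn't clear whether the backend is the bottleneck, one quick check is to time the endpoint directly and see whether it responds comfortably under YQL's apparent ~5-second cutoff. A small Python sketch, using the placeholder URL from the question:

import time
import requests

URL = "https://LinkToMyServer/PerformSomeOperationAndGetXml"   # placeholder from the question

start = time.monotonic()
resp = requests.get(URL, timeout=30)
elapsed = time.monotonic() - start

# The diagnostics above show YQL giving up at roughly 4.5 seconds, so anything
# close to (or over) 5 seconds here is likely to come back as an empty <results/>.
print(f"HTTP {resp.status_code} in {elapsed:.2f}s")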

Is this normal behavior for Yahoo PlaceFinder API? Seems odd to me

So I have this lat/lng pair, 39.905983/116.459373. For a long time, the PlaceFinder API returned WOE ID 2151399 for it. Then it suddenly stopped, and started returning null (empty) instead.
I thought maybe the service was remembering that it had already handled this pair for my API key, so I switched to another key. Still a null WOE ID. That makes sense, because the service is still processing other lat/lng pairs that I have also queried excessively during development.
I changed the values sent to the PlaceFinder query to 39.9059830001/116.4593730001 (just added 0001 to the end of each), and it started returning the WOE ID again.
My question: What gives?
I tried the same query using PlaceFinder via the YQL Console:
select woeid from geo.placefinder where text="39.905983,116.459373" and gflags="R"
...and got the same WOEID result you mentioned:
<?xml version="1.0" encoding="UTF-8"?>
<query xmlns:yahoo="http://www.yahooapis.com/v1/base.rng"
yahoo:count="1" yahoo:created="2012-03-20T16:24:40Z" yahoo:lang="en-US">
<results>
<Result>
<woeid>2151399</woeid>
</Result>
</results>
</query>
I have not seen the behavior you mentioned, so I would not consider it normal. When the null value is returned, do you get a normal HTTP status code? You might be running into a rate limit.
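To check the rate-limit theory, it can help to hit the YQL endpoint directly and look at the HTTP status code rather than only the parsed result. A Python sketch, assuming the public YQL endpoint that existed at the time (YQL has since been retired):

import requests

# Public YQL endpoint as it existed at the time (assumption; YQL is now retired).
YQL_URL = "https://query.yahooapis.com/v1/public/yql"
query = ('select woeid from geo.placefinder '
         'where text="39.905983,116.459373" and gflags="R"')

resp = requests.get(YQL_URL, params={"q": query, "format": "json"})

# A throttling response (e.g. HTTP 403 or 999) here would point to a rate limit
# rather than a genuine empty geocoding result.
print(resp.status_code)
print(resp.text)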
