I am trying to get all the details of the CLASS entity, but I am getting an empty response, i.e. []. I am following this document to get the class details:
https://developer.intuit.com/docs/0025_quickbooksapi/0050_data_services/030_entity_services_reference/class
1) Does any sample Java code exist to get the CLASS details? At present I am using Mule ESB to get the class details. I have created a sample application in Intuit which comes with one month of free usage.
2) Can I get class details, and create, update, and query classes from the free version of the application?
You can download the Java devkit from here: https://developer.intuit.com/docs/0025_quickbooksapi/0055_devkits
https://developer.intuit.com/docs/0025_quickbooksapi/0055_devkits/0201_ipp_java_devkit_3.0/0001_synchronous_calls/0001_data_service_apis
The actual code (using the above devkit) will be something like:
this.service.getAll(new Class());
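For a bit more context, here is a rough, untested sketch of how that call might be wired up; the import paths and the exact getAll signature are assumptions to verify against the devkit version you download:

// untested sketch: verify package names and the getAll signature against your devkit version
import com.intuit.ipp.data.Class;           // the QuickBooks "Class" entity, not java.lang.Class
import com.intuit.ipp.services.DataService; // assumed data service, built with your OAuth context

public class ClassLister {

    private final DataService service;

    public ClassLister(DataService service) {
        this.service = service;
    }

    public void printAllClasses() throws Exception {
        // passing an empty entity asks the service for every record of that type
        for (Object qbClass : this.service.getAll(new Class())) {
            System.out.println(qbClass);
        }
    }
}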
2) Can I get class details, and create, update, and query classes from the free version of the application?
If your free version is QBO Plus, then you can.
I've not tried this in Simple Start (which most probably doesn't support class tracking) or Essentials.
Thanks
I'm new to Eiffel and I'm trying to create an instance of LINKED_LIST. I'm not really sure how to do this with this class, because I receive an error whenever I try it this way. This is what I have:
class
    APPLICATION

inherit
    ARGUMENTS

create
    make

feature {NONE} -- Initialization

    make
            --
        local
            lista: LINKED_LIST [MONOMIO]
        do
            lista.make
        end

end
And the error I'm getting is:
Error code: VUEX(2)
Error: feature of qualified call is not available to client class.
What to do: make sure feature after dot is exported to caller.
I hope somebody can help me with this, thanks.
Objects are created with a creation instruction, so in your example you need to add a keyword create in front of lista.make to indicate that this is not a plain feature call:
create lista.make
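With that change, the make feature from the question looks like this:

make
        -- Initialize the application.
    local
        lista: LINKED_LIST [MONOMIO]
    do
        create lista.make
    end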
Currently using neo4j-community-2.1.7
I understand that the ability to reset the transaction timeout has been included in this version.
Have been unable to find any reference to it in the ruby docs.
Would appreciate it very much if I may have some direction on how to reset the timeout using neo4jrb.
Regards
Ross
I am unaware of a way to reset the transaction timeout of an open transaction. Maybe someone more familiar with transactions in the Java API can clarify.
If you want to change the transaction timeout length at boot, that's handled in neo4j-server.properties as described at http://neo4j.com/docs/stable/server-configuration.html.
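For reference, that setting in neo4j-server.properties looks like this in the 2.x series (double-check the exact key against the page above):

# timeout in seconds for idle transactions in the transactional REST endpoint
org.neo4j.server.transaction.timeout=120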
Within Neo4j-core, if you're using Neo4j-community or Neo4j-enterprise (and therefore Neo4j Embedded), the code suggests that you can specify a config file by giving a third argument to Neo4j::Session.open: a hash that contains config options. That method, if given :embedded_db as its first argument, will call Neo4j::Embedded#initialize and pass that hash along. If you do something like this:
Neo4j::Session.open(:embedded_db, 'path_to_db', properties_file: 'path_and_filename_to_neo4j-server.properties')
It will eventually use that properties file:
db_service.loadPropertiesFromFile(properties_file) if properties_file
This is not demonstrated in any of the specs, unfortunately, but you can see it in the initialize and start methods at https://github.com/neo4jrb/neo4j-core/blob/230d69371ed6bf39297786155ef4f3b1831dac08/lib/neo4j-embedded/embedded_session.rb.
RE: COMMENT INFO
If you're using :server_db, you don't need to include the neo4j-community gem. It isn't loaded, and it isn't compatible with Neo4j in server mode anyway.
That's the first time I've seen the link you provided; good to know it's there. We don't expose a way to do that in Neo4j.rb, and we won't, because it would require some threading magic that we can't support. If you want to do it manually, the best I can tell you is that you can get the current transaction ID this way:
tx = Neo4j::Transaction.new
# do stuff and before your long-running query...
tx.resource_data[:commit].split('/')[-2]
That will return the transaction number, which you can use in a POST as described in their support doc.
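Neo4j.rb doesn't wrap this, but as a rough, untested sketch, the keep-alive POST could be sent with net/http like so, assuming a default server at localhost:7474:

require 'net/http'
require 'json'
require 'uri'

# second-to-last segment of the commit URL is the transaction number
tx_id = tx.resource_data[:commit].split('/')[-2]

# POSTing an empty statement list to the open transaction resets its timeout
uri = URI("http://localhost:7474/db/data/transaction/#{tx_id}")
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json',
                                   'Accept'       => 'application/json')
request.body = { statements: [] }.to_json

response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts response.body  # the reply includes a fresh "expires" timestamp for the transaction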
If you'd like help troubleshooting your long-running Cypher query, I'm sure people on SO will help.
TMDB.org recently made a change to their API which removes the capability to browse their database.
My Rails app used to use the tmdb-ruby gem to browse the TMDB database, but this gem only worked with v2.0 of the API, which is now defunct.
TMDB.org recommends using this gem, and since it is forked from the gem I previously used, it makes it a bit easier.
My PostgreSQL database is already populated with data imported from TMDB when v2.0 was still extant and when I could use the browse feature.
How can I now use the find feature (i.e. @movie = TmdbMovie.find(:title => "Iron Man", :limit => 1)) to find a random movie, without supplying the title of the movie?
This is my rake file which worked with the older gem.
I would like to know how to have it work the same way, but using find instead of browse.
Thanks
I don't think find is what you need in order to get what you want (getting the oldest movies in the database and working up to the newest). Looking at the TMDb API documentation, it looks like they now have discover, which may have replaced the browse feature you used to use.
I don't see discover anywhere in Irio's ruby-tmdb fork, but it looks like most of the specific methods they have (like TmdbMovie.find) call a generic method Tmdb.api_call.
You should be able to use the generic method to do something like:
api_return = Tmdb.api_call(
  "discover/movie",
  {
    page: 1,
    sort_by: 'release_date.asc',
    query: '' # necessary because Tmdb.api_call throws a nil error if you don't specify a query param value
  },
  "en"
)
results = api_return["results"]
results.flatten!(1)
results.uniq!
results.delete_if(&:nil?)
results.map! { |m| TmdbMovie.new(m, true) } # `true` tells TmdbMovie.new to expand results
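From there, grabbing a random movie is just a matter of sampling the array; the title accessor below is an assumption about ruby-tmdb's TmdbMovie wrapper, so inspect an object if it doesn't respond to it:

# pick one movie at random from that page of results
movie = results.sample
puts movie.title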
If this works, you could even fork Irio's fork, implement a TmdbMovie.discover method supporting all the options and handling edge cases like TmdbMovie.find does, and send them a pull request. It looks like they just haven't gotten around to implementing this yet, and I'm sure other people would like to have this method as well :)
I installed Sija's fork of garb and am having some issues. The documentation appears to be a bit outdated as some things have been deprecated.
I have the following code (ignore the fact that it's horribly insecure):
extend Garb::Model
metrics :pageviews
dimensions :page_path
Garb::Session.login('XXXXXX#gmail.com', 'mypassword')
profile = Garb::Management::Profile.all.detect { |p| p.web_property_id == 'UA-XXXXX-1' }
puts profile.visits
When I run this, I get undefined method 'visits'. I also tried the code from this Stack Overflow post, and it returned undefined method 'results'. I'm guessing these are due to the new GA Management API v3 changes, but does anyone know the new way to access pageviews/visits?
Ultimately, I'm trying to query pageviews by date.
Thanks for any help!
You need to create a class extending Garb::Model (https://github.com/Sija/garb#define-a-report-class). By the way, the documentation has been updated to work with the newest version of the gem.
Here is an example:
class Report
  extend Garb::Model
  metrics :pageviews
  dimensions :pagePath
end
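Then, since you ultimately want to query pageviews by date, you can run the report against the profile you already look up in your script. The option and accessor names below follow garb's README; double-check them against Sija's fork:

require 'date'

results = Report.results(profile, :start_date => Date.today - 30,
                                  :end_date   => Date.today)

results.each do |row|
  # each row exposes the declared metrics/dimensions, e.g. row.pageviews
  puts row.pageviews
end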
Edit: Thanks for the edit! That was my first ever post :)
I want to integrate two JIRA instances through email; one uses JIRA 4.2.1 and the other uses 4.3.3.
Each instance has its own custom fields, and the two JIRA instances have to interchange issue details and issue updates through email, i.e. both have to stay in sync.
For example:
1) If an issue is created in Instance 1, a mail will be triggered, and using that email, Instance 2 will create the issue there.
2) Also, if there is an update to an issue in Instance 1, a mail will be triggered to Instance 2, which will update the same issue in Instance 2.
Hope that clears it up!
If I got your intentions right, I believe there is an easier way to do this using the JIRA remote API. For example, you could easily write a Python script, using the XML-RPC library, that compares the two systems and updates them as needed.
The problem with the email method you suggested is that you could easily create an endless loop of issue creation...
First, create a custom field in both instances and call it something like "Sync". This will be used to mark issues once they have been synced.
Next, enable the RPC plugin.
Finally, write a script that will copy the issues via RPC. For example:
#!/usr/bin/python
# Sample Python client accessing JIRA via XML-RPC. Methods requiring
# more than basic user-level access are commented out.
#
# Refer to the XML-RPC Javadoc to see what calls are available:
# http://docs.atlassian.com/software/jira/docs/api/rpc-jira-plugin/latest/com/atlassian/jira/rpc/xmlrpc/XmlRpcService.html
import xmlrpclib

s1 = xmlrpclib.ServerProxy('http://your.first.jira.url/rpc/xmlrpc')
auth1 = s1.jira1.login('user', 'password')

s2 = xmlrpclib.ServerProxy('http://your.second.jira.url/rpc/xmlrpc')
auth2 = s2.jira1.login('user', 'password')

# go through all issues that appear in the next filter
filter = "10200"
issues = s1.jira1.getIssuesFromFilter(auth1, filter)

for issue in issues:
    # skip issues that already carry the "Sync" custom field value
    already_synced = False
    for custom_field in issue['customFieldValues']:
        if custom_field['customfieldId'] == 'customfield_10412':  # sync custom field
            already_synced = True
            break
    if already_synced:
        continue

    # no sync marker, so copy the issue to the second instance
    newissue = s2.jira1.createIssue(auth2, {
        "project": issue['project'],
        "type": issue['type'],
        "summary": issue['summary'],
        "description": issue['description']
    })
    print "Created %s/browse/%s" % (s2.jira1.getServerInfo(auth2)['baseUrl'], newissue['key'])

    # mark the source issue on instance 1 as synced
    s1.jira1.updateIssue(auth1, issue['key'], {"customfield_10412": ["yes"]})
The script wasn't tested but should work. You'll probably need to copy the rest of the fields you have; check out this link for more info. Also, this is just a one-way sync; you'll have to sync it the other way around as well.