I am working on an application that will automatically insert records into my client's QuickBooks Online Edition database. I have the XML format, but now I need to do some testing to make sure the inserts go into the QBOE database correctly. Is there a QuickBooks Online Edition test environment?
Getting onto the Intuit test environment requires submitting an application for review to become a premium IDN developer (far too much effort considering I only need a couple of days of testing, and more expensive than the solution below).
To be able to test a small application for QBOE, I just created a new QuickBooks Online Edition account/company and signed up for the 30-day free trial. 30 days is more than enough for me to complete my testing... and even if I need more time, the basic version is $9.95/month, which is completely acceptable in my mind if I want to keep the account around for future testing or contract work.
See this IDN thread for a few more details.
I made a piece of software in Python and then converted it to an exe application using cx_Freeze. Now I'm trying to make an MSI installer for my application using Advanced Installer. Can I use Advanced Installer's trial time limitation on my application? And is there any alternative that can do this?
First, make sure that you have an Enterprise or higher license. Then follow the instructions in this image (you may want to adjust the last few steps), clicking every specified button and correctly setting every specified field:
The Display Name and other fields will be automatically filled out. Feel free to customize further, but this should just work.
I have found it easier to provide this functionality in the application itself (especially if you have a company web site that can provide web service calls). One reason is that it seems fairer to start the clock at first use of the app, not at install time. If you are worried about users hacking around the time trial, it's also more secure to make a web service call to your company's web site. So if you want to build your own, this is the general idea:
The best solution is when the install medium has its own unique CD key or license. The app passes that to your company's web service and says "this is the first run of the app". That starts a clock, kept on your company's server, for this copy of the app on this particular customer's system. Later runs of the app call in to see whether the clock has run out.
If there is no license or CD key, another way is to generate a signature from the software and hardware of the machine and pass that to the web service.
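As a rough illustration, a machine signature can just be a hash over a few identifiers that rarely change. A minimal Python sketch (the choice of fields is an assumption; pick whatever is stable enough on your target platform):

```python
import hashlib
import platform
import uuid

def machine_signature() -> str:
    """Build a fingerprint from a few hardware/OS identifiers."""
    parts = [
        platform.node(),              # hostname
        platform.system(),            # OS name
        platform.machine(),           # CPU architecture
        format(uuid.getnode(), "x"),  # primary MAC address as hex
    ]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
```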
Cache the estimated expiry date somewhere as a fallback.
When the app starts, pass the license (or machine signature) to the company web service to see whether the trial has expired. If the web service isn't reachable, use the cached expiry date instead. If neither is available, someone has probably tried to hack it, so don't let the app run.
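A minimal sketch of that startup check in Python, assuming a hypothetical https://example.com/api/trial/check endpoint, a JSON response with active/expires fields, and the third-party requests library (all of those are illustrative, not anyone's real API):

```python
import datetime
import json
import os

import requests  # assumed available; any HTTP client would do

CACHE_FILE = os.path.expanduser("~/.myapp_trial_cache.json")
CHECK_URL = "https://example.com/api/trial/check"  # hypothetical endpoint

def trial_is_active(license_key: str) -> bool:
    """Ask the vendor's web service whether the trial clock has run out,
    falling back to a cached expiry date, and refusing to run otherwise."""
    try:
        resp = requests.post(CHECK_URL, json={"license": license_key}, timeout=5)
        resp.raise_for_status()
        data = resp.json()
        # Cache the server's expiry date for offline fallback.
        with open(CACHE_FILE, "w") as f:
            json.dump({"expires": data["expires"]}, f)
        return data["active"]
    except requests.RequestException:
        pass  # server unreachable: fall back to the cached date

    try:
        with open(CACHE_FILE) as f:
            expires = datetime.date.fromisoformat(json.load(f)["expires"])
        return datetime.date.today() <= expires
    except (OSError, KeyError, ValueError):
        # No server answer and no usable cache: treat as expired/tampered.
        return False
```

The order of the fallbacks is the whole point: live answer first, cached expiry second, refusal last.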
If the customer tries to install the product on another system, the license/CD key from the install medium has already been used, so the app won't work when installed. That's why a unique key per install is useful: it prevents that license from being used anywhere else (typically until the customer pays and the company database says they are OK to use that license).
If the customer uninstalls and tries to reinstall on the same machine, that hardware signature has been seen before too, so if there is no unique license key the machine signature can be used to detect re-use. There isn't a way to stop it running on another system if the customer has more than one, and again that's why a unique license id per install medium works best.
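On the server side the check is just a lookup of the activation record for that key. A sketch, with an in-memory dict standing in for the company database and a 30-day trial assumed:

```python
import datetime

TRIAL_DAYS = 30
activations = {}  # license_key -> {"signature": ..., "first_run": ...}

def check_trial(license_key: str, signature: str) -> dict:
    """Start the clock on first contact; refuse the same key from another machine."""
    record = activations.setdefault(
        license_key,
        {"signature": signature, "first_run": datetime.date.today()},
    )
    expires = record["first_run"] + datetime.timedelta(days=TRIAL_DAYS)
    if record["signature"] != signature:
        # Same license key seen from a different machine: deny.
        return {"active": False, "expires": expires.isoformat()}
    return {"active": datetime.date.today() <= expires,
            "expires": expires.isoformat()}
```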
In the end the issue is how much you care and how much you want to stop the customer running the app. In many cases a severe nagging message at start up and other points while the app runs can be enough.
Our company has started developing its own systems in-house. We already have a couple of developers who will be responsible for writing code in Ruby/RoR.
We are currently discussing infrastructure and I would like to ask: should we develop everything on local machines, then push it to a test server and later to production, or develop everything on a development/test server, then publish it for testing and later to production?
Just an update to the description above: by "local machines" I mean developers' desktops, and the test/development server is a machine in our office.
It's a valid question, and as such there's a trade-off to consider.
Generally: work locally. Web app development has a natural flow that leads developers to save and refresh browsers many times an hour. All the time you save on network latency will add up, and it will be less frustrating for the developers.
There are downsides to working locally, however: you'll need to make sure that your set-up is EXACTLY as it will be on the testing/production servers. That means everything down to your kernel version, Apache version and Ruby/Rails version. DNS is easy, but again it must mimic the live situation perfectly in order for AJAX calls etc. to work seamlessly.
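One low-tech way to verify that is to dump the relevant version strings on each box and diff the output. A small sketch (the exact commands are assumptions; swap in whatever your stack uses):

```python
import subprocess

# Commands assumed to exist on both the workstation and the servers;
# adjust for your stack (e.g. nginx instead of apache2).
CHECKS = {
    "kernel": ["uname", "-r"],
    "ruby": ["ruby", "-v"],
    "rails": ["rails", "-v"],
    "apache": ["apache2", "-v"],
}

def environment_report() -> dict:
    """Collect version strings so dev and production boxes can be compared."""
    report = {}
    for name, cmd in CHECKS.items():
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            report[name] = (out.stdout or out.stderr).strip()
        except (OSError, subprocess.CalledProcessError):
            report[name] = "not found"
    return report

if __name__ == "__main__":
    for name, version in environment_report().items():
        print(f"{name}: {version}")
```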
Even if you ensure all of the above, you will likely have to make a few minor changes when you move the app to a live server, there just always seems to be something in my experience.
Also, running on a live server isn't SO painful for a developer. Saving a source file from a text editor/IDE via FTP should take less than a second even over the internet, and refreshing a remote browser session will give your UI designers a better feel for the real user experience and flow. If you use SVN rather than FTP much the same applies.
Security isn't much of a concern: lock down FTP and SSH to the office IP, but keep a backdoor available in case a developer needs to edit source from somewhere else, so they can temporarily open the firewall to their own IP.
I have developed PHP and Rails apps on a remote test server, on an in-office server and on a local machine. After many years of doing each, I can say that as a developer I don't mind any of them much.
As a developer, my suggestion is to first do all development work locally. After testing, send it to the client to make it live.
I'm working as a Ruby on Rails web developer at andolasoft.com, and we follow the same procedure. Hope you get the idea.
Thanks
Our company has a small development team in-house but we mostly outsource our customer projects to external consulting firms which we don't manage directly. We only interact with their project manager and maybe a team lead.
I'm implementing TFS 2010 and Scrum for our internal team for project management, version control and SharePoint shared document access.
My problem is how to manage the external teams.
They won't use our TFS for version control, and I can't force them to use Scrum and report as such (reporting at the task level and updating remaining hours).
The solution I came up with is this:
Use the “MSF for Agile Software Development v5.0” template in Team Foundation Server.
Break the project into user stories and then create a task for each.
The tasks have these fields:
Original Estimate: since we'll track percentage of completion, this will always be 100.
Remaining: the percentage of work remaining.
Completed: the percentage of work completed.
Their team lead will update the remaining work percentage for each user story (at the task level).
If progress is reported correctly I can print a "Stories Overview" report periodically and see the percentage complete for each user story.
I'm sure there must be a better way out there, and I'd appreciate any help pointing me in the right direction.
Thanks
We are basically doing the same thing ... I have 10 in-house developers and teams around the world working on their projects. Most of the work we do overlaps between internal and external. We are using TFS 2010. We break a piece of development into user stories, then into lots of tasks and eventually bugs. We view the status of the external projects by looking at the breakdown of work on the individual work items.
Part of the development process flow is to get the code into TFS source control, and control of the logs changes as the work comes back into our system.
The external PMs then use the web interface's spreadsheet upload to update the data on these logs (including the time spent / work remaining) so we can see the state of the work. You don't need a code upload to set a work item to test / complete.
The process flow we have for the external work is as follows; on a given user story item you can then see the state of development for all of its tasks.
To Spec
Specified
Spec Agreed
Open For Work
WIP
Development Complete
External Test
Source Added to TFS
Delivered to Internal Test
Internal Test
Complete
We're attempting to generate payments in an Agresso 5.5 system. The mechanism we've been told to use is to write new payment data into table acrbatchinput where it will be picked up and processed by a regular job running in agrbibat.dll. We have code that worked on a previous version of Agresso but following the upgrade our payments get rejected by the agrbibat job. Sometimes it generates useful messages in the log, sometimes it doesn't, and working through failures without good information is becoming a bit of a slog.
Is there some documentation we're missing? In particular it would be useful to have a full list of the validation rules the job uses so we can implement these ourselves rather than trying to infer them from the log. I can't find any - there's not a lot for acrbatchinput on Google. Does this list or some other documentation exist? Is agrbibat something easily decompilable, e.g. .NET?
Thanks. The test system we have is running against Oracle on Solaris with the Agresso jobs hosted on Windows. We have limited access to the Oracle and Agresso systems because (I think!) the same Oracle server is hosting the live payment system, but I could probably talk finance into giving us agrbibat.dll if that might help. We're unlikely to get enough access to their servers to debug it in place.
It turns out that our problem is partly because the new test system we've been given access to wasn't set up correctly, so we might be able to progress this without extra information - we're waiting on the financial team here for input.
However we're still interested in acrbatchinput or agrbibat documentation or information. You've missed the bounty I set, but ticks, votes and gratitude are still available.
I know this is an ancient question, but here's my response anyway for anyone else who finds it.
The only documentation is the usual Agresso help files from within the desktop client. Meaningful information is only gleaned through trial and error, however!
The required fields differ depending on whether a given record is a GL, AP/AR or tax transaction. (That much is, at least, explained in the help).
In addition to using the log file, it's often helpful to look at GL07's report output for errors.
I am curious how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is stuck halfway between some custom online-forms functionality and paper-based processes for submitting documents and gathering approvals and reviews.
All this is left in the project manager's hands: tracking what has been submitted, passed review and been approved, and advising management of any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find one that's good via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. Team Track is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together and Cruise Control for build management. This is less expensive, totaling a few thousand for an enterprise license, and provides all the same features but with the added bonus of SVN, which is a very nice code version manager.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: There is usually a code freeze date that stops work on new features; the code that has been tagged/labelled/archived is then built and the test environment gets it. This is copied onto the machines and the tests are run. This is also usually the least detailed of any push.
Test -> Prod: This requires the minor complication that production has to go down, which can mean that a "gone fishing" page goes up or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch so that the promotion happens and none of the customers experience any downtime, as the ones on the older server move over once their session ends.
To elaborate on that switch idea, the set-up is to have two potentially live servers with just one taking requests: the load balancer sends all traffic to one machine, and the switch flips when the other server has the updated code and is ready to go live.
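A sketch of that switch in Python, with hypothetical server addresses, an rsync copy and a /health URL standing in for your real deployment and load-balancer steps:

```python
import subprocess
import urllib.request

SERVERS = {"blue": "10.0.0.1", "green": "10.0.0.2"}  # hypothetical addresses
live = "blue"  # which server the load balancer currently points at

def switch_load_balancer_to(address: str) -> None:
    """Placeholder: call your load balancer's API or rewrite its config here."""
    print(f"load balancer now sending all traffic to {address}")

def deploy_and_switch(build_dir: str) -> None:
    """Deploy to the idle server, smoke-test it, then flip the switch."""
    global live
    idle = "green" if live == "blue" else "blue"

    # Copy the new code to the idle box (rsync is just one option).
    subprocess.run(
        ["rsync", "-a", build_dir, f"deploy@{SERVERS[idle]}:/var/www/app/"],
        check=True,
    )

    # Smoke-test the idle server before sending any customers to it.
    with urllib.request.urlopen(f"http://{SERVERS[idle]}/health", timeout=10) as resp:
        if resp.status != 200:
            raise RuntimeError("new build failed its health check; not switching")

    switch_load_balancer_to(SERVERS[idle])
    live = idle
```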
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used. Generally there are schedules, and timelines may have to be changed or additional resources brought in to make a hard date, for example if a conference on a particular weekend requires that such and such be ready for it.
Of course, in a few places there has been the "Oh, was that broken? Let me see..." and a few minutes later the "No, see, it isn't broken for me," where someone changed things without asking permission or anything; some companies still have what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files; not really complicated, but not exactly trivial either.
3) Medium - Where going from one environment to another requires changing a bunch of files, usually with scripts to handle the move.
4) Big - Where there are scheduled promotions and various developers are asked who is taking which shift when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new, including how it will be used. I don't think I've ever seen one this big, but I'd imagine Microsoft or Google have releases of this size.
Most releases fall somewhere in that spectrum, so how much planning and preparation they need can vary quite a bit, and let's not forget that regulatory compliance can be its own pain in getting some things done.