I have a client running QuickBooks along with QuickBooks POS. We had to condense our QuickBooks file, and now we can't pull up transactions prior to the condensing. This makes sense to me to an extent, but is there a way they could still pull up old POS transactions from at least a decade ago without bloating the QuickBooks file?
It wasn't terribly intuitive for the client to manually switch back and forth between an old and a new POS file, and it wasn't always obvious which file they were in (so the potential for accidentally recording a sale against the old file was high).
So my solution was to pull the information from an old copy with an ODBC connection tool (there are plenty out there on the web). I was then able to export that to Excel and organize it in a readable/searchable manner for the client.
FYI, QuickBooks wasn't helpful on this matter.
We need to protect customer data, and we are using FirebirdSQL 2.5(.8) with Delphi 7.
It is also essential to keep regular backups on a "secondary" PC, or on pen drives, in case the "master" fails.
For that we used this method: calling Gbak.exe and 7z.exe via stdin/stdout.
We realized that was a bad idea, because the parameters (including the password) passed on the command line are easy to see while the process is running, even with a simple Task Manager.
Is there a more secure way to do it?
(Using standard InterBase components OR UIB)
Upgrade to Firebird 3, which added a Database Encryption capability. If you don't want to, or cannot, I believe you could run the GBAK tool from your application with the STDOUT option, but instead of piping it to 7-Zip for compression you would read that output in your application and encrypt the incoming stream on the fly with some encryption library.
I believe you can find many examples of how to run an application and read its standard output (here is something related to start with), so the rest comes down to finding a way to encrypt a stream on the fly, or simply capturing STDOUT into one stream and encrypting it into another.
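To make that concrete, here is a minimal sketch of the "read GBAK's STDOUT and encrypt it on the fly" idea. Python and the cryptography package are used purely for illustration; the same structure applies in Delphi with CreateProcess/pipe redirection and any stream-cipher library. The database name, file names and key handling are made-up placeholders, and credentials are passed via the ISC_USER/ISC_PASSWORD environment variables so they never appear on the command line.

    import os
    import subprocess
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)        # placeholder: derive and store this securely in practice
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

    # Credentials go into the environment, not onto the command line.
    env = dict(os.environ, ISC_USER="SYSDBA", ISC_PASSWORD="masterkey")

    # "stdout" as the backup file name makes gbak stream the backup to us.
    gbak = subprocess.Popen(["gbak", "-b", "employee.fdb", "stdout"],
                            stdout=subprocess.PIPE, env=env)

    with open("employee.fbk.enc", "wb") as out:
        out.write(nonce)                                   # keep the nonce with the ciphertext
        for chunk in iter(lambda: gbak.stdout.read(64 * 1024), b""):
            out.write(encryptor.update(chunk))             # encrypt on the fly, block by block
        out.write(encryptor.finalize())
    gbak.wait()

Note that AES-CTR alone gives confidentiality but no integrity check; for real use you would add a MAC or use an authenticated mode such as AES-GCM, and you could still pipe the stream through a compressor before encrypting (compressing after encryption gains nothing).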
The Firebird developers on the SQL.ru forum say that it is actually possible to use the Services API to get the backup stream remotely.
That does not mean that IBX, UIB or any other library readily supports it, though. Maybe it does, maybe it doesn't.
They suggested reading the Release Notes for Firebird 2.5.2, or Part 4 of the doc\README.services_extension.txt file in a Firebird 2.5.2+ installation.
Below is a small excerpt from the latter:
The simplest way to use this feature is fbsvcmgr. To backup database
run approximately the following:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_backup -dbname some.fdb -bkp_file stdout >some.fbk
and to restore it:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_restore -dbname some.fdb -bkp_file stdin <some.fbk
Please notice - you can't use "verbose" switch when performing backup
because data channel from server to client is used to deliver blocks
of fbk files. You will get appropriate error message if you try to do
it. When restoring database verbose mode may be used without
limitations.
If you want to perform backup/restore from your own program, you
should use services API for it. Backup is very simple - just pass
"stdout" as backup file name to server and use isc_info_svc_to_eof in
isc_service_query() call. Data, returned by repeating calls to
isc_service_query() (certainly with isc_info_svc_to_eof tag) is a
stream, representing image of backup file.
Restore is a bit more tricky. Client sends new spb parameter
isc_info_svc_stdin to server in
isc_service_query(). If service needs some data in stdin, it returns
isc_info_svc_stdin in query results, followed by 4-bytes value -
number of bytes server is ready to accept from client. (0 value means
no more data is needed right now.) The main trick is that client
should NOT send more data than requested by server - this causes an
error "Size of data is more than requested". The data is sent in next
isc_service_query() call in the send_items block, using
isc_info_svc_line tag in traditional form: isc_info_svc_line, 2 bytes
length, data. When the server needs next portion, it once more returns
non-zero isc_info_svc_stdin value from isc_service_query().
A sample of how services API should be used for remote backup and
restore can be found in source code of fbsvcmgr.
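For what it's worth, the newer Python driver for Firebird (firebird-driver) wraps exactly this Services API mechanism, so a client-side streamed backup can look roughly like the sketch below. Treat the host, credentials, paths and the local_backup method name/signature as assumptions to verify against the driver documentation for the version you use; a Delphi library would need equivalent Services API support.

    from firebird.driver import connect_server

    # Attach to the remote service manager, as fbsvcmgr does.
    srv = connect_server("remotehost", user="sysdba", password="XXX")
    try:
        with open("some.fbk", "wb") as fbk:
            # Streams the backup over the wire: the server writes to "stdout"
            # and the client drains it with isc_info_svc_to_eof, as described above.
            srv.database.local_backup(database="some.fdb", backup_stream=fbk)
    finally:
        srv.close()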
One of the features of Erlang (and, by extension, Elixir) is that you can do hot code swapping. However, this seems to be at odds with Docker, where you would need to stop your instances and start new ones from new images holding the new code. This essentially seems to be what everyone does.
That being said, I also know that it is possible to use one hidden node to distribute updates to all the other nodes over the network. Of course, put like that it sounds like asking for trouble, but...
My question is the following: has anyone tried, with reasonable success, to set up a Docker-based infrastructure for Erlang/Elixir that allows hot code swapping? If so, what are the do's, don'ts and caveats?
The story
Imagine a system that handles mobile phone calls or mobile data access (that's what Erlang was created for). There are gateway servers that maintain the user session for the duration of the call or the data access session (I will just call it the session going forward). Those servers keep an in-memory representation of the session for as long as the session is active (the user is connected).
Now there is another system that calculates how much to charge the user for the call or the data transferred (call it PDF, the Policy Decision Function). The two systems are connected in such a way that the gateway server creates a handful of TCP connections to the PDF and drops user sessions if those TCP connections go down. The gateway can handle a few hundred thousand customers at a time. Whenever there is an event that the user needs to be charged for (the next data transfer, another minute of the call) the gateway notifies the PDF about it and the PDF subtracts a specific amount of money from the user's account. When the account is empty the PDF tells the gateway to disconnect the call (you've run out of money, you need to top up).
Your question
Finally, let's talk about your question in this context. We want to upgrade a PDF node, and the node is running on Docker. We create a new Docker instance with the new version of the software, but we can't shut down the old version (there are hundreds of thousands of customers in the middle of their calls; we can't disconnect them). But we need to move the customers somehow from the old PDF to the new version. So we tell the gateway node to create any new connections to the updated node instead of the old PDF. Customers can be chatty, and some of them may have long-running data connections (downloading a Windows 10 ISO), so the whole operation takes 2-3 days to complete. That's how long it can take to move from one version of the software to another in the case of a critical bug. And there may be dozens of servers like this one, each handling hundreds of thousands of customers.
But what if we used the Erlang release handler instead? We create the relup file with the new version of the software, test it properly and deploy it to the PDF nodes. Each node is upgraded in place: the internal state of the application is converted and the node ends up running the new version of the software. Most importantly, the TCP connection with the gateway server has not been dropped, so customers happily continue their calls, or keep downloading the latest Windows ISO, while we are upgrading the system. The whole thing is done in 10 seconds rather than 2-3 days.
The answer
This is an example of a specific system with specific requirements. Docker and Erlang's Release Handling are orthogonal technologies. You can use either or both, it all boils down to the following:
Requirements
Cost
Will you have enough resources to test both approaches predictably and enough patience to teach your Ops team so that they can deploy the system using either method? What if the testing facility cost millions of pounds (because of the required hardware) and can use only one of those two methods at a time (because the test cycle takes days)?
The pragmatic approach might be to deploy the nodes initially using Docker and then upgrade them with the Erlang release handler (if you need to use Docker in the first place). Or, if your system doesn't need to stay available during the upgrade (unlike the example PDF system, which does), you might simply opt for always deploying new versions with Docker and forget about release handling. Or you may as well stick with the release handler and forget about Docker if you need quick and reliable on-the-fly updates, with Docker used only for the initial deployment. I hope that helps.
Does anybody have real experience with Firebird databases over the internet?
I have a typical Windows accounting/ERP application (done with Delphi) that works with the Firebird database server pretty well. Now my users (approx. 300 now, but the number should grow) also want to work "in the cloud" (connecting from the office, from the laptop, from home, etc.). It is a lot of work to recreate everything as a standard web application (say, HTML+CSS+JS+PHP+MySQL), so I'm considering keeping the Windows client (I don't care about other OSes) but, instead of the server living on the clients' LANs, moving it to a pair of dedicated servers that I will rent (one primary and one secondary against failures, for a start).
While searching I've come across this FAQ http://www.firebirdfaq.org/faq53/ which explains that the Firebird protocol isn't ideal for working over the internet, but all my users today have at least a 1 Mbit/s ADSL connection (which I don't think is as slow as the FAQ suggests).
Has anybody done this? What was the experience? How safe is it to expose Firebird servers to the internet? How well do they scale?
I know that building a "middleware" layer, with SOAP for example, would be more conventional, but the solution I'm evaluating here is much faster and easier (I still have some work to do on replication, backup and heartbeat services, but it's much less than redoing everything for the web).
Thanks! Edit: FB version: 2.5.
I have been trying to "push" the Firebird core developers to improve the Firebird protocol to get better speed over high-latency networks (a.k.a. the internet). Recently, Dmitry Yemanov published some articles on this subject in his blog (dyemanov.blogspot.com). It seems there is room for optimization, and I would really like to see this coming in FB 2.5.3 and FB 3.0, although there is no guarantee of it happening in those versions or anytime soon. You can vote for this improvement here: http://tracker.firebirdsql.org/browse/CORE-2530
Safety? You may try to set up a VPN. It may also help with speed, since most of the VPN software out there (Zebedee, etc.) can compress the data being transferred, which speeds up the transfer in some cases.
Some of my customers do use traditional Firebird client/server over the internet. It is much slower than on a local network, and how much slower depends basically on the link speed and latency. You can do some optimization at the client side too, using metadata caching, etc., but don't expect miracles from the current protocol. I would say that for all-day work, using Terminal Services would be a better option for now.
Regarding the scaling question:
Firebird runs in production on big-iron servers: 512 GB of RAM, 100,000 concurrent users.
We run Firebird to power larger systems (for 12 government agencies and 3 banks). It has approximately 100,000 end users multiplexed through 2,500 (max) pooled connections.
https://plus.google.com/111558763769231855886/posts/Q1ACy1yyTgP
The protocol in Firebird 2.5 is improved; there is still room left for 3.0, but you can check what has already been done:
http://asfernandes.blogspot.com/2009/07/network-latency-influence-on-firebird.html
And the future enhancements in 3.0
http://www.firebirdnews.org/?p=6953
To protect your connection, I guess the best bet is an SSL/SSH tunnel (it can be OpenVPN) with the high-compression option:
http://mapopa.blogspot.com/2010/11/securing-firebird-using-ssh-tunnel.html
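For example, assuming the database server also runs an SSH daemon and Firebird listens on its default port 3050, a client can open a compressed, encrypted tunnel with something like
ssh -C -N -L 3050:localhost:3050 user@db.example.com
and then point the application at localhost:3050 instead of the remote host (the host name and account are placeholders).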
The FB protocol's problem isn't bandwidth but latency. In my experience, some operations can be very slow over the internet/VPN compared to a LAN or local connection. I haven't examined the issue further, since I don't really run applications over internet connections.
However, I suggest a three-tier model for the application: create your own application server, which runs on the database server (or the same network), and let the clients talk to that application server; that way you get maximum performance.
There are some N-tier application/middleware frameworks for Delphi:
RemObjects SDK and DataAbstract
RealThinClient
kbmMW
Delphi's own DataSnap
MidWare
With those you can get data compression, encryption, binary messages (faster than SOAP) etc.
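To make the three-tier idea concrete, here is a minimal sketch of an application server that exposes a compact HTTP/JSON API to the clients and is the only process that talks to Firebird (over the fast local network). Python with Flask and the fdb driver is used purely for illustration; a Delphi application server built with any of the frameworks above plays the same role and adds compression/encryption for you. The DSN, credentials, table and query are made-up placeholders.

    import fdb                      # Python Firebird driver, used here only for illustration
    from flask import Flask, jsonify

    app = Flask(__name__)

    def db():
        # The application server sits next to the database, so this connection is fast.
        return fdb.connect(dsn="localhost:/data/erp.fdb",
                           user="SYSDBA", password="masterkey")

    @app.route("/customers/<int:cust_id>/balance")
    def balance(cust_id):
        con = db()
        try:
            cur = con.cursor()
            cur.execute("select balance from customers where id = ?", (cust_id,))
            row = cur.fetchone()
            return jsonify({"id": cust_id,
                            "balance": float(row[0]) if row else None})
        finally:
            con.close()

    if __name__ == "__main__":
        # Remote clients talk to this API over the internet instead of to port 3050.
        app.run(host="0.0.0.0", port=8080)

The win is that each client request becomes one small round trip to the application server, instead of the many protocol round trips a fat client makes directly against the database.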
You can implement TCP/IP packet encryption/decryption directly in the Firebird engine itself.
Personally, I downloaded the Firebird 2.5 source code and injected secure tunnelling code directly into its low-level communication layer (the INET socket layer). Now encryption/decryption is done directly by the Firebird engine for each TCP/IP packet, on both the server side and the client side (fbclient.dll).
There is then no need to restructure the client application, apart from adding one line of code that provides fbclient.dll with the secret key you chose to encrypt the communication. The same secret key must be declared in the firebird.conf file of your server installation.
I have also implemented a proxy negotiation solution in fbclient.dll, in order to allow the TCP/IP packets to pass through any proxy server (Microsoft ISA Server, for example).
For us, this architecture has been working for more than a year in a real production system.
kbmMW CodeGear Edition is free but without source. It can be used for commercial apps.
Download it after registering at: https://portal.components4developers.com
In case you see certificate errors (you shouldn't, but we have heard that some people do), accept and ignore them. The site is valid despite the certificate error.
kbmMW CodeGear Edition contains a subset of kbmMW Professional Edition, but supports the following Delphi database API's:
Borland Database Engine
DBExpress
kbmMemTable
SQLite3
It supports binary, binary over HTTP, XML and SOAP protocols in communication with clients.
It contains everything you need incl.
unified remote custom method invocation
unified remote dataset query, execute and data change resolving
unified database meta data handling and creation (tables, fields, indexes, generators/sequencers)
optional automatic proxying of requests to another server and proxying results back to original requester
full native XML DOM and SAX support
full dataset briefcase support as CSV, or binary data
advanced but simple to use wizard for creating new application server services
There is one caveat, though: the newest version of kbmMW CodeGear Edition always supports only the newest Delphi version. You can still download older kbmMW CodeGear Editions matching older Delphi releases.
kbmMW Professional Edition and kbmMW Enterprise Edition do not have that limitation, and currently support D7, D2006, D2007, D2010, DXE and DXE2, along with their Embarcadero C++ counterparts.
best regards
Kim Madsen
www.components4developers.com
I don't know if it sounds crazy, but here's the scenario -
I need to print a document over the internet. My PC (ClientX) initiates the process, using the web browser to access ServerY on the internet, and the printer is connected to ClientZ (which may be yours).
1. The document is stored on ServerY.
2. ClientZ is purely a client; no IIS, no print server, etc.
3. I have the specific details of ClientZ, IP, Port, etc.
4. It'll be a completely server-side application (no client-side piece on ClientZ) built with ASP.NET & C#.
So, is it possible? If yes, please give me some clue. Thanks in advance.
This is kind of too big a question for SO, but basically what you need to do is:
upload files to the server -- trivial
do some stuff to figure out if they are allowed to print the document -- trivial to hard depending on scope
add items to a queue for printing and associate them with a user/session -- easy
render and print the document -- trivial to hard depending on scope
notify the user that the document has been printed
handle errors
The big unknown here is scope: if this is for a school project you probably don't have to worry about billing or queue priority in step 2. If it's for a commercial product, billing can be a significant subsystem in itself.
The difficulty of step 4 depends directly on which formats you are going to support, as many formats will require document-specific libraries or applications. There are also security considerations here if this is a commercial product, since it isn't safe to try to render all types of files.
Notifications can be easy or hard depending on how you want to do them. You can post back to the HTML page, but depending on how long it's going to take for a job to complete, it might be nice to have an email option as well.
You also need to think about errors. What is going to happen when paper or toner runs out or when someone tries to print something on A4 paper? Someone has to be notified so that jobs don't just build up.
On the server I would run just the user interaction piece on the web and have a "print daemon" running as a service to manage getting the documents printed and monitoring their status. I would use WCF to do IPC between the two.
Within the print daemon you are going to need a set of components to print different kinds of documents. I would make one assembly per type (or cluster of types) and load them into your service as plugins using MEF.
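To show the shape of that daemon, here is a language-neutral sketch of its core loop; Python stands in for the C# service described above, an in-process queue stands in for the WCF channel, and the handler registry stands in for the MEF plugins. All names are hypothetical.

    import queue
    import threading

    jobs = queue.Queue()    # in the real design the web tier hands jobs over via WCF / a database table

    def print_pdf(path):
        print(f"sending {path} to the spooler as PDF")

    def print_docx(path):
        print(f"rendering {path} with a word-processor automation library")

    HANDLERS = {".pdf": print_pdf, ".docx": print_docx}    # one "plugin" per document type

    def print_daemon():
        while True:
            user, path = jobs.get()
            ext = path[path.rfind("."):].lower()
            try:
                handler = HANDLERS.get(ext)
                if handler is None:
                    raise ValueError(f"unsupported format: {ext}")
                handler(path)
                status = "printed"
            except Exception as err:       # paper out, render failure, bad format...
                status = f"failed: {err}"
            print(f"notify {user}: {path} -> {status}")    # e-mail or page postback goes here
            jobs.task_done()

    threading.Thread(target=print_daemon, daemon=True).start()
    jobs.put(("alice@example.com", "report.pdf"))          # what the web front end would do
    jobs.join()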
Sorry this is so general, but you are asking a pretty general and difficult-to-answer question.
I want to know which architecture is best to adopt for this case:
I have many shops that connect to a web application developed using Ruby on Rails.
The internet is not reachable all the time.
The solution was to develop an offline system, which requires installing a local copy of the distant database.
All this was already developed.
Now, what I want to do:
Work always on the local copy of the database.
Any change to the local database should be synchronized with the distant database.
All the local copies should hold the same data as the other local copies.
To solve this problem I thought about using JMS-like software, possibly RabbitMQ.
The idea is to push every SQL request into a JMS queue; it gets executed on the distant instance of the application, which inserts into the distant DB and pushes the insert (or the SQL statement) into another queue that is read by all the local instances. This seems complicated and would probably slow down the application.
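For illustration, the scheme would look roughly like this with RabbitMQ (Python and the pika client are used only as a sketch; in Rails you would use a Ruby client such as Bunny, and the host, queue and exchange names are made up):

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters(host="central.example.com"))
    ch = conn.channel()

    # Shop -> central: a durable queue of statements to apply on the distant DB.
    ch.queue_declare(queue="sql_upstream", durable=True)
    ch.basic_publish(exchange="",
                     routing_key="sql_upstream",
                     body="INSERT INTO sales (shop_id, total) VALUES (42, 19.99)",
                     properties=pika.BasicProperties(delivery_mode=2))   # persist the message

    # Central -> all shops: a fanout exchange so every local copy replays the statement.
    ch.exchange_declare(exchange="sql_broadcast", exchange_type="fanout")
    ch.basic_publish(exchange="sql_broadcast", routing_key="",
                     body="INSERT INTO sales (shop_id, total) VALUES (42, 19.99)")

    conn.close()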
Is there a design pattern or recommendation that I should apply to solve this kind of problem?
You can do that but essentially you are developing your own replication engine. Those things can be a bit tricky to get right (what happens if m1 and m3 are executed on replica r1, but m2 isn't?) I wouldn't want to develop something like that unless you are sure you have the resources to make it work.
I would look into an existing off-the-shelf replication solution. If you are already using an SQL DB, it probably has some support for it; look here for more details if you are using MySQL.
Alternatively, if you are willing to explore other backends, I heard that CouchDB has great support for replication. I also heard of people using git libraries to do that sort of thing.
Update: after your comment, I realize you already use MySQL replication and are looking for a solution for re-syncing the databases after being offline.
Even in that case RabbitMQ doesn't help you at all, since it requires a constant connection to work, so you are back to square one. The easiest solution would be to just write all the changes (SQL commands) into a text file at the remote location, then, when you get the connection back, copy that file (scp, ftp, email or whatever) to the master server, run all the commands there, and then just resync all the replicas.
Depending on your specific project you may also need to make sure there are no conflicts when running commands from different remote locations, but there is no general technical solution to this. Again, depending on the project, you may want to cancel one of the transactions, notify the users that it happened, and so on.
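A tiny sketch of that journal-and-replay idea (the paths, host name and the use of scp/ssh/mysql are placeholder assumptions):

    import subprocess

    JOURNAL = "/var/lib/shop/offline.sql"

    def record_offline(statement):
        # Called instead of executing against the master while the link is down.
        with open(JOURNAL, "a") as f:
            f.write(statement.rstrip(";") + ";\n")

    def replay_when_online():
        # Copy the journal to the master, run it there, then clear it.
        subprocess.run(["scp", JOURNAL, "dba@master.example.com:/tmp/offline.sql"], check=True)
        subprocess.run(["ssh", "dba@master.example.com",
                        "mysql shop_db < /tmp/offline.sql"], check=True)
        open(JOURNAL, "w").close()
        # Normal MySQL replication then resyncs the other replicas.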
I would recommend taking a look at CouchDB. It's a non-SQL database that does exactly what you are describing, automatically. It's used especially in phone applications that often don't have internet or data connectivity. The idea is that you have a local copy of a CouchDB database and one or more remote CouchDB databases. The CouchDB server then takes care of the replication across the distributed systems, and you always work off your local database. This approach is nice because you don't have to build your own distributed replication engine. For more details I would take a look at the 'Distributed Updates and Replication' section of their documentation.
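To make that concrete, local-to-remote replication can be started (and kept continuous) with a single call to CouchDB's _replicate endpoint; the hosts, database names and credentials below are placeholders:

    import requests

    # Push local changes to the central server whenever connectivity is available.
    requests.post(
        "http://admin:secret@localhost:5984/_replicate",
        json={
            "source": "shop_db",
            "target": "http://admin:secret@central.example.com:5984/shop_db",
            "continuous": True,
        },
    ).raise_for_status()

    # A mirror-image call (remote source, local target) pulls everyone else's changes back down.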