Blockchain for document archival and secure transfer [closed]

First of all, I'm sorry if I'm in completely the wrong place to ask this question.
I will be writing my thesis in the coming months and will be researching the solutions blockchain technology can offer. I will be writing the thesis for a big accounting company, and I have basic knowledge of the technology and of the current situation in the blockchain and crypto industry.
Now, it would be great if someone could help me with suggestions you might have on this matter.
This accounting company is looking to use blockchain for archival and transfer of mostly bookkeeping-related documents. The storage at this point is something like 15 TB, I believe, and since they are looking for permanent storage this number will keep growing, so keep that in mind.
Another requirement is that transferring this data needs to be quick and easy for the clients. So I would imagine that, in the case of blockchain storage, the company holds the key to the data, and when it wants to transfer the data it can hand the key to the client, who can then download the data securely, etc. (that would be the ideal scenario).
Is this something that could potentially be achieved with platforms like Storj, Sia and Swarm? And might there be other ways to achieve this (the company mentioned Azure, for example)?
As I said, I have only basic knowledge and hope to learn a lot more while writing my thesis, so forgive me if I said something that makes your blockchain heart cringe! This is purely to get a little more information so that I know which directions I can take when researching this.
And once again, sorry if I'm in the wrong place for this.
Thanks, and greetings from Finland.
I have read about storage protocols like Storj, Sia and Filecoin. They offer blockchain storage, but I'm not entirely sure whether they could be used at an enterprise level.
I hope to get a little more insight into the possibilities through suggestions from blockchain experts, or from people who know more about blockchain technology than I do.

I am not familiar with the storage protocols you mentioned, but I work on blockchain, and for one of our projects we discussed transferring files using blockchain.
Because a blockchain is very resource intensive and requires a lot of storage and hardware, it is generally advised to keep the amount of data on it to a minimum. So when it comes to files, instead of storing the files themselves on the blockchain, they can be stored in a cloud service (a storage cloud or document cloud service) while the file's reference ID is stored on the blockchain for traceability, in case you want to trace document updates. The reference ID is updated whenever the file is modified or updated. A client can then use a key together with this reference ID to download the file.
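A minimal sketch of that pattern in Python: the file itself lives in ordinary cloud storage, and only a small record (reference ID plus content hash) goes "on chain". The put_to_cloud helper and the ledger list here are hypothetical stand-ins for a real storage API and a real blockchain client:

```python
import hashlib
import uuid

ledger = []  # stand-in for the blockchain: an append-only list of small records

def put_to_cloud(path):
    """Hypothetical stand-in for an upload to a storage/document cloud.
    Returns the reference ID the cloud service would hand back."""
    return str(uuid.uuid4())

def archive_document(path):
    # Hash the file so any later modification is detectable.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    ref_id = put_to_cloud(path)
    # Only the tiny (ref_id, hash) record goes on chain, not the terabytes of files.
    ledger.append({"ref_id": ref_id, "sha256": digest})
    return ref_id

def verify_document(ref_id, data):
    """A client who downloaded `data` via ref_id checks it against the ledger."""
    record = next(r for r in ledger if r["ref_id"] == ref_id)
    return hashlib.sha256(data).hexdigest() == record["sha256"]
```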
Hope you get some info from this.


Making a Social Media app with Parse or developing my own backend? [closed]

Here is my dilemma: right now I am developing a social media app using Parse as my backend service, and it's working just fine. While doing some research today, I realized that if my app gains popularity quickly, using Parse will become very expensive, or Parse will just stop requests altogether once they go over the limit.
1) Basically, my question for you all is: in your experience with Parse, how effective is it at handling apps with many users?
2) Also, do many users equate to many requests per second, or is there an efficient way to develop my app that will keep the requests per second down?
3) And lastly, would it be easier/feasible to develop my own backend service for my app (I have no backend experience, so I would have to teach myself)? I am not opposed to doing this; I just know it will add development time, but it could be the best option in the long run.
Thanks for all your help.
1) We use Parse in most of our apps, and Parse handles things great. One of our apps that uses Parse has 3k monthly users, and everything is going well.
2) You should develop your app to make as few requests as possible, fetching as much data per request as you can. This will drop your request count (see the sketch after this list).
3) I recommend that you begin with Parse-like systems. We are in a time of hurry, so you must act lean. If Parse stops being enough for you in the future, that is a problem you should be happy to have. You can develop your own backend service in the meantime.
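To illustrate point 2, here is a minimal sketch of the batch-and-cache idea; the /posts endpoint and the response shape are hypothetical stand-ins for whatever API your backend exposes:

```python
import json
import time
import urllib.request

_cache = {}     # url -> (timestamp, data)
CACHE_TTL = 60  # seconds; tune to how stale your feed is allowed to be

def get_feed(user_id, limit=100):
    """Fetch many posts in ONE request and cache the result,
    instead of issuing one request per post."""
    url = f"https://api.example.com/posts?user={user_id}&limit={limit}"
    cached = _cache.get(url)
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]               # served locally: zero extra requests
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)         # one request returns `limit` posts
    _cache[url] = (time.time(), data)
    return data
```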
Though it is good that you are planning ahead, Parse or something similar like Amazon is going to be way better for scalability. If you get a domain and maintain a MySQL database (or whatever else) yourself, the scalability isn't as good as using a service that handles all of that.
I created my own backend and I wish I hadn't wasted the time. I will now most likely have to find a service for scalability reasons, so I really just wasted my time making it. That is just my two cents; other people may disagree.
I think that building your own backend is very difficult and time-consuming. Take a look at CloudKit; it gives you much better free quotas than Parse. Please note that you need an enrolled developer account to use it. Personally, I am making my app with Parse, and if I get it ready for release, I will enroll in the program and change the code to work with CloudKit, or leave it with Parse and change to CloudKit only when the Parse quotas are nearly used up. But the free Parse quotas are quite big, as I have experienced.

Experiences OrientDB vs Neo4j [closed]

I am looking for the right graph DB for a project. I tested Neo4j and really liked it, but the AGPL licensing issues put me off a little (you can read about it here).
I then found a couple of articles claiming that OrientDB is actually much faster, but they aren't really up to date; you can find them here and here. And OrientDB is licensed under Apache 2, which is good.
So I just want to ask the great people of Stack Overflow for your opinion. My criteria:
1) Scalability is important, and OrientDB claims to be better at that (here)
2) Licensing should be open
3) I have a complex model of vertexes/edges and need to retrieve relationships up to 3 levels deep
4) The mixture of document and graph that OrientDB offers seems to be a benefit
Thanks for your feedback!
Note: I am on the OrientDB team, my opinion is definitely slanted. I am also replying in a decidedly casual tone.
On your points:
1) On clustered deployment, currently it's not even a comparison. Neo4j uses master-slave replication; they state themselves that it is generally suited only to single-digit node deployments, and the entire graph must fit on one machine. Hear it from them directly: http://www.infoq.com/interviews/ian-robinson-neo4j?utm_source=infoq&utm_medium=videos_homepage&utm_campaign=videos_row1
OrientDB has full multi-master replication (every node can accept reads and writes), the ability to shard data, intelligently distribute data using clusters, and automate distributed queries and transactions. Our CEO recently did an excellent webinar for Hazelcast showing our setup in this area: http://hazelcast.com/resources/orientdb-hazelcast-memory-distributed-graph-database/
2) Apache 2.0 is our community license, which is extremely liberal; you can even embed the OrientDB Community Edition at no cost. (A)GPL worries some people that their closed-source code will be polluted. This may or may not be a real threat, but it is sometimes hard to determine. Our community license is very feature-rich, including full distributed multi-master replication and sharding.
3) Traversing relationships is kind of the point of graph databases, so either Neo4j or OrientDB will suit you just fine here; go 2000 levels deep and it will still be performant (see the traversal sketch after these points).
4) The document-graph capabilities are great, but you knew I would say that. The product we've built is a production-grade system designed to be a full database, not a side database used as a supplement to an RDBMS or other datastore.
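As a concrete illustration of the depth-limited traversal in point 3, here is a minimal sketch using the pyorient client; the server credentials, database name, and Person vertex class are assumptions:

```python
import pyorient

# Connect to a local OrientDB server (host, port, and credentials are assumptions).
client = pyorient.OrientDB("localhost", 2424)
client.connect("root", "root_pwd")
client.db_open("socialgraph", "admin", "admin")

# OrientDB SQL: follow outgoing edges from a starting vertex,
# but stop after 3 levels, matching the requirement above.
records = client.command(
    "TRAVERSE out() FROM (SELECT FROM Person WHERE name = 'Alice') "
    "WHILE $depth <= 3"
)
for rec in records:
    print(rec)

client.db_close()
```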
I am coming off strong here, but I have good reason. Over the past three weeks, a full team of developers at a world-leading tech company has been testing OrientDB against Neo4j. For their very demanding use case, we were the better choice for a variety of reasons. Money was not the issue; we earned the business with our technology.
Take it for what it's worth, I've stated my bias up front. From my experience, once you work with OrientDB, there will be no looking back. Let us know if you need any help!

How do you architect complex Rails systems [closed]

We have the following systems (and more) that we push/pull data from one app to another:
Hosted CRM (InsideSales.com)
Asterisk phone system (internal)
Banner ad system (openx, we host)
A lead generation system (homegrown)
Ecommerce store (spree, we host)
A job board (homegrown)
A number of job site scrapes + inbound job feeds
An email delivery system (like Mailchimp, homegrown)
An event management system (like eventbrite, homegrown)
A dashboard system (lots of charts and reports pulling info from all other systems)
With Rails 3 around the corner, I really want to pursue a micro-app strategy, but I'm trying to decide whether the apps should talk via a REST HTTP API or, because I control them all, whether I should do something like shared models in the code, which simplifies things but also makes it much easier for stuff to leak across boundaries...
I've heard 37signals has lots of small apps, and I'm curious how those apps communicate with each other... or if you have any advice from your own multi-app experience.
Thanks! I tried asking this on my blog http://rywalker.com/chaos-2010 a while back too.
I actually got an email response from DHH...
We use a combination of both, but we default to REST integration. The only place where we use direct database integration is with 37signals ID user database. Because it needs to be so fast. REST is much more sane. Start there, then optimize later if need be.
Last time I had to crazy-glue a bunch of small applications together, I used a simple REST API.
Bonus points: it allows for integration with services / apps written in other languages.
Also helps if you've got a crazy, buzzword-loving manager who likes to pivot technologies without warning.
I had the same situation, with a twist: I also had to talk to some daemons that were not exactly HTTP-ready. So I followed this pattern:
a REST API using XML/JSON to exchange data, plus memcache to exchange short messages (you define some keys that you will update in memcache, and the other piece of software just polls memcache looking for those keys).
As a security measure, I added an API key, or HTTP client authentication using a digital certificate.
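A minimal sketch of that REST-plus-memcache pattern; the endpoint, key name, and shared API key are assumptions:

```python
import json
import urllib.request

import memcache  # python-memcached client

API_KEY = "change-me"  # shared secret between the two apps (an assumption)
mc = memcache.Client(["127.0.0.1:11211"])

def push_data(url, payload):
    """Bulk data goes over the REST API, authenticated with an API key header."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def signal(key, message):
    """Short messages go through memcache; the peer app polls the agreed key."""
    mc.set(key, message)

def poll(key):
    return mc.get(key)
```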
Another option is AMQP messaging (via RabbitMQ or other brokers).
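If you go the AMQP route, a minimal publish-side sketch with the pika client might look like this (the broker location, queue name, and message format are assumptions):

```python
import pika  # assumes a RabbitMQ broker running on localhost

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Durable queue so messages survive a broker restart.
channel.queue_declare(queue="app_events", durable=True)

# Publish a short message; the consuming app declares the same queue
# and reads it with channel.basic_consume(...).
channel.basic_publish(
    exchange="",
    routing_key="app_events",
    body=b"lead.created:42",
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
connection.close()
```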

Building a community photography site, where can I store my photos online? [closed]

I am in the process of laying down the requirements for a photography community site. An important feature to investigate would be allowing more photos per account than rival sites on my country's internet offer. What are the possibilities out there?
Should I go for something like Amazon S3, or is there anything that offers more image-related features? I am mostly interested in a low price per GB (storage and transfer out).
I used to work for a social networking website that hosts billions of images, and we evaluated S3. The conclusion was that it is too expensive for heavy-traffic sites. The storage itself is pretty cheap, but the costs of accessing the content on S3 add up quickly. That makes S3 more suitable for applications like online backups. In my view, cost is the main con.
On the other hand, this is only a concern once your site gets large. The biggest advantages of S3 are that you don't have to worry about scalability and that it's pretty easy to set up and then forget about it because it just works. Many medium sized services use S3 with great success.
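To see why access costs dominate, here is a back-of-the-envelope cost model; all three prices are illustrative assumptions, not quoted AWS rates:

```python
# Illustrative unit prices -- plug in the real ones from your provider.
PRICE_STORAGE_GB_MONTH = 0.023  # $/GB-month stored (assumed)
PRICE_TRANSFER_OUT_GB = 0.09    # $/GB transferred out (assumed)
PRICE_PER_MILLION_GETS = 0.40   # $/million GET requests (assumed)

def monthly_cost(stored_gb, served_gb, get_requests):
    storage = stored_gb * PRICE_STORAGE_GB_MONTH
    transfer = served_gb * PRICE_TRANSFER_OUT_GB
    requests = get_requests / 1e6 * PRICE_PER_MILLION_GETS
    return storage + transfer + requests

# A heavy-traffic photo site: storage is a small fraction; serving dominates.
print(monthly_cost(stored_gb=5_000, served_gb=50_000, get_requests=500e6))
```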
The solution we went for is an array of dedicated servers that host the images and also run webservers (don't use Apache; use webservers optimized for static content, such as lighttpd or nginx), and in front of those, a CDN (content delivery network, such as Akamai or Panther Express). You will typically get high hit rates (depending on the access patterns of your site), so end users will get most files directly from the CDN without causing any load on your servers (except the first time a file is accessed). Thus you might be fine with just one server and a mirror for a while. As you scale, the challenges become how to distribute your images across the farm, how to manage redundancy, etc.
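To quantify the CDN point: the share of traffic that ever reaches your origin is just one minus the cache hit rate, as in this small sketch:

```python
def origin_requests(total_requests, hit_rate):
    """Requests that miss the CDN cache and fall through to your own servers."""
    return total_requests * (1.0 - hit_rate)

# 500M requests/month at a 95% hit rate leaves 25M for the origin --
# a load one box serving static files with nginx/lighttpd can absorb.
print(origin_requests(500e6, 0.95))
```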
I assume that time-to-market also plays a role. In that respect, a good strategy might be to start with S3 and be up and running quickly; later on you can still migrate to a more sophisticated solution. In that case, make sure management keeps this in mind: non-tech people tend to believe that once a functionality works, you never have to touch it again. And be aware that migrating a lot of data takes time. When we changed our photo architecture, the copy jobs ran for months.
How about Flickr/Picasa integration? The users use their own Flickr/Picasa accounts to store their photos and use the features on your site. In that case you pay nothing for photo storage :P
I myself would rather have a single photo-storage account than an individual account for each site.

Exporting to Quickbooks? [closed]

I have an Access 2000 program handling the receiving of goods in the company.
I need to export inventory items and quantities to QuickBooks (Enterprise 2007?). I have a good handle on the Access program, but I know nothing about QuickBooks.
Can this be done? What would I need for it?
Look into the QuickBooks SDK, a free download from Intuit. It provides a COM object interface or XML interface to all of the QuickBooks data. Additionally, if you need to do this over a WAN, you can use the QuickBooks Web Connector to talk to QuickBooks asynchronously over a WAN.
QuickBooks integration is not a trivial thing to accomplish. There are many gotchas. Your best friend will be the Intuit Developer Network forums.
You do not want to use QIF, QFX, or IIF exports. They are deprecated formats which can cause data corruption. Additional information about various methods of integrating with QuickBooks can be found on this integration wiki page and on this QuickBooks integration wiki.
Take a look at the QuickBooks SDK. If you want something simpler, QODBC (with the write capabilities enabled), while not free, will probably be easier to work with from Access.
Google is your friend here. QuickBooks has some primitive import capabilities, but there are a number of small products that do what you need. This guy seems to have some pretty good stuff. Essentially, there are a couple of different text formats (QIF and OFX, if memory serves) used for QuickBooks import; the problem with them is that they don't do much error checking. There is also a QuickBooks SDK which allows you to make calls using COM (yum) to import, and that does full error checking (it actually calls into a running copy of QuickBooks), but it is probably overkill for what you want.
Take a look at the QuickBooks SDK and its documentation. The SDK has two COM interfaces: QBFC and QBXML. The difference is that with QBXML you need to serialize and deserialize XML manually, which isn't hard once you get the hang of it. I find QBXML more convenient, since you can choose to include just the requests and responses you need.
Also, if you plan to use the QuickBooks SDK, the Online Reference is your best friend.
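To give a feel for the QBXML route, here is a rough Python sketch driving the SDK's COM request processor via pywin32; the qbXML version, open-mode constant, and app name are assumptions, so verify them against the SDK's Online Reference:

```python
import win32com.client  # pywin32; the QuickBooks SDK must be installed

# qbXML query for inventory items and quantities (version is an assumption).
QBXML_REQUEST = """<?xml version="1.0"?>
<?qbxml version="6.0"?>
<QBXML>
  <QBXMLMsgsRq onError="stopOnError">
    <ItemInventoryQueryRq/>
  </QBXMLMsgsRq>
</QBXML>"""

rp = win32com.client.Dispatch("QBXMLRP2.RequestProcessor")
rp.OpenConnection("", "Access Inventory Export")  # app ID, app name
# "" = company file currently open in QuickBooks; 0 = open mode
# (check the SDK enum for the exact constant you need).
ticket = rp.BeginSession("", 0)
try:
    response_xml = rp.ProcessRequest(ticket, QBXML_REQUEST)
    print(response_xml)  # parse this to pull out item names and quantities
finally:
    rp.EndSession(ticket)
    rp.CloseConnection()
```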
+1 to Yishai. I've been using QODBC for about 10 years now, across 4 or 5 different versions of QuickBooks. QODBC uses a database-like syntax to interact with the company file.
UNLIKE any form of proper database interaction, do as little work as possible in the query itself, as the QODBC driver can take 10 seconds to 2 minutes to handle a dozen records from a table of roughly 1000 records. A process that imports 15 orders with 5 lines each means talking to the customer, item, invoiceline, and invoice tables and can take 5 minutes. Sadly, I am often reduced to building a MySQL database from mass exports while I sort out and understand the data; then I go back and try to make the queries directly.
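To make that concrete, here is a hedged sketch over a QODBC DSN via pyodbc; the DSN, table, and column names are illustrative, so check them against your QODBC schema. Note how each query stays trivial and the join happens in Python:

```python
import pyodbc

# DSN name is whatever you configured for QODBC in the ODBC manager (illustrative).
conn = pyodbc.connect("DSN=QuickBooks Data", autocommit=True)
cur = conn.cursor()

# Keep each query dumb: one table, no joins, no functions.
cur.execute("SELECT ListID, Name, QuantityOnHand FROM Item")
items = {row[0]: (row[1], row[2]) for row in cur.fetchall()}

cur.execute("SELECT TxnID, ItemRefListID, Quantity FROM InvoiceLine")
rows = cur.fetchall()

# Join in Python instead of asking the slow driver to do it.
for txn_id, item_ref, qty in rows:
    name, on_hand = items.get(item_ref, ("?", None))
    print(txn_id, name, qty, on_hand)

conn.close()
```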
While having an ODBC data connection in Windows is great, learn to distrust each link in your toolchain and figure out how to troubleshoot problems so you can prove aspects positively correct as well as positively wrong. My most recent problem was with QB11 on a Windows 7 x64 machine; the PHP stack at the time was suspect and was causing errors. And please always try to perform error checking, which is somewhat painful in that environment but becomes crucial when "something breaks later".
This very minute I'm researching the PHP stack for Win7 to see if I can again trust it for use with QODBC and order importing (exporting from Magento).
