From what I have read via Google, it may be possible to do this with Rails, but I sadly don't know where to begin.
I have been asked to find out whether it is possible, using either SQL Server or multiple CSV dumps, to create a PDF of customer sales orders for the year.
An example of the data I am working with is as follows:
What I need is a way to go through each ClientID in turn and create a new page for each 'Category' type of purchase, as separate, sequentially numbered PDF pages, so that page 1 for client A1 would only have the motorcycle and page 2 would have the two helmets, which then completes that client's entries. I would then need to save the two pages as a single document entitled A1 to C:\PDFOrderCreation.
Constraints:
There can be no more than 10 lines of data per page.
There is a header and a footer which have to remain static on the page(s).
I don't know where to begin with this. I have considered trying to do it with Word, but I think it would be extremely messy and resource-heavy.
Does anyone have any thoughts, or can anyone at least push me in the right direction?
You would have to write a piece of software that queries your SQL Server and creates the PDF files. There are many ways to do this, but as far as I know it can't be done using only SQL Server features; SQL Server is meant to store and serve your data, and what you want to do with that data is out of its scope. Your software would have to do three things:
1. Query the data from your server.
2. Organize the data into the structure in which you need it to be presented.
3. Write it to PDF files.
You would have to tackle each task individually, and it can be done with either a web application or a native application, which could be written in Ruby, C#, Java, JavaScript, C++, Objective-C or any other programming language. It's important to note that T-SQL is NOT a programming language, it's a query language. Good luck!
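Since Rails came up in the question, here is a minimal Ruby sketch of those three steps, assuming the tiny_tds and prawn gems; the Orders table, its column names and the connection details are invented and would need adapting to the real schema, the 10-lines-per-page limit and the static header/footer:

```ruby
# Sketch only: assumes the tiny_tds and prawn gems and an invented Orders table.
require "tiny_tds"
require "prawn"

client = TinyTds::Client.new(host: "sqlserver", username: "user",
                             password: "secret", database: "Sales")

# 1. Query the data (ClientID, Category and Item are assumed column names)
rows = client.execute(
  "SELECT ClientID, Category, Item FROM Orders ORDER BY ClientID, Category"
).to_a

# 2. Organize it: group by client, then by category, at most 10 lines per page
rows.group_by { |r| r["ClientID"] }.each do |client_id, client_rows|
  pages = client_rows.group_by { |r| r["Category"] }
                     .values
                     .flat_map { |items| items.each_slice(10).to_a }

  # 3. Write one PDF per client with one chunk per page
  Prawn::Document.generate("C:/PDFOrderCreation/#{client_id}.pdf") do |pdf|
    pdf.repeat(:all) do
      # static header and footer repeated on every page
      pdf.draw_text "Sales Orders #{Time.now.year}", at: [0, pdf.bounds.top]
      pdf.draw_text "Client #{client_id}", at: [0, pdf.bounds.bottom - 20]
    end

    pages.each_with_index do |items, i|
      pdf.start_new_page unless i.zero?
      pdf.move_down 30                      # leave room below the header
      items.each { |r| pdf.text "#{r['Category']}: #{r['Item']}" }
    end
  end
end
```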
I need to do something like:
Show a list of countries >> select a country >> show a list of cities.
So I have to get a list of cities by country, but without storing the data in a database.
Can anyone please suggest a solution? I'd really appreciate your help.
You could use an API, but the problem is that you will make a request every time your page loads, and not every API provider will allow this. For example, here is an API that gets countries/cities.
Another solution, as you are using .NET technologies, is to use LocalDB. LocalDB is in fact a database, but one that lives within your app. Have a look at the definition on MSDN:
It is very easy to install and requires no management, yet it offers the same T-SQL language, programming surface and client-side providers as the regular SQL Server Express. If the simplicity (and limitations) of LocalDB fit the needs of the target application environment, developers can continue using it in production, as LocalDB makes a pretty good embedded database too.
Finally, the last solution that comes to mind, if you can't use XML or JSON files or a LocalDB, is to keep your lists in classes, but in my opinion you should avoid this solution: it will simply keep everything loaded in RAM until your application stops. As disk space costs less than RAM, I really think the better option is to use XML or JSON files in your app.
You can store the info in a text file, or even in a static class in your code (not exactly a great idea, but doable).
Then you just need to get the info from that container and build two SelectList items, one for countries and one for cities.
Use JavaScript to link the change event of the countries SelectList to a filtered reload of the cities SelectList.
Assuming you have a preset list of cities by country, and you really cannot use any sort of database, then perhaps just use text files? One text file for the list of countries and then one file per country with the list of cities. Read in the text file and display as needed.
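As a rough, language-agnostic illustration of the text-file approach suggested above (sketched in Ruby here; the file name and layout are invented, and in an ASP.NET MVC app the same grouping and filtering would feed the two SelectLists):

```ruby
# Sketch: one invented cities.txt file with "Country;City" on each line, e.g.
#   France;Paris
#   France;Lyon
#   Spain;Madrid
DATA_FILE = "cities.txt"

# Load the file once and group the cities by country
cities_by_country = File.readlines(DATA_FILE, chomp: true)
                        .map { |line| line.split(";", 2) }
                        .group_by(&:first)
                        .transform_values { |pairs| pairs.map(&:last) }

countries = cities_by_country.keys.sort         # fills the country dropdown
puts countries.inspect

# When the user picks a country, filter the city dropdown
selected = "France"
puts cities_by_country.fetch(selected, []).inspect   # => ["Paris", "Lyon"]
```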
So I developed a small Neo4j database with the aim of providing users with path-related information (the shortest path from A to B and properties of the individual sections of the path). My programming skills are very basic, but I want to make the database very user-friendly.
Basically, I would like to have a screen where users can choose start location and end location from dropdown lists, click a button, and the results (shortest path, distance of the path, properties of the path segments) will appear. For example, if this database had been made in MS Access, I would have made a form, where users could choose the locations, then click a control button which would have executed a query and produced results on a nice report.
Please note that all the nodes, relationships and queries are already in place. All I am looking for are some tips regarding the most user-friendly way of making the information accessible to the users.
Currently, all I can do is make the users install Neo4j, run Neo4j every time they need it, open the browser, edit the Cypher script (typing in the locations as strings) and then execute the query. This makes it rather impractical for users, and I am also worried that a user might corrupt the data.
I'd suggest making a web application using a web framework like Rails, especially if you're new to programming. You can use the neo4j gem to connect to your database and create models that access the data in a friendly way:
https://github.com/neo4jrb/neo4j
I'm one of the maintainers of that gem, so feel free to contact us if you have any questions:
neo4jrb#googlegroups.com
http://twitter.com/neo4jrb
Also, you might be interested in looking at my newest project, called meta model:
https://github.com/neo4jrb/meta_model
It's a Rails app that lets you define your database model (or at least part of it) via the web app UI and then browse/edit the objects via the web app. It's still very much preliminary, but I'd like it to be able to do things like what you're talking about (letting users examine data and the relationships between them in a user-friendly way).
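To give a rough idea of what models with the neo4j gem look like, here is a minimal sketch; the Location label, the CONNECTS_TO relationship and the property names are assumptions based on the path scenario above, and the exact API varies between gem versions:

```ruby
# Sketch: assumes the neo4j (neo4jrb) gem; the label, relationship type and
# properties are invented to match the "shortest path between locations" idea.
require "neo4j"

class Location
  include Neo4j::ActiveNode

  property :name, type: String

  # undirected connections between locations (the sections of a path)
  has_many :both, :neighbours, type: :CONNECTS_TO, model_class: :Location
end

# In a Rails controller the two dropdowns would simply post their values, e.g.
#   start_node = Location.find_by(name: params[:from])
#   end_node   = Location.find_by(name: params[:to])
```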
In general you would write a tiny (web/desktop/forms) application that contains the form, takes the form values, and issues the Cypher requests with the form values as parameters.
The results can then be rendered as a table or chart or whatever.
You could even run this from Excel or Access with a macro (using the Neo4j HTTP endpoint).
Depending on your programming skills (i.e. which programming languages you can write in), it can be anything. There is also a Neo4j .NET client (see http://neo4j.com/developer/dotnet).
And its author, Tatham Oddie, showed a while ago how to do that with Excel.
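To make the "issue Cypher requests with the form values as parameters" part concrete, here is a minimal Ruby sketch against Neo4j's HTTP transactional endpoint; the Location label, the CONNECTS_TO relationship type, the credentials and the endpoint URL are assumptions and depend on your Neo4j version and setup:

```ruby
# Sketch: parameterized shortest-path query via Neo4j's HTTP transactional
# endpoint (Neo4j 2.x/3.x style URL); labels and properties are invented.
require "net/http"
require "json"
require "uri"

def shortest_path(from_name, to_name)
  uri = URI("http://localhost:7474/db/data/transaction/commit")
  body = {
    statements: [{
      statement: "MATCH (a:Location {name: $from}), (b:Location {name: $to}), " \
                 "p = shortestPath((a)-[:CONNECTS_TO*]-(b)) " \
                 "RETURN [n IN nodes(p) | n.name] AS stops",
      parameters: { from: from_name, to: to_name }
    }]
  }

  http = Net::HTTP.new(uri.host, uri.port)
  request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json",
                                     "Accept" => "application/json")
  request.basic_auth("neo4j", "password")   # adjust credentials for your server
  request.body = body.to_json

  JSON.parse(http.request(request).body)
end

# The form handler would simply call this with the two dropdown values:
#   shortest_path(params[:from], params[:to])
```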
My situation is that I have an external database and I need to present its information on an Orchard homepage. The customer also wants to add new information, and both should be shown via a projection.
In short, with a simple example using a Book type:
Books exist in an external DB.
Books should also be created in Orchard, i.e. in the Orchard DB, not in the external DB.
The query/projection should fetch both sets of book entities and display them.
Is there a simple Orchard way to implement this?
Unfortunately, I could not find any sample for this... Any suggestions?
Thank you
From what you've described, I understand that in short you want to manage some data from an external data source as Orchard content items. This is entirely possible; it needs the following to happen:
Create a content type for your data.
Periodically pull in changes from the external data source.
Programmatically create content items of your new type from the data pulled in.
Since this is not an everyday scenario, I don't know of any tutorials out there for it, although there is at least one sample: the External Pages module does pretty much this by pulling in Markdown pages from a Mercurial repo.
I'm a newbie to web development (and development in general) and I'm building a Rails app which scrapes data from a third-party website. I'm using Nokogiri to parse out the specific HTML elements that I'm interested in, and these elements are stored in a database.
However, I'd like to save the HTML of the whole page I'm scraping as a back-up, in case I change my mind about what type of information I want and in case the page is removed (or updated).
What's the best practice for storing the archived html?
Should I extract it as a string and put it in a database, write it to a log or text file, or what?
Edit:
I should have clarified a bit. I am crawling on the order of 10K websites a week and anticipate only needing to access the back-ups on a once-off basis if I redefine the type of data I want.
So as an example, if I was crawling UN country population data and was originally looking at age distributions, but later realized I wanted the gender distributions as well, I'd want to go back to all my HTML archives and pull the data out. I don't anticipate this happening much (maybe 1-3 times a month), but when it does I'll want to retrieve it across 10K-100K listings. The task should only take a few hours for around 10K records, so I guess each website fetch should take at most a second. I don't need any versioning capability. Hope this clarifies.
I'm not sure what the "best practice" for this case is (it will vary by the specifics of your project), but as a starting point I'd suggest creating a model with a string field for the URL and a text field for the HTML itself, and save the pages there. You might add a uniqueness validator for the URL, to make sure you don't store the same HTML twice.
You could then optionally add model methods to instantiate a Nokogiri document from the HTML text, thus using the HTML string as the "master" record (in the DB) and generating the Nokogiri document on the fly when needed. But again, as #dave-newton points out, a lot of this will depend on what you're going to do with this HTML.
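As a minimal sketch of that suggestion (assuming Rails/ActiveRecord; the model and column names are made up and would need adjusting to your app):

```ruby
# Sketch: assumes Rails with ActiveRecord and the nokogiri gem already in the app.

# db/migrate/xxxxxx_create_archived_pages.rb (illustrative migration)
class CreateArchivedPages < ActiveRecord::Migration[5.2]
  def change
    create_table :archived_pages do |t|
      t.string :url, null: false
      t.text :html
      t.timestamps
    end
    add_index :archived_pages, :url, unique: true
  end
end

# app/models/archived_page.rb
class ArchivedPage < ApplicationRecord
  validates :url, presence: true, uniqueness: true

  # Build the Nokogiri document lazily from the stored HTML, so the raw string
  # in the database stays the "master" record.
  def doc
    @doc ||= Nokogiri::HTML(html)
  end
end

# Later, when re-extracting data from the archive:
#   ArchivedPage.find_each { |page| page.doc.css("table.population") }
```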
I would strongly suggest saving it into a table in the same DB as the data you are scraping. Why change what works? Keep it all as you normally would, or write it all to a separate database entirely, and keep some form of reference linking the scraped data to the backups, just in case.
What are the steps involved in moving from TestTrack to JIRA?
I've done a few migrations from TestTrack to JIRA over the last 5 years. It's always more work than I expect (weeks, not days). The migrations I did used a custom migration application that read the data from the TestTrack database (not the simplest of schemas) and then imported it into JIRA using a custom SOAP API. Nowadays I think I might go with a more focused approach and use a single customized script to create a CSV file suitable for the JIRA CSV importer. This will get you the issue fields, comments and attachments. Links are a separate thing, and you won't get issue history.
The JIRA CSV importer works, though the flattening of the data significantly restricts what it can do. The hardest thing is usually the mapping of values from one system to the other: are the user IDs really identical in both systems?
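As an illustration of the "customized script to create CSV" approach, here is a minimal Ruby sketch; the TestTrack table and column names are invented (the real schema is far more involved), it assumes the TestTrack data lives in a SQL Server reachable via tiny_tds, and the output columns are simply whatever you choose to map in the JIRA CSV import wizard:

```ruby
# Sketch: export TestTrack defects into a CSV that the JIRA CSV importer can map.
# Table and column names are invented; the real TestTrack schema is more complex.
require "tiny_tds"
require "csv"

client = TinyTds::Client.new(host: "testtrack-db", username: "reader",
                             password: "secret", database: "TestTrackPro")

defects = client.execute(
  "SELECT DefectNumber, Summary, Description, CreatedBy, DateEntered FROM Defects"
)

CSV.open("defects_for_jira.csv", "w") do |csv|
  # Header row; each column gets mapped to a JIRA field in the import wizard.
  csv << ["Summary", "Description", "Reporter", "Created", "TestTrack Number"]
  defects.each do |row|
    csv << [row["Summary"], row["Description"], row["CreatedBy"],
            row["DateEntered"], row["DefectNumber"]]
  end
end
```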
If you want to investigate doing the migration commercially, please contact info#customware.net (my employer)