View formatted sqlite results - ruby-on-rails

So I code using Ruby on Rails, and the default database is SQLite, which is amazing for development: there is no setup time, no need for a connection, etc. What really sucks is the output on the command line when I do a query to list all the contents of a table, for example. I just get a dump of text, unlike MySQL, whose CLI formats query results nicely (with headers, in a table). I am also aware of ".headers on" and the other commands you can type to format sqlite's results, but those are temporary, and I am looking for a more permanent way to format the results so I do not have to do it every time.

It appears that you can specify SQL statements and dot-commands (meta commands like .headers on) in an init file and pass this file as the -init parameter.
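For example, a minimal ~/.sqliterc, which the sqlite3 shell reads automatically on startup (that is what makes the settings permanent; the same file can also be passed explicitly with -init):
.headers on
.mode column
With that file in place every session is formatted, or you can point at it explicitly for a one-off run:
sqlite3 -init ~/.sqliterc db/development.sqlite3
(db/development.sqlite3 is just the default Rails development database path.)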

Related

How to set enableJsonFunctions=1 in an Exasol database?

I am trying to find the Exasol parameter value that enables the JSON functions JSON_EXTRACT, JSON_VALUE, etc. My Exasol version is 6.2, but I am unable to use the functions. Can someone guide me on how to enable them from the database?
I have checked the values in the EXA_METADATA and EXA_PARAMETERS sys tables but could not find a JSON parameter name.
enableJsonFunctions is a command-line parameter. This means you have to specify it, e.g., via EXAoperations (see "Extra DB Parameters" here), and this will require a database restart. Also make sure that you are at least on 6.2.7. Starting from 7.0, the JSON functions are available without the command-line parameter.
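As a quick smoke test after the restart (the JSON literal here is just an illustration, not from the question), something like this should work once the functions are enabled:
SELECT JSON_VALUE('{"name": "Alice"}', '$.name') AS name;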

Creating a DTS package that uses a stored procedure

We're trying to make a DTS package where it'll launch a stored procedure and capture the contents in a flat file. This will have to run every night, and the new file should overwrite the existing file.
This wouldn't normally be a problem, as we just plug in the query and it runs, but this time everything was complicated enough that we chose to approach it with a stored procedure employing temporary tables. How can I go about using this in a DTS package? I tried going the normal route with the Wizard and then plugging in EXEC BlahBlah.dbo... It did not care for that:
The Statement could not be parsed. Additional information: Invalid object name '#DestinyDistHS'. (Microsoft SQL Server Native Client 10.0)
Can anyone guide me in the right direction here?
Thanks.
Is it an option to simply populate a non-temp table in your SP, call it, and select from the non-temp table when exporting?
This is only an issue if you have multiple simultaneous calls to the stored procedure. In this case you can't save to a single table.
If you do have multiple simultaneous calls, then you might be able to (a sketch follows this list):
1. Create a temp table to hold the results
2. Use INSERT INTO #TempTable EXEC YourProc
3. SELECT FROM #TempTable
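A minimal T-SQL sketch of those three steps (dbo.YourProc and the column list are placeholders; the temp table's columns must match the procedure's actual result set):
CREATE TABLE #TempTable (Id INT, Name VARCHAR(50))  -- columns are an assumption
INSERT INTO #TempTable EXEC dbo.YourProc
SELECT * FROM #TempTable
DROP TABLE #TempTable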
You might need to do this in a more forgiving command line tool (like SQLCMD). It's not as fussy about metadata.
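For instance, a sqlcmd one-liner along these lines (server, database, and file path are placeholders) runs the procedure and writes a comma-separated file:
sqlcmd -S YourServer -d YourDatabase -Q "SET NOCOUNT ON; EXEC dbo.YourProc" -s "," -o C:\exports\output.csv
SET NOCOUNT ON keeps the trailing "(n rows affected)" line out of the output file.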

Deedle - what's the schema format for readCsv

I was using Deedle in F# to read a txt file (no header) into a data frame, and cannot find any example of how to specify the schema.
let df = Frame.ReadCsv(datafile, separators="\t", hasHeaders=false, schema=schema)
I tried to give a string with names separated by ',', but it doesn't seem to work.
let schema = @"name, age, address"
I did some searching in the docs, but only found the following; I don't know where I can find that info. :(
schema - A string that specifies CSV schema. See the documentation
for information about the schema format.
The schema format is the same as in the CSV type provider in F# Data.
The only problem (quite important!) is that the Deedle library has a bug where it completely ignores the schema parameter, so no matter what you provide, it will be ignored.
I just submitted a pull request that fixes the bug and also includes some examples (in the form of unit tests). See the pull request here (and click on "Files changed" to see the samples).
If you do not want to wait for a new release, just get the code from my GitHub fork and build it using build.cmd in the root (run it once first to restore packages). The complete build requires a local installation of R (because it builds the R plugin too), but it should build Deedle.dll before it fails... (After the first run of build.cmd, you can just use the Deedle.sln solution.)
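Once the fixed version is available, the schema string follows the F# Data CSV type provider convention of comma-separated Name (type) entries, so, using the columns from the question, something like this should work:
let schema = "Name (string), Age (int), Address (string)"
let df = Frame.ReadCsv(datafile, separators="\t", hasHeaders=false, schema=schema)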

Foreign character issue with CSV import to Heroku Postgres DB

I have a rails app where my users can manually set up products via a web form. This works fine and accepts foreign characters well, words like 'Svölk' for example.
I now have a need to bulk import products and am using FasterCSV to do so. Generally this works without issue, but when the CSV contains foreign characters it stalls at that point.
Am I correct to believe the file needs to be UTF-8 in the first instance?
Also, I'm running Ruby 1.8.7, so is Iconv my only solution for converting the file? This could be an issue, as the encoding of the original file won't be known.
Have others encountered this issue and if so, how did you overcome it?
You have two alternatives:
Use the ensure_encoding gem to find the actual encoding of the strings.
Use Ruby 1.9+ to determine the file encoding using:
File.open(source_file).read.encoding
(Note that String#encoding does not exist on Ruby 1.8.7.)
I prefer the first approach, as it tries to detect the encoding based on the strings and tries to convert them to your desired encoding (UTF-8); you can then set that encoding in the FasterCSV options.
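On 1.8.7 specifically, Iconv is the built-in route. A minimal sketch, assuming the source turns out to be Latin-1 (the file name and the source encoding here are assumptions you would need to verify first):
require 'iconv'
require 'fastercsv'

raw  = File.read('products.csv')                       # placeholder file name
utf8 = Iconv.conv('UTF-8//IGNORE', 'ISO-8859-1', raw)  # assumes Latin-1 input; //IGNORE drops unconvertible bytes
FasterCSV.parse(utf8) do |row|
  # process each row here
end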

How would you implement a db table of a list of all US zip codes in a rails application?

How would migrations be involved? Would you load them into the MySQL db directly, or have a Ruby routine do this?
To get the raw data I'd simply search on Google, you'll find lots of databases of zip codes, some of them are not free though.
Then I'd look at the data to get a clue on what columns I should include in the table, and build an appropriate migration and model.
Then I'd write an external ruby script which reads the data from whatever format the zip code database is in and writes it directly into the app's database. You could also do that as part of your Rails application, but I usually don't consider it necessary when dealing with external data only.
It's important, however, that the zip code table is not referenced by ID in some other table, since that makes it really complicated if you want to update it later (zip codes change). So I'd still store the zip code itself in, say, the user table, or wherever.
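A minimal migration sketch for such a table (the column set is an assumption; adjust it to whatever the downloaded data actually contains):
class CreateZipCodes < ActiveRecord::Migration
  def self.up
    create_table :zip_codes do |t|
      t.string :code, :null => false
      t.string :city
      t.string :state
    end
    # index the code itself, since other tables store the code rather than the row id
    add_index :zip_codes, :code
  end

  def self.down
    drop_table :zip_codes
  end
end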
Here is a CSV link to all the zip codes in the US: here. This file has 7 columns for each zip code, with the zip code in the first column.
Now you could use the Ruby CSV parser, or something like FasterCSV, to parse the CSV, but I think it would be much faster to simply parse it using a shell command. For example, I just ran this on my system and it instantly parses the file correctly:
cut -d ',' -f 1 zip_codes.csv > out.csv
At this point, it's a simple matter of reading in the file line by line in ruby like so:
File.open('out.csv').each do |line|
  Zipcode.create(:zip => line.strip)  # strip the trailing newline from each line
end
You would replace Zipcode.create... with whatever model you are using, since you probably do not need a separate Zipcode model.
