How to load data from a text file to a table in an ODI 12c mapping?

I am trying to load data from a text file into a table. The mapping executes successfully, but no data is loaded into the staging table. I am using LKM File to SQL and IKM SQL Control Append. The staging table is created, but no rows are loaded into the table; the number of inserted rows is 0.

From what you describe, the datastore was not created properly. For a text file you have to manually define the file format: specify whether it is fixed-width or delimited, set the field separator (for example a comma or pipe), and then view the data. After that, check the columns, verify that each datatype is correct, and then use this datastore as the source file in your model.
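If you want to sanity-check the file itself before (re)defining the datastore in ODI, a minimal Python sketch like the one below can confirm the delimiter and the number of columns per row. The file name and candidate delimiters are assumptions, not part of the original question.

import csv

SOURCE_FILE = "customers.txt"  # hypothetical source file name

with open(SOURCE_FILE, newline="") as f:
    # Guess the delimiter from a sample of the file
    dialect = csv.Sniffer().sniff(f.read(4096), delimiters=",;|\t")
    f.seek(0)
    reader = csv.reader(f, dialect)
    header = next(reader)
    print("Detected delimiter:", repr(dialect.delimiter))
    print("Columns:", header)
    # Flag any row whose field count differs from the header
    for i, row in enumerate(reader, start=2):
        if len(row) != len(header):
            print(f"Row {i} has {len(row)} fields, expected {len(header)}")

If the delimiter or column count reported here does not match what you configured in the datastore, that mismatch would explain why zero rows are inserted.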

Related

How to upload tab delimited text file to Big Query when string field for column receives a parse error?

I have a ~1 GB text file with 153 separate fields. I uploaded the file to GCS and then created a new table in BQ with file format as "CSV". For table type, I selected "native table". For schema, I elected to auto-detect. For the field delimiter, I selected "tab". Upon running the job, I received the following error:
Could not parse '15229-1910' as INT64 for field int64_field_19 (position 19) starting at location 318092352 with message 'Unable to parse'
The error originates from a "zip code plus 4" field. My question is whether there is a way to prevent this value from being parsed as an integer, or a way to skip these parse errors altogether so that the job can complete. GCP's documentation advises: "If BigQuery doesn't recognize the format, it loads the column as a string data type. In that case, you might need to preprocess the source data before loading it". The "zip code plus four" field in my file is already assigned the string field type, so I'm not quite sure where to go from here. Given that I selected "tab" as the delimiter, does that indicate that the "zip code plus four" value contains a tab character?
BigQuery's schema auto-detection infers the table schema while the data is being loaded. Based on the sample data you provided, the zip code will be treated as a string value by BigQuery because of the dash "-" between the integer values. If you want to control the schema, avoid auto-detect and supply the schema manually.
As stated in the comment, you can try uploading your 1 GB text file into BigQuery by following these steps:
1. Assuming your data is in CSV format, as mentioned in the question, mock up the data in an Excel sheet. (screenshot: Excel Sheet)
2. Save the file in .tsv format.
3. Upload the file into BigQuery using schema auto-detect and setting tab as the delimiter. It will automatically detect all the field types without any error, as can be seen in the resulting table in BigQuery. (screenshot: BigQuery Table)
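If you prefer to load the file programmatically instead of through the console, a minimal sketch with the google-cloud-bigquery client is shown below. It sets tab as the field delimiter and supplies the schema manually so the "zip plus 4" column is declared as STRING. The bucket, table, and field names are placeholders; with 153 fields you would list every column explicitly.

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,  # a TSV is loaded as CSV with a tab delimiter
    field_delimiter="\t",
    skip_leading_rows=1,
    schema=[
        bigquery.SchemaField("record_id", "INT64"),   # hypothetical field names
        bigquery.SchemaField("zip_plus_4", "STRING"), # keep the dash-containing field as a string
        # ...define the remaining fields explicitly...
    ],
)

load_job = client.load_table_from_uri(
    "gs://your-bucket/your_file.tsv",
    "your-project.your_dataset.your_table",
    job_config=job_config,
)
load_job.result()  # wait for the load to finish
print(load_job.output_rows, "rows loaded")

Supplying the schema this way avoids auto-detect guessing INT64 for a column that happens to start with digit-only values.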

Essbase Hyperion: add more than 100 rows of data to a rule file in bulk

I already have a rule file (e.g. rule MM01), and I need to add more data rows to one dimension in rule MM01, like below.
For example, I want to add 100 more rows of data in the "Replace" and "With" columns.
Do I have to add the 100 rows one by one, typing them in manually? Or is there another way to add bulk data to a rule file?
Nope, you just have to type them in.
If new items keep popping up in your source data, you might consider one of the following:
put your source text file into a SQL table and make your load rule read from the table (or, even better, try to load directly from the tables that generated the text file)
(assuming you have the data load automated via MaxL) add a PowerShell script that does the rename before you load the data (see the sketch after this list)
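As a sketch of the second idea (written in Python rather than PowerShell purely for illustration), the script below reads a two-column Replace/With mapping and applies it to the source file before the MaxL load. All file names are assumptions.

import csv

MAPPING_FILE = "replace_with.csv"       # two columns: Replace, With (hypothetical)
SOURCE_FILE = "source_data.txt"          # the text file fed to the load rule (hypothetical)
OUTPUT_FILE = "source_data_renamed.txt"  # file actually loaded via MaxL (hypothetical)

# Build the Replace -> With mapping once
with open(MAPPING_FILE, newline="") as f:
    mapping = {row["Replace"]: row["With"] for row in csv.DictReader(f)}

# Apply every replacement to every line of the source file
with open(SOURCE_FILE) as src, open(OUTPUT_FILE, "w") as out:
    for line in src:
        for old, new in mapping.items():
            line = line.replace(old, new)
        out.write(line)

Keeping the 100+ Replace/With pairs in an external mapping file means you maintain a simple table instead of editing the rule file row by row.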

Manual entries in google spreadsheet do not match when the data gets updated

I have a Google spreadsheet with some columns of data written by a Python script. After the last data column I have added three more columns manually, and the data for those three columns is entered manually. The Python script runs daily, updating the data in the spreadsheet. My issue is that whenever I run the script to update the data, the data in the last three manual columns gets jumbled. This is because the order of the data returned by the SQL query in the script is different every time. I could use ORDER BY to keep the order the same, but if new rows are added or existing rows are deleted from the database, that would not work either.
As stated in this related thread, I think it's expected behavior, because the imported data is dynamic and the data you are adding is static.
The idea is that you don't add any columns to the sheet that receives the imported data, since this data is dynamic.
You need to create a new sheet and select the data from the sheet that holds the imported data.
In this case, the Notes sheet needs you to select the imported record by its order number. The other columns of data are then extracted from the ImportedData sheet using the =vlookup() function and displayed, and then you enter the required note for that record.
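For illustration, a formula along these lines could sit in the Notes sheet; the sheet names, the key in column A, and the looked-up column index are assumptions:
=VLOOKUP($A2, ImportedData!$A:$C, 3, FALSE)
Because each manual note is keyed to a stable identifier rather than to a row position, reordering the imported data no longer jumbles the manual columns.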
You may check the link above for more information.

WARNING: Collections containing mixed types can not be stored in properties

I am trying to upload data into a Neo4j DB using LOAD CSV and am facing the error below.
WARNING: Collections containing mixed types can not be stored in properties.
My CSV file contains around 10,000 records. How do I find the problematic record?
I cannot share the actual CSV file due to privacy concerns.
Please ensure that each field in the CSV is enclosed in double quotes.
A sample file is shown below: the first line is the header and the second line is an actual data row.
"id","code","user_track"
"100","ABC123","USER_REGISTRATION"

Fusion Layer not showing data when querying against a text column

https://gist.github.com/2017706
The HTML file at the gist above successfully loads the data from the referenced Fusion Table into a map layer, yet when I try to query against the Name column (yes, I know columns are case-sensitive in queries) I get the "Data may still be loading" error; clearing the input box to reset the layer without a query works again.
I got this to work with a small Fusion Table created manually; the only difference with this data is that it was imported from an Excel file. Is there anything I'm missing?
I believe the problem falls at the following line:
select: 'Latitude,Longitude',
In a table with a 2-column location, you only need to select the one column that was marked as a Location. In this case, it appears to be your Latitude column. Try updating the above line to the following, and see if that works:
select: 'Latitude',
