Retrieving the datetime of a row of data based on a keyword

How do I write a Python script that goes to a folder in another directory and iterates through the text files in that folder? Within these text files I need to retrieve the datetime of any row that contains a certain keyword, and store these data for further analysis.
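A minimal sketch of such a script, assuming the timestamp appears on each matching line in a known format (the folder path, keyword, and timestamp format below are all hypothetical):

# Minimal sketch: walk a folder in another directory, scan its .txt files,
# and collect the datetime of every line containing the keyword.
# The folder path, keyword, and timestamp format are assumptions.
from datetime import datetime
from pathlib import Path
import re

folder = Path("../data/logs")   # folder in another directory
keyword = "ERROR"               # keyword that marks the rows of interest
stamp = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

results = []
for path in sorted(folder.glob("*.txt")):
    with path.open() as f:
        for line in f:
            if keyword in line:
                m = stamp.search(line)
                if m:
                    when = datetime.strptime(m.group(), "%Y-%m-%d %H:%M:%S")
                    results.append((path.name, when))

# results now holds (filename, datetime) pairs for further analysis
print(results)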

Related

How to upload a tab-delimited text file to BigQuery when a string field receives a parse error?

I have a ~1 GB text file with 153 separate fields. I uploaded the file to GCS and then created a new table in BQ with file format as "CSV". For table type, I selected "native table". For schema, I elected to auto-detect. For the field delimiter, I selected "tab". Upon running the job, I received the following error:
Could not parse '15229-1910' as INT64 for field int64_field_19 (position 19) starting at location 318092352 with message 'Unable to parse'
The error originates from a "zip code plus 4" field. My question is whether there is a way to prevent this value from being parsed as an integer, or whether there's a way to omit these parse errors altogether so that the job can complete. From GCP's documentation: "If BigQuery doesn't recognize the format, it loads the column as a string data type. In that case, you might need to preprocess the source data before loading it". The "zip code plus four" field in my file is already assigned the string field type, so I'm not quite sure where to go from here. Given that I selected the delimiter as "tab", does that indicate that the "zip code plus four" value contains a tab character?
BigQuery uses schema auto-detection to detect the schema of a table while loading data into BigQuery. Per the sample data you provided, the zip code will be considered a string value by BigQuery due to the presence of the dash "-" between the integer values. If you want to provide the schema, you can avoid auto-detect and give the schema manually.
As stated in the comment, you can try uploading your 1 GB text file into BigQuery by following these steps:
Assuming your data is in CSV format, as you mentioned in the question, I have mocked up the sample data in an Excel sheet.
(screenshot: Excel sheet)
Save the file in .tsv format.
Upload the file into BigQuery using auto-detect schema and tab as the field delimiter. It will automatically detect all the field types without any error, as can be seen in the BigQuery table in the screenshot.
(screenshot: BigQuery table)
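For reference, the same load can also be scripted with the google-cloud-bigquery Python client; this is only a sketch, and the bucket, dataset, and table names are placeholders:

# Sketch of the equivalent load via the google-cloud-bigquery client.
# Bucket, dataset, and table names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,  # a TSV is CSV with a tab delimiter
    field_delimiter="\t",
    autodetect=True,       # or supply an explicit schema instead
    skip_leading_rows=1,   # header row
)
job = client.load_table_from_uri(
    "gs://my-bucket/my-file.tsv",
    "my_project.my_dataset.my_table",
    job_config=job_config,
)
job.result()  # wait for the load job to complete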

Converting datetime to date in ADF Mapping?

I have a requirement for the final output text-delimited document to contain only dates; however, the JSON that I am reading as the source in my ADF Copy Activity has datetime fields such as "hireDate": "1988-03-28T00:00:00+00:00", and I need these to be "1988-03-28". Any suggestions on doing this in the ADF mapping? (We don't have Mapping Data Flows in the Azure Government cloud.)
Thank you
Mike Kiser
There is no choice; we need an intermediate process, e.g. we can copy the JSON file into an Azure SQL table, then copy from Azure SQL into a txt file.
Create a table and set the column type to date.
create table dbo.data1(
    hireDate date  -- date column; datetime values are cast to date on copy
)
Copy into the table; the copy activity automatically casts the type from datetime to date.
The debug result is as follows:
(screenshot: debug result)

Essbase Hyperion: bulk-adding more than 100 rows of data inside a rule file

I already have a rule file (e.g. rule MM01), and I need to add more data rows in rule MM01 to one dimension, like below.
For example, I want to add 100 more rows of data in the "Replace" column and the "With" column.
Do I have to add the 100 rows one by one, inputting them manually? Or is there another way to add bulk data to a rule file?
Nope, you just have to type them in.
If new items keep popping up in your source data, you might consider one of the following:
put your source text file into a SQL table and make your load rule read from the table (or, even better, try to load directly from the tables that generated the text file)
(assuming you have the data load automated via MaxL) add a PowerShell script that does the renames before you load the data; a sketch of such a pre-load step follows below
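As an illustration of that second option, here is a minimal pre-load rewrite sketched in Python rather than PowerShell; the file names and the pairs file are hypothetical:

# Illustration only: apply the "Replace" -> "With" pairs to the source file
# before the MaxL load. All file names here are assumptions.
import csv

with open("pairs.csv", newline="") as f:
    pairs = list(csv.reader(f))  # each row: [old_text, new_text]

with open("source_data.txt") as f:
    data = f.read()

for old, new in pairs:
    data = data.replace(old, new)

with open("source_data_renamed.txt", "w") as f:
    f.write(data)  # point your MaxL load at this rewritten file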

Recoding AREA labels based on zip code data in Excel

Basically, I'm using SPSS and have a variable named AREA that includes different major counties in California. Within this variable there is a value label for "Other", and I want to relabel those cases so they go into their respective counties. I have an Excel sheet with all the zip codes that fall into those "Other" counties, and the data file contains zip code data as well. There are 400+ zip codes, so I'm trying to see if there is an easy alternative to manually typing each zip code into the syntax to recode those values.
I've tried seeing if there was a way to reference the excel workbook, but have come up empty handed.
Any guidance or approaches to this problem would be appreciated!
The data in Excel has unique zip codes with the corresponding AREA value in the cell to the right. In the data set there may be multiple instances of each zip code.
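One way to avoid typing 400+ recodes, sketched here as an illustration rather than a tested answer, is to generate the SPSS syntax from the Excel sheet with a short script; this assumes pandas and a workbook with ZIPCODE and AREA columns (file and column names hypothetical):

# Hypothetical helper: build SPSS IF statements from the Excel lookup sheet
# instead of typing each zip code by hand. File and column names are assumptions.
import pandas as pd

lookup = pd.read_excel("other_counties.xlsx")  # assumed columns: ZIPCODE, AREA

with open("recode_area.sps", "w") as f:
    for _, row in lookup.iterrows():
        # one statement per zip code, e.g. IF (ZIPCODE = 90210) AREA = 12.
        f.write(f"IF (ZIPCODE = {row['ZIPCODE']}) AREA = {row['AREA']}.\n")
    f.write("EXECUTE.\n")

The generated recode_area.sps can then be run in SPSS (for example via INSERT), so none of the 400+ zip codes have to be typed manually.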

Storing CSV files to a database?

I'm trying to create an app that will store multiple CSV files to a database. I have a working app going, and it will store CSV files of a particular format: it reads the header of the CSV and stores each row as an object whose properties correspond to the columns. For example, maybe the columns are date, school, major.
If a row looks like this: 05/16/16 | Arizona State | English
then we'll store an object with the date, school, and major corresponding to the above.
However, I need to deal with CSV files whose columns have different names. Some CSV files might have the major column named "Undergraduate major" and others might have it named "Field of study." Nevertheless, I want to be able to store all the CSV files in my database even though they might use different names for the columns. What's the best way to approach this?
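One common approach is to normalize the varying headers to a canonical schema through an alias map before storing the rows; a minimal sketch in Python, where the alias names and file name are examples only:

# Sketch: map each known header variant to a canonical column name, then
# read rows as normalized dicts ready for storage. Aliases are examples.
import csv

ALIASES = {
    "date": "date",
    "school": "school",
    "major": "major",
    "undergraduate major": "major",
    "field of study": "major",
}

def read_normalized(path):
    # translate raw headers to canonical names, skipping unknown columns
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {
                ALIASES[header.strip().lower()]: value
                for header, value in row.items()
                if header.strip().lower() in ALIASES
            }

for record in read_normalized("students.csv"):
    print(record)  # e.g. {'date': '05/16/16', 'school': 'Arizona State', 'major': 'English'}

Unknown headers can instead raise an error or be queued for review, depending on how strict the ingestion needs to be.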
