I have a file with the following format:
/Users/devplayerx/Sandbox/pics/images/001012DG-161.JPG
pixelWidth: 1600
pixelHeight: 1050
filename: 001012DG-161.JPG
/Users/devplayerx/Sandbox/pics/images/001019DG-151 COPY.JPG
pixelWidth: 1600
pixelHeight: 1050
filename: 001019DG-151 COPY.JPG
and would like, ultimately, to have an iOS dictionary with the filename as key and either a dictionary or an array with the pixelWidth and pixelHeight as value. I was considering converting my text file into a JSON file and then parsing it with NSJSONSerialization, but I'm not sure how to convert the text file into JSON. Also, I'd like to remove the full paths from the text file, since they're not needed.
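For reference, the conversion itself is only a few lines in any scripting language. Here is a minimal Python sketch (the input and output file names are placeholders) that drops the full-path lines and keys the result on filename:
#!/usr/bin/env python3
# Minimal sketch: convert the three-lines-per-image text format into JSON,
# keyed on filename, with [pixelWidth, pixelHeight] as the value.
import json

entries = {}
with open("yourfile.txt") as fh:  # placeholder input name
    width = height = None
    for line in fh:
        line = line.strip()
        if line.startswith("pixelWidth:"):
            width = int(line.split(":", 1)[1])
        elif line.startswith("pixelHeight:"):
            height = int(line.split(":", 1)[1])
        elif line.startswith("filename:"):
            name = line.split(":", 1)[1].strip()
            entries[name] = [width, height]
        # full-path lines match none of the prefixes and are simply dropped

with open("yourfile.json", "w") as out:  # placeholder output name
    json.dump(entries, out, indent=2)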
Here is a Perl script that seems to do the job:
#!/usr/bin/perl
use strict;
use warnings;

# Read the flat text file and emit a JSON object keyed on filename.
open my $fh, '<', 'yourfile.txt' or die "I/O error: $!\n";
my $w = 0;
my $h = 0;
my $f = "";
print "{\n";
while (my $line = <$fh>)
{
    # An entry was printed on the previous pass, so it needs a trailing comma.
    if ($f)
    {
        print ",\n";
        $f = "";
    }
    if ($line =~ /pixelWidth: ([0-9]+)/)
    {
        $w = $1;
    }
    if ($line =~ /pixelHeight: ([0-9]+)/)
    {
        $h = $1;
    }
    # The filename line closes an entry; the full-path lines match nothing
    # and are dropped, as requested.
    if ($line =~ /filename: (.*)$/)
    {
        $f = $1;
        print "\t\"$f\" : [ $w, $h ]";
    }
}
print "\n}\n";
close $fh;
Note that I'm not an expert in Perl, so it can probably be improved, but running it on your input file seems to produce the JSON you expect, as below:
prompt$ perl scr.pl
{
"001012DG-161.JPG" : [ 1600, 1050 ],
"001019DG-151 COPY.JPG" : [ 1600, 1050 ]
}
Note that once you have your JSON file, you can optionally convert it into a plist file using the plutil tool. For example, perl scr.pl | plutil -convert binary1 -o yourfile.plist - will create yourfile.plist from the JSON produced by perl scr.pl (my script above). You can then read this file in your code using [NSDictionary dictionaryWithContentsOfFile:pathToYourFilePlist] and have direct access to your data as an NSDictionary in one line.
The JSON object could look like this:
{ "picture": { "path": "thePath", "pixelWidth": 1600, "pixelHeight": 1050, "filename": "name" } }
I would then convert it by reading the rows into a list, looping through the list, and eventually writing everything out to another text file in the object notation above; a sketch of that loop follows.
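A minimal Python sketch of that loop (all file names hypothetical), grouping the raw lines into records and emitting one "picture" object per record:
import json

records, current = [], {}
for line in open("yourfile.txt"):  # hypothetical input name
    line = line.strip()
    if ":" in line:
        key, value = line.split(":", 1)
        value = value.strip()
        current[key] = int(value) if value.isdigit() else value
    elif line:  # a bare path line starts a new record
        if current:
            records.append({"picture": current})
        current = {"path": line}
if current:
    records.append({"picture": current})

with open("out.json", "w") as out:  # hypothetical output name
    json.dump(records, out, indent=2)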
What language do you need to write the tooling to convert it to JSON in?
You can use NSInputStream to read the text file; on each iteration you can build your dictionary the way you want.
After that, just use NSJSONSerialization.
Related
I have some data in a file and need to print it to the output in a particular format.
Example content to parse:
012231-33339411.sxz.ree.fg*-*
U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqv
FhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p
012236-33349111.sxz.ree.fg*-*
bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVp
DUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTm
cUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH
I need some way, in awk, to capture variable 1 as everything from the beginning of the line up to the "-".
Example:
variable1=012231
and
variable1=012236
Variable 2 is the 4 digits after the "-" character.
Example:
variable2=3333
and
variable2=3334
Variable 3 is the 2 digits that follow the 4 digits of variable 2.
Example:
variable3=94
and
variable3=91
Variable 4 is all the text that follows, up to the next header line.
Example:
variable4=U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqv
FhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p
and
variable4=bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVp
DUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTm
cUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH
Example of the expected output:
'012231' '3333' '94' 'U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqv
FhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p'
'012236' '3334' '91' 'bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVp
DUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTm
cUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH'
I have tested the following code, which prints selected fields by record number, counting on fixed field widths and ignoring the format or shape of the content.
awk -v FIELDWIDTHS="6 1 4 2 2 15" 'NR==1{print $1" "$3" "$4}NR==2{print}NR==3{print $1" "$3" "$4}NR==4{print}' file
But it is a large file and the long strings span a variable number of records, so matching on fixed record numbers (NR==...) will not work here; I need to capture the string into a variable so I can print it later in the output as a field, wherever it appears.
Could someone help me with code to parse the input and print output as close as possible to what I need, and please explain how the positions in the input are taken?
Thanks in advance.
Using any awk in any shell on every Unix box:
$ cat tst.awk
split($0,f,"-") > 1 {
if ( NR > 1 ) {
prt()
delete var
}
var[1] = f[1]
var[2] = substr(f[2],1,4)
var[3] = substr(f[2],5,2)
next
}
{ var[4] = var[4] $0 }
END { prt() }
function prt( i) {
for ( i=1; i<=4; i++ ) {
printf "\047%s\047%s", var[i], (i<4 ? OFS : ORS)
}
}
$ awk -f tst.awk file
'012231' '3333' '94' 'U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqvFhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p'
'012236' '3334' '91' 'bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVpDUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTmcUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH'
I am trying to read .txt files from a directory; they are named as follows:
x-23.txt
x-43.txt
x-83.txt
:
:
x-243.txt
I am reading these files using filename = system("ls ../Data/*.txt"). The goal is to load them and plot certain columns. At the same time, I am trying to parse the file names down to the numbers below, so that I can use them as titles in the plot and add/subtract them from a certain column:
23
43
83
:
:
243
For that, I tried the following:
dirname = '../Data/'
str = system('echo "'.dirname. '" | perl -pe ''s/x[\d-](\d+).txt/\1.\2/'' ')
cv = word(str, 1)
The lines above don't seem to trim the file names down to numbers. The code all together:
filelist1 = system("ls ../Data/*.txt")
print filelist1
dirname = '../Data/'
str = system('echo "'.dirname. '" | perl -pe ''s/x[\d-](\d+).txt/\1.\2/'' ')
cv = word(str, 1)
plot for [filename1 in filelist1] filename1 using (-cv/1000+ Tx($4)):(X($3)) with points pt 7 lc 6 title system('basename '.filename1),\
I am trying to use the parsed file numbers "cv" to subtract them from column Tx($4) while plotting.
directory = "../temp/"
filelist = system("cd ../temp/ ; ls *.txt")
files = words(filelist)
filename(i) = directory . word(filelist,i)
# strip the leading "x-" and the trailing ".txt": keep characters from
# position 3 up to just before the "."
title(i) = word(filelist,i)[3 : strstrt(word(filelist,i),'.')-1]
plot for [i=1:files] filename(i) using ... title title(i)
Test case (edited to show pulling files from another directory):
gnuplot> print filelist
x-234.txt
x-23.txt
x-2.txt
x-34.txt
gnuplot> do for [i=1:files] { print i, ": ", filename(i) }
1: ../temp/x-234.txt
2: ../temp/x-23.txt
3: ../temp/x-2.txt
4: ../temp/x-34.txt
gnuplot> plot for [i=1:files] x*i title title(i)
I am likely reinventing the wheel here, but this is my stab at solving the issue; it works partly, and I am asking for community assistance to resolve the rest.
My task is to split EDI X12 documents into their own files (one per ISA-to-IEA envelope)
and CRLF each line separately (similar to EDI2.EDI below).
Below is my Powershell script and example EDI documents 1, 2 and 3.
My script successfully splits a contiguous X12 EDI document from ISA to IEA into a file, CRLF-ing each segment so that one contiguous string becomes something more readable. This works well and even handles any segment delimiter as well as any line delimiter.
My issue is dealing with non-contiguous documents (ex. EDI2) or combined ones (ex. EDI3). The source folder could have any of the formats shown below. If a file already contains CRLFs, then I just need to split it from ISA to IEA. My script fails when I pull in CRLF'd files.
Could someone help me solve this?
$sourceDir = "Z:\temp\EDI\temp\"
$targetDir = "Z:\temp\EDI\temp\archive"
<##### F U N C T I O N S #####>
<#############################>
Function FindNewFile
{
Param (
[Parameter(mandatory=$true)]
[string]$filename,
[int]$counter)
$filename = Resolve-Path $filename
$validFileName = "{0}\{1} {2}{3}" -f $targetDir, #([system.io.fileinfo]$filename).DirectoryName,
([system.io.fileinfo]$filename).basename,
$counter, #"1", #([guid]::newguid()).tostring("N"),
([system.io.fileinfo]$filename).extension
Return $validFileName
}
<###### M A I N L I N E ######>
<#############################>
If(test-path $sourceDir)
{
$files = @(Get-ChildItem $sourceDir | Where {!$_.PsIsContainer -and $_.extension -eq ".edi" -and $_.length -gt 0})
"{0} files to process. . ." -f $files.count
If($files)
{
If(!(test-path $targetDir))
{
New-Item $targetDir -ItemType Directory | Out-Null
}
foreach ($file in $files)
{
$me = $file.fullname
# Get the new file name
$isaCount = 1
$newFile = FindNewFile $me $isaCount
$data = get-content $me
# Reset variables for each new file
$dataLen = [int] $data.length
$linDelim = $null
$textLine = $null
$firstRun = $True
$errorFlag = $False
for($x=0; $x -lt $data.length; $x++)
{
$textLine = $data.substring($x, $dataLen)
$findISA = "ISA{0}" -f $textLine.substring(3,1)
If($textLine.substring(0,4) -eq $findISA)
{
$linDelim = $textLine.substring(105, 1)
If(!($FirstRun))
{
$isaCount++
$newFile = FindNewFile $me $isaCount
}
$FirstRun = $False
}
If($linDelim)
{
$delimI = $textLine.IndexOf($linDelim) + 1
$textLine = $textLine.substring(0,$delimI)
$fLine = $textLine
add-content $newFile $fLine
$x += $fLine.length - 1
$dataLen = $data.length - ($x + 1)
}
Else
{
$errorFlag = $True
"`t=====> {0} is not a valid EDI X12 file!" -f $me
$x += $data.length
}
}
If(!($errorFlag))
{
"{0} contained {1} ISA's" -f $me, $isaCount
}
}
}
Else
{
"No files in {0}." -f $sourceDir
}
}
Else
{
"{0} does not exist!" -f $sourceDir
}
Filename: EDI1.EDI
ISA*00* *00* *08*925xxxxxx0 *01*78xxxx100 *170331*1630*U*00401*000000114*0*P*>~GS*FA*8473293489*782702100*20170331*1630*42*T*004010UCS~ST*997*116303723~SE*6*116303723~GE*1*42~IEA*1*000000114~ISA*00* *00* *08*WARxxxxxx *01*78xxxxxx0 *170331*1545*U*00401*000002408*0*T*>~GS*FA*5035816100*782702100*20170331*1545*1331*T*004010UCS~ST*997*000001331~~SE*24*000001331~GE*1*1331~IEA*1*000002408~
Filename: EDI2.EDI
ISA*00* *00* *ZZ*REINxxxxxxxDSER*01*78xxxx100 *170404*0819*|*00501*100000097*0*P*}~
GS*PO*REINHxxxxxxDSER*782702100*20170404*0819*1097*X*005010~
ST*850*1097~
SE*14*1097~
GE*1*1097~
IEA*1*100000097~
Filename: EDI3.EDI
ISA*00* *00* *08*925xxxxxx0 *01*78xxxx100 *170331*1630*U*00401*000000114*0*P*>~GS*FA*8473293489*782702100*20170331*1630*42*T*004010UCS~ST*997*116303723~SE*6*116303723~GE*1*42~IEA*1*000000114~ISA*00* *00* *08*WARxxxxxx *01*78xxxxxx0 *170331*1545*U*00401*000002408*0*T*>~GS*FA*5035816100*782702100*20170331*1545*1331*T*004010UCS~ST*997*000001331~~SE*24*000001331~GE*1*1331~IEA*1*000002408~
ISA*00* *00* *ZZ*REINxxxxxxxDSER*01*78xxxx100 *170404*0819*|*00501*100000097*0*P*}~
GS*PO*REINHxxxxxxDSER*78xxxxxx0*20170404*0819*1097*X*005010~
ST*850*1097~
SE*14*1097~
GE*1*1097~
IEA*1*100000097~
FWIW, I've compiled this code from all over the net including stackoverflow.com. If you see your code and desire recognition, let me know and I'll add it. I'm not claiming any of this is original! My motto is "ARRRGH!"
EDI3 is an invalid X12 document; each file should contain only one ISA segment, with repeated envelopes inside if required.
The segment terminator should also be consistent. In EDI3 it is both a bare ~ and ~ followed by CRLF, which is again invalid.
The segment terminator should be the tilde, "~".
It can be suffixed by nothing, "\n", or "\r\n"; the suffix is optional and only there for human readability. Some implementations might be more relaxed in terms of the X12 standard.
https://www.ibm.com/support/knowledgecenter/en/SS6V3G_5.3.1/com.ibm.help.gswformstutscreen.doc/GSW_EDI_Delimiters.html
https://docs.oracle.com/cd/E19398-01/820-1275/agdbj/index.html
https://support.microsoft.com/en-sg/help/2723596/biztalk-2010-configuring-segment-terminator-for-an-x12-encoded-interch
BTW, check my splitter/viewer: https://gist.github.com/ppazos/94a63ab18910ab0c0d23c9ff4ff7e5c2
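The splitting itself is simple once the terminators are normalized. Here is a minimal sketch in Python (mine, not the poster's script; file names are hypothetical) that strips any existing CR/LFs, splits on the "~" terminator, and starts a new output file at each ISA:
# Split a file holding one or more ISA..IEA interchanges into one file per
# interchange, one segment per line. Assumes "~" segment terminators and that
# the data starts with an ISA segment; file names are hypothetical.
raw = open("EDI3.EDI").read()
segments = [s for s in raw.replace("\r\n", "").replace("\n", "").split("~") if s]

count, out = 0, None
for seg in segments:
    if seg.startswith("ISA"):
        count += 1
        if out:
            out.close()
        out = open(f"EDI3_{count}.EDI", "w")
    out.write(seg + "~\r\n")  # re-terminate and CRLF each segment
if out:
    out.close()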
I am trying to search through a binary file. After reviewing the file in a hex editor, I found patterns throughout it; they appear before and after each file listing:
/% ......C:\Users\\Desktop\test1.pdf..9
/% ......C:\Users\\Desktop\testtesttesttest.pdf..9
What I would like to do is find ..9 (hex = 000039), then "back up" until I find /% ...... (hex = 2F25A01C1000000000), then move forward x bytes so I can get the complete path. The code I have now is below:
$file = 'C:\Users\<username>\Desktop\bc03160ee1a59fc1.automaticDestinations-ms'
$begin_pattern = '2F25A01C1000000000' #/% ......
$end_pattern = '000039' #..9
$prevBytes = '8'
$bytes = [string]::join('', (gc $file -en byte | % {'{0:x2}' -f $_}))
[regex]::matches($bytes, $end_pattern) |
% {
$i = $_.index - $prevBytes * 2
[string]::join('', $bytes[$i..($i + $prevBytes * 2 - 1)])
}
Some of the output roughly translates to this:
ffff2e0000002f000000300000003b0000003200000033000000340000003500000036000000370000003800
655c4465736b746f705c466f72656e7369635f426f6f6b735c5b656e5d646566745f6d616e75616c2e706466
0000000000000000000000000000010000000a00000000000000000020410a000000000000000a00000000
ÿÿ./0;2345678?e\Desktop\deft_manual.pdf?
?sic Science, Computers, and the Internet.pdf
?ware\Desktop\Dive Into Python 3.pdf?
You can use the System.IO.BinaryReader class from PowerShell.
$path = "<yourPathToTheBinaryFile>"
$binaryReader = New-Object System.IO.BinaryReader([System.IO.File]::Open($path, [System.IO.FileMode]::Open, [System.IO.FileAccess]::Read, [System.IO.FileShare]::ReadWrite))
Then you have access to all the methods like:
$binaryReader.BaseStream.Seek($pos, [System.IO.SeekOrigin]::Begin)
AFAIK, there is no easy way to "find" a pattern without reading the bytes (using ReadBytes) and implementing the search yourself.
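The back-scan the question describes can also be done over the raw bytes. A sketch in Python of the same idea (file name and markers are from the question; the decoding is my assumption): find each end marker, search backwards for the nearest begin marker, and decode what sits between them.
begin = bytes.fromhex("2F25A01C1000000000")  # /% ......
end = bytes.fromhex("000039")                # ..9

data = open("bc03160ee1a59fc1.automaticDestinations-ms", "rb").read()
pos = 0
while True:
    e = data.find(end, pos)
    if e == -1:
        break
    b = data.rfind(begin, 0, e)  # back up to the nearest begin marker
    if b != -1:
        # the bytes in between should hold the path; the encoding is a guess
        print(data[b + len(begin):e].decode("latin-1", "replace"))
    pos = e + len(end)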
I am trying to extract the 4th column from a CSV file (comma separated, skipping the first 2 header lines) using this command:
awk 'NR <2 {next}{FS =","}{print $4}' filename.csv | more
However, it doesn't work because the first column contains commas, so the 4th field is not really the 4th column. Below is an example of a row:
"sdfsdfsd, sfsdf", 454,fgdfg, I_want_this_column,sdfgdg,34546, 456465, etc
Unless you have specific reasons for using awk, I would recommend using a CSV parsing library. Many scripting languages have one built-in (or at least available) and they'll save you from these headaches.
If your first column always has quotes:
$ awk 'BEGIN{ FS="\042[ ]*," } { m=split($2,a,","); print a[3] } ' file
I_want_this_column
If the column you want is always the second to last:
$ awk -F"," '{print $(NF-1)}' file
I_want_this_column
You can try this demo script to break down the columns:
awk 'BEGIN{ FS="," }
{
for(i=1;i<=NF;i++){
# save normal
if($i !~ /^[ ]*\042|[ ]*\042[ ]*$/){
a[++j]=$i
}
# if quotes at the end
if(f==1 && $i ~ /[ ]*\042[ ]*$/){
s=s","$i
a[++j]=s
#reset
s="";f=0
}
# if quotes in front
if($i ~ /^[ ]*\042/){
s=s $i
f=1
}
if(f==1 && ( $i !~/\042/ ) ){
s=s","$i
}
}
}
END{
# print columns
for(p=1;p<=j;p++){
print "Field "p,": "a[p]
}
} ' file
output
$ cat file
"sdfsdfsd, sfsdf", "454,fgdfg blah , words ", I_want_this_column,sdfgdg
$ ./shell.sh
Field 1 : "sdfsdfsd, sfsdf"
Field 2 : fgdfg blah
Field 3 : "454,fgdfg blah , words "
Field 4 : I_want_this_column
Field 5 : sdfgdg
You shouldn't use awk here. Use Python's csv module, Perl's Text::CSV or Text::CSV_XS modules, or another real CSV parser.
Related question -
parse csv file using gawk
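For this question, the csv module version is a few lines; a minimal sketch that skips the two header lines and prints the 4th column, with quoted commas handled correctly:
import csv

with open("filename.csv", newline="") as fh:
    reader = csv.reader(fh, skipinitialspace=True)
    next(reader); next(reader)  # skip the first 2 header lines
    for row in reader:
        print(row[3])           # 4th column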
If you can't avoid awk, this piece of code does the job you need:
BEGIN {FS=",";}
{
f=0;
j=0;
for (i = 1; i <=NF ; ++i) {
if (f) {
a[j] = a[j] "," $(i);
if ($(i) ~ "\"$") {
f = 0;
}
}
else {
++j;
a[j] = $(i);
if ((a[j] ~ "^\"[^\"]*$")) {
f = 1;
}
}
}
for (i = 1; i <= j; ++i) {
gsub("^\"","",a[i]);
gsub("\"$","",a[i]);
gsub("\"\"","\"",a[i]);
print "i = \"" a[i] "\"";
}
}
Working with CSV files that have quoted fields containing commas can be difficult with the standard UNIX text tools.
I wrote a program called csvquote to make the data easy for those tools to handle. In your case, you could use it like this:
csvquote filename.csv | awk 'NR <2 {next}{FS =","}{print $4}' | csvquote -u | more
or you could use cut and tail like this:
csvquote filename.csv | tail -n +3 | cut -d, -f4 | csvquote -u | more
The code and docs are here: https://github.com/dbro/csvquote