In AudioKit, notes are identified by the MIDINoteNumber type, which is an alias for UInt8. How would I translate musical notes to MIDINoteNumber?
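The translation is arithmetic: in the common convention where middle C (C4) is note 60 and A4 is 69, the MIDI number is `(octave + 1) * 12` plus the semitone offset within the octave. A minimal sketch in plain Swift (no AudioKit import needed, since `MIDINoteNumber` is just a `UInt8` alias; the `midiNote` helper name is mine, not an AudioKit API):

```swift
typealias MIDINoteNumber = UInt8  // AudioKit's alias

// Semitone offset of each note name within an octave (C = 0).
let semitones: [String: Int] = [
    "C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3,
    "E": 4, "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8,
    "Ab": 8, "A": 9, "A#": 10, "Bb": 10, "B": 11
]

// Convention: C-1 is note 0, so C4 (middle C) = 60 and A4 = 69.
func midiNote(_ name: String, octave: Int) -> MIDINoteNumber? {
    guard let offset = semitones[name] else { return nil }
    let value = (octave + 1) * 12 + offset
    guard (0...127).contains(value) else { return nil }  // MIDI range
    return MIDINoteNumber(value)
}

print(midiNote("A", octave: 4) ?? 0)  // 69 (concert A)
print(midiNote("C", octave: 4) ?? 0)  // 60 (middle C)
```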
I would like to read Japanese characters from a scanned image using Swift's Vision framework. However, when I attempt to set the recognition language of VNRecognizeTextRequest to Japanese using
request.recognitionLanguages = ["ja", "en"]
the output of my program becomes nonsensical Roman letters. For every image of Japanese text, the recognized text output is unexpected. When set to other languages such as Chinese or German, however, the output is as expected. What could be causing the unexpected output, seemingly peculiar to Japanese?
I am building from the GitHub project here.
As they said in the WWDC 2019 video, Text Recognition in Vision Framework:
First, a prerequisite, you need to check the languages that are supported by language-based correction...
Look at supportedRecognitionLanguages for VNRecognizeTextRequestRevision2 for “accurate” recognition, and it would appear that the supported languages are:
["en-US", "fr-FR", "it-IT", "de-DE", "es-ES", "pt-BR", "zh-Hans", "zh-Hant"]
If you use “fast” recognition, the list is shorter:
["en-US", "fr-FR", "it-IT", "de-DE", "es-ES", "pt-BR"]
And if you fall back to VNRecognizeTextRequestRevision1, it is shorter still:
["en-US"]
It would appear that Japanese is not a supported language at this point.
Vision supports more languages since the update to macOS Ventura. You need to rebuild the app using Xcode 14:
try VNRecognizeTextRequest().supportedRecognitionLanguages()
["en-US", "fr-FR", "it-IT", "de-DE", "es-ES", "pt-BR", "zh-Hans", "zh-Hant", "yue-Hans", "yue-Hant", "ko-KR", "ja-JP", "ru-RU", "uk-UA"]
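Putting it together, a sketch of a request configured for Japanese on a Ventura-era SDK (the image handler at the end is commented out and assumed; the supported-language check guards against running on an OS where Japanese is still unavailable):

```swift
import Vision

let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        // Take the top candidate for each detected text region.
        if let candidate = observation.topCandidates(1).first {
            print(candidate.string)
        }
    }
}
request.recognitionLevel = .accurate          // Japanese requires "accurate"
request.recognitionLanguages = ["ja", "en"]

// Verify the language is actually supported before performing the request:
if let supported = try? request.supportedRecognitionLanguages(),
   !supported.contains(where: { $0.hasPrefix("ja") }) {
    print("Japanese is not supported on this OS / request revision")
}

// let handler = VNImageRequestHandler(cgImage: image)
// try handler.perform([request])
```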
This question already has an answer here:
Swift how to sort dict keys by byte value and not alphabetically?
Consider the following predicate
print("S" > "g")
Running this in Xcode yields false, whereas running it on the online compiler of tutorialspoint or, e.g., the IBM Swift Sandbox (Swift Dev. 4.0 (Sep 5, 2017) / Platform: Linux (x86_64)) yields true.
How come the predicate gives a different result on the online compilers (Linux?) as compared to Xcode?
This is a known open "bug" (or perhaps rather a known limitation):
SR-530 - [String] sort order varies on Darwin vs. Linux
Quoting Dave Abrahams' comment to the open bug report:
This will mostly be fixed by the new string work, wherein String's
default sort order will be implemented as a lexicographical ordering
of FCC-normalized UTF16 code units.
Note that on both platforms we rely on ICU for normalization services,
and normalization differences among different implementations of ICU
are a real possibility, so there will never be a guarantee that two
arbitrary strings sort the same on both platforms.
However, for Latin-1 strings such as those in the example, the new
work will fix the problem.
Moreover, from the String Manifesto:
Comparing and Hashing Strings
...
Following this scheme everywhere would also allow us to make sorting
behavior consistent across platforms. Currently, we sort String
according to the UCA, except that--only on Apple platforms--pairs of
ASCII characters are ordered by unicode scalar value.
Most likely: in the OP's particular example (covering solely ASCII characters), comparison according to the UCA (Unicode Collation Algorithm) is used on Linux platforms, whereas on Apple platforms the sorting of these single-ASCII-character String values (or String instances starting with ASCII characters) is by Unicode scalar value.
// Decimal Unicode scalar value
print("S".unicodeScalars.first!.value) // 83
print("g".unicodeScalars.first!.value) // 103

// Hexadecimal Unicode scalar value
print(String(format: "%04X", "S".unicodeScalars.first!.value)) // 0053
print(String(format: "%04X", "g".unicodeScalars.first!.value)) // 0067
print("S" < "g") // 'true' on Apple platforms (comparison by unicode scalar value),
// 'false' on Linux platforms (comparison according to UCA)
See also the excellent accepted answer to the following Q&A:
What does it mean that string and character comparisons in Swift are not locale-sensitive?
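If the goal is an ordering that is identical on Darwin and Linux, one can sidestep String's default comparison and compare Unicode scalar values explicitly. A small sketch (the `scalarOrdered` helper name is mine):

```swift
// Platform-independent ordering: compare strings by their raw Unicode
// scalar values instead of relying on String's default `<`.
func scalarOrdered(_ lhs: String, _ rhs: String) -> Bool {
    lhs.unicodeScalars.lexicographicallyPrecedes(rhs.unicodeScalars) {
        $0.value < $1.value
    }
}

print(scalarOrdered("S", "g"))  // true everywhere (83 < 103)

// Sorting with it gives the same order on every platform:
let keys = ["g", "S", "a", "B"]
print(keys.sorted(by: scalarOrdered))  // ["B", "S", "a", "g"]
```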
We are using the ZXing API for generating a UPC-A barcode image from a string. But that image contains only the bars, not the human-readable digits underneath, as in the image at the link below:
https://postimg.org/image/7t66lqa83/
So please suggest how to generate a barcode image with both the bars and the printed digits.
UPC-A barcodes are designed for scanning products at a shop's checkout. They encode a worldwide-unique 12-digit number for the product (13 digits for EAN-13 or GTIN-13, which is compatible with UPC-A). Product numbers are assigned by the international organization GS1.
Therefore, UPC-A barcodes should not be used for anything other than encoding GS1 product numbers. And they are technically unable to encode arbitrary strings; they can only hold 12 digits.
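Not even the 12 digits are free-form: the last one is a check digit computed from the first 11, which is part of why arbitrary payloads cannot be encoded. A sketch of the standard check-digit computation (the function names are mine, not the ZXing API):

```swift
// UPC-A check digit: weigh odd positions (1st, 3rd, ... 11th) by 3 and
// even positions by 1, sum, and take (10 - sum mod 10) mod 10.
func upcaCheckDigit(of digits: [Int]) -> Int {
    let sum = digits.prefix(11).enumerated().reduce(0) { acc, pair in
        acc + pair.element * (pair.offset % 2 == 0 ? 3 : 1)
    }
    return (10 - sum % 10) % 10
}

// Validates a full 12-digit UPC-A code against its check digit.
func isValidUPCA(_ code: String) -> Bool {
    let digits = code.compactMap { $0.wholeNumberValue }
    // Both counts must be 12, so non-digit characters are rejected too.
    guard digits.count == 12, code.count == 12 else { return false }
    return upcaCheckDigit(of: digits) == digits[11]
}

print(isValidUPCA("036000291452"))  // true
print(isValidUPCA("036000291453"))  // false (wrong check digit)
```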
I used OpenEars for my app, just to recognize the letters "a" to "z".
But it recognizes single letters much worse than it recognizes words.
So, how can I use my own sound model to improve the recognition of OpenEars?
And how can I use OpenEars to recognize special sounds?
For example: I give OpenEars a dog sound and I want it to give me back "dog".
So this is a two-part question which might be better split up for the community. OpenEars, from what I understand, is best suited to recognizing words in its dictionary. If you want it to recognize alphabet letters, I would try using the phonetic spelling of each letter instead of the letter itself. So instead of using 'f', use "ef".
As for the second part of the question: you might be able to recognize specific types of dogs which go "ruff", but smaller dogs with more of a "yip!" would have to be added to the initial dictionary as well.
I would get the demo app and really just experiment with these words.
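A small sketch of the phonetic-spelling idea in plain Swift: build the recognizer's vocabulary from letter names, then map a recognized word back to its letter. Only a few letters are shown; extend the table to all 26 as needed (the table and names here are mine, not part of OpenEars):

```swift
// Letter -> phonetic name given to the speech recognizer.
let letterNames: [String: String] = [
    "a": "ay", "b": "bee", "c": "see",
    "d": "dee", "e": "ee", "f": "ef"
]

// Words to hand to OpenEars instead of bare letters.
let vocabulary = Array(letterNames.values)

// Reverse lookup: recognized word -> letter.
let wordToLetter = Dictionary(
    uniqueKeysWithValues: letterNames.map { ($0.value, $0.key) }
)

print(wordToLetter["ef"] ?? "?")  // f
```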