How to get a single video from YouTube using its id? - youtube-api
I am trying to use the YouTube Data API v3 to get a single video from its YouTube id. I am using the Google API python client. When I try to execute the following code:
search_response = youtube.search().list(
    id="QkhBcLk_8f0",
    part="snippet",
).execute()
I always get this error:
Traceback (most recent call last):
File "search.py", line 55, in <module>
youtube_search(args)
File "search.py", line 24, in youtube_search
part="snippet",
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery.py",
line 669, in method
raise TypeError('Got an unexpected keyword argument "%s"' % name)
TypeError: Got an unexpected keyword argument "id"
But I know there is an id parameter in the API; it is listed in the reference.
Does anybody know what I am doing wrong?
You should use the youtube.videos() method instead; search().list() does not take an id parameter, but videos().list() does:
youtube.videos().list(id="QkhBcLk_8f0", part="snippet").execute()
which returns:
{u'etag': u'"YxyobdYztCvdjXOUqpUttvF39GM/nr0_6QcUDyGR0D_Gxz762lsqqfU"',
u'items': [{u'etag': u'"YxyobdYztCvdjXOUqpUttvF39GM/n7wasYBTHMM3SB4Jtsxu6JeDFPA"',
u'id': u'QkhBcLk_8f0',
u'kind': u'youtube#video',
u'snippet': {u'categoryId': u'27',
u'channelId': u'UCzu2OUGZlNtXPom3KNPGFzg',
u'channelTitle': u'FFreeThinker',
u'description': u'http://facebook.com/ScienceReason ... Great Minds, Great Words: Richard Feynman - The Uncertainty of Knowledge ... The Nature and Purpose of the Universe.\n\nPlaylist "Great Minds, Great Words":\n\u2022 http://www.youtube.com/user/FFreeThinker#grid/user/CC4F721030F8D4D1\n\n---\nPlease SUBSCRIBE to Science & Reason:\n\u2022 http://www.youtube.com/FFreeThinker\n\u2022 http://www.youtube.com/ScienceTV\n\u2022 http://www.youtube.com/Best0fScience\n\u2022 http://www.youtube.com/RationalHumanism\n---\n\nRichard Feynman (1918-1988) was an American physicist known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics (he proposed the parton model).\n\nFor his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger and Sin-Itiro Tomonaga, received the Nobel Prize in Physics in 1965. He developed a widely used pictorial representation scheme for the mathematical expressions governing the behavior of subatomic particles, which later became known as Feynman diagrams. During his lifetime, Feynman became one of the best-known scientists in the world.\n\nHe assisted in the development of the atomic bomb and was a member of the panel that investigated the Space Shuttle Challenger disaster. In addition to his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing, and introducing the concept of nanotechnology (creation of devices at the molecular scale). He held the Richard Chace Tolman professorship in theoretical physics at the California Institute of Technology.\n\nFeynman was a keen popularizer of physics through both books and lectures, notably a 1959 talk on top-down nanotechnology called "There\'s Plenty of Room at the Bottom" and "The Feynman Lectures on Physics". Feynman also became known through his semi-autobiographical books ("Surely You\'re Joking, Mr. Feynman!" and "What Do You Care What Other People Think?") and books written about him, such as "Tuva or Bust!"\n\nHe was regarded as an eccentric and free spirit. He was a prankster, juggler, safecracker, proud amateur painter, and bongo player. He liked to pursue a variety of seemingly unrelated interests, such as art, percussion, Maya hieroglyphs, and lock picking.\n\nFeynman also had a deep interest in biology, and was a friend of the geneticist and microbiologist Esther Lederberg, who developed replica plating and discovered bacteriophage lambda. They had several mutual physicist friends who, after beginning their careers in nuclear research, moved for moral reasons into genetics, among them Le\xf3 Szil\xe1rd, Guido Pontecorvo, and Aaron Novick.\n\n\u2022 http://en.wikipedia.org/wiki/Richard_Feynman\n.',
u'liveBroadcastContent': u'none',
u'localized': {u'description': u'http://facebook.com/ScienceReason ... Great Minds, Great Words: Richard Feynman - The Uncertainty of Knowledge ... The Nature and Purpose of the Universe.\n\nPlaylist "Great Minds, Great Words":\n\u2022 http://www.youtube.com/user/FFreeThinker#grid/user/CC4F721030F8D4D1\n\n---\nPlease SUBSCRIBE to Science & Reason:\n\u2022 http://www.youtube.com/FFreeThinker\n\u2022 http://www.youtube.com/ScienceTV\n\u2022 http://www.youtube.com/Best0fScience\n\u2022 http://www.youtube.com/RationalHumanism\n---\n\nRichard Feynman (1918-1988) was an American physicist known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics (he proposed the parton model).\n\nFor his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger and Sin-Itiro Tomonaga, received the Nobel Prize in Physics in 1965. He developed a widely used pictorial representation scheme for the mathematical expressions governing the behavior of subatomic particles, which later became known as Feynman diagrams. During his lifetime, Feynman became one of the best-known scientists in the world.\n\nHe assisted in the development of the atomic bomb and was a member of the panel that investigated the Space Shuttle Challenger disaster. In addition to his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing, and introducing the concept of nanotechnology (creation of devices at the molecular scale). He held the Richard Chace Tolman professorship in theoretical physics at the California Institute of Technology.\n\nFeynman was a keen popularizer of physics through both books and lectures, notably a 1959 talk on top-down nanotechnology called "There\'s Plenty of Room at the Bottom" and "The Feynman Lectures on Physics". Feynman also became known through his semi-autobiographical books ("Surely You\'re Joking, Mr. Feynman!" and "What Do You Care What Other People Think?") and books written about him, such as "Tuva or Bust!"\n\nHe was regarded as an eccentric and free spirit. He was a prankster, juggler, safecracker, proud amateur painter, and bongo player. He liked to pursue a variety of seemingly unrelated interests, such as art, percussion, Maya hieroglyphs, and lock picking.\n\nFeynman also had a deep interest in biology, and was a friend of the geneticist and microbiologist Esther Lederberg, who developed replica plating and discovered bacteriophage lambda. They had several mutual physicist friends who, after beginning their careers in nuclear research, moved for moral reasons into genetics, among them Le\xf3 Szil\xe1rd, Guido Pontecorvo, and Aaron Novick.\n\n\u2022 http://en.wikipedia.org/wiki/Richard_Feynman\n.',
u'title': u'Great Minds: Richard Feynman - The Uncertainty Of Knowledge'},
u'publishedAt': u'2010-03-04T15:12:56.000Z',
u'tags': [u'great',
u'minds',
u'words',
u'richard',
u'feynman',
u'uncertainty',
u'knowledge',
u'nature',
u'purpose',
u'universe',
u'god',
u'religion',
u'atheists',
u'atheism',
u'science',
u'physicists',
u'quantum',
u'mechanics',
u'electrodynamics',
u'superfluidity',
u'nobel',
u'prize',
u'theoretical',
u'physics',
u'atomic',
u'bomb',
u'space',
u'nano',
u'technology'],
u'thumbnails': {u'default': {u'height': 90,
u'url': u'https://i.ytimg.com/vi/QkhBcLk_8f0/default.jpg',
u'width': 120},
u'high': {u'height': 360,
u'url': u'https://i.ytimg.com/vi/QkhBcLk_8f0/hqdefault.jpg',
u'width': 480},
u'maxres': {u'height': 720,
u'url': u'https://i.ytimg.com/vi/QkhBcLk_8f0/maxresdefault.jpg',
u'width': 1280},
u'medium': {u'height': 180,
u'url': u'https://i.ytimg.com/vi/QkhBcLk_8f0/mqdefault.jpg',
u'width': 320},
u'standard': {u'height': 480,
u'url': u'https://i.ytimg.com/vi/QkhBcLk_8f0/sddefault.jpg',
u'width': 640}},
u'title': u'Great Minds: Richard Feynman - The Uncertainty Of Knowledge'}}],
u'kind': u'youtube#videoListResponse',
u'pageInfo': {u'resultsPerPage': 1, u'totalResults': 1}}
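For reference, here is a minimal end-to-end sketch of the same call; the API key is a placeholder, and it assumes the google-api-python-client package is installed:
# Minimal sketch: fetch a single video's snippet by its id.
# "YOUR_API_KEY" is a placeholder; substitute your own key.
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

response = youtube.videos().list(
    id="QkhBcLk_8f0",
    part="snippet",
).execute()

for item in response["items"]:  # the list is empty if the id is unknown
    print(item["snippet"]["title"])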
Related
How would someone create a machine learning algorithm that extracts the speaker from a book/novel?
Basically, it organizes the content by speaker. Excerpt from: Robert Louis Stevenson, "The Strange Case of Dr. Jekyll and Mr. Hyde."
Example input:
But Lanyon's face changed, and he held up a trembling hand. "I wish to see or hear no more of Dr. Jekyll," he said in a loud, unsteady voice. "I am quite done with that person; and I beg that you will spare me any allusion to one whom I regard as dead.
Example output:
[ "Narrator": "But Lanyon's face changed, and he held up a trembling hand.", "Lanyon": "I wish to see or hear no more of Dr. Jekyll", "Narrator": "he said in a loud, unsteady voice.", "Lanyon": "I am quite done with that person; and I beg that you will spare me any allusion to one whom I regard as dead." ]
I have not heard of an algorithm that does exactly this, but there are two well-known problems that could be useful: named entity recognition (to find all potential speakers) and anaphora resolution (to decide who "he" or "she" is in each case). You would also need to train a classifier that decides, for each quoted chunk of text, whether it is direct speech, and probably another classifier that decides, for each identified piece of speech and each candidate speaker in the context, how likely it is that the speech actually belongs to that speaker.
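For the named entity recognition step, here is a minimal sketch using spaCy (my own choice of library; the answer above does not name one), run on the excerpt from the question:
# NER sketch: find candidate speakers (PERSON entities) in the excerpt.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ('But Lanyon\'s face changed, and he held up a trembling hand. '
        '"I wish to see or hear no more of Dr. Jekyll," he said in a '
        'loud, unsteady voice.')

for ent in nlp(text).ents:
    if ent.label_ == "PERSON":  # candidate speakers
        print(ent.text)

# Deciding which PERSON the pronoun "he" refers to is the separate
# anaphora-resolution step described above.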
Zener Diode - What constitutes "Similar?"
I have very little experience with ECE in general, and I am delving into using an Arduino for some small hobby projects. I was following an online guide, and the person who wrote it says that I need: "2 - 1N5227 or similar 3.6V biased zener diodes". I have read up a bit on Zener diodes and now understand what they do and what their purpose is, but I am not able to tell what he means by "similar" in this context. I purchased a diode kit that includes 4 types of Zener diodes. They all have different part numbers and voltages. The 4 I have are:
1N751 (5.1 V)
1N4733 (5.1 V)
1N4735 (6.2 V)
1N4742 (12 V)
Would any of those be usable in this context, or should I order the specific model he states? The guide being referenced, if it is helpful: http://www.instructables.com/id/RC-Transmitter-to-USB-Gamepad-Using-Arduino/ I really appreciate the time and assistance; this is a fun area to learn in!
In electronics and other engineering areas, "similar" refers to the property that matters most (in this case the voltage): you are looking for another Zener diode whose voltage is close to the one specified. As far as I can see, none of the diodes you have replaces the one in the guide.
Zener diodes have two parameters you need to match when selecting a replacement (regardless of manufacturer): the Zener voltage (Vz) and the power rating. For your application you will need a 3.6 V Zener diode, and usually 1/4 W to 1/2 W (depending on the power your application needs) will be enough. You also need to calculate the limiting resistor for the Zener diode. I recommend reading a book by Albert Paul Malvino, or something similar, to understand this better. Regards.
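As a worked example of that limiting-resistor calculation (the supply voltage and target current below are illustrative assumptions, not values from the guide):
# Worked example: series (limiting) resistor for a Zener regulator.
# v_in and i_z are assumed values for illustration only.
v_in = 5.0   # supply voltage (V), assumed
v_z = 3.6    # Zener voltage (V), per the 1N5227
i_z = 0.010  # desired current through the Zener (A), assumed 10 mA

r_limit = (v_in - v_z) / i_z  # Ohm's law across the series resistor
p_zener = v_z * i_z           # worst-case power dissipated in the Zener

print("R = %.0f ohm" % r_limit)                   # 140 ohm
print("Zener power = %.0f mW" % (p_zener * 1e3))  # 36 mW, well under 1/4 W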
Accuracy of IBM Watson speech recognition is low
I am developing an application that uses speech-to-text to transcribe audio to text. The accuracy is low; some sentences have no meaning. Is there a way to improve the accuracy of the speech-to-text? Here's an example (http://book.vidalab.co/books/alice-in-wonderland, Alice in Wonderland, section 2):
"over at home to go white pawn this way you see ads" should be "over at home to go white pawn this way you see Alice"
"rat in white" should be "red and white"
"and the white army tries to win and the red on the Trice twin" should be "and the white army tries to win and the red army tries to win"
You can try different services, for example Speechmatics. It's not very good at identifying speakers, but the words are much more accurate than Watson's. The result looks like this:
Credits of Alice in Wonderland by Alice girs Timberg this is a box recording all of her vocal recordings are in the public domain for more information or volunteer. Please visit libber Vox dot org. I just listed stage directions read by McKayla Curtis Lewis Carroll. Read by Shannon Brown Alice read by Amanda Friday the Red Queen read by Shauna canat White Queen read by Elizabeth Klatt White Rabbit read by Todd Humpty Dumpty read by Jeff Machado written read by Brett Hirsch. The Mock Turtle read by Ted the alarm Mad Hatter read by Elliot gage the March Hare by Charlotte Duckett's dormouse read by Kimberly Krauss frog read by Larry Wilson Duchess read by L.A. Cheshire Cat read by Sarah Herschell Tweedle-Dee read. By Charlotte Brown. Do you do do I read by the sea a solo the King of Hearts read by Ted alarm the Queen of Hearts read by eating Ray Headrick knave by glorious Joe Carter pillar back at 2 loss to spot read by Dave Harris. Five Spot read by Dave Harith. Seven of spades read by Dave Hereth end of credits.
Surname recognition is a very complex task; not many companies do it properly.
There are two major parts in any STT system: the acoustic model and the language model. The first one is about the audio and the speaker, and handles things like noise, pronunciation, and accents. The language model is about the structure of a given language and the words used in the speech. If you would like to test an STT system, use recordings that are as close as possible to your target speech. A system that performs very well for general speech or, for example, medical transcription may not be good at handling speech about archaeology or poetry.
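One concrete way to adapt the language-model side in Watson's case is a custom language model trained on in-domain text (for this example, the book itself). A rough sketch with the ibm-watson Python SDK; the key, URL, file name, and customization id are all placeholders:
# Sketch: recognize audio with a custom language model attached.
# All credentials and ids below are placeholders.
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

stt = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_API_KEY"))
stt.set_service_url("YOUR_SERVICE_URL")

with open("alice_chapter_2.mp3", "rb") as audio:
    result = stt.recognize(
        audio=audio,
        content_type="audio/mp3",
        model="en-US_BroadbandModel",
        # trained on in-domain text so names like "Alice" are in-vocabulary
        language_customization_id="YOUR_CUSTOM_MODEL_ID",
    ).get_result()

for chunk in result["results"]:
    print(chunk["alternatives"][0]["transcript"])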
How to discover common information blocks from multiple web pages of a same website?
This is a pattern-recognition task in a web crawler. A traditional crawler gets the data of the whole page. Is there any way to make the crawler a little intelligent, i.e. to identify and capture just the information part?
This is a research problem called wrapper induction or web data extraction. I don't know of any library for this, but there are a lot of research papers (see below for a list of good ones, IMHO) and some research projects like DIADEM (their site contains a list of publications as well).
Muslea, Ion, Steven Minton, and Craig A. Knoblock. “Hierarchical Wrapper Induction for Semistructured Information Sources.” Autonomous Agents and Multi-Agent Systems 4, no. 1–2 (2001): 93–114.
Dalvi, Nilesh, Ravi Kumar, and Mohamed Soliman. “Automatic Wrappers for Large Scale Web Extraction.” Proceedings of the VLDB Endowment 4, no. 4 (2011): 219–230.
Dalvi, Nilesh, Ashwin Machanavajjhala, and Bo Pang. “An Analysis of Structured Data on the Web.” Proceedings of the VLDB Endowment 5, no. 7 (2012): 680–691.
Gentile, Anna Lisa, Ziqi Zhang, Isabelle Augenstein, and Fabio Ciravegna. “Unsupervised Wrapper Induction Using Linked Data.” In Proceedings of the Seventh International Conference on Knowledge Capture, 41–48, 2013.
Weninger, Tim, and Jiawei Han. “Exploring Structure and Content on the Web: Extraction and Integration of the Semi-Structured Web.” In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, 779–780, 2013. http://dl.acm.org/citation.cfm?id=2433499
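To give a feel for the core idea behind these papers, here is a deliberately naive sketch: text blocks shared by two pages of the same site are treated as template (navigation, footer, sidebars), and blocks unique to a page as its information part. The URLs are placeholders, and real wrapper induction is far more robust than this:
# Naive template-vs-content sketch (requires requests and beautifulsoup4).
import requests
from bs4 import BeautifulSoup

def text_blocks(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    # keep reasonably long text runs as candidate blocks
    return {t for t in soup.stripped_strings if len(t) > 30}

page_a = text_blocks("https://example.com/article-1")  # placeholder
page_b = text_blocks("https://example.com/article-2")  # placeholder

# Blocks on both pages are likely template; the remainder is content.
for block in sorted(page_a - page_b):
    print(block[:80])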
Apache Spark ALS collaborative filtering results. They don't make sense
I wanted to try out Spark for collaborative filtering using MLlib, as explained in this tutorial: https://databricks-training.s3.amazonaws.com/movie-recommendation-with-mllib.html The algorithm is based on the paper "Collaborative Filtering for Implicit Feedback Datasets", doing matrix factorization. Everything is up and running using the 10 million MovieLens data set, split into 80% training, 10% test, and 10% validation.
RMSE Baseline: 1.060505464225402
RMSE (train) = 0.7697248827452756
RMSE (validation) = 0.8057135933012889
for the model trained with rank = 24, lambda = 0.1, and iterations = 10. The best model improves the baseline by 23.94%. These values are similar to the tutorial's, although with different training parameters. I tried running the algorithm several times and always got recommendations that don't make any sense to me. Even rating only kids' movies I get the following results. For ratings:
personal rating: Toy Story (1995) rating: 4.0
personal rating: Jungle Book, The (1994) rating: 5.0
personal rating: Lion King, The (1994) rating: 5.0
personal rating: Mary Poppins (1964) rating: 4.0
personal rating: Alice in Wonderland (1951) rating: 5.0
Results:
Movies recommended for you:
Life of Oharu, The (Saikaku ichidai onna) (1952)
More (1998)
Who's Singin' Over There? (a.k.a. Who Sings Over There) (Ko to tamo peva) (1980)
Sundays and Cybele (Dimanches de Ville d'Avray, Les) (1962)
Blue Light, The (Das Blaue Licht) (1932)
Times of Harvey Milk, The (1984)
Please Vote for Me (2007)
Man Who Planted Trees, The (Homme qui plantait des arbres, L') (1987)
Shawshank Redemption, The (1994)
Only Yesterday (Omohide poro poro) (1991)
Except for Only Yesterday, this doesn't seem to make any sense. If anyone out there knows how to interpret these results or get better ones, I would really appreciate you sharing your knowledge. Best regards
EDIT: As suggested, I trained another model with more factors:
Baseline error: 1.0587417035872992
RMSE (train) = 0.7679883378412548
RMSE (validation) = 0.8070339258049574
for the model trained with rank = 100, lambda = 0.1, and numIter = 10. And different personal ratings:
personal rating: Star Wars: Episode VI - Return of the Jedi (1983) rating: 5.0
personal rating: Mission: Impossible (1996) rating: 4.0
personal rating: Die Hard: With a Vengeance (1995) rating: 4.0
personal rating: Batman Forever (1995) rating: 5.0
personal rating: Men in Black (1997) rating: 4.0
personal rating: Terminator 2: Judgment Day (1991) rating: 4.0
personal rating: Top Gun (1986) rating: 4.0
personal rating: Star Wars: Episode V - The Empire Strikes Back (1980) rating: 3.0
personal rating: Alien (1979) rating: 4.0
The recommended movies are:
Movies recommended for you:
Carmen (1983)
Silent Light (Stellet licht) (2007)
Jesus (1979)
Life of Oharu, The (Saikaku ichidai onna) (1952)
Heart of America (2003)
For the Bible Tells Me So (2007)
More (1998)
Legend of Leigh Bowery, The (2002)
Funeral, The (Ososhiki) (1984)
Longshots, The (2008)
Not one useful result.
EDIT2: Using the implicit feedback method, I get much better results! With the same action movies as above, the recommendations are:
Movies recommended for you:
Star Wars: Episode IV - A New Hope (a.k.a. Star Wars) (1977)
Terminator, The (1984)
Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
Die Hard (1988)
Godfather, The (1972)
Aliens (1986)
Rock, The (1996)
Independence Day (a.k.a. ID4) (1996)
Star Trek II: The Wrath of Khan (1982)
GoldenEye (1995)
That's more like what I expected! The question is why the explicit version is so, so, so bad.
Note that the code you are running does not use implicit feedback and is not quite the algorithm you refer to; just make sure you are not using ALS.trainImplicit. You may need different lambda and rank values. An RMSE of 0.88 is "OK" for this data set; I am not clear whether the example's values are optimal or just the ones the toy test produced, and you use a different value still here. Maybe it's just not optimal yet. It could even be something like bugs in the ALS implementation that have since been fixed; try comparing to another implementation of ALS if you can.
I always try to resist rationalizing recommendations, since our brains inevitably find some explanation even for random recommendations. But, hey, I can say that you did not get action, horror, crime drama, or thrillers here. I find that taste for kids' movies goes hand in hand with taste for arty movies: the kind of people who filled out their tastes for MovieLens way back when and rated kids' movies were not actually kids but parents, and perhaps software-engineer types old enough to have kids do tend to watch these sorts of foreign films.
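To make the distinction concrete, here is a minimal PySpark MLlib sketch of both calls; the ratings path, hyperparameters, and user id are placeholders:
# Sketch: explicit ALS.train vs. ALS.trainImplicit in Spark MLlib.
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="als-sketch")

# MovieLens-style lines "userId::movieId::rating::timestamp" (placeholder path)
ratings = (sc.textFile("ratings.dat")
             .map(lambda line: line.split("::"))
             .map(lambda f: Rating(int(f[0]), int(f[1]), float(f[2]))))

# Explicit feedback: models the rating values themselves.
explicit_model = ALS.train(ratings, rank=24, iterations=10, lambda_=0.1)

# Implicit feedback: treats ratings as confidence of interaction
# (the "Collaborative Filtering for Implicit Feedback Datasets" approach).
implicit_model = ALS.trainImplicit(ratings, rank=24, iterations=10,
                                   lambda_=0.1, alpha=0.01)

# Top-10 recommendations for (hypothetical) user 0:
for rec in implicit_model.recommendProducts(0, 10):
    print(rec.product, rec.rating)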
Collaborative filtering just gives you items that people who have the same taste as you really like. If you rate only kids' movies, it doesn't mean that you will get recommended only kids' movies; it just means that people who rated Toy Story, Jungle Book, Lion King, etc. as you did also liked Life of Oharu, More, Who's Singin' Over There?, etc. There is a good animation on the Wikipedia page for collaborative filtering. I didn't read the link that you gave, but one thing you can change, if you want to stay with collaborative filtering, is the similarity measure you are using. If you want recommendations based on your own taste, you might try a latent factor model like matrix factorization. Here the latent factors might discover features that describe the characteristics of the rated objects; it might be that a movie is comedy, children's, horror, etc. (you never really know what the latent factors are, by the way). And if you only rate kids' movies, you might get other kids' movies as recommendations. Hope it helps.
I second what Vlad said: try correlation or Jaccard, i.e. ignore the rating numbers and just look at the binary "are these two movies together in a user's preference list or not". This was a game-changer for me when I was building my first recommender: http://tdunning.blogspot.com/2008/03/surprise-and-coincidence.html Good luck!
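A toy sketch of that binary co-occurrence idea (the ratings data is made up for illustration):
# Jaccard similarity over "who rated this movie" sets; made-up data.
def jaccard(users_a, users_b):
    union = users_a | users_b
    return len(users_a & users_b) / float(len(union)) if union else 0.0

# movie -> set of users who rated it (hypothetical)
rated_by = {
    "Toy Story": {1, 2, 3, 5, 8},
    "Lion King": {1, 2, 3, 5},
    "Alien":     {4, 6, 7},
}

print(jaccard(rated_by["Toy Story"], rated_by["Lion King"]))  # 0.8, similar
print(jaccard(rated_by["Toy Story"], rated_by["Alien"]))      # 0.0, disjoint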
I have tried using the same dataset, and following this Spark tutorial I get the same (subjectively bad) results. However, using a simpler method instead of matrix factorization - for instance one based on Pearson correlation as the similarity measure - I get much, much better results: with your input preferences and the same input ratings file, I would mostly get kids' movies. Unless you really need the factorization (which does have a lot of advantages), I would suggest using another recommendation method.
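For what it's worth, a toy sketch of item-item similarity via Pearson correlation over co-rating users (made-up data; a real implementation would also down-weight movie pairs with few common raters):
# Item-item Pearson similarity; requires scipy. Data is hypothetical.
from scipy.stats import pearsonr

# user -> {movie: rating}
ratings = {
    1: {"Toy Story": 4.0, "Lion King": 5.0, "Alien": 1.0},
    2: {"Toy Story": 5.0, "Lion King": 5.0},
    3: {"Toy Story": 2.0, "Lion King": 1.0, "Alien": 5.0},
    4: {"Toy Story": 4.0, "Lion King": 4.0, "Alien": 2.0},
}

def item_pearson(a, b):
    common = [u for u in ratings if a in ratings[u] and b in ratings[u]]
    if len(common) < 3:
        return 0.0  # too little overlap to correlate
    return pearsonr([ratings[u][a] for u in common],
                    [ratings[u][b] for u in common])[0]

print(item_pearson("Toy Story", "Lion King"))  # positive: similar taste
print(item_pearson("Toy Story", "Alien"))      # negative: opposite taste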