YouTube API v3 - Related videos from a specific channel

I want to get related videos only from the uploader's channel, but it looks like a search with relatedToVideoId will ignore channelId when specified.
E.g. https://www.googleapis.com/youtube/v3/search?channelId=UCgiDRy6oyLanAcFeM4-_OYA&relatedToVideoId=eWXm5ZKGXSw&part=snippet,id&type=video&maxResults=10&key={your_api_key}
and https://www.googleapis.com/youtube/v3/search?relatedToVideoId=eWXm5ZKGXSw&part=snippet,id&type=video&maxResults=10&key={your_api_key}
will both return the same set of results.
Am I doing something wrong, or is this the intended behavior?

You're not doing anything wrong -- whether or not this is intended could only be answered by the engineering team, however. But it seems that the relatedToVideoId parameter is designed to ignore all other search filters (even 'q').
It seems logical that this is intended, as it is possibly tapping into the same algorithm that generates the related video thumbnails when a video is done playing (in other words, it's specifically used as a discovery tool for videos outside the keyword or channel relationships).

The above answer is correct, but if you still want to use this method and show only your own channel's videos, you can filter the results client-side like this:
(written in jQuery, but the same concept applies to other languages)
var channelTitle = item.snippet.channelTitle;

if (channelTitle === "Your Channel Name")
{
    // the result belongs to your channel: build its markup and print it
    var result = '<li>' + item.snippet.title + '</li>';
    $('.related-video').append(result);
    $(item).show(); // show item
}
else // does not match channel name
{
    $(item).hide(); // hide item
}
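If you would rather not depend on the channel's display name, a minimal sketch of the same idea could filter on snippet.channelId instead. This assumes a jQuery $.getJSON call; the key is a placeholder and the channel and video ids are the ones from the question:
var apiKey = 'YOUR_API_KEY';                 // placeholder, substitute your own key
var channelId = 'UCgiDRy6oyLanAcFeM4-_OYA';  // the uploader's channel id from the question
$.getJSON('https://www.googleapis.com/youtube/v3/search', {
    part: 'snippet,id',
    type: 'video',
    maxResults: 50,
    relatedToVideoId: 'eWXm5ZKGXSw',
    key: apiKey
}, function (response) {
    // keep only the related videos uploaded by the desired channel
    var ownVideos = $.grep(response.items, function (item) {
        return item.snippet.channelId === channelId;
    });
    console.log(ownVideos);
});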

Related

Realm Swift: Question about Query-based public database

I’ve seen all around the documentation that Query-based Sync is deprecated, so I’m wondering how I should go about my situation:
In my app (using Realm Cloud), I have a list of User objects with some information about each user, like their username. Upon user login (using Firebase), I need to check the whole User database to see if their username is unique. If I make this common realm using Full Sync, then all the users would synchronize and cache the whole database for each change right? How can I prevent that, if I only want the users to get a list of other users’ information at a certain point, without caching or re-synchronizing anything?
I know it's a possible duplicate of this question, but things have probably changed in four years.
The new MongoDB Realm gives you access to server-level functions. This feature would allow you to query the list of existing users (for example) for a specific user name and return true if found or false if not (there are other options as well).
Check out the Functions documentation; there are some examples of how to call one from macOS/iOS in the Call a Function section.
I don't know the use case or what your objects look like, but an example function to calculate a sum would look something like this. It sums the first two elements in the array and returns the result:
your_realm_app.functions.sum([1, 2]) { sum, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    // the function's return value comes back as an optional AnyBSON; unwrap the double
    if case let .double(x) = sum {
        print(x)
    }
}
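Applied to the username question, a call might look something like the sketch below. This assumes you have first defined a server-side function (here given the hypothetical name isUsernameTaken) that queries the users collection and returns a boolean, and that desiredUsername is whatever String the user typed:
// isUsernameTaken is a hypothetical server-side function you would define in the Realm app
your_realm_app.functions.isUsernameTaken([.string(desiredUsername)]) { result, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    if case let .bool(taken) = result {
        // no User objects had to be synced or cached locally to answer this
        print(taken ? "username already in use" : "username is available")
    }
}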

How to Let Chrome History Ignore Part of URL

As my work involves viewing many items from a website, I need to know which items have been visited and which not, so as to avoid repeated viewing.
The problem is that the URL of these items include some garbage parameters that are dynamically changing. This means the browser's history record is almost useless in identifying which items have already been viewed.
This is an example of the URL:
https://example.com/showitemdetail/?item_id=e6de72e&hitkey=true&index=234&cur_page=1&pageSize=30
Only the "item_id=e6de72e" part is useful in identifying each item. The other parameters are dynamic garbage.
My question is: how to make Chrome mark only the "example.com/showitemdetail/?item_id=e6de72e" part as visited, and ignore the rest of the parameters?
Please note that I do NOT want to modify the URLs, because that might alarm the website server to suspect that I am abusing their database. I want the garbage parameters to be still there, but the browser history mechanism to ignore them.
I know this is not easy. I am proposing a possible solution, but do not know whether it can be implemented. It's like this:
Step: 1) An extension background script to extract the item_id from each page I open, and then store it in a collection of strings. This collection of strings should be saved in a file somewhere.
Step: 2) Each time I open a webpage with a list of various items, the background script verifies whether each URL contains a string which matches any one in the above collection. If so, that URL would be automatically added to history. Then that item will naturally be shown as visited.
Does the logic sound OK? And if so, how could it be implemented as a simple extension?
Of course, if you have other more neat solutions, I'd be very interested to learn.
Assuming that the links to the items always contain the item_id, yes, that would work.
You would need the following steps:
Recording an element
A content_script that runs on the product pages and records each visit.
On accessing the product page:
i. You can extract the current product id by checking the URL parameters (see one of these codes).
ii. You use the storage API to retrieve a certain stored variable, say visited_products. You need to handle this variable as a Set, since it's the best data type for unique elements.
iii. You check whether the current element is in the list with .has(). If yes, you skip it. If all is good it should always be new, but there is no harm in checking. If not, you use add() to add the new product id (although a Set will not allow you to add a repeated item anyway, so you can skip the check and just add it directly). Make sure you store it back with the storage API.
Now you have registered a visit to a product; a small sketch of this step follows below.
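This is only a minimal sketch, assuming a content script declared for the item detail pages and the "storage" permission in the manifest. Because chrome.storage cannot hold a Set directly, the sketch rebuilds one from a stored array:
// content script on item detail pages
var params = new URLSearchParams(window.location.search);
var itemId = params.get('item_id');
if (itemId) {
    chrome.storage.local.get({ visited_products: [] }, function (data) {
        // rebuild the Set from the stored array, add the new id, and save it back
        var visited = new Set(data.visited_products);
        visited.add(itemId);
        chrome.storage.local.set({ visited_products: Array.from(visited) });
    });
}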
Checking visited elements
You use a content_script again to be inserted on product pages or all pages if desired.
You get all the links of the page with document.querySelectorAll(). You could apply a CSS selector like: a[href*="example.com/showitemdetail/?item_id="] which would select all the links whose href contains that URL portion.
Then, you iterate the links with a for loop. On each iteration, you extract the item_id. Probably, the easiest way is: /(?:item_id=)(.*?)(?:&|$)/. This matches all characters preceded by item_id= (not captured) until it finds an & or end of the string (whichever happens first, and not captured).
With the id captured, you can check the Set of the first part with .has() to see whether it's on the list.
Now, how you handle items that are on the list is up to you. You could hide visited elements, or apply different CSS classes or styles to them so you can differentiate them easily.
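A minimal sketch of the checking step, assuming the ids were saved under visited_products as in the previous sketch and that visited items should simply be dimmed:
// content script on the item list pages
chrome.storage.local.get({ visited_products: [] }, function (data) {
    var visited = new Set(data.visited_products);
    var links = document.querySelectorAll('a[href*="example.com/showitemdetail/?item_id="]');
    links.forEach(function (link) {
        var match = link.href.match(/(?:item_id=)(.*?)(?:&|$)/);
        if (match && visited.has(match[1])) {
            link.style.opacity = '0.4'; // mark already-viewed items
        }
    });
});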
I hope this gives you a head start. Maybe you can give it a try and, if you cannot make it work, you can open a new question with where you got stuck.
Thanks a lot, fvbuendia. After some trial-and-error elbow grease, I made it work.
I will not post all the code here, but will give several tips for other users' reference:
1) To get the URL of a newly opened webpage and extract the IDs, use chrome.tabs.onUpdated.addListener and extractedItemId = tab.url.replace(/..../, ....);
2) Then save the IDs to storage.local, using chrome.storage.local.set and chrome.storage.local.get. The IDs should be saved to an object array.
1) and 2) should be written in the background script.
3) Each time the item list page is opened, the background calls a function in the content script, asking for all the URLs in the page. Like this:
chrome.tabs.onUpdated.addListener(function(tabId, changeInfo, tab) {
    if (changeInfo.status == "complete") {
        if (tab.url.indexOf("some string typical of the item list page URL") > -1) {
            chrome.tabs.executeScript(null, { code: 'getalltheurls();' });
        }
    }
});
4) The function to be executed in content script:
function getalltheurls() {
    var urls = [];
    var links = document.links;
    for (var i = 0; i < links.length; i++) {
        if (links[i].href.indexOf("some string typical of the item list URLs") > -1) {
            urls.push(links[i].href);
        }
    }
    chrome.runtime.sendMessage({ urls: urls });
}
5) Background receives the URLs, then converts them to an array of IDs, using
idinlist = urls[i].replace(........)
6) Then background gets local storage, using chrome.storage.local.get, and checks if these IDs are in the stored array. If so, add the URL to history.
for (var i = 0; i < urls.length; i++) {
    var idinlist = urls[i].replace(........); // extract the ID as in step 5
    if (storedIDs.indexOf(idinlist) > -1) { chrome.history.addUrl({ url: urls[i] }); }
}

How to get Video url from media picker in Umbraco

I have a Media Picker in my current Document Type. In fact I have two Media Pickers: the first for a multiple-image slider and the second for a video.
Now I am trying to get the URL in my code with the following:
var imageList = CurrentPage.productsSliderImages.Split(new string[] { "," }, StringSplitOptions.RemoveEmptyEntries);
var video = Umbraco.Media(CurrentPage.productSliderVideo);
I am getting imageList successfully, but video comes back null.
If I replace the video with any image, it starts working again. Is there a problem with videos or other file types in the Media Picker?
Watch:
http://prntscr.com/e9wal1
To resolve a problem like this, I would recommend trying to print the raw value of the video Media Picker to the screen or inspecting it in debug mode. I like to work with the more strongly typed IPublishedContent, so I would debug with some code like this:
var videoData = Model.Content.GetPropertyValue<string>("productSliderVideo");
Normally, if you are working on a View that inherits from #inherits UmbracoTemplatePage, both Model.Content and CurrentPage will give you the data on the current page. You can work with the CurrentPage if you like working with dynamics or you can work with Model.Content to work with more strongly typed IPublishedContent models. I prefer the strongly typed version because it is a lot easier for me to debug.
Once you verify that you are getting an id back, I would check the media item that you have picked in the backoffice just as a sanity check. Make sure it matches. If it does, I would try reindexing the InternalIndexer in the Examine Index Manager. As far as I understand, Umbraco uses the internal examine indexer as a media cache. After doing all of this, I would try the below. It is the same as what you are doing above, but with the TypedMedia instead of the dynamic media. Maybe it will reveal more to you. I personally find the typed content and typed media a lot easier to debug. It might make sense to switch over to that for the sake of debugging even if you decide you want to switch back to the dynamics afterwards:
var video = Umbraco.TypedMedia(videoData);
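From there, the URL itself is just a property on the returned media item. A small sketch in the view, assuming the picker stores a single media id and the call above returned a non-null item (the /media path in the comment is only an illustration):
@if (video != null)
{
    // video.Url is the picked media item's file URL, e.g. /media/1050/clip.mp4
    <video src="@video.Url" controls></video>
}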

What is available for limiting the use of expand when using Breezejs, so that users can't get access to sensitive data

Basically this comes up as one of the related posts:
Isn't it dangerous to have query information in javascript using breezejs?
It was somewhat what my first question was about, but accepting the answers there, I really would appreciate it if someone had examples or tutorials on how to limit the scope of what's visible to the client.
I started out with the Knockout/Breeze template and changed it for what I am doing. I am sitting with an almost finished project and one concern: security.
I have authentication fixed and am working on authorization, trying to figure out how to make sure people can't get something that was not intended for them to see.
I got the first layer fixed on the root model, so a member can only see things he created or that are public. But a user may hack together a query using expand to fetch Object.Member.Identities, meaning he gets all the identities for public objects.
Are there any tutorials out there that could help me limit what the user may query?
Should I wrap the returned objects in an ObjectDto and, when creating it, verify that it does not include sensitive information?
It's nice that it's up to me how I do it, but some tutorials with some pointers would be nice.
Code
controller
public IQueryable<Project> Projects()
{
    //var q = Request.GetQueryNameValuePairs().FirstOrDefault(k => k.Key.ToLower() == "$expand").Value;
    //if (!ClaimsAuthorization.CheckAccess("Projects", q))
    //    throw new WebException("HET"); // UnauthorizedAccessException("You requested something you do not have permission to"); // HttpResponseException(HttpStatusCode.MethodNotAllowed);
    return _repository.Projects;
}
_repository
public DbQuery<Project> Projects
{
    get
    {
        var memberid = User.FindFirst("MemberId");
        if (memberid == null)
            return (DbQuery<Project>)(Context.Projects.Where(p => p.IsPublic));
        var id = int.Parse(memberid.Value);
        return ((DbQuery<Project>)Context.Projects.Where(p => p.CreatedByMemberId == id || p.IsPublic));
    }
}
Look at applying the Web API's [Queryable(AllowedQueryOptions=...)] attribute to the method or doing some equivalent restrictive operation. If you do this a lot, you can subclass QueryableAttribute to suit your needs. See the Web API documentation covering these scenarios.
It's pretty easy to close down the options available on one or all of your controller's query methods.
Remember also that you have access to the request query string from inside your action method. You can check quickly for "$expand" and "$select" and throw your own exception. It's not that much more difficult to block an expand for known navigation paths (you can create white and black lists). Finally, as a last line of defense, you can filter for types, properties, and values with a Web API action filter or by customizing the JSON formatter.
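If it helps, here is a rough sketch of those two suggestions combined, assuming Web API's QueryableAttribute and AllowedQueryOptions (from System.Web.Http.OData) and an allowed-options set you would tune yourself:
// restrict clients to $filter, $orderby, $top and $skip; no $expand or $select
[Queryable(AllowedQueryOptions = AllowedQueryOptions.Filter
                               | AllowedQueryOptions.OrderBy
                               | AllowedQueryOptions.Top
                               | AllowedQueryOptions.Skip)]
public IQueryable<Project> Projects()
{
    // last line of defence: reject any $expand or $select that reaches the action
    var forbidden = Request.GetQueryNameValuePairs()
        .Any(kv => kv.Key.ToLower() == "$expand" || kv.Key.ToLower() == "$select");
    if (forbidden)
        throw new HttpResponseException(HttpStatusCode.Forbidden);

    return _repository.Projects;
}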
The larger question of using authorization in data hiding/filtering is something we'll be talking about soon. The short of it is: "Where you're really worried, use DTOs".

How to add / create Foul or bad language keyword filter in SharePoint Lists, Documents?

I am building an internal social networking website on SharePoint. Since it's a networking intranet, I want it to be open and non-moderated. However, I also don't want people to use abusive, foul, or bad language in the portal.
I tried Googling and wasn't really successful in finding a solution.
Microsoft Forefront will do that for me, but it only does so for documents. I also want to do it on lists, since a discussion forum in SharePoint is in list format.
You may create a site solution/list definition for your site using the Visual Studio SharePoint Site Solution Generator. Create a custom list and name it as you wish; I would name it "AbusiveWordList" in the following code example.
After creating the site solution/list definition, add the code below to the ItemAdding event receiver. It iterates through all the columns of the submitted item and checks them against the custom list named "AbusiveWordList", which contains the abusive words.
The chkbody function looks up the items in "AbusiveWordList" and checks whether the body text contains any of them. If it does, the event receiver rejects the item with an error.
base.ItemAdding(properties);

foreach (DictionaryEntry dictionaryEntry in properties.AfterProperties)
{
    string bodytext = "";
    bodytext = bodytext + dictionaryEntry.Value;
    finalwordcount = finalwordcount + chkbody(bodytext, properties);
}

if (finalwordcount > 0)
{
    properties.ErrorMessage = "Abusive / Foul / Illicit information found. Kindly refer to the terms and conditions.";
    properties.Cancel = true;
}
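The answer does not show chkbody itself. A minimal sketch of what it might look like, assuming Microsoft.SharePoint is referenced and the banned words live in the Title column of "AbusiveWordList" on the same web:
// hypothetical implementation: returns how many banned words appear in the text
private int chkbody(string bodytext, SPItemEventProperties properties)
{
    int count = 0;
    using (SPWeb web = properties.OpenWeb())
    {
        SPList wordList = web.Lists["AbusiveWordList"];
        foreach (SPListItem word in wordList.Items)
        {
            if (bodytext.IndexOf(word.Title, StringComparison.OrdinalIgnoreCase) >= 0)
            {
                count++;
            }
        }
    }
    return count;
}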
You will probably need to override any controls that display text to avoid this issue. As this would be a lot of work, perhaps an HTTP Module would be a better solution.
I've worked on a module that used regular expressions to make SharePoint's output XHTML compliant. Similarly, you could use regular expressions to strip out offensive words when a page is rendered. It wouldn't stop people typing them but as no-one would be able to see them this wouldn't matter. You could use a basic SharePoint custom list to store the offensive words you don't want displayed.
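As a rough sketch of that rendering-time idea, assuming the offensive words have already been read from the custom list and that an HttpModule response filter passes the rendered HTML through a helper like this (using System.Text.RegularExpressions):
// hypothetical helper: masks banned words in the rendered HTML before it reaches the browser
private static string StripOffensiveWords(string html, IEnumerable<string> bannedWords)
{
    foreach (string word in bannedWords)
    {
        html = Regex.Replace(html, Regex.Escape(word),
                             new string('*', word.Length), RegexOptions.IgnoreCase);
    }
    return html;
}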
