How to get the mp4 URL for YouTube videos using the YouTube v3 API

How do I get the full mp4 URL so I can play a video from its actual location in my application, using a player other than YouTube's? The gdata/youtube API has been deprecated, so I am having trouble. Any help will be appreciated. Thanks.

I made a very simple API: https://gist.github.com/egyjs/9e60f1ae3168c38cc0f0054c15cd6a83
As an example, take this YouTube video link: https://www.youtube.com/watch?v=YGCLs9Bt_KY
Now, to get the direct link, you need to call the API like this (change example.com to your site):
https://example.com/?url=https://www.youtube.com/watch?v=YGCLs9Bt_KY
It returns:
[
{
"url": "https:\/\/r10---sn-aigllnlr.googlevideo.com\/videoplayback?key=yt6&signature=81D86D3BC3D34D8A3B865464BE7BC54F34C1B0BC.7316033C2DD2F65E4D345CFA890257B63D7FE2A2&mt=1522999783&expire=1523021537&sparams=dur%2Cei%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cratebypass%2Crequiressl%2Csource%2Cexpire&requiressl=yes&ei=gSLHWvuxDMOUVYaTqYgB&dur=244.204&pl=22&itag=22&ip=185.27.134.50&lmt=1522960451860848&id=o-AAoaDzyDCVXS404wfqZoCIdolGU-NM3-4yDxC0t868iL&ratebypass=yes&ms=au%2Conr&fvip=2&source=youtube&mv=m&ipbits=0&mm=31%2C26&mn=sn-aigllnlr%2Csn-5hne6nsy&mime=video%2Fmp4&c=WEB&initcwndbps=710000",
"quality": "hd720",
"itag": "22",
"type": "video\/mp4; codecs=\"avc1.64001F, mp4a.40.2\""
},
{
"url": "https:\/\/r10---sn-aigllnlr.googlevideo.com\/videoplayback?key=yt6&mt=1522999783&gir=yes&expire=1523021537&sparams=clen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cratebypass%2Crequiressl%2Csource%2Cexpire&itag=43&ratebypass=yes&fvip=2&ipbits=0&mime=video%2Fwebm&initcwndbps=710000&signature=71DC48B9BF4B2E3ED46FE0A4CD36FE027DACF31E.4624B7B4BCB947336CEB029E9958B136F79759EB&clen=24203231&requiressl=yes&dur=0.000&pl=22&ip=185.27.134.50&lmt=1522961642553275&ei=gSLHWvuxDMOUVYaTqYgB&ms=au%2Conr&source=youtube&mv=m&id=o-AAoaDzyDCVXS404wfqZoCIdolGU-NM3-4yDxC0t868iL&mm=31%2C26&mn=sn-aigllnlr%2Csn-5hne6nsy&c=WEB",
"quality": "medium",
"itag": "43",
"type": "video\/webm; codecs=\"vp8.0, vorbis\""
},
{
"url": "https:\/\/r10---sn-aigllnlr.googlevideo.com\/videoplayback?key=yt6&mt=1522999783&gir=yes&expire=1523021537&sparams=clen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cratebypass%2Crequiressl%2Csource%2Cexpire&itag=18&ratebypass=yes&fvip=2&ipbits=0&mime=video%2Fmp4&initcwndbps=710000&signature=C83DE33E3DC80981A65DB3FE4E6B3A48BF7500E4.361D0EE6210B30D3D3A80F43228DEF1BD20691A4&clen=15954979&requiressl=yes&dur=244.204&pl=22&ip=185.27.134.50&lmt=1522960340235683&ei=gSLHWvuxDMOUVYaTqYgB&ms=au%2Conr&source=youtube&mv=m&id=o-AAoaDzyDCVXS404wfqZoCIdolGU-NM3-4yDxC0t868iL&mm=31%2C26&mn=sn-aigllnlr%2Csn-5hne6nsy&c=WEB",
"quality": "medium",
"itag": "18",
"type": "video\/mp4; codecs=\"avc1.42001E, mp4a.40.2\""
},
{
"url": "https:\/\/r10---sn-aigllnlr.googlevideo.com\/videoplayback?key=yt6&mt=1522999783&gir=yes&expire=1523021537&sparams=clen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cexpire&itag=36&fvip=2&ipbits=0&mime=video%2F3gpp&initcwndbps=710000&signature=3E993D911492DA039A16BB26182ACDC6C6A04FCC.BFB9728C71CD03970B0F15AFD51A7355F9D3F899&clen=6759799&requiressl=yes&dur=244.273&pl=22&ip=185.27.134.50&lmt=1522957367267598&ei=gSLHWvuxDMOUVYaTqYgB&ms=au%2Conr&source=youtube&mv=m&id=o-AAoaDzyDCVXS404wfqZoCIdolGU-NM3-4yDxC0t868iL&mm=31%2C26&mn=sn-aigllnlr%2Csn-5hne6nsy&c=WEB",
"quality": "small",
"itag": "36",
"type": "video\/3gpp; codecs=\"mp4v.20.3, mp4a.40.2\""
},
{
"url": "https:\/\/r10---sn-aigllnlr.googlevideo.com\/videoplayback?key=yt6&mt=1522999783&gir=yes&expire=1523021537&sparams=clen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cexpire&itag=17&fvip=2&ipbits=0&mime=video%2F3gpp&initcwndbps=710000&signature=810D13A2C507A4EA220E6DA895B39B237FA22DAF.898D020851087CF3C10BC6E3ED7360736A239904&clen=2443931&requiressl=yes&dur=244.273&pl=22&ip=185.27.134.50&lmt=1522957365473654&ei=gSLHWvuxDMOUVYaTqYgB&ms=au%2Conr&source=youtube&mv=m&id=o-AAoaDzyDCVXS404wfqZoCIdolGU-NM3-4yDxC0t868iL&mm=31%2C26&mn=sn-aigllnlr%2Csn-5hne6nsy&c=WEB",
"quality": "small",
"itag": "17",
"type": "video\/3gpp; codecs=\"mp4v.20.3, mp4a.40.2\""
}
]
Update:
To see the source code, check the Gist: https://gist.github.com/egyjs/9e60f1ae3168c38cc0f0054c15cd6a83
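If you host that script yourself, consuming it from Java might look like the sketch below. This is only an illustration: example.com stands in for wherever you deploy the Gist, and it assumes OkHttp and org.json are on the classpath.
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import org.json.JSONArray;
import org.json.JSONObject;

public class DirectLinkClient {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient();
        // Placeholder host: replace example.com with your own deployment of the script.
        String api = "https://example.com/?url=https://www.youtube.com/watch?v=YGCLs9Bt_KY";

        Request request = new Request.Builder().url(api).build();
        try (Response response = client.newCall(request).execute()) {
            JSONArray formats = new JSONArray(response.body().string());
            // Pick the first mp4 entry the script returned and print its direct URL.
            for (int i = 0; i < formats.length(); i++) {
                JSONObject format = formats.getJSONObject(i);
                if (format.getString("type").startsWith("video/mp4")) {
                    System.out.println(format.getString("quality") + " -> " + format.getString("url"));
                    break;
                }
            }
        }
    }
}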

Sorry sir, you cannot do this with the YouTube API v3. You have to use a YouTube URL that is not part of the API, but through which you can get all the streams related to the video.
See
http://www.youtube.com/get_video_info?&video_id='. $my_id.'&asv=3&el=detailpage&hl=en_US
Or you can do the following to get download links for all of a video's streams, even if it is private or not allowed in your country:
1st: Go to the page of any YouTube video, e.g. https://www.youtube.com/watch?v=9mdJV5-eias
2nd: View the source of that page.
3rd: Around line 187 or 188 of the source you will find the JavaScript code where the locations of the mp4 streams are listed.
You can implement the second idea with simplehtmldom and some PHP functions, and the first one with cURL, which is easy, if a little fiddly, in PHP. Thank you, hope this will help you.

For local playback in Java/Android, here is how I achieved this. Credit goes to @abdo-el-zahaby; I converted his PHP script to roughly equivalent Java code. It uses the OkHttp client to fetch the URLs:
// Note: okHttpClient, mp4Url (a Map<String, String>) and printDivider() are defined elsewhere in the class.
final String videoInfoUrl = "http://www.youtube.com/get_video_info?video_id=some_video_id&el=embedded&ps=default&eurl=&gl=US&hl=en";
Request request = new Request.Builder()
        .cacheControl(CacheControl.FORCE_NETWORK)
        .url(videoInfoUrl)
        .build();
final Response response = okHttpClient.newCall(request).execute();

// Read the whole get_video_info response into a single string.
InputStream inputStream = response.body().byteStream();
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
String line;
final StringBuilder contentBuilder = new StringBuilder();
while ((line = bufferedReader.readLine()) != null) {
    contentBuilder.append(line);
}

// The response is a query string; split it into key/value pairs.
final String streamKey = "url_encoded_fmt_stream_map";
final Map<String, String> map = new HashMap<>();
final String content = contentBuilder.toString();
String[] ampSplit = content.split("&");
for (String s : ampSplit) {
    printDivider();
    final String[] equalsSplit = s.split("=");
    if (equalsSplit.length >= 2) {
        map.put(equalsSplit[0], equalsSplit[1]);
    }
    printDivider();
}

// url_encoded_fmt_stream_map holds one comma-separated entry per available stream.
int count = 0;
if (map.containsKey(streamKey)) {
    String[] streams = map.get(streamKey).split(",");
    for (String stream : streams) {
        String[] streamSplit = stream.split("&");
        for (String s : streamSplit) {
            printDivider();
            final String urlDecoded = URLDecoder.decode(s, "UTF-8");
            String[] details = urlDecoded.split(",");
            for (String detail : details) {
                System.out.println("Detail " + URLDecoder.decode(detail, "UTF-8"));
                final String urlContent = URLDecoder.decode(detail, "UTF-8");
                // Cut the direct googlevideo URL out of the decoded parameter blob.
                final String url = urlContent.substring(urlContent.indexOf("http"), urlContent.indexOf(";"));
                mp4Url.put(Integer.toString(count++), url);
            }
        }
        printDivider();
    }
}
This is the code I am using to download the stream and store it on the SD card / internal memory:
Request request = new Request.Builder()
        .cacheControl(CacheControl.FORCE_NETWORK)
        .url(url)
        .build();
final Response response = okHttpClient.newCall(request).execute();
InputStream inputStream = response.body().byteStream();

final File newFile = new File(location);
boolean created = newFile.createNewFile();
System.out.println(location + " new file created: " + created);

byte[] buff = new byte[4096];
long downloaded = 0;
long target = response.body().contentLength();
System.out.println("File size is: " + Long.toString(target));

OutputStream outStream = new FileOutputStream(newFile);
while (true) {
    int read = inputStream.read(buff);
    if (read == -1) {
        break;
    }
    // Write the buffer to disk and keep a running total of the bytes received.
    outStream.write(buff, 0, read);
    downloaded += read;
}
System.out.println("Target: " + target + ", Downloaded: " + downloaded);
outStream.flush();
// Close the streams so the file handle and the HTTP connection are released.
outStream.close();
inputStream.close();
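To tie the two snippets together on Android, something like the hypothetical glue below could be used; downloadToFile is an imaginary helper that wraps the download code above, and the target path is only an example.
// Hypothetical glue: take the first stream extracted earlier and save it to app-specific storage.
void saveFirstStream(Context context, Map<String, String> mp4Url) throws IOException {
    String streamUrl = mp4Url.get("0");
    // Example target path in app-specific external storage (no extra permission needed).
    String location = new File(context.getExternalFilesDir(null), "video.mp4").getAbsolutePath();
    downloadToFile(streamUrl, location); // imaginary wrapper around the download snippet above
}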

I don't know how it works, but check out https://podsync.net/.
When you input a YouTube playlist link, it does some magic and generates an mp4 link to be used by podcast catchers. Examine the returned XML file and you'll see a line like this for each video in the playlist:
<enclosure url="http://podsync.net/download/[random-letters]/[video-id].mp4" length="x" type="video/mp4"></enclosure>
In my experience, that URL works no matter what video-id you use.

Related

Large file upload to ASP.NET Core 3.0 Web API fails due to "Request Body Too Large"

I have an ASP.NET Core 3.0 Web API endpoint that I have set up to allow me to post large audio files. I have followed the following directions from MS docs to set up the endpoint.
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.0#kestrel-maximum-request-body-size
When an audio file is uploaded to the endpoint, it is streamed to an Azure Blob Storage container.
My code works as expected locally.
When I push it to my production server in Azure App Service on Linux, the code does not work and errors with
Unhandled exception in request pipeline: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
Per advice from the above article, I have incrementally updated the Kestrel configuration with the following:
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseKestrel((ctx, options) =>
    {
        var config = ctx.Configuration;
        options.Limits.MaxRequestBodySize = 6000000000;
        options.Limits.MinRequestBodyDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.RequestHeadersTimeout =
            TimeSpan.FromMinutes(2);
    }).UseStartup<Startup>();
});
I also configured FormOptions to accept files up to 6,000,000,000 bytes:
services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = 6000000000;
});
And I also set up the API controller action with the following attributes, per advice from the article:
[HttpPost("audio", Name="UploadAudio")]
[DisableFormValueModelBinding]
[GenerateAntiforgeryTokenCookie]
[RequestSizeLimit(6000000000)]
[RequestFormLimits(MultipartBodyLengthLimit = 6000000000)]
Finally, here is the action itself. This giant block of code is not indicative of how I want the code to be written but I have merged it into one method as part of the debugging exercise.
public async Task<IActionResult> Audio()
{
if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
{
throw new ArgumentException("The media file could not be processed.");
}
string mediaId = string.Empty;
string instructorId = string.Empty;
try
{
// process file first
KeyValueAccumulator formAccumulator = new KeyValueAccumulator();
var streamedFileContent = new byte[0];
var boundary = MultipartRequestHelper.GetBoundary(
MediaTypeHeaderValue.Parse(Request.ContentType),
_defaultFormOptions.MultipartBoundaryLengthLimit
);
var reader = new MultipartReader(boundary, Request.Body);
var section = await reader.ReadNextSectionAsync();
while (section != null)
{
var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(
section.ContentDisposition, out var contentDisposition);
if (hasContentDispositionHeader)
{
if (MultipartRequestHelper
.HasFileContentDisposition(contentDisposition))
{
streamedFileContent =
await FileHelpers.ProcessStreamedFile(section, contentDisposition,
_permittedExtensions, _fileSizeLimit);
}
else if (MultipartRequestHelper
.HasFormDataContentDisposition(contentDisposition))
{
var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;
var encoding = FileHelpers.GetEncoding(section);
if (encoding == null)
{
return BadRequest($"The request could not be processed: Bad Encoding");
}
using (var streamReader = new StreamReader(
section.Body,
encoding,
detectEncodingFromByteOrderMarks: true,
bufferSize: 1024,
leaveOpen: true))
{
// The value length limit is enforced by
// MultipartBodyLengthLimit
var value = await streamReader.ReadToEndAsync();
if (string.Equals(value, "undefined",
StringComparison.OrdinalIgnoreCase))
{
value = string.Empty;
}
formAccumulator.Append(key, value);
if (formAccumulator.ValueCount >
_defaultFormOptions.ValueCountLimit)
{
return BadRequest($"The request could not be processed: Key Count limit exceeded.");
}
}
}
}
// Drain any remaining section body that hasn't been consumed and
// read the headers for the next section.
section = await reader.ReadNextSectionAsync();
}
var form = formAccumulator;
var file = streamedFileContent;
var results = form.GetResults();
instructorId = results["instructorId"];
string title = results["title"];
string firstName = results["firstName"];
string lastName = results["lastName"];
string durationInMinutes = results["durationInMinutes"];
//mediaId = await AddInstructorAudioMedia(instructorId, firstName, lastName, title, Convert.ToInt32(duration), DateTime.UtcNow, DateTime.UtcNow, file);
string fileExtension = "m4a";
// Generate Container Name - InstructorSpecific
string containerName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{instructorId}";
string contentType = "audio/mp4";
FileType fileType = FileType.audio;
string authorName = $"{firstName} {lastName}";
string authorShortName = $"{firstName[0]}{lastName}";
string description = $"{authorShortName} - {title}";
long duration = (Convert.ToInt32(durationInMinutes) * 60000);
// Generate new filename
string fileName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{Guid.NewGuid()}";
DateTime recordingDate = DateTime.UtcNow;
DateTime uploadDate = DateTime.UtcNow;
long blobSize = long.MinValue;
try
{
// Update file properties in storage
Dictionary<string, string> fileProperties = new Dictionary<string, string>();
fileProperties.Add("ContentType", contentType);
// update file metadata in storage
Dictionary<string, string> metadata = new Dictionary<string, string>();
metadata.Add("author", authorShortName);
metadata.Add("tite", title);
metadata.Add("description", description);
metadata.Add("duration", duration.ToString());
metadata.Add("recordingDate", recordingDate.ToString());
metadata.Add("uploadDate", uploadDate.ToString());
var fileNameWExt = $"{fileName}.{fileExtension}";
var blobContainer = await _cloudStorageService.CreateBlob(containerName, fileNameWExt, "audio");
try
{
MemoryStream fileContent = new MemoryStream(streamedFileContent);
fileContent.Position = 0;
using (fileContent)
{
await blobContainer.UploadFromStreamAsync(fileContent);
}
}
catch (StorageException e)
{
if (e.RequestInformation.HttpStatusCode == 403)
{
return BadRequest(e.Message);
}
else
{
return BadRequest(e.Message);
}
}
try
{
foreach (var key in metadata.Keys.ToList())
{
blobContainer.Metadata.Add(key, metadata[key]);
}
await blobContainer.SetMetadataAsync();
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
blobSize = await StorageUtils.GetBlobSize(blobContainer);
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
Media media = Media.Create(string.Empty, instructorId, authorName, fileName, fileType, fileExtension, recordingDate, uploadDate, ContentDetails.Create(title, description, duration, blobSize, 0, new List<string>()), StateDetails.Create(StatusType.STAGED, DateTime.MinValue, DateTime.UtcNow, DateTime.MaxValue), Manifest.Create(new Dictionary<string, string>()));
// upload to MongoDB
if (media != null)
{
var mapper = new Mapper(_mapperConfiguration);
var dao = mapper.Map<ContentDAO>(media);
try
{
await _db.Content.InsertOneAsync(dao);
}
catch (Exception)
{
mediaId = string.Empty;
}
mediaId = dao.Id.ToString();
}
else
{
// metadata wasn't stored, remove blob
await _cloudStorageService.DeleteBlob(containerName, fileName, "audio");
return BadRequest($"An issue occurred during media upload: rolling back storage change");
}
if (string.IsNullOrEmpty(mediaId))
{
return BadRequest($"Could not add instructor media");
}
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
var result = new { MediaId = mediaId, InstructorId = instructorId };
return Ok(result);
}
I reiterate, this all works great locally. I do not run it in IISExpress, I run it as a console app.
I submit large audio files via my SPA app and Postman and it works perfectly.
I am deploying this code to an Azure App Service on Linux (as a Basic B1).
Since the code works in my local development environment, I am at a loss as to what my next steps are. I have refactored this code a few times, but I suspect that the problem is environment-related.
I cannot find anywhere that mentions that the level of App Service Plan is the culprit so before I go out spending more money I wanted to see if anyone here had encountered this challenge and could provide advice.
UPDATE: I attempted upgrading to a Production App Service Plan to see if there was an undocumented gate for incoming traffic. Upgrading didn't work either.
Thanks in advance.
-A
Currently, as of 11/2019, there is a limitation with Azure App Service for Linux. Its CORS functionality is enabled by default and cannot be disabled, AND it has a file size limitation that does not appear to be overridden by any of the published Kestrel configurations. The solution is to move the Web API app to an Azure App Service for Windows, where it works as expected.
I am sure there is some way to get around it if you know the magic combination of configurations, server settings, and CLI commands, but I need to move on with development.

How to upload a small file plus metadata with GraphServiceClient to OneDrive with a single POST request?

I would like to upload small files with metadata (DriveItem) attached so that the LastModifiedDateTime property is set properly.
First, my current workaround is this:
var graphFileSystemInfo = new Microsoft.Graph.FileSystemInfo()
{
CreatedDateTime = fileSystemInfo.CreationTimeUtc,
LastAccessedDateTime = fileSystemInfo.LastAccessTimeUtc,
LastModifiedDateTime = fileSystemInfo.LastWriteTimeUtc
};
using (var stream = new System.IO.File.OpenRead(localPath))
{
if (fileSystemInfo.Length <= 4 * 1024 * 1024) // file.Length <= 4 MB
{
var driveItem = new DriveItem()
{
File = new File(),
FileSystemInfo = graphFileSystemInfo,
Name = Path.GetFileName(item.Path)
};
try
{
var newDriveItem = await graphClient.Me.Drive.Root.ItemWithPath(item.Path).Content.Request().PutAsync<DriveItem>(stream);
await graphClient.Me.Drive.Items[newDriveItem.Id].Request().UpdateAsync(driveItem);
}
catch (Exception ex)
{
throw;
}
}
else
{
// large file upload
}
}
This code works by first uploading the content via PutAsync and then updating the metadata via UpdateAsync. I tried to do it vice versa (as suggested here) but then I get the error that no file without content can be created. If I then add content to the DriveItem.Content property, the next error is that the stream's ReadTimeout and WriteTimeout properties cannot be read. With a wrapper class for the FileStream, I can overcome this but then I get the next error: A stream property 'content' has a value in the payload. In OData, stream property must not have a value, it must only use property annotations.
By googling, I found that there is another way to upload data, called multipart upload (link). With this description, I tried to use the GraphServiceClient to create such a request, but it seems that this is only fully implemented for OneNote items. I took this code as a template and created the following function to mimic the OneNote behavior:
public static async Task UploadSmallFile(GraphServiceClient graphClient, DriveItem driveItem, Stream stream)
{
var jsondata = JsonConvert.SerializeObject(driveItem);
// Create the metadata part.
StringContent stringContent = new StringContent(jsondata, Encoding.UTF8, "application/json");
stringContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("related");
stringContent.Headers.ContentDisposition.Name = "Metadata";
stringContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
// Create the data part.
var streamContent = new StreamContent(stream);
streamContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("related");
streamContent.Headers.ContentDisposition.Name = "Data";
streamContent.Headers.ContentType = new MediaTypeHeaderValue("text/plain");
// Put the multiparts together
string boundary = "MultiPartBoundary32541";
MultipartContent multiPartContent = new MultipartContent("related", boundary);
multiPartContent.Add(stringContent);
multiPartContent.Add(streamContent);
var requestUrl = graphClient.Me.Drive.Items["F4C4DC6C33B9D421!103"].Children.Request().RequestUrl;
// Create the request message and add the content.
HttpRequestMessage hrm = new HttpRequestMessage(HttpMethod.Post, requestUrl);
hrm.Content = multiPartContent;
// Send the request and get the response.
var response = await graphClient.HttpProvider.SendAsync(hrm);
}
With this code, I get the error Entity only allows writes with a JSON Content-Type header.
What am I doing wrong?
Not sure why the provided error occurs; your example appears to be valid and corresponds to the documented request body example.
But an alternative option could be considered for this matter. Since Microsoft Graph supports JSON batching, the following example demonstrates how to upload a file and update its metadata within a single request:
POST https://graph.microsoft.com/v1.0/$batch
Accept: application/json
Content-Type: application/json

{
  "requests": [
    {
      "id": "1",
      "method": "PUT",
      "url": "/me/drive/root:/Sample.docx:/content",
      "headers": {
        "Content-Type": "application/octet-stream"
      }
    },
    {
      "id": "2",
      "method": "PATCH",
      "url": "/me/drive/root:/Sample.docx:",
      "headers": {
        "Content-Type": "application/json; charset=utf-8"
      },
      "body": {
        "fileSystemInfo": {
          "lastModifiedDateTime": "2019-08-09T00:49:37.7758742+03:00"
        }
      },
      "dependsOn": ["1"]
    }
  ]
}
Here is a C# example
var bytes = System.IO.File.ReadAllBytes(path);
var stream = new MemoryStream(bytes);
var batchRequest = new BatchRequest();
//1.1 construct upload file query
var uploadRequest = graphClient.Me
.Drive
.Root
.ItemWithPath(System.IO.Path.GetFileName(path))
.Content
.Request();
batchRequest.AddQuery(uploadRequest, HttpMethod.Put, new StreamContent(stream));
//1.2 construct update driveItem query
var updateRequest = graphClient.Me
.Drive
.Root
.ItemWithPath(System.IO.Path.GetFileName(path))
.Request();
var driveItem = new DriveItem()
{
FileSystemInfo = new FileSystemInfo()
{
LastModifiedDateTime = DateTimeOffset.UtcNow.AddDays(-1)
}
};
var jsonPayload = new StringContent(graphClient.HttpProvider.Serializer.SerializeObject(driveItem), Encoding.UTF8, "application/json");
batchRequest.AddQuery(updateRequest, new HttpMethod("PATCH"), jsonPayload, true, typeof(Microsoft.Graph.DriveItem));
//2. execute Batch request
var result = await graphClient.SendBatchAsync(batchRequest);
var updatedDriveItem = result[1] as DriveItem;
Console.WriteLine(updatedDriveItem.LastModifiedDateTime);
where SendBatchAsync is an extension method which implements JSON batching support for the Microsoft Graph .NET Client Library.

I am trying to upload a video using OAuth 2.0; where do I get the video ID?

I am uploading a video to YouTube and it is asking "Please enter a video Id to update:". I am new to this. Where do I get the video ID?
Here is my code:
package com.google.api.services.samples.youtube.cmdline.data;
import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.services.samples.youtube.cmdline.Auth;
import com.google.api.services.youtube.YouTube;
import com.google.api.services.youtube.model.Video;
import com.google.api.services.youtube.model.VideoListResponse;
import com.google.api.services.youtube.model.VideoSnippet;
import com.google.common.collect.Lists;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
/**
* Update a video by adding a keyword tag to its metadata. The demo uses the
* YouTube Data API (v3) and OAuth 2.0 for authorization.
*
* @author Ibrahim Ulukaya
*/
public class UpdateVideo {
/**
* Define a global instance of a Youtube object, which will be used
* to make YouTube Data API requests.
*/
private static YouTube youtube;
/**
* Add a keyword tag to a video that the user specifies. Use OAuth 2.0 to
* authorize the API request.
*
* @param args command line args (not used).
*/
public static void main(String[] args) {
// This OAuth 2.0 access scope allows for full read/write access to the
// authenticated user's account.
List<String> scopes = Lists.newArrayList("https://www.googleapis.com/auth/youtube");
try {
// Authorize the request.
Credential credential = Auth.authorize(scopes, "updatevideo");
// This object is used to make YouTube Data API requests.
youtube = new YouTube.Builder(Auth.HTTP_TRANSPORT, Auth.JSON_FACTORY, credential)
.setApplicationName("youtube-cmdline-updatevideo-sample").build();
// Prompt the user to enter the video ID of the video being updated.
String videoId = getVideoIdFromUser();
// Prompt the user to enter a keyword tag to add to the video.
String tag = getTagFromUser();
System.out.println("You chose " + tag + " as a tag.");
// Call the YouTube Data API's youtube.videos.list method to
// retrieve the resource that represents the specified video.
YouTube.Videos.List listVideosRequest = youtube.videos().list("snippet").setId(videoId);
VideoListResponse listResponse = listVideosRequest.execute();
// Since the API request specified a unique video ID, the API
// response should return exactly one video. If the response does
// not contain a video, then the specified video ID was not found.
List<Video> videoList = listResponse.getItems();
if (videoList.isEmpty()) {
System.out.println("Can't find a video with ID: " + videoId);
return;
}
// Extract the snippet from the video resource.
Video video = videoList.get(0);
VideoSnippet snippet = video.getSnippet();
// Preserve any tags already associated with the video. If the
// video does not have any tags, create a new array. Append the
// provided tag to the list of tags associated with the video.
List<String> tags = snippet.getTags();
if (tags == null) {
tags = new ArrayList<String>(1);
snippet.setTags(tags);
}
tags.add(tag);
// Update the video resource by calling the videos.update() method.
YouTube.Videos.Update updateVideosRequest = youtube.videos().update("snippet", video);
Video videoResponse = updateVideosRequest.execute();
// Print information from the updated resource.
System.out.println("\n================== Returned Video ==================\n");
System.out.println(" - Title: " + videoResponse.getSnippet().getTitle());
System.out.println(" - Tags: " + videoResponse.getSnippet().getTags());
} catch (GoogleJsonResponseException e) {
System.err.println("GoogleJsonResponseException code: " + e.getDetails().getCode() + " : "
+ e.getDetails().getMessage());
e.printStackTrace();
} catch (IOException e) {
System.err.println("IOException: " + e.getMessage());
e.printStackTrace();
} catch (Throwable t) {
System.err.println("Throwable: " + t.getMessage());
t.printStackTrace();
}
}
/*
* Prompt the user to enter a keyword tag.
*/
private static String getTagFromUser() throws IOException {
String keyword = "";
System.out.print("Please enter a tag for your video: ");
BufferedReader bReader = new BufferedReader(new InputStreamReader(System.in));
keyword = bReader.readLine();
if (keyword.length() < 1) {
// If the user doesn't enter a tag, use the default value "New Tag."
keyword = "New Tag";
}
return keyword;
}
/*
* Prompt the user to enter a video ID.
*/
private static String getVideoIdFromUser() throws IOException {
String videoId = "";
System.out.print("Please enter a video Id to update: ");
BufferedReader bReader = new BufferedReader(new InputStreamReader(System.in));
videoId = bReader.readLine();
if (videoId.length() < 1) {
// Exit if the user doesn't provide a value.
System.out.print("Video Id can't be empty!");
System.exit(1);
}
return videoId;
}
}
To get the video ID, use the "Try it" section of Activities: list.
Here is what it looks like:
API response:
/**
* API response
*/
{
"kind": "youtube#activityListResponse",
"etag": "\"VPWTmrH7dFmi4s1RqrK4tLejnRI/KnvtCHPiiQN_WAXGjDQ8lcr7iwg\"",
"nextPageToken": "CAEQAA",
"pageInfo": {
"totalResults": 12,
"resultsPerPage": 1
},
"items": [
{
"kind": "youtube#activity",
"etag": "\"VPWTmrH7dFmi4s1RqrK4tLejnRI/SGHxz5HNfpA3N5yUU96mLHABdfo\"",
"id": "VTE1MDU1MDk4OTM5NDIwMjk3MDUxNzAwOA==",
"contentDetails": {
"upload": {
"videoId": "7Wfz719AJgQ"
}
}
}
]
}
As you can see, the "videoId": "7Wfz719AJgQ" was in the response. You can absolutely get the videoID by this method.
To have a deeper insight, explore the documentation.
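If you prefer to fetch the ID programmatically rather than through the API Explorer, the same Java client library used in the sample above can list your channel's upload activities. This is a sketch only; it assumes it runs inside that sample, reusing the already-authorized youtube object, and it needs Activity and ActivityListResponse imported from com.google.api.services.youtube.model.
// List the authorized channel's recent activities and print the ID of each uploaded video.
YouTube.Activities.List activitiesRequest = youtube.activities().list("contentDetails");
activitiesRequest.setMine(true);
activitiesRequest.setMaxResults(25L);
ActivityListResponse activitiesResponse = activitiesRequest.execute();
for (Activity activity : activitiesResponse.getItems()) {
    // Only "upload" activities carry a videoId; other activity types leave it null.
    if (activity.getContentDetails() != null && activity.getContentDetails().getUpload() != null) {
        System.out.println("videoId: " + activity.getContentDetails().getUpload().getVideoId());
    }
}
Any of the printed IDs can then be pasted into the sample's "Please enter a video Id to update:" prompt.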

Trouble exporting to Excel from an MVC action

I created a simple action to download some content as an Excel file:
public FileResult ExportToExcel()
{
string filename = "list.xlsx";
string contentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
List<string[]> list = new List<string[]>();
list.Add(new[] { "col1", "col2", "cols3" });
list.Add(new[] { "col4", "col5", "cols6" });
list.Add(new[] { "col7", "col8", "cols9" });
StringWriter sw = new StringWriter();
sw.WriteLine("ID,Date,Description");
foreach (string[] item in list)
{
sw.WriteLine("{0},{1},{2}", item[0], item[1], item[2]);
}
byte[] fileContents = Encoding.UTF8.GetBytes(sw.ToString());
return this.File(fileContents, contentType, filename);
}
I have 2 issues with it:
1. The file is downloaded but I cannot open it and am getting a warning:
Excel cannot open the file ... because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file.
When I use the old Excel format:
string filename = "List.xls";
string contentType = "application/vnd.ms-excel";
I am able to open the file, but only after 3 different warnings about the file being corrupted, etc.
By the way, for comparison I also tried to write the file as a PDF:
string filename = "List.pdf";
string contentType = "application/pdf";
And I still couldn't open the file - it said format is not valid etc.
2. The contents appear in the file in the second example; however, the commas are not recognised as column separators and all the data in a row is written to a single column.
What separator should I use for the Excel format, or how should I write the data so that Excel shows it as a table?
The ideal solution for me would be to just return an exported (strongly typed) view, but I haven't found out how to do that so far.
--- EDIT: Working solution ---
public FileResult ExportToExcel()
{
string filename = "List.xlsx";
string contentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
List<string[]> titles = new List<string[]>() { new[] { "a", "be", "ce" } };
List<string[]> list = new List<string[]>
{
new[] { "col1", "col2", "cols3" },
new[] { "col4", "col5", "cols6" },
new[] { "col7", "col8", "cols9" },
new[] { "col10", "col11", "cols12" }
};
XLWorkbook wb = new XLWorkbook();
XLTables xt = new XLTables();
var ws = wb.Worksheets.Add("List");
ws.Cell(1, 1).InsertData(titles);
ws.Cell(2, 1).InsertData(list);
ws.Columns().AdjustToContents();
var stream = new MemoryStream();
wb.SaveAs(stream);
stream.Seek(0, SeekOrigin.Begin);
wb.Dispose();
return this.File(stream, contentType, filename);
}
The reason it is not rendered correctly is that the action writes plain comma-separated text while returning an Excel MIME type; you cannot just return the MIME type and expect the framework to figure out the rest.
I would go with a NuGet package called ClosedXML, which allows you to create an Excel file in memory and stream it back to the client.
It comes with full documentation (here) for more information.
Using this package you can do something like:
XLWorkbook wb = new XLWorkbook();
XLTables xt = new XLTables();
var ws = wb.Worksheets.Add("Sheet 1");
var firstCell = ws.Cell(1, 1);
var lastCell = ws.Cell(3, list.Count);
var table = ws.Range(firstCell.Address, lastCell.Address).AsTable();
table.Cell(2, 1).InsertData(list);
table.CreateTable();
ws.Columns().AdjustToContents();

// Save the workbook into a MemoryStream and return it.
// Note: do not wrap the stream in a using block; FileStreamResult reads (and disposes)
// the stream after the action has returned, so disposing it here breaks the download.
var stream = new MemoryStream();
wb.SaveAs(stream);
stream.Seek(0, SeekOrigin.Begin);
wb.Dispose();
return File(stream, contentType, filename);

How to put two JasperReports in one zip file to download?

public String generateReport() {
try
{
final FacesContext facesContext = FacesContext.getCurrentInstance();
final HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse();
response.reset();
response.setHeader("Content-Disposition", "attachment; filename=\"" + "myReport.zip\";");
final BufferedOutputStream bos = new BufferedOutputStream(response.getOutputStream());
final ZipOutputStream zos = new ZipOutputStream(bos);
for (final PeriodScale periodScale : Scale.getPeriodScales(this.startDate, this.endDate))
{
final JasperPrint jasperPrint = JasperFillManager.fillReport(
this.reportsPath() + File.separator + "periodicScale.jasper",
this.parameters(this.reportsPath(), periodScale.getScale(),
periodScale.getStartDate(), periodScale.getEndDate()),
new JREmptyDataSource());
final byte[] bytes = JasperExportManager.exportReportToPdf(jasperPrint);
response.setContentLength(bytes.length);
final ZipEntry ze = new ZipEntry("periodicScale"+ periodScale.getStartDate() + ".pdf"); // periodicScale13032015.pdf for example
zos.putNextEntry(ze);
zos.write(bytes, 0, bytes.length);
zos.closeEntry();
}
zos.close();
facesContext.responseComplete();
}
catch (final Exception e)
{
e.printStackTrace();
}
return "";
}
This is my action method in the managed bean, which the user calls to print a JasperReport, but when I try to put more than one report inside the zip file it does not work.
getPeriodScales returns two objects and JasperFillManager.fillReport runs correctly, as the reports print when I generate data for just one report. When I try to stream two reports and open the zip in WinRAR, though, only one appears and I get an "unexpected end of archive"; in 7-Zip both appear but the second is corrupted.
What am I doing wrong, or is there a way to stream multiple reports without zipping them?
I figured out what it was: I was setting the content length of the response to bytes.length, but it should be bytes.length * Scale.getPeriodScales(this.startDate, this.endDate).size().
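For reference, here is a sketch of the same generateReport() loop with the Content-Length handling removed from the per-report iteration altogether; this is an alternative to the fix described above, not the asker's exact change: omitting setContentLength lets the container fall back to chunked transfer encoding, so the final zip size never has to be predicted. Everything else is the question's code unchanged.
response.reset();
response.setContentType("application/zip");
response.setHeader("Content-Disposition", "attachment; filename=\"myReport.zip\";");
// No response.setContentLength(...): the zip size is unknown up front, so let the
// container stream the response with chunked transfer encoding instead.
try (ZipOutputStream zos = new ZipOutputStream(new BufferedOutputStream(response.getOutputStream()))) {
    for (final PeriodScale periodScale : Scale.getPeriodScales(this.startDate, this.endDate)) {
        final JasperPrint jasperPrint = JasperFillManager.fillReport(
                this.reportsPath() + File.separator + "periodicScale.jasper",
                this.parameters(this.reportsPath(), periodScale.getScale(),
                        periodScale.getStartDate(), periodScale.getEndDate()),
                new JREmptyDataSource());
        final byte[] bytes = JasperExportManager.exportReportToPdf(jasperPrint);
        zos.putNextEntry(new ZipEntry("periodicScale" + periodScale.getStartDate() + ".pdf"));
        zos.write(bytes, 0, bytes.length);
        zos.closeEntry();
    }
}
facesContext.responseComplete();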
public JasperPrint generatePdf(long consumerNo) throws JRException, IOException {
Consumer consumerByCustomerNo = consumerService.getConsumerByCustomerNo(consumerNo);
consumerList.add(consumerByCustomerNo);
BillHeaderIPOP billHeaderByConsumerNo = billHeaderService.getBillHeaderByConsumerNo(consumerNo);
Long billNo = billHeaderByConsumerNo.getBillNo();
List<BillLineItem> billLineItemByBilNo = billLineItemService.getBillLineItemByBilNo(billNo);
System.out.println(billLineItemByBilNo);
List<BillReadingLine> billReadingLineByBillNo = billReadingLineService.getBillReadingLineByBillNo(billNo);
File jrxmlFile = ResourceUtils.getFile("classpath:demo.jrxml");
JasperReport jasperReport = JasperCompileManager.compileReport(jrxmlFile.getAbsolutePath());
pdfContainer.setName(consumerByCustomerNo.getName());
pdfContainer.setTelephone(consumerByCustomerNo.getTelephone());
pdfContainer.setFromDate(billLineItemByBilNo.get(0).getStartDate());
pdfContainer.setToDate(billLineItemByBilNo.get(0).getEndDate());
pdfContainer.setSupplyAddress(consumerByCustomerNo.getSupplyAddress());
pdfContainer.setMeterNo(billReadingLineByBillNo.get(0).getMeterNo());
pdfContainer.setBillType(billHeaderByConsumerNo.getBillType());
pdfContainer.setReadingType(billReadingLineByBillNo.get(0).getReadingType());
pdfContainer.setLastBilledReadingInKWH(billReadingLineByBillNo.stream().filter(billReadingLine -> billReadingLine.getRegister().contains("KWH")).collect(Collectors.toList()).get(0).getLastBilledReading());
pdfContainer.setLastBilledReadingInKW(billReadingLineByBillNo.stream().filter(billReadingLine -> billReadingLine.getRegister().contains("KW")).collect(Collectors.toList()).get(0).getLastBilledReading());
pdfContainer.setReadingType(billReadingLineByBillNo.get(0).getReadingType());
pdfContainer.setRateCategory(billLineItemByBilNo.get(0).getRateCategory());
List<PdfContainer> pdfContainerList = new ArrayList<>();
pdfContainerList.add(pdfContainer);
Map<String, Object> parameters = new HashMap<>();
parameters.put("billLineItemByBilNo", billLineItemByBilNo);
parameters.put("billReadingLineByBillNo", billReadingLineByBillNo);
parameters.put("consumerList", consumerList);
parameters.put("pdfContainerList", pdfContainerList);
JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport, parameters, new JREmptyDataSource());
return jasperPrint;
}
// The code above is specific to my requirements; just focus on the JasperPrint object being returned. That JasperPrint object is then used for PDF generation, and the resulting PDFs are stored in a zip file.
#GetMapping("/batchpdf/{rangeFrom}/{rangeTo}")
public String batchPdfBill(#PathVariable("rangeFrom") long rangeFrom, #PathVariable("rangeTo") long rangeTo) throws JRException, IOException {
consumerNosInRange = consumerService.consumerNoByRange(rangeFrom, rangeTo);
String zipFilePath = "C:\\Users\\Barada\\Downloads";
FileOutputStream fos = new FileOutputStream(zipFilePath +"\\"+ rangeFrom +"-To-"+ rangeTo +"--"+ Math.random() + ".zip");
BufferedOutputStream bos = new BufferedOutputStream(fos);
ZipOutputStream outputStream = new ZipOutputStream(bos);
try {
for (long consumerNo : consumerNosInRange) {
JasperPrint jasperPrint = generatePdf(consumerNo);
byte[] bytes = JasperExportManager.exportReportToPdf(jasperPrint);
outputStream.putNextEntry(new ZipEntry(consumerNo + ".pdf"));
outputStream.write(bytes, 0, bytes.length);
outputStream.closeEntry();
}
} finally {
outputStream.close();
}
return "All Bills PDF Generated.. Extract ZIP file get all Bills";
}
}
