Teams Graph Calls API: how to make a consultative transfer - microsoft-graph-api

I can make a call from a BOT to a teams user, and can carry out a "blind" transfer of that call to another teams user using
public async Task TransferCallAsync(string replaceCallId, string userDisplayName, string userID)
{
    var transferTarget = new InvitationParticipantInfo
    {
        Identity = new IdentitySet
        {
            User = new Identity
            {
                DisplayName = userDisplayName,
                Id = userID
            }
        },
        AdditionalData = new Dictionary<string, object>()
        {
            {"endpointType", "default"}
        },
    };

    await graphServiceClient.Communications.Calls[replaceCallId]
        .Transfer(transferTarget)
        .Request()
        .PostAsync();
}
derived from https://learn.microsoft.com/en-us/graph/api/call-transfer?view=graph-rest-1.0&tabs=csharp%2Chttp#request
In order to carry out a "consultative transfer" (traditionally this means that Party A is put on hold while the operator/BOT dials Party B, speaks to Party B, then actions the transfer of Party A to Party B and drops out of the call), the documentation at
https://learn.microsoft.com/en-us/graph/api/call-transfer?view=graph-rest-1.0&tabs=csharp%2Chttp#request-1
shows that I need to use the same code as for a blind transfer, but add a
"ReplacesCallId = "..some Call ID.."
to the transferTarget structure.
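For clarity, this is roughly how that would slot into the code above. This is only a sketch; consultationCallId is a placeholder for whatever ID the documentation expects, presumably the id of a second call already established between the bot and Party B (the InvitationParticipantInfo class does expose a ReplacesCallId property in the SDK):

// Hedged sketch based on the linked documentation: the same transfer call,
// but with ReplacesCallId set on the transfer target. consultationCallId is
// hypothetical -- assumed to be the id of a separate call the bot has
// already placed to Party B.
var transferTarget = new InvitationParticipantInfo
{
    Identity = new IdentitySet
    {
        User = new Identity
        {
            DisplayName = userDisplayName,
            Id = userID
        }
    },
    ReplacesCallId = consultationCallId, // hypothetical: id of the bot <-> Party B call
    AdditionalData = new Dictionary<string, object>()
    {
        {"endpointType", "default"}
    },
};

await graphServiceClient.Communications.Calls[replaceCallId]
    .Transfer(transferTarget)
    .Request()
    .PostAsync();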
My question is, where does this ID come from?
Note that this is not the id of the call which exists between the BOT and Party A. Using that id gives "Code: 8523 Message: replacesCallId matches callId. Cannot replace a call with itself".
What API steps are necessary to generate the ReplacesCallId value?

Related

Exception Azure AD B2C Setting Custom Attribute With GraphServiceClient

Following directions from here: https://learn.microsoft.com/en-us/azure/active-directory-b2c/user-flow-custom-attributes?pivots=b2c-user-flow
I am able to create and get users fine:
// This works!
var graphClient = new GraphServiceClient(
    "https://graph.microsoft.com/beta",
    AuthenticationProvider);

var user = await graphClient.Users[userId].Request().GetAsync();

if (user.AdditionalData == null)
    user.AdditionalData = new Dictionary<string, object>();

user.AdditionalData["extension_xxxxxxx_Apps"] = "TestValue";

// this does not work!
var result = await graphClient.Users[user.Id].Request().UpdateAsync(user);
For xxxxxxx I tried both the Client ID and Object Id from the b2c-extensions-app in my tenant.
Exception:
Microsoft.Graph.ServiceException: 'Code: Request_BadRequest
Message: The following extension properties are not available: extension_xxxxxxx_Apps.
What am I missing? How can I set a custom attribute from GraphServiceClient?
Thank you
Try creating a "new" user rather than getting the existing one. When you call UpdateAsync, B2C will only set the properties that you provide (it won't overwrite the other props with null). This may or may not help, but the thing is, we're doing the same thing you do above, except with a "new" User, and it works for us.
User b2cUser = await this.GetOurUser(userId);

var additionalData = new Dictionary<string, object>();
additionalData["extension_id-of-extensions-app-here_Foo"] = "Ice Cream Cone";

var updatedB2cUser = new User
{
    AdditionalData = additionalData
};

await this.GraphServiceClient.Users[b2cUser.Id].Request().UpdateAsync(updatedB2cUser);
In practice, we include additional props such as Identities, because we use B2C as an identity provider...so the above is some pared-down code from our B2C wrapper showing just the "custom property" part.
Update
Actually, you may just need to remove the hyphens from your extension-app ID and double-check which one you're using.
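In other words, the attribute name is "extension_" + the Application (client) ID of the b2c-extensions-app with the hyphens removed + "_" + the attribute name. A minimal sketch of building that key (the GUID below is a made-up placeholder and BuildExtensionAttributeName is just an illustrative helper, not part of the SDK):

// Hedged sketch: compose the B2C extension attribute name from the
// b2c-extensions-app Application (client) ID with hyphens stripped.
static string BuildExtensionAttributeName(string extensionsAppClientId, string attributeName)
{
    // e.g. "12345678-abcd-ef01-2345-6789abcdef01" -> "12345678abcdef0123456789abcdef01"
    var appIdWithoutHyphens = extensionsAppClientId.Replace("-", string.Empty);
    return $"extension_{appIdWithoutHyphens}_{attributeName}";
}

// Usage against the code above (placeholder GUID):
var key = BuildExtensionAttributeName("12345678-abcd-ef01-2345-6789abcdef01", "Apps");
user.AdditionalData[key] = "TestValue";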

Receiving a real incoming call in Twilio's ClientQuickstart

For the time being I use a test account in Twilio, but I hope that this has no relevance to my question.
As my first experimental step towards Twilio, I'm testing the client-quickstart-csharp-1.4 package in Visual Studio 2017 on Windows.
Outgoing calls work fine to my verified phone, but I have problems with incoming calls. When I make a call from a real phone to my Twilio phone number, my code in VoiceController.cs doesn't run (doesn't hit any breakpoint) and I hear a voice message saying that I should reconfigure something in my application (but I don't understand what). In contrast, when I make a call from my TwiML App config page, pressing the red Call button (see picture),
then my code stops at the breakpoints and says the text I wrote in the argument of response.Say().
My questions:
1. Why does the call work differently from a real phone than from my TwiML App config page?
2. How can I achieve that my code runs (i.e. says the text I wrote in the code) also when I make a call from a real phone?
3. How can I achieve a real, live voice dialogue between the calling phone and my computer's speaker and microphone for incoming calls (similarly to the outgoing calls)?
Remark 1.
Both outgoing and incoming calls work fine in Agile CRM using the Twilio widget for voice calls. But for the time of my experiments I've removed this widget (and also the "Agile CRM Twilio Saga" TwiML App from Twilio), to avoid interference between the different applications.
Remark 2.
Perhaps I should configure something on this screen (the screenshot found here), but I can't find this page on my Twilio portal.
Instead, I have a page like this:
But I don't know what to change here to make my program work.
It seems that this application is designed to
manage outgoing calls (to a real phone, or to another client of this application) and
accept calls from the web (from another client, or from the TwiML App setting page, seen on the first screenshot in the o.p.), but not from a real phone.
Every (outgoing or incoming) call falls into the Index() method of the VoiceController class. This method tries to find out whether a call is incoming or outgoing.
In the case of an outgoing call, the To property of the request parameter of this method is a phone number, while for an incoming call from the web it is a string (a username) or null (when the call comes from the TwiML App setting page). This justifies the if-else structure in the original code (extended only with my remarks starting with (mma)):
public ActionResult Index(VoiceRequest request)
{
    var callerId = ConfigurationManager.AppSettings["TwilioCallerId"];

    var response = new TwilioResponse();

    if (!string.IsNullOrEmpty(request.To))
    {
        // wrap the phone number or client name in the appropriate TwiML verb
        // by checking if the number given has only digits and format symbols
        if (Regex.IsMatch(request.To, "^[\\d\\+\\-\\(\\) ]+$")) //(mma) supposed to be an outgoing call
        {
            response.Dial(new Number(request.To), new { callerId });
        }
        else //(mma) a call from one client to another
        {
            response.Dial(new Client(request.To), new { callerId });
        }
    }
    else //(mma) incoming call from the TwiML App setting page
    {
        response.Say("Thanks for calling!");
    }

    return TwiML(response);
}
Question 3 can be separated into the following two parts:
If at an incoming call we want to establish a real connection with a pre-specified client (say calledUser) instead of reading out the "Thanks for calling!" message, we should replace response.Say("Thanks for calling!"); with response.Dial(cl, new { request.From }); where cl = new Client(calledUser);. We can put the value of calledUser into our Local.config, so we can read it from there: var calledUser = ConfigurationManager.AppSettings["calledUser"];
If we want to accept a call from a real phone, then we should recognize this situation. This is exactly when request.To == callerId (= our Twilio phone number), so we must split the first condition accordingly. The new branch will call the pre-specified user.
Putting these together, our new code in VoiceController.cs will look like this:
public ActionResult Index(VoiceRequest request)
{
    var callerId = ConfigurationManager.AppSettings["TwilioCallerId"];
    var calledUser = ConfigurationManager.AppSettings["calledUser"];

    var response = new TwilioResponse();

    if (!string.IsNullOrEmpty(request.To))
    {
        // wrap the phone number or client name in the appropriate TwiML verb
        // by checking if the number given has only digits and format symbols
        if (Regex.IsMatch(request.To, "^[\\d\\+\\-\\(\\) ]+$"))
        {
            if (request.To != callerId) //(mma) supposed to be an outgoing call
            {
                response.Dial(new Number(request.To), new { callerId });
            }
            else //(mma) supposed to be an incoming call from a real phone
            {
                var cl = new Client(calledUser);
                response.Dial(cl, new { request.From });
            }
        }
        else //(mma) a call from one client to another
        {
            response.Dial(new Client(request.To), new { request.From });
        }
    }
    else //(mma) incoming call from the TwiML App setting page
    {
        var cl = new Client(calledUser);
        response.Dial(cl, new { request.From });
    }

    return TwiML(response);
}
Of course, if we want to accept a call, then we should start a client with the pre-defined username (calledUser). To do this, we can introduce a new URL parameter User, put its value into TempData["User"] in the HomeController, and change the line var identity = Internet.UserName().AlphanumericOnly(); in TokenController.cs to var identity = TempData["User"] == null ? Internet.UserName().AlphanumericOnly() : TempData["User"].ToString();
So, our new HomeController and TokenController look like this:
public class HomeController : Controller
{
    public ActionResult Index(string user)
    {
        TempData["User"] = user;
        return View();
    }
}
and this:
public class TokenController : Controller
{
    // GET: /token
    public ActionResult Index()
    {
        // Load Twilio configuration from Web.config
        var accountSid = ConfigurationManager.AppSettings["TwilioAccountSid"];
        var authToken = ConfigurationManager.AppSettings["TwilioAuthToken"];
        var appSid = ConfigurationManager.AppSettings["TwilioTwimlAppSid"];

        // Create a random identity for the client
        var identity = TempData["User"] == null ? Internet.UserName().AlphanumericOnly() : TempData["User"].ToString();

        // Create an Access Token generator
        var capability = new TwilioCapability(accountSid, authToken);
        capability.AllowClientOutgoing(appSid);
        capability.AllowClientIncoming(identity);
        var token = capability.GenerateToken();

        return Json(new
        {
            identity,
            token
        }, JsonRequestBehavior.AllowGet);
    }
}
And, of course, our Local.config file should contain such a line:
<add key="calledUser" value="TheNameOfThePreDefinedUser" />

Add Pagination MVC and Azure table storage

I am trying to apply pagination to my MVC application. I am using Azure Table storage.
Here is what I have tried:-
public List<T> GetPagination(string partitionKey, int start, int take)
{
    List<T> entities = new List<T>();

    TableQuery<T> query = new TableQuery<T>().Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey.ToLower()));
    entities = Table.ExecuteQuery(query).Skip(start).Take(take).ToList();

    return entities;
}
Controller:
public ActionResult Index()
{
    key = System.Web.HttpContext.Current.Request[Constants.Key];
    if (String.IsNullOrEmpty(key))
        return RedirectToAction("NoContext", "Error");

    var items = _StorageHelper.GetPagination(key, 0, 3);

    ItemCollection itemCollection = new ItemCollection();
    itemCollection.Items = Mapper.Map<List<ItemChart>, List<ItemModel>>(items);
    itemCollection.Items.ForEach(g => g.Object = g.Object.Replace(key, ""));

    return View(itemCollection);
}
This currently gives me the first 3 entries from my data. Now how can I show and implement the "Previous" and "Next" to show the rest of the entries on next page? How do I implement the rest of the controller and HTML page?
Any help is appreciated.
When it comes to pagination, there are a few things to consider:
Not all LINQ operators (and in turn OData query options) are supported by the Table Service. For example, Skip is not supported. For a list of supported operators, please see this link: https://msdn.microsoft.com/en-us/library/azure/dd135725.aspx.
The way pagination works with the Table Service is that when you query your table to fetch some data, the maximum number of entities that can be returned by the table service is 1000. There's no guarantee that 1000 entities will always be returned; it may be fewer than 1000 or even 0, depending on how you're querying. However, if there are more results available, the Table Service returns something called a Continuation Token. You must use this token to fetch the next set of results from the table service. Please see this link for more information on query timeouts and pagination: https://msdn.microsoft.com/en-us/library/azure/dd135718.aspx.
Taking these two factors into consideration, you can't really implement a paging solution where a user can directly jump to a particular page (for example, a user sitting on page 1 can't jump straight to page 4). At most you can implement next page, previous page and first page kind of functionality.
To implement next page kind of functionality, store the continuation token returned by table service and use that in your query.
To implement previous page kind of functionality, you must store all the continuation tokens returned in an array or something and keep track of which page a user is on currently (that would be the current page index). When a user wants to go to previous page, you just get the continuation token for the previous index (i.e. current page index - 1) and use that in your query.
To implement first page kind of functionality, just issue your query without continuation token.
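As a rough sketch of that bookkeeping (all type and member names here are made up for illustration; only TableQuery, TableContinuationToken, CloudTable and ExecuteQuerySegmented come from the Storage Client Library), you could keep the continuation tokens seen so far in a list and index into it by page:

// Hedged sketch: serve "first", "next" and "previous" pages by remembering
// the continuation token that leads to each page already visited.
// Requires: using Microsoft.WindowsAzure.Storage.Table;
public class TablePager<T> where T : ITableEntity, new()
{
    private readonly CloudTable _table;
    private readonly int _pageSize;
    // _tokens[i] is the token needed to fetch page i (null for the first page).
    private readonly List<TableContinuationToken> _tokens = new List<TableContinuationToken> { null };
    private int _currentPage;

    public TablePager(CloudTable table, int pageSize)
    {
        _table = table;
        _pageSize = pageSize;
    }

    public IList<T> GetPage(int pageIndex)
    {
        // Only pages whose token has already been seen can be served directly.
        if (pageIndex < 0 || pageIndex >= _tokens.Count)
            throw new ArgumentOutOfRangeException("pageIndex");

        var query = new TableQuery<T>().Take(_pageSize);
        var segment = _table.ExecuteQuerySegmented(query, _tokens[pageIndex]);

        // Remember the token for the next page the first time we see it.
        if (pageIndex == _tokens.Count - 1 && segment.ContinuationToken != null)
            _tokens.Add(segment.ContinuationToken);

        _currentPage = pageIndex;
        return segment.Results;
    }

    public IList<T> NextPage() { return GetPage(_currentPage + 1); }
    public IList<T> PreviousPage() { return GetPage(_currentPage - 1); }
    public IList<T> FirstPage() { return GetPage(0); }
}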
Do take a look at ExecuteQuerySegmented method in Storage Client Library if you want to implement pagination.
UPDATE
Please see the sample code below. For the sake of simplicity, I have only kept first page and next page functionality:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.WindowsAzure.Storage.Table;

namespace TablePaginationSample
{
    class Program
    {
        static string accountName = "";
        static string accountKey = "";
        static string tableName = "";
        static int maxEntitiesToFetch = 10;
        static TableContinuationToken token = null;

        static void Main(string[] args)
        {
            var cloudStorageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var cloudTableClient = cloudStorageAccount.CreateCloudTableClient();
            var table = cloudTableClient.GetTableReference(tableName);
            Console.WriteLine("Press \"N\" to go to next page\nPress \"F\" to go first page\nPress any other key to exit program");
            var query = new TableQuery().Take(maxEntitiesToFetch);
            var continueLoop = true;
            do
            {
                Console.WriteLine("Fetching entities. Please wait....");
                Console.WriteLine("-------------------------------------------------------------");
                var queryResult = table.ExecuteQuerySegmented(query, token);
                token = queryResult.ContinuationToken;
                var entities = queryResult.Results;
                foreach (var entity in entities)
                {
                    Console.WriteLine(string.Format("PartitionKey = {0}; RowKey = {1}", entity.PartitionKey, entity.RowKey));
                }
                Console.WriteLine("-------------------------------------------------------------");
                if (token == null) //No more token available. We've reached end of table
                {
                    Console.WriteLine("All entities have been fetched. The program will now terminate.");
                    break;
                }
                else
                {
                    Console.WriteLine("More entities available. Press \"N\" to go to next page or Press \"F\" to go first page or Press any other key to exit program.");
                    Console.WriteLine("-------------------------------------------------------------");
                    var key = Console.ReadKey();
                    switch (key.KeyChar)
                    {
                        case 'N':
                        case 'n':
                            continue;
                        case 'F':
                        case 'f':
                            token = null;
                            continue;
                        default:
                            continueLoop = false;
                            break;
                    }
                }
            } while (continueLoop);
            Console.WriteLine("Press any key to terminate the application.");
            Console.ReadLine();
        }
    }
}

Returning an attachment from a remote web service

Summary
I need to retrieve attachments stored in a parent app from a link in a client of a child app. The attachments are available in the parent app via a web service call -- which returns a standard FileContentResult with content type "application/octet-stream". The best way I can think of is to retrieve this via a WebRequest and pass the resulting response stream to a FileStreamResult, though I have some alternatives available.
Does anyone know if, when making a WebRequest, the response stream becomes available immediately once the first part of the response is returned or is it buffered so I don't get the response until all data has been retrieved?
Are there any other options than those listed in the full question below for doing this that I'm missing? (Other than keeping the attachments in both child and parent DBs -- I really don't want to do this since then I'd need to regularly synchronize them, too).
TLDR Version
I have two related applications which communicate through a RESTful web service. The parent application maintains a collection of entities which may have attachments. For example, a Request might have an Excel spreadsheet as an attachment. The entity and its attachment are stored in the database and access to the attachment is controlled using the same logic as access to the Request. That is, you should not be able to download an attachment if you cannot view the Request.
In the child application I maintain some integration glue for the entities assigned to a particular institution -- the app is used to communicate between our Board of Regents and each Regents school. I don't want to maintain and synchronize the full entity/attachment. I only want to maintain enough information to allow me to connect to the web service in the parent app and get the details for entities that the particular instance of the child application has access to.
This works well for the entity data itself. The amount of data is small and the overhead of buffering in the child application doesn't present a significant delay in accessing the data. If necessary, I could cache the data locally to avoid performance penalties.
My concern is the attachments. I've considered three different mechanisms for providing access to the attachment from a client of the child application.
1. Generate a one-time use token and associated url that allows the client to directly download the attachment from the parent application. The token generation web service call would ensure that users of the child application should have access to the attachment. The drawback to this is that you'd only be able to click on the link once in the client. Clicking again would result in an error rather than getting the attachment.
2. Buffer the attachment in the child app. In this scenario I would provide a controller/action to download the attachment in the child app, then call a web service method to get the attachment and have the child app send the attachment as a FileContentResult. This removes the issue of only being able to click the link once, but the attachments could be reasonably large and buffering the data in the child application could potentially double the amount of time to download the attachment and, worse, incur a significant delay before the attachment download begins.
3. Link in the child app, but provide the stream from the web service request directly to a FileStreamResult. This seems, to me, to be the best option as the FileStreamResult reads in chunks rather than having to have all the data available before it is sent to the client. The only drawback that I can see here is that I can no longer dispose of the WebResponse directly as the FileStreamResult won't be executed until after my action returns.
Here is the API wrapper code I have for options (2) and (3):
private class ResponseModel<T> : IDisposable
{
    public T Model { get; set; }
    public WebResponse Response { get; set; }
    private bool Disposed { get; set; }

    private void Dispose( bool disposing )
    {
        if (!Disposed)
        {
            if (disposing)
            {
                ((IDisposable)this.Response).Dispose();
            }
            Disposed = true;
        }
    }

    public void Dispose()
    {
        Dispose( true );
    }
}
private ResponseModel<T> GetAttachmentResponse<T>( long id ) where T : IDownloadModel, new()
{
    var request = GetRequest( string.Format( "{0}/api/getattachment/{1}/{2}", this.BaseUrl, this.Key, id ) );
    var response = request.GetResponse();
    var model = (T)Activator.CreateInstance<T>();

    var contentDisposition = response.Headers["Content-Disposition"];
    if (!string.IsNullOrEmpty( contentDisposition ))
    {
        var filename = contentDisposition.Split( new[] { ';', ' ' }, StringSplitOptions.RemoveEmptyEntries )
                                         .SingleOrDefault( s => s.StartsWith( "filename", StringComparison.OrdinalIgnoreCase ) );
        if (!string.IsNullOrEmpty( filename ))
        {
            model.Name = filename.Split( '=' ).Skip( 1 ).FirstOrDefault();
        }
    }

    if (string.IsNullOrEmpty( model.Name ))
    {
        model.Name = "untitled";
    }

    return new ResponseModel<T> { Model = model, Response = response };
}
public FileDownloadModel GetAttachment( long id )
{
    using (var response = GetAttachmentResponse<FileDownloadModel>( id ))
    {
        var reader = new BinaryReader( response.Response.GetResponseStream() );
        response.Model.Content = reader.ReadBytes( (int)response.Response.ContentLength );
        return response.Model;
    }
}

public FileStreamDownloadModel GetAttachmentStream( long id )
{
    // since we're returning the stream, we can't dispose of the response when done.
    var response = GetAttachmentResponse<FileStreamDownloadModel>( id );
    response.Model.Stream = response.Response.GetResponseStream();
    return response.Model;
}
public interface IDownloadModel
{
    string ContentType { get; }
    string Name { get; set; }
}
Model classes
public class FileDownloadModel : IDownloadModel
{
    public byte[] Content { get; set; }
    public string Name { get; set; }
    public string ContentType { get { return "application/octet-stream"; } }
}

public class FileStreamDownloadModel : IDownloadModel
{
    public Stream Stream { get; set; }
    public string Name { get; set; }
    public string ContentType { get { return "application/octet-stream"; } }
}
I would suggest a variant on Option 1 [call it Option 1(a)].
Instead of generating a one-time token, "borrow" the MVC AntiForgeryToken classes, and have your parent application return a custom token and cookie to the child app for inclusion in the form returned to the user.
If the child application may have links for multiple documents on a single page, in the request for the token information, have the child app submit a unique identifier (identifying the page request from the user) as part of the request. You can then use this identifier in generating the tokens, and you can store the identifier as part of the verification process. This will give you a multi-use token, unique for each link on the page.
Slap an expiration time on the unique identifier, and you should be good to go.
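A rough sketch of what that borrowing could look like on the parent side, using the System.Web.Helpers AntiForgery class (the controller, the action names and the idea of caching the page identifier server-side are my own illustration, not part of the answer above):

// Hedged sketch: the parent app issues an anti-forgery token pair to the child
// app and validates that pair when the attachment is actually downloaded.
public class AttachmentTokenController : Controller
{
    // Called server-to-server by the child app with its unique page identifier.
    public ActionResult Issue(string pageId)
    {
        string cookieToken, formToken;
        System.Web.Helpers.AntiForgery.GetTokens(null, out cookieToken, out formToken);

        // Store pageId alongside the tokens with an expiration (cache/table) -- omitted here.
        return Json(new { cookieToken, formToken }, JsonRequestBehavior.AllowGet);
    }

    // The attachment link in the child app submits both tokens with the request.
    public ActionResult Download(long id, string cookieToken, string formToken)
    {
        // Throws HttpAntiForgeryException if the pair doesn't match.
        System.Web.Helpers.AntiForgery.Validate(cookieToken, formToken);

        // ...check the stored pageId / expiration, then return the attachment,
        // e.g. as a FileStreamResult as in option (3)...
        return new EmptyResult();
    }
}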

Sproutcore datasources and creating new records with relationships

I'm trying to get my head around datasources and related models in SproutCore and am getting nowhere fast, so I was wondering if anyone could help me understand this all a bit better.
Basically I have two related models, Client and Brand. Clients can have many Brands and Brands can have a single Client. I have defined my models correctly and everything is pulling back as expected. The problem I'm having is working out how to create a new Brand and set up its relationship.
So on my Brand controller I have a createBrand method like so:
var brand = DBs.store.createRecord(DBs.Brand, {
  title: this.get('title')
}, Math.floor(Math.random()*1000000));

brand.set('client', this.get('client'));

MyApp.store.commitRecords();
So as this is a new Brand I randomly generate a new ID for it (the second argument to createRecord). This is calling my createRecord in my datasource to create the new Brand, and then it also calls the updateRecord for the client.
The problem I'm having is that the clientUpdate is being passed the temporary (randomly generated) id in the relationship. How should I structure the creation of the new Brand? Should I be waiting for the server to return the newly created Brand's ID and then updating the client relationship? If so, how would I go about doing this?
Thanks
Mark
Right, after sitting in the SproutCore IRC channel and talking to mauritslamers, he recommended creating a framework to handle all the server interactions manually.
So I set up a framework called CoreIo, which contains all my models, store and data source.
The data source is only used for fetching records from the server, i.e.:
fetch: function(store, query) {
  var recordType = query.get('recordType'),
      url = recordType.url;

  if (url) {
    SC.Request.getUrl(CoreIo.baseUrl + url)
      .header({ 'Accept': 'application/json' })
      .json()
      .notify(this, '_didFetch', store, query, recordType)
      .send();
    return YES;
  }

  return NO;
},

_didFetch: function (response, store, query, recordType) {
  if (SC.ok(response)) {
    store.loadRecords(recordType, response.get('body'));
    store.dataSourceDidFetchQuery(query);
  } else {
    store.dataSourceDidErrorQuery(query, response);
  }
},
Then the CoreIo framework has creation methods for my models, i.e.:
CoreIo.createBrand = function (brand, client) {
  var data = brand,
      url = this.getModelUrl(CoreIo.Brand);

  data.client_id = client.get('id');

  SC.Request.postUrl(url)
    .json()
    .notify(this, this.brandDidCreate, client)
    .send(data);
};

CoreIo.brandDidCreate = function (request, client) {
  var json = request.get('body'),
      id = json.id;

  var ret = CoreIo.store.pushRetrieve(CoreIo.Brand, id, json);
  var brand = CoreIo.store.find(CoreIo.Brand, id);

  if (ret) {
    client.get('brands').pushObject(brand);
  }
};
I would then call into these 'actions' to create my new models, which would set up the relationships as well.
