SignalR in MVC3, timing and start/connect issues? - asp.net-mvc

I am having a really weird issue with MVC3 and SignalR. I have a simple hub:
[HubName("test")]
public class Test: Hub
{
public object GetStuff()
{
return new { dummy = "Test" };
}
}
And some client-side code:
var connection = $.connection.test;
connection.start();
connection.getStuff();
This throws an error:
TypeError: Object #<Object> has no method 'start'
If I instead do
var connection = $.connection("test");
I get a different error:
TypeError: Object #<Object> has no method 'getStuff' jquery-1.6.4.min.js:4
POST http://localhost:63021/Controller/test/negotiate 405 (Method Not Allowed)
Note it's trying to negotiate under the controller for some reason.
Is there some specific route I need to register? Some other magic I don't know about?
UPDATE
So playing a bit with the console -- the first version does in fact create an object that has getStuff(), which I can call. But SignalR throws up because I have to call start() first -- which doesn't exist! The second one creates an object that DOES have start(), but it doesn't have getStuff().
UPDATE 2
Tried doing $.connection.hub.start instead. This seems to work in the console, but not in the page onload. Possibly start isn't finished before the hub call is made? Is it async?

Starting the SignalR connection is not instantaneous. Your call to connection.getStuff() may fail if the connection has not yet been established. If you want this code to run after the connection to the hub is established, you should use a callback function.
var connection = $.connection.test;
$.connection.hub.start(function(){
// By convention all exposed hub methods start with lowercase
connection.getStuff();
});
Hub Quickstart: https://github.com/SignalR/SignalR/wiki/QuickStart-Hubs
In-depth look at SignalR javascript client: https://github.com/SignalR/SignalR/wiki/SignalR-JS-Client-Hubs
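To the question's "is there some specific route I need to register?": yes, the hub endpoint has to be mapped on the server, and $.connection("test") bypasses the generated hub proxy entirely (it opens a raw persistent connection to the relative URL "test", which is why the negotiate request ends up under /Controller). Below is a minimal sketch of the server-side registration, assuming a SignalR 1.x-era package where hub routes are mapped with RouteTable.Routes.MapHubs() (SignalR 2.x uses OWIN's MapSignalR() instead):
// Global.asax.cs -- sketch only; MapHubs() is the route extension shipped
// with the SignalR 1.x-era packages.
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Register the SignalR hub route *before* the default MVC route so
        // /signalr/negotiate is not swallowed by a controller route.
        RouteTable.Routes.MapHubs();

        AreaRegistration.RegisterAllAreas();
        RouteTable.Routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}
The page then needs the client scripts in order -- jQuery, jquery.signalR, and the generated proxy at /signalr/hubs -- after which $.connection.test plus $.connection.hub.start() behaves as in the answer above.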

You must add the hub portion:
$.connection.hub.start();

Try this:
var connection = $.connection('@Url.Content("~/echo")');

Related

Async Function Fails when called as part of a Constructor

I'm rather new to Blazor, but I am currently trying to get access to some classes from a class library that I've created and deployed as a NuGet package. As background, the NuGet package is an API library which allows me to talk to a web service (I don't know if this is relevant or not). However, every time I go to the page where I'm testing, the page never loads and I'm left looking at the browser loading circle until I navigate away or close the application. During my testing it seems like it's the @inject call of my interface into the Blazor component which is causing the issue, as when I remove it and try to load the page normally, the page does so.
So to demonstrate what I have setup, here is where I've added the Singletons to the DI:
builder.Services.AddSingleton<IApiConfigHelper, ApiConfigHelper>();
builder.Services.AddSingleton<IApiHelper, ApiHelper>();
builder.Services.AddSingleton<ISystemEndpoint, SystemEndpoint>();
Then on the Blazor page, I have the following declarations at the top of my page:
@using Library.Endpoints
@using Library.Models
@page "/"
@inject ISystemEndpoint _systemEndpoint
Now I am leaning towards this being something to do with the NuGet package and using it with DI. I have tested the library away from this project (in a console application) and can confirm it's working as it should.
I have also created a local class library as a test, to see if I could inject a data access class into the page, and I can confirm that this works without an issue, which suggests to me that DI is working, just not with my NuGet package.
I did have a look into CORS, given that the NuGet package is accessing an external domain, and set up the following simple CORS policy in the app:
builder.Services.AddCors(policy =>
{
policy.AddPolicy("OpenCorsPolicy", opt =>
opt.AllowAnyOrigin()
.AllowAnyHeader()
.AllowAnyMethod());
});
Which is added to the app after the AddRouting call like so:
app.UseCors("OpenCorsPolicy");
However again, this wasn't the solution so if anyone is able to point me in the right direction with where I may be going wrong with this or offer any advice, I would be most grateful.
EDIT 1 - Provides details @mason queried
Regarding SystemEndpoint, the constructor is being injected with 2 things, as below:
public SystemEndpoint(IApiHelper apiHelper, IOptions<UriConfigModel> uriOptions)
{
_apiHelper = apiHelper;
_uriOptions = uriOptions.Value;
}
My NuGet library is dependent on the following:
Azure.Identity
Azure.Security.KeyVault.Secrets
Microsoft.AspNet.WebApi.Client
Microsoft.Extensions.Options.ConfigurationExtensions
EDIT 2 - Doing some further testing, I have added a simple endpoint class to my NuGet library which returns a string with a basic message, as well as the values of the two UriConfig properties, as below. I added this test to 1) sanity-check that my DI was working correctly, and 2) check the values being assigned from appsettings to my UriConfig object.
public class TestEndpoint : ITestEndpoint
{
private readonly IOptions<UriConfigModel> _uriConfig;
public TestEndpoint(IOptions<UriConfigModel> uriConfig)
{
_uriConfig = uriConfig;
}
public string TestMethod()
{
return $"You have successfully called the test method\n\n{_uriConfig.Value.Release} / {_uriConfig.Value.Version}";
}
}
However, when adding the IApiHelper dependency into the ctor, the method breaks and the page fails to load. Looking into ApiHelper, its ctor has an IApiConfigHelper dependency injected into it. Looking at the implementation, the ctor of ApiConfigHelper is setting up the values and parameters of the HttpClient that should make the REST calls to the external API.
Now I believe what is breaking the code at this point is a call I'm making to Azure Key Vault, via REST, to pull out the secret values needed to connect to the API. The call to Key Vault is orchestrated via the following method, making use of the Azure.Security.KeyVault.Secrets NuGet package; I assume that at the heart of it, it's making a REST call to Azure on my behalf:
private async Task<KeyVaultSecret> GetKeyVaultValue(string secretName = "")
{
try
{
if (_secretClient is not null)
{
var result = await _secretClient.GetSecretAsync(secretName);
return result.Value;
}
}
catch (ArgumentException ae)
{
Console.WriteLine(ae.Message);
}
catch (Azure.RequestFailedException rfe)
{
Console.WriteLine(rfe.Message);
}
return new(secretName, "");
}
So that's where I stand with this at the moment. I still believe it could be down to CORS, as it seems to be falling over when making a call to an external service / domain, but I still can't say for 100%. As a closing thought, could it be something as simple as the above method not being awaited when I call it?
So after persisting with this, it seems like the reason it was failing was down to "awaiting" the call to Azure Key Vault, which was happening indirectly via the constructor of ApiConfigHelper. The resulting method for getting a Key Vault value is now:
private KeyVaultSecret GetKeyVaultValue(string secretName = "")
{
try
{
if (_secretClient is not null)
{
var result = _secretClient.GetSecret(secretName);
if (result is not null)
{
return result.Value;
}
}
}
catch (ArgumentException ae)
{
Console.WriteLine(ae.Message);
}
catch (Azure.RequestFailedException rfe)
{
Console.WriteLine(rfe.Message);
}
return new(secretName, "");
}
I am now able to successfully make calls to my library and return values from the Api it interacts with.
I can also confirm that this IS NOT a CORS issue. Once I saw that removing the await was working, I then removed the CORS policy declarations from the service and the app in my Blazor's start-up code and everything continued to work without an issue.
As a final note, I must stress that this only seems to be an issue when using the library with Blazor (and possibly Web API projects), as I am able to use the library, awaiting the Azure call, just fine in a console application.
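For reference, another way to avoid the problem (instead of switching to the synchronous GetSecret call) is to keep the constructor free of I/O and move the Key Vault lookup into an explicit async initialization method that an async context, such as the component's OnInitializedAsync, can await. A minimal sketch under that assumption; the class shape and member names below are hypothetical rather than the poster's actual ones:
using System.Threading.Tasks;
using Azure.Security.KeyVault.Secrets;

// Hypothetical, pared-down ApiConfigHelper: the constructor only stores
// dependencies, and the Key Vault call is deferred to an awaitable method.
public class ApiConfigHelper
{
    private readonly SecretClient _secretClient;

    public ApiConfigHelper(SecretClient secretClient)
    {
        _secretClient = secretClient;   // no I/O here, nothing to block on
    }

    public string ApiKey { get; private set; }

    // Await this from an async context (e.g. OnInitializedAsync) instead of
    // triggering the lookup from the constructor.
    public async Task InitializeAsync(string secretName)
    {
        var result = await _secretClient.GetSecretAsync(secretName);
        ApiKey = result.Value.Value;    // KeyVaultSecret.Value is the secret string
    }
}
The Blazor component would then await InitializeAsync from OnInitializedAsync, so the Key Vault call is genuinely awaited instead of being blocked on inside a constructor.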

Messages not reaching destination queue when using ServerInitializerFactory Netty 4

I am using Apache Camel netty4 in Grails and I have declared my custom ServerInitializerFactory as follows:
public class MyServerInitializerFactory extends ServerInitializerFactory {
private int maxLineSize = 1048576;
NettyConsumer nettyConsumer
public MyServerInitializerFactory() {}
@Override
protected void initChannel(Channel channel) throws Exception {
ChannelPipeline pipeline = channel.pipeline()
pipeline.addLast("logger", new LoggingHandler(LogLevel.INFO))
pipeline.addLast("framer", new LengthFieldBasedFrameDecoder(ByteOrder.LITTLE_ENDIAN, maxLineSize, 2, 2, 6, 0, false))
pipeline.addLast("decoder", new MfuDecoder())
pipeline.addLast("encoder", new MfuEncoder())
pipeline.addLast("handler", new MyServerHandler())
}
}
I have a route which I set up as follows in my RouteBuilder:
from('netty4:tcp://192.168.254.3:553?serverInitializerFactory=#sif&keepAlive=true&sync=true&allowDefaultCodec=false').to('activemq:queue:Tracking.Queue')
My CamelContext is set up in BootStrap.groovy as follows:
def serverInitializerFactory = new MyServerInitializerFactory()
SimpleRegistry registry = new SimpleRegistry()
registry.put("sif", serverInitializerFactory)
CamelContext camelContext = new DefaultCamelContext(registry)
camelContext.addComponent("activemq", ActiveMQComponent.activeMQComponent("failover:tcp://localhost:61616"))
camelContext.addRoutes new TrackingMessageRoute()
camelContext.start()
When I run my app, my route is started and my framer, decoder, handler and encoder are all invoked, but messages do not reach the Tracking.Queue and responses do not get back to the client.
If I do not use serverInitializerFactory in the netty URL and use encoders and decoders instead, my messages hit the queue, but I lose control of the acknowledgement that I want to send for each type of message that I receive. It seems ActiveMQ tries to send its own response, which is rejected by my encoder.
Am I supposed to then write code to send again or is there something I am missing?
You need to add a handler with the consumer so it can be routed; see the unit test for how it's done:
https://github.com/apache/camel/blob/master/components/camel-netty4/src/test/java/org/apache/camel/component/netty4/NettyCustomPipelineFactoryAsynchTest.java#L112
I managed to get around that problem. In my channelRead0 method I added the following line:
Exchange exchange = this.consumer.getEndpoint().createExchange(ctx, msg);
where ctx is the ChannelHandlerContext and msg is the message object; the two are both parameters of the channelRead0 method.
I also added the following line:
this.consumer.createUoW(exchange);
and after my handling code I inserted the following line
this.consumer.doneUoW(exchange);
and everything works like a charm.

Error in Zuul SendErrorFilter during forward

When my Zuul Filter is unable to route to a configured URL, the 'RibbonRoutingFilter' class throws a ZuulException saying "Forwarding error" and the control goes to the 'SendErrorFilter' class.
Now when the SendErrorFilter class tries to do a forward, another exception happens during this forward call.
dispatcher.forward(ctx.getRequest(), ctx.getResponse());
The exception happening during this forward call is
Caused by: java.lang.IllegalArgumentException: UT010023: Request org.springframework.cloud.netflix.zuul.filters.pre.Servlet30WrapperFilter$Servlet30RequestWrapper#6dc974ea was not original or a wrapper
at io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:103) ~[undertow-servlet-1.1.3.Final.jar:1.1.3.Final]
at org.springframework.cloud.netflix.zuul.filters.post.SendErrorFilter.run(SendErrorFilter.java:74) ~[spring-cloud-netflix-core-1.0.0.RELEASE.jar:1.0.0.RELEASE]
at com.netflix.zuul.ZuulFilter.runFilter(ZuulFilter.java:112) ~[zuul-core-1.0.28.jar:na]
at com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:197) ~[zuul-core-1.0.28.jar:na]
Finally, when the control comes to my custom ZuulErrorFilter, I do not get the original exception. Instead, the exception object I get is the one that occurred during the forward.
Update:
I found that an errorPath property can be configured to point to an error-handling service. If it is not configured, Zuul by default looks for a service named /error and tries to dispatch to that service. Since we did not have any service for /error, the dispatcher.forward() was throwing an error.
Question
How can we skip this forward to an error-handling service? We have an ErrorFilter to log the error. We do not want to have an error-handling service.
We faced the same issue, and there is a simple solution to stop Undertow from "eating" the original exception; see my blog post:
http://blog.jmnarloch.io/2015/09/16/spring-cloud-zuul-error-handling/
You need to set the flag allow-non-standard-wrappers to true. In Spring Boot this can be done by registering a custom UndertowDeploymentInfoCustomizer. Example:
@Bean
public UndertowEmbeddedServletContainerFactory embeddedServletContainerFactory() {
UndertowEmbeddedServletContainerFactory factory = new UndertowEmbeddedServletContainerFactory();
factory.addDeploymentInfoCustomizers(new UndertowDeploymentInfoCustomizer() {
@Override
public void customize(DeploymentInfo deploymentInfo) {
deploymentInfo.setAllowNonStandardWrappers(true);
}
});
return factory;
}
Now, regarding the question: either way I would highly encourage you to implement your own ErrorController, because otherwise you may experience odd Spring Boot behaviour (in our setup, relying on the default always generated the Whitelabel error page with a 200 HTTP status code, which never happened on Tomcat), and in that form the response was not consumable by AJAX calls, for instance.
Related Github issue: https://github.com/spring-cloud/spring-cloud-netflix/issues/524

webapi odata update savechanges issue - Unable to connect to remote server

In my MVC web application, I am using Web API to connect to my database through OData.
The MVC web app and the OData Web API are on different ports of the Azure cloud service web role endpoints:
MVC WebApp - 80
Odata WebApi - 23900
When I do an OData proxy UpdateObject and call SaveChanges, like
odataProxy.UpdateObject(xxx);
odataProxy.SaveChanges(System.Data.Services.Client.SaveChangesOptions.PatchOnUpdate);
I am getting a weird exception on the SaveChanges method call: unable to connect to the remote server.
When I looked into the inner exceptions, it says: No connection could be made because the target machine actively refused it 127.0.0.1:23901
If you observe the port number in the exception, it shows 23901; obviously this error occurs because the request is supposed to hit 23900.
I am facing this exception only when running in the Azure cloud solution. Whenever I do an update request, it fails by hitting the wrong port (incremented by 1).
Another thing: apart from this UpdateObject -> SaveChanges, everything else works, like fetching data and adding data.
FWIW, I've just run across this same thing. Darn near annoying and I really hope it doesn't happen in production. I'm surprised no other people have come across this though.
The idea of creating a new context, attaching the object(s) and calling SaveChanges really repulsed me because not only does it practically break all forms of testing, it causes debug code and production code to be fundamentally different.
I was however able to work around this problem in another way, by intercepting the request just before it goes out and using reflection to poke at some private fields in memory to "fix" the port number.
UPDATE: It's actually easier than this. We can intercept the request generation process with the BuildingRequest event. It goes something like this:
var context = new Context(baseUri);
context.BuildingRequest += (o, e) =>
{
FixPort(e);
};
Then the FixPort method just needs to test the port number and build a new Uri, attaching it back to the event args.
[Conditional("DEBUG")]
private static void FixPort(BuildingRequestEventArgs eventArgs)
{
int localPort = int.Parse(LOCAL_PORT);
if (eventArgs.RequestUri.Port != localPort)
{
var builder = new UriBuilder(eventArgs.RequestUri);
builder.Port = localPort;
eventArgs.RequestUri = builder.Uri;
}
}
Here's the original method using reflection and SendingRequest2, in case anyone is still interested.
First we create a context and attach a handler to the SendingRequest2 event:
var context = new Context(baseUri);
context.SendingRequest2 += (o, e) =>
{
FixPort(e.RequestMessage);
};
The FixPort method then handles rewriting the URL of the internal request, where LOCAL_PORT is the port you expect, in your case 23900:
[Conditional("DEBUG")]
private static void FixPort(IODataRequestMessage requestMessage)
{
var httpWebRequestMessage = requestMessage as HttpWebRequestMessage;
if (httpWebRequestMessage == null) return;
int localPort = int.Parse(LOCAL_PORT);
if (httpWebRequestMessage.HttpWebRequest.RequestUri.Port != localPort)
{
var builder = new UriBuilder(requestMessage.Url);
builder.Port = localPort;
var uriField = typeof (HttpWebRequest).GetField("_Uri",
BindingFlags.Instance | BindingFlags.NonPublic);
uriField.SetValue(httpWebRequestMessage.HttpWebRequest, builder.Uri);
}
}
I have found the root cause and a temporary workaround.
Cause:
When you hit the Web API through some port (:23900) in the Azure compute emulator and do an update or delete operation, somehow the last request blocks the port, and because of the port-walking feature in the Azure emulator, the next request jumps to the next port, where there is no service available, which causes the issue.
This issue is found only in the development emulator.
Temp Workaround:
Use a different proxy to attach the updated object to, and then save from that other proxy object.
var odataProxy1 = xxx;
var obj = odataProxy1.xyz.FirstOrDefault();
obj.property1="abcd";
...//Other update assignments
var odataProxy2 = xxx;
odataProxy2.AttachTo("objEntitySet",obj);
odataProxy2.UpdateObject(obj);
odataProxy2.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);

Web API, async file uploading works locally, not on server

Using the following tutorial, http://www.asp.net/web-api/overview/working-with-http/sending-html-form-data,-part-2, I used the following controller as the base of a file upload call I implemented:
public Task<HttpResponseMessage> PostFormData()
{
// Check if the request contains multipart/form-data.
if (!Request.Content.IsMimeMultipartContent())
{
throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
string root = HttpContext.Current.Server.MapPath("~/App_Data");
var provider = new MultipartFormDataStreamProvider(root);
// Read the form data and return an async task.
var task = Request.Content.ReadAsMultipartAsync(provider).
ContinueWith<HttpResponseMessage>(t =>
{
if (t.IsFaulted || t.IsCanceled)
{
Request.CreateErrorResponse(HttpStatusCode.InternalServerError, t.Exception);
}
// A whole lotta logic to save the file, process it, etc.
});
return task;
}
To save space I didn't include the majority of the logic I wrote, since the error happens on the first line within ContinueWith:
if (t.IsFaulted || t.IsCanceled)
If I run this locally from VS2010, both of the above booleans are false, and the code works perfectly - all of it, even the extra few dozen lines I commented out. When I deploy it to a server running IIS7, t.IsFaulted is always true. I've never worked with asynchronous calls in C#, and have only done a few simple controllers in Web API... is there something I have to install/configure/etc. on a production server to make it work?
Making it more difficult is the fact that all of the exceptions stay in that task (i.e. they don't get caught by ELMAH), so I have no idea how to debug what's happening; IIS is also not logging any errors in the Event Viewer, so I'm at a loss to know exactly what's going on. Any tips on how to make this debugging process easier?
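One way to surface the underlying failure (a sketch of the faulted branch only, reusing the controller shape from the question): build the error response from the flattened AggregateException and log its root cause somewhere visible. On IIS, a common root cause for a fault at this point is a missing or non-writable App_Data folder, so that is worth checking as well.
// Sketch: make the faulted branch return (and log) the real exception
// instead of falling through to the rest of the logic.
var task = Request.Content.ReadAsMultipartAsync(provider)
    .ContinueWith<HttpResponseMessage>(t =>
    {
        if (t.IsFaulted)
        {
            // Flatten() unwraps the AggregateException so the root cause
            // (file-system permissions, missing App_Data, ...) is visible.
            Exception root = t.Exception.Flatten().InnerException;
            // Log 'root' here (trace file, ELMAH's ErrorSignal, ...), then:
            return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, root);
        }
        if (t.IsCanceled)
        {
            return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, "Upload was cancelled.");
        }
        // ... original save/processing logic ...
        return Request.CreateResponse(HttpStatusCode.OK);
    });
return task;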
