MassTransit SQS transport local development error - amazon-sqs

I am using MassTransit's SQS transport. On a real AWS SQS setup everything works great. However, when developing locally using LocalStack I run into the following error:
MassTransit.Messages[0]
R-FAULT amazonsqs://us-east-1/UpdateName f28f6a22-f0dd-4ffa-926f-2b8c871c8bf0 Value cannot be null.
Parameter name: source
System.Runtime.Serialization.SerializationException: An exception occurred while deserializing the message envelope ---> System.ArgumentNullException: Value cannot be null.
Parameter name: source
at System.Linq.ThrowHelper.ThrowArgumentNullException(ExceptionArgument argument)
at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
at MassTransit.Serialization.JsonConsumeContext..ctor(JsonSerializer deserializer, IObjectTypeDeserializer objectTypeDeserializer, ReceiveContext receiveContext, MessageEnvelope envelope)
at MassTransit.Serialization.JsonMessageDeserializer.MassTransit.IMessageDeserializer.Deserialize(ReceiveContext receiveContext)
--- End of inner exception stack trace ---
at MassTransit.Serialization.JsonMessageDeserializer.MassTransit.IMessageDeserializer.Deserialize(ReceiveContext receiveContext)
at MassTransit.Serialization.SupportedMessageDeserializers.Deserialize(ReceiveContext receiveContext)
at MassTransit.Pipeline.Filters.DeserializeFilter.Send(ReceiveContext context, IPipe`1 next)
at GreenPipes.Filters.RescueFilter`2.GreenPipes.IFilter<TContext>.Send(TContext context, IPipe`1 next)
dbug: MassTransit.AmazonSqsTransport.Pipeline.ConfigureTopologyFilter<MassTransit.AmazonSqsTransport.Topology.ErrorSettings>
This is only a problem with the latest MassTransit (5.5.1); in version 5.3.0 it works fine.
Any help fixing this error would be much appreciated.
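For reference, the bus is configured against LocalStack roughly like this (a minimal sketch only; the LocalStack edge ports, dummy credentials, and consumer wiring are placeholders rather than my exact values):

using System;
using Amazon.SimpleNotificationService;
using Amazon.SQS;
using MassTransit;

public class Program
{
    public static void Main()
    {
        var busControl = Bus.Factory.CreateUsingAmazonSqs(cfg =>
        {
            var host = cfg.Host("us-east-1", h =>
            {
                // LocalStack accepts any credentials.
                h.AccessKey("test");
                h.SecretKey("test");

                // Point the SQS and SNS clients at the LocalStack edge ports.
                h.Config(new AmazonSQSConfig { ServiceURL = "http://localhost:4576" });
                h.Config(new AmazonSimpleNotificationServiceConfig { ServiceURL = "http://localhost:4575" });
            });

            cfg.ReceiveEndpoint(host, "UpdateName", e =>
            {
                // consumer registration elided
            });
        });

        busControl.Start();
    }
}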

Related

Kafka Streams stateful mode fails (RocksDB doesn't initialize)

Error log:
Failed to close task manager due to the following error:
java.lang.NoClassDefFoundError: Could not initialize class org.rocksdb.DBOptions
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:133)
at org.apache.kafka.streams.state.internals.TimestampedSegment.openDB(TimestampedSegment.java:49)
at org.apache.kafka.streams.state.internals.TimestampedSegments.getOrCreateSegment(TimestampedSegments.java:50)
at org.apache.kafka.streams.state.internals.TimestampedSegments.getOrCreateSegment(TimestampedSegments.java:25)
at org.apache.kafka.streams.state.internals.AbstractSegments.getOrCreateSegmentIfLive(AbstractSegments.java:84)
at org.apache.kafka.streams.state.internals.AbstractRocksDBSegmentedBytesStore.put(AbstractRocksDBSegmentedBytesStore.java:146)
at org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:61)
at org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:27)
at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:111)
at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:34)
at org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:106)
at org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:86)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:151)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:109)
at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:124)
at org.apache.kafka.streams.state.internals.CachingWindowStore.flush(CachingWindowStore.java:291)
at org.apache.kafka.streams.state.internals.WrappedStateStore.flush(WrappedStateStore.java:84)
at org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$flush$4(MeteredWindowStore.java:200)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:801)
at org.apache.kafka.streams.state.internals.MeteredWindowStore.flush(MeteredWindowStore.java:200)
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:282)
at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:647)
at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:745)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.closeTask(AssignedStreamsTasks.java:81)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.closeTask(AssignedStreamsTasks.java:37)
at org.apache.kafka.streams.processor.internals.AssignedTasks.shutdown(AssignedTasks.java:256)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.shutdown(AssignedStreamsTasks.java:535)
at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(TaskManager.java:292)
at org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(StreamThread.java:1133)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:682)
Locally it works fine. The error only occurs in the environment where the application is deployed to Docker.
Kafka Streams version: 2.5.0

A command from the RDF4J framework is throwing an error

When running the following code, the add command throws an error. The RDF4J framework is used to communicate with a GraphDB knowledge base:
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.repository.RepositoryConnection;

Model model;
// statements have been added to the model
public RepositoryConnection connection;
...
connection.add(model); // <-- the error is thrown here
The add command throws the following error:
2022-01-24 09:46:02 [http-nio-8080-exec-5] ERROR restapi.SubmitRMService - unable to rollback transaction. HTTP error code 404
org.eclipse.rdf4j.repository.RepositoryException: unable to rollback transaction. HTTP error code 404
at org.eclipse.rdf4j.http.client.RDF4JProtocolSession.rollbackTransaction(RDF4JProtocolSession.java:786)
at org.eclipse.rdf4j.repository.http.HTTPRepositoryConnection.rollback(HTTPRepositoryConnection.java:354)
at org.eclipse.rdf4j.repository.base.AbstractRepositoryConnection.conditionalRollback(AbstractRepositoryConnection.java:335)
at org.eclipse.rdf4j.repository.base.AbstractRepositoryConnection.add(AbstractRepositoryConnection.java:379)
The code runs successfully in two other environments; the only one that fails is the one we are setting up with Kubernetes. We ensured that /opt/graphdb/home has write permissions.
Question: I just do not understand the 404 RepositoryException. The code can successfully run queries before the .add(model) call, so the connection to GraphDB is fine.

Error 401 on AdminApp deployment to WebSphere 9 Docker

While running a Jython script that uses AdminApp.install to deploy to WAS on Docker, I'm getting this error:
WASX7017E: Exception received while running file
"deployWebSphere_with_install_standAlone.py"; exception information:
com.ibm.websphere.management.filetransfer.client.TransferFailedException:
401 Unauthorized (for:
C:\Users\devopsjava\AppData\Local\Temp\app5674632847255103189.ear).
I tried giving full control on this path, and I tried setting the custom property com.ibm.websphere.management.filetransfer.serverBasicAuth=true
(https://www-01.ibm.com/support/docview.wss?uid=swg1PK71800).
My code:
options = ['-node', node, '-cell', cell, '-server', server]
print AdminApp.install(packageFile, options)
I always get the same WASX7017E / 401 Unauthorized exception shown above.
Maybe I need to set some additional property, or is there a problem with security between the Docker domain and the outside world?

Azure Functions Dependency Injection Failing Again

I have an Azure Functions V2 application running in Azure. For about the third time now, across various releases, my custom dependency injection is failing again!
This time it is worse than ever, as the issue only happens when the app is deployed to Azure, and the logging was a nightmare to dig through to find out what was happening.
Azure Functions is currently logging the following, with the usual error that it is unable to index my functions.
2018-11-10T10:41:56.091 [Information] Host Status: {
"id": "ad-api-dev",
"state": "Default",
"version": "2.0.12165.0",
"versionDetails": "2.0.12165.0 Commit hash: f9d6c271296eaae48f933c67d1df796fd4ff2a94"
}
2018-11-10T10:41:57.607 [Information] Initializing Host.
2018-11-10T10:41:57.607 [Information] Host initialization: ConsecutiveErrors=0, StartupCount=1
2018-11-10T10:41:57.644 [Information] Starting JobHost
2018-11-10T10:41:57.655 [Information] Starting Host (HostId=ad-api-dev, InstanceId=387865be-cfdf-45ca-98ba-2e0091827461, Version=2.0.12165.0, ProcessId=5140, AppDomainId=1, InDebugMode=True, InDiagnosticMode=False, FunctionsExtensionVersion=~2)
2018-11-10T10:41:57.674 [Information] Loading functions metadata
2018-11-10T10:41:57.723 [Information] 15 functions loaded
2018-11-10T10:41:58.158 [Information] Generating 15 job function(s)
2018-11-10T10:41:58.237 [Error] Error indexing method 'CompleteSimulation.Run'
Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException : Error indexing method 'CompleteSimulation.Run' ---> System.InvalidOperationException : Cannot bind parameter 'executionService' to type IExecutionService. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
at async Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexer.IndexMethodAsyncCore(MethodInfo method,IFunctionIndexCollector index,CancellationToken cancellationToken) at C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Indexers\FunctionIndexer.cs : 277
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at async Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexer.IndexMethodAsync(MethodInfo method,IFunctionIndexCollector index,CancellationToken cancellationToken) at C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Indexers\FunctionIndexer.cs : 167
End of inner exception
at async Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexer.IndexMethodAsync(MethodInfo method,IFunctionIndexCollector index,CancellationToken cancellationToken) at C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Indexers\FunctionIndexer.cs : 175
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at async Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexer.IndexTypeAsync(Type type,IFunctionIndexCollector index,CancellationToken cancellationToken) at C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Indexers\FunctionIndexer.cs : 103
2018-11-10T10:41:58.574 [Warning] Function 'CompleteSimulation.Run' failed indexing and will be disabled.
My local version runs fine, as below:
Azure Functions Core Tools (2.2.32 Commit hash: c5476ae629a0a438d6850e58eae1f5203c896cd6)
Function Runtime Version: 2.0.12165.0
[10/11/2018 13:35:15] Building host: startup suppressed:False, configuration suppressed: False
[10/11/2018 13:35:15] Reading host configuration file 'Y:\src\modo.solutions\Modo.Esg.Api\ModoSol.Functions.Api\bin\Debug\netstandard2.0\host.json'
[10/11/2018 13:35:15] Host configuration file read:
[10/11/2018 13:35:15] {
[10/11/2018 13:35:15] "version": "2.0"
[10/11/2018 13:35:15] }
[10/11/2018 13:35:17] Initializing extension with the following settings:
You can see I am using the same runtime version locally as in Azure, so if anyone knows why my dependency injection would work locally but not when deployed, it would be appreciated.
I am using Autofac 4.8.1 as my dependency injection container.
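Since a custom parameter type like IExecutionService only binds when a binding extension for it is registered at host startup, here is a trimmed sketch of that registration pattern for the WebJobs SDK (the attribute, service, and startup class names are illustrative placeholders, not my actual code):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Description;
using Microsoft.Azure.WebJobs.Host.Config;
using Microsoft.Azure.WebJobs.Hosting;

[assembly: WebJobsStartup(typeof(MyFunctions.Startup))]

namespace MyFunctions
{
    // Placeholder service and implementation.
    public interface IExecutionService { }
    public class ExecutionService : IExecutionService { }

    // Marker attribute the binding rule keys on; a function parameter would
    // be declared as: [Inject] IExecutionService executionService
    [Binding]
    [AttributeUsage(AttributeTargets.Parameter)]
    public sealed class InjectAttribute : Attribute { }

    // Runs when the Functions host starts; without this registration the
    // indexer cannot bind the parameter and fails exactly as in the log above.
    public class Startup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddExtension<InjectExtensionConfigProvider>();
        }
    }

    [Extension("Inject")]
    public class InjectExtensionConfigProvider : IExtensionConfigProvider
    {
        public void Initialize(ExtensionConfigContext context)
        {
            // A real implementation would resolve from the Autofac container
            // instead of constructing the service directly.
            context.AddBindingRule<InjectAttribute>()
                   .BindToInput<IExecutionService>(attr => new ExecutionService());
        }
    }
}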
Thanks in advance.
By the way, if anyone else has the issue of not being able to see why their Functions host is not starting, follow these steps.
Go into the portal for your Azure Function, then find and enable Live Log Streaming.
While this window is open, re-deploy/publish your app.
Once that's done, go into Cloud Explorer (Visual Studio on Windows only :/) and find the logs.
If you don't have the live log streaming window open while you deploy, you don't get any logs!

Error while starting Grails

I am trying to work with Grails through the command prompt.
I am using the versions below:
| Grails Version: 3.2.3
| Groovy Version: 2.4.7
| JVM Version: 1.8.0_112
The following error is thrown:
| Error Error occurred running Grails CLI: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server (repo.grails.org) (Use --stacktrace to see the full trace)
Upon using --stacktrace, the following is displayed:
| Error Error occurred running Grails CLI: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server (repo.grails.org) (NOTE: Stack trace has been filtered. Use --verbose to see entire trace.)
java.net.UnknownHostException: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server (repo.grails.org)
at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.resolveHostname(DefaultClientConnectionOperator.java:278)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:162)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:643)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.DecompressingHttpClient.execute(DecompressingHttpClient.java:137)
at org.eclipse.aether.transport.http.HttpTransporter.execute(HttpTransporter.java:287)
at org.eclipse.aether.transport.http.HttpTransporter.implGet(HttpTransporter.java:243)
at org.eclipse.aether.spi.connector.transport.AbstractTransporter.get(AbstractTransporter.java:59)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$GetTaskRunner.runTask(BasicRepositoryConnector.java:447)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:350)
at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:581)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:249)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:520)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:421)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:320)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.resolveCachedArtifactDescriptor(DefaultDependencyCollector.java:535)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.getArtifactDescriptorResult(DefaultDependencyCollector.java:519)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:409)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:363)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:351)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:254)
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:341)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:302)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:284)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:276)
at org.grails.cli.boot.GrailsDependencyVersions.<init>(GrailsDependencyVersions.groovy:53)
at org.grails.cli.boot.GrailsDependencyVersions.<init>(GrailsDependencyVersions.groovy:49)
at org.grails.cli.profile.repository.MavenProfileRepository.<init>(MavenProfileRepository.groovy:53)
at org.grails.cli.GrailsCli.createMavenProfileRepository(GrailsCli.groovy:333)
at org.grails.cli.GrailsCli.execute(GrailsCli.groovy:234)
at org.grails.cli.GrailsCli.main(GrailsCli.groovy:159)
| Error Error occurred running Grails CLI: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server (repo.grails.org)
Kindly help me resolve the above issue.
The most likely reason is that your internet is not working or you are behind a proxy. See http://docs.grails.org/latest/guide/conf.html#proxyConfig for how to configure your proxy settings.
