ActiveMQTextMessages are kept in memory after they are consumed from a topic

So I have a problem. I am running an embedded Apache ActiveMQ broker in my application, which has a topic. I have one producer that sends small messages to the topic and one consumer that consumes them.
The problem is that the application's memory footprint just keeps growing, to the point where it takes up several gigabytes after a few days. I did memory profiling with JProfiler and noticed that a lot of instances of type ActiveMQTextMessage are kept in memory.
This is how I set up my broker:
BrokerService brokerService = new BrokerService();
brokerService.setUseJmx(false);
brokerService.setUseLocalHostBrokerName(false);
brokerService.addConnector(tenantConfiguration.getConnectionString());
brokerService.setBrokerName(tenantConfiguration.getBrokerComponentIdentifier());
brokerService.setPersistenceAdapter(persistenceAdapterFromConnectionString);
SystemUsage systemUsage = new SystemUsage();
brokerService.setSystemUsage(systemUsage);
brokerService.setDestinationPolicy(createDestinationPolicyForBrokerService());
And here is how I set up the destination policy:
private PolicyMap createDestinationPolicyForBrokerService() {
    PolicyMap policyMap = new PolicyMap();
    List<PolicyEntry> policyEntries = new ArrayList<>();

    ConstantPendingMessageLimitStrategy constantPendingMessageLimitStrategy = new ConstantPendingMessageLimitStrategy();
    constantPendingMessageLimitStrategy.setLimit(10);

    PolicyEntry queuePolicyEntry = new PolicyEntry();
    queuePolicyEntry.setPrioritizedMessages(true);
    queuePolicyEntry.setGcInactiveDestinations(true);
    queuePolicyEntry.setInactiveTimoutBeforeGC(86400);
    queuePolicyEntry.setQueue(">");
    queuePolicyEntry.setPendingMessageLimitStrategy(constantPendingMessageLimitStrategy);

    PolicyEntry topicPolicyEntry = new PolicyEntry();
    topicPolicyEntry.setTopic(">");
    topicPolicyEntry.setGcInactiveDestinations(true);
    topicPolicyEntry.setInactiveTimoutBeforeGC(5000);
    topicPolicyEntry.setPendingMessageLimitStrategy(constantPendingMessageLimitStrategy);
    topicPolicyEntry.setUseCache(false);

    policyEntries.add(queuePolicyEntry);
    policyEntries.add(topicPolicyEntry);
    policyMap.setPolicyEntries(policyEntries);
    return policyMap;
}
Here is a screenshot of one of the message's outgoing references:
[Screenshot: message outgoing references]
And here is an image of "Show paths to GC root":
[Screenshot: paths to GC root]
EDIT:
Here is how I set up the durable consumer:
private NMSConnectionFactory _connnectionFactory;
private IConnection _connection;
private ISession _session;

public void Start()
{
    _connection = _connnectionFactory.CreateConnection(queueUser, queuePwd);
    _connection.Start();
    _session = _connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
    if (!string.IsNullOrEmpty(TopicName))
    {
        _topicConsumer = _session.CreateDurableConsumer(SessionUtil.GetTopic(_session, TopicName), ConsumerName, null, false);
        _topicConsumer.Listener += TopicConsumerOnListener;
    }
}
And this is how I publish messages to the topic:
public void PublishMessage(string message)
{
    using (var connection = _connnectionFactory.CreateConnection(user, pwd))
    {
        try
        {
            connection.Start();
            ActiveMQTopic topic = new ActiveMQTopic(TopicName);
            using (var session = connection.CreateSession())
            using (var producer = session.CreateProducer(topic))
            {
                var textMessage = producer.CreateTextMessage(message);
                producer.Send(textMessage);
            }
        }
        catch (Exception exception)
        {
            Console.WriteLine(exception);
        }
    }
}
Does anyone know why the messages are not being removed after they are consumed?
Thanks

Solved the problem by setting my own client ID on the topic connection:
_connection = _connnectionFactory.CreateConnection(queueUser, queuePwd);
_connection.ClientId = "MY CLIENT ID";
_connection.Start();
That way, no new durable subscriber entries are created on restart.
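For reference, the broker itself can also be configured to drop durable subscriptions that never come back online (for example, the stale ones that piled up before the client ID fix), which releases the messages retained for them. This is only a minimal sketch, assuming ActiveMQ 5.6+ and an embedded broker like the one above; the schedule and timeout values are placeholders:

import org.apache.activemq.broker.BrokerService;

public class BrokerOfflineSubscriberCleanup {
    public static void main(String[] args) throws Exception {
        BrokerService brokerService = new BrokerService();
        // Scan for offline durable subscribers every 5 minutes (placeholder value)...
        brokerService.setOfflineDurableSubscriberTaskSchedule(5 * 60 * 1000L);
        // ...and remove any durable subscription that has been offline for more than 24 hours,
        // so the broker stops retaining messages for it (placeholder value).
        brokerService.setOfflineDurableSubscriberTimeout(24 * 60 * 60 * 1000L);
        brokerService.start();
    }
}

Combined with a stable client ID and consumer name, this keeps abandoned durable subscriptions from accumulating messages indefinitely.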

Related

Large file upload to ASP.NET Core 3.0 Web API fails due to Request Body Too Large

I have an ASP.NET Core 3.0 Web API endpoint that I have set up to allow posting large audio files. I followed these directions from the MS docs to set up the endpoint:
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.0#kestrel-maximum-request-body-size
When an audio file is uploaded to the endpoint, it is streamed to an Azure Blob Storage container.
My code works as expected locally.
When I push it to my production server in Azure App Service on Linux, the code does not work and errors with
Unhandled exception in request pipeline: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
Per the advice in the article above, I have incrementally updated the Kestrel configuration with the following:
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseKestrel((ctx, options) =>
    {
        var config = ctx.Configuration;
        options.Limits.MaxRequestBodySize = 6000000000;
        options.Limits.MinRequestBodyDataRate =
            new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate =
            new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.RequestHeadersTimeout = TimeSpan.FromMinutes(2);
    }).UseStartup<Startup>();
});
I also configured FormOptions to accept files up to 6000000000 bytes:
services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = 6000000000;
});
I also set up the API controller action with the following attributes, per the article's advice:
[HttpPost("audio", Name="UploadAudio")]
[DisableFormValueModelBinding]
[GenerateAntiforgeryTokenCookie]
[RequestSizeLimit(6000000000)]
[RequestFormLimits(MultipartBodyLengthLimit = 6000000000)]
Finally, here is the action itself. This giant block of code is not indicative of how I want the code to be written but I have merged it into one method as part of the debugging exercise.
public async Task<IActionResult> Audio()
{
if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
{
throw new ArgumentException("The media file could not be processed.");
}
string mediaId = string.Empty;
string instructorId = string.Empty;
try
{
// process file first
KeyValueAccumulator formAccumulator = new KeyValueAccumulator();
var streamedFileContent = new byte[0];
var boundary = MultipartRequestHelper.GetBoundary(
MediaTypeHeaderValue.Parse(Request.ContentType),
_defaultFormOptions.MultipartBoundaryLengthLimit
);
var reader = new MultipartReader(boundary, Request.Body);
var section = await reader.ReadNextSectionAsync();
while (section != null)
{
var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(
section.ContentDisposition, out var contentDisposition);
if (hasContentDispositionHeader)
{
if (MultipartRequestHelper
.HasFileContentDisposition(contentDisposition))
{
streamedFileContent =
await FileHelpers.ProcessStreamedFile(section, contentDisposition,
_permittedExtensions, _fileSizeLimit);
}
else if (MultipartRequestHelper
.HasFormDataContentDisposition(contentDisposition))
{
var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;
var encoding = FileHelpers.GetEncoding(section);
if (encoding == null)
{
return BadRequest($"The request could not be processed: Bad Encoding");
}
using (var streamReader = new StreamReader(
section.Body,
encoding,
detectEncodingFromByteOrderMarks: true,
bufferSize: 1024,
leaveOpen: true))
{
// The value length limit is enforced by
// MultipartBodyLengthLimit
var value = await streamReader.ReadToEndAsync();
if (string.Equals(value, "undefined",
StringComparison.OrdinalIgnoreCase))
{
value = string.Empty;
}
formAccumulator.Append(key, value);
if (formAccumulator.ValueCount >
_defaultFormOptions.ValueCountLimit)
{
return BadRequest($"The request could not be processed: Key Count limit exceeded.");
}
}
}
}
// Drain any remaining section body that hasn't been consumed and
// read the headers for the next section.
section = await reader.ReadNextSectionAsync();
}
var form = formAccumulator;
var file = streamedFileContent;
var results = form.GetResults();
instructorId = results["instructorId"];
string title = results["title"];
string firstName = results["firstName"];
string lastName = results["lastName"];
string durationInMinutes = results["durationInMinutes"];
//mediaId = await AddInstructorAudioMedia(instructorId, firstName, lastName, title, Convert.ToInt32(duration), DateTime.UtcNow, DateTime.UtcNow, file);
string fileExtension = "m4a";
// Generate Container Name - InstructorSpecific
string containerName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{instructorId}";
string contentType = "audio/mp4";
FileType fileType = FileType.audio;
string authorName = $"{firstName} {lastName}";
string authorShortName = $"{firstName[0]}{lastName}";
string description = $"{authorShortName} - {title}";
long duration = (Convert.ToInt32(durationInMinutes) * 60000);
// Generate new filename
string fileName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{Guid.NewGuid()}";
DateTime recordingDate = DateTime.UtcNow;
DateTime uploadDate = DateTime.UtcNow;
long blobSize = long.MinValue;
try
{
// Update file properties in storage
Dictionary<string, string> fileProperties = new Dictionary<string, string>();
fileProperties.Add("ContentType", contentType);
// update file metadata in storage
Dictionary<string, string> metadata = new Dictionary<string, string>();
metadata.Add("author", authorShortName);
metadata.Add("tite", title);
metadata.Add("description", description);
metadata.Add("duration", duration.ToString());
metadata.Add("recordingDate", recordingDate.ToString());
metadata.Add("uploadDate", uploadDate.ToString());
var fileNameWExt = $"{fileName}.{fileExtension}";
var blobContainer = await _cloudStorageService.CreateBlob(containerName, fileNameWExt, "audio");
try
{
MemoryStream fileContent = new MemoryStream(streamedFileContent);
fileContent.Position = 0;
using (fileContent)
{
await blobContainer.UploadFromStreamAsync(fileContent);
}
}
catch (StorageException e)
{
if (e.RequestInformation.HttpStatusCode == 403)
{
return BadRequest(e.Message);
}
else
{
return BadRequest(e.Message);
}
}
try
{
foreach (var key in metadata.Keys.ToList())
{
blobContainer.Metadata.Add(key, metadata[key]);
}
await blobContainer.SetMetadataAsync();
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
blobSize = await StorageUtils.GetBlobSize(blobContainer);
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
Media media = Media.Create(string.Empty, instructorId, authorName, fileName, fileType, fileExtension, recordingDate, uploadDate, ContentDetails.Create(title, description, duration, blobSize, 0, new List<string>()), StateDetails.Create(StatusType.STAGED, DateTime.MinValue, DateTime.UtcNow, DateTime.MaxValue), Manifest.Create(new Dictionary<string, string>()));
// upload to MongoDB
if (media != null)
{
var mapper = new Mapper(_mapperConfiguration);
var dao = mapper.Map<ContentDAO>(media);
try
{
await _db.Content.InsertOneAsync(dao);
}
catch (Exception)
{
mediaId = string.Empty;
}
mediaId = dao.Id.ToString();
}
else
{
// metadata wasn't stored, remove blob
await _cloudStorageService.DeleteBlob(containerName, fileName, "audio");
return BadRequest($"An issue occurred during media upload: rolling back storage change");
}
if (string.IsNullOrEmpty(mediaId))
{
return BadRequest($"Could not add instructor media");
}
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
var result = new { MediaId = mediaId, InstructorId = instructorId };
return Ok(result);
}
I reiterate: this all works great locally. I do not run it in IIS Express; I run it as a console app.
I submit large audio files via my SPA app and Postman and it works perfectly.
I am deploying this code to an Azure App Service on Linux (as a Basic B1).
Since the code works in my local development environment, I am at a loss as to what my next steps should be. I have refactored this code a few times, but I suspect the problem is environment-related.
I cannot find anything that mentions the App Service Plan tier as the culprit, so before I go out and spend more money I wanted to see if anyone here has encountered this challenge and could provide advice.
UPDATE: I attempted upgrading to a Production App Service Plan to see if there was an undocumented gate for incoming traffic. Upgrading didn't work either.
Thanks in advance.
-A
Currently, as of 11/2019, there is a limitation with the Azure App Service for Linux. Its CORS functionality is enabled by default and cannot be disabled, AND it has a file size limitation that does not appear to be overridden by any of the published Kestrel configurations. The solution is to move the Web API app to an Azure App Service for Windows, where it works as expected.
I am sure there is some way to get around it if you know the magic combination of configurations, server settings, and CLI commands, but I need to move on with development.

ServiceNow integration with C# and dotnet core application

protected void Button1_Click(object sender, EventArgs e)
{
    // Incident Service
    IncidentService.ServiceNowSoapClient soapClient = new IncidentService.ServiceNowSoapClient();
    soapClient.ClientCredentials.UserName.UserName = "username"; // username must have the SOAP role in SNow.
    soapClient.ClientCredentials.UserName.Password = "Password1";

    IncidentService.getRecords _getRecords = new IncidentService.getRecords();
    IncidentService.getRecordsResponseGetRecordsResult[] getRecordsResponses = soapClient.getRecords(_getRecords);
    _getRecords.active = true;

    // Note: Please enable SOAP/REST services on your SNow dev instance table(s). Also,
    // go to System Web Services --> Properties --> enable the 3rd option from the bottom
    // (this property sets the elementFormDefault attribute of the embedded XML schema to the value of unqualified).

    //ServiceNowSoapClient client = new ServiceNowSoapClient();
    //client.ClientCredentials.UserName.UserName = "username"; // username must have the SOAP role in SNow.
    //client.ClientCredentials.UserName.Password = "Password1";

    //insert newRecord = new insert();
    //insertResponse insertResponse = new insertResponse();
    //newRecord.first_name = "Jackson";
    //newRecord.last_name = "Chris";
    //newRecord.phone_number = "911-911-9999";
    //newRecord.number = "CUS3048232";

    try
    {
        //insertResponse = client.insert(newRecord);
        //TextBox1.Text = insertResponse.sys_id;

        getRecordsResponses = soapClient.getRecords(_getRecords);
        for (int i = 0; i < getRecordsResponses.Length; i++)
        {
            TextBox2.Text = getRecordsResponses[i].short_description;
            TextBox3.Text = getRecordsResponses[i].category;
        }
    }
    catch (Exception ex)
    {
        TextBox1.Text = ex.Message;
    }
    //finally { client.Close(); }
}
How do you leverage ServiceNow data that resides in enterprise ServiceNow dev and prod instances (CMDB, ITIL, various enterprise DBs, new DBs) to create end-to-end automated applications with C# and .NET Core?
Our goal is to automate applications end to end with ServiceNow, .NET Core, C#, Docker containers, Ansible, and Automic.
I know you probably don't need it anymore, but maybe someone looking for the same thing will find this question.
I developed a library just for that:
https://emersonbottero.github.io/ServiceNow.Core/

Cloud Dataflow - how does Dataflow do parallelism?

My question is: behind the scenes, for an element-wise Beam DoFn (ParDo), how does Cloud Dataflow parallelize the workload? For example, in my ParDo I send one HTTP request to an external server per element, and I use 30 workers, each with 4 vCPUs.
1. Does that mean there will be at most 4 threads on each worker?
2. Does that mean that from each worker, only 4 HTTP connections are necessary (or can be established) if I keep them alive, to get the best performance?
3. How can I adjust the level of parallelism other than using more cores or more workers?
4. With my current setting (30 workers × 4 vCPUs), I can establish around 120 HTTP connections to the HTTP server, but both the server and the workers have very low resource usage. Basically, I want to make them work much harder by sending out more requests per second. What should I do...
Code snippet to illustrate my work:
public class NewCallServerDoFn extends DoFn<PreparedRequest,KV<PreparedRequest,String>> {
private static final Logger Logger = LoggerFactory.getLogger(ProcessReponseDoFn.class);
private static PoolingHttpClientConnectionManager _ConnManager = null;
private static CloseableHttpClient _HttpClient = null;
private static HttpRequestRetryHandler _RetryHandler = null;
private static String[] _MapServers = MapServerBatchBeamApplication.CONFIG.getString("mapserver.client.config.server_host").split(",");
@Setup
public void setupHttpClient(){
Logger.info("Setting up HttpClient");
//Question: the value of maxConnection below is actually 10, but with 30 worker machines, I can only see 115 TCP connections established on the server side. So this setting doesn't really take effect as I expected.....
int maxConnection = MapServerBatchBeamApplication.CONFIG.getInt("mapserver.client.config.max_connection");
int timeout = MapServerBatchBeamApplication.CONFIG.getInt("mapserver.client.config.timeout");
_ConnManager = new PoolingHttpClientConnectionManager();
for (String mapServer : _MapServers) {
HttpHost serverHost = new HttpHost(mapServer,80);
_ConnManager.setMaxPerRoute(new HttpRoute(serverHost),maxConnection);
}
// config timeout
RequestConfig requestConfig = RequestConfig.custom()
.setConnectTimeout(timeout)
.setConnectionRequestTimeout(timeout)
.setSocketTimeout(timeout).build();
// config retry
_RetryHandler = new HttpRequestRetryHandler() {
public boolean retryRequest(
IOException exception,
int executionCount,
HttpContext context) {
Logger.info(exception.toString());
Logger.info("try request: " + executionCount);
if (executionCount >= 5) {
// Do not retry if over max retry count
return false;
}
if (exception instanceof InterruptedIOException) {
// Timeout
return false;
}
if (exception instanceof UnknownHostException) {
// Unknown host
return false;
}
if (exception instanceof ConnectTimeoutException) {
// Connection refused
return false;
}
if (exception instanceof SSLException) {
// SSL handshake exception
return false;
}
return true;
}
};
_HttpClient = HttpClients.custom()
.setConnectionManager(_ConnManager)
.setDefaultRequestConfig(requestConfig)
.setRetryHandler(_RetryHandler)
.build();
Logger.info("Setting up HttpClient is done.");
}
@Teardown
public void tearDown(){
Logger.info("Tearing down HttpClient and Connection Manager.");
try {
_HttpClient.close();
_ConnManager.close();
}catch (Exception e){
Logger.warn(e.toString());
}
Logger.info("HttpClient and Connection Manager have been teared down.");
}
@ProcessElement
public void processElement(ProcessContext c) {
PreparedRequest request = c.element();
if(request == null)
return;
String response="{\"my_error\":\"failed to get response from map server with retries\"}";
String chosenServer = _MapServers[request.getHardwareId() % _MapServers.length];
String parameter;
try {
parameter = URLEncoder.encode(request.getRequest(),"UTF-8");
} catch (UnsupportedEncodingException e) {
Logger.error(e.toString());
return;
}
StringBuilder sb = new StringBuilder().append(MapServerBatchBeamApplication.CONFIG.getString("mapserver.client.config.api_path"))
.append("?coordinates=")
.append(parameter);
HttpGet getRequest = new HttpGet(sb.toString());
HttpHost host = new HttpHost(chosenServer,80,"http");
CloseableHttpResponse httpRes;
try {
httpRes = _HttpClient.execute(host,getRequest);
HttpEntity entity = httpRes.getEntity();
if(entity != null){
try
{
response = EntityUtils.toString(entity);
}finally{
EntityUtils.consume(entity);
httpRes.close();
}
}
}catch(Exception e){
Logger.warn("failed by get response from map server with retries for " + request.getRequest());
}
c.output(KV.of(request, response));
}
}
1. Yes, based on this answer.
2. No, you can establish more connections. Based on my answer, you can use an async HTTP client to have more concurrent requests (see the sketch below). As that answer also describes, you need to collect the results from these asynchronous calls and output them synchronously in @ProcessElement or @FinishBundle.
3. See 2.
4. Since your resource usage is low, it indicates that the worker spends most of its time waiting for a response. I think that with the approach described above you can utilize your resources far better, and you can achieve the same performance with far fewer workers.
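To make point 2 concrete, here is a minimal sketch of that asynchronous, bundle-batched pattern. It swaps the question's Apache HttpClient for the JDK 11 java.net.http.HttpClient purely for brevity; AsyncCallServerDoFn, Pending, and buildUri() are illustrative names, and only PreparedRequest and getRequest() come from the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Instant;

public class AsyncCallServerDoFn extends DoFn<PreparedRequest, KV<PreparedRequest, String>> {

    // One in-flight element: the input, its pending response, and the window
    // metadata needed to emit the result later from finishBundle.
    private static class Pending {
        final PreparedRequest request;
        final CompletableFuture<HttpResponse<String>> future;
        final Instant timestamp;
        final BoundedWindow window;
        Pending(PreparedRequest r, CompletableFuture<HttpResponse<String>> f, Instant t, BoundedWindow w) {
            request = r; future = f; timestamp = t; window = w;
        }
    }

    private transient HttpClient httpClient;
    private transient List<Pending> pending;

    @Setup
    public void setup() {
        // The JDK client multiplexes many concurrent requests over a small internal pool.
        httpClient = HttpClient.newHttpClient();
    }

    @StartBundle
    public void startBundle() {
        pending = new ArrayList<>();
    }

    @ProcessElement
    public void processElement(ProcessContext c, BoundedWindow window) {
        PreparedRequest request = c.element();
        // buildUri() stands in for the host/path/coordinates logic in the question.
        HttpRequest httpRequest = HttpRequest.newBuilder(URI.create(buildUri(request))).GET().build();
        // Fire the request without blocking; the DoFn thread immediately takes the next
        // element, so each thread can keep far more than one request in flight.
        CompletableFuture<HttpResponse<String>> future =
                httpClient.sendAsync(httpRequest, HttpResponse.BodyHandlers.ofString());
        pending.add(new Pending(request, future, c.timestamp(), window));
        // A real implementation would also cap the size of 'pending' and flush early.
    }

    @FinishBundle
    public void finishBundle(FinishBundleContext c) {
        // Wait for the responses once per bundle, after all requests are already in flight.
        for (Pending p : pending) {
            String body;
            try {
                body = p.future.join().body();
            } catch (RuntimeException e) {
                body = "{\"my_error\":\"failed to get response from map server\"}";
            }
            c.output(KV.of(p.request, body), p.timestamp, p.window);
        }
        pending.clear();
    }

    private String buildUri(PreparedRequest request) {
        // Placeholder: assemble the map-server URL from the request, as in the question.
        return "http://example.invalid/api?coordinates=" + request.getRequest();
    }
}

With this shape, the number of concurrent requests is bounded by the bundle size rather than by the thread count per worker, which is the effect described in point 2.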

FlumeRpcClient multithreading

I'm trying to understand the correct way to use the Flume RpcClient in a multithreaded application. Information I have found so far indicates that the components are thread safe, but the example in the Flume documentation clouds the issue when it comes to error handling. This code:
public void sendDataToFlume(String data) {
    // Create a Flume Event object that encapsulates the sample data
    Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
    // Send the event
    try {
        client.append(event);
    } catch (EventDeliveryException e) {
        // clean up and recreate the client
        client.close();
        client = null;
        client = RpcClientFactory.getDefaultInstance(hostname, port);
        // Use the following method to create a thrift client (instead of the above line):
        // this.client = RpcClientFactory.getThriftInstance(hostname, port);
    }
}
If more than one thread calls this method and the exception is thrown, there will be a problem, as multiple threads try to recreate the client in the exception handler.
Is the intent of the SDK that it should only be used by a single thread? Should this method be synchronized, as it appears to be in the Log4jAppender that is part of the Flume source? Should I put this code in its own worker and pass it events via a queue?
Does anyone have an example of RpcClient being used by more than one thread (including the error condition)?
Would I be better off using the "embedded agent"? Is that multithread friendly?
With the embedded agent, you get the same case except you don't know what to do:
try {
    agent.put(event);
} catch (EventDeliveryException e) {
    // ???
}
You could stop the agent and restart it, but you would need a synchronized block (or a ReentrantReadWriteLock, so as not to block threads while "reading" the client field). Since I'm not a Flume expert, I can't tell you which one is better.
Example:
class MyClass {
    private final ReentrantReadWriteLock lock;
    private final Lock readLock;
    private final Lock writeLock;
    private RpcClient client;
    private final String hostname;
    private final Integer port;

    // Constructor
    MyClass(String hostname, Integer port) {
        this.hostname = Objects.requireNonNull(hostname, "hostname");
        this.port = Objects.requireNonNull(port, "port");
        this.lock = new ReentrantReadWriteLock();
        this.readLock = this.lock.readLock();
        this.writeLock = this.lock.writeLock();
        this.client = buildClient();
    }

    private RpcClient buildClient() {
        return RpcClientFactory.getDefaultInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        // Create a Flume Event object that encapsulates the sample data
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
        // Send the event while holding the read lock on 'client'
        boolean failed = false;
        readLock.lock(); // lock for reading 'client'
        try {
            client.append(event);
        } catch (EventDeliveryException e) {
            failed = true;
        } finally {
            // The read lock must be released before taking the write lock:
            // ReentrantReadWriteLock does not allow upgrading a read lock to a write lock.
            readLock.unlock();
        }
        if (failed) {
            writeLock.lock(); // lock for writing 'client'
            try {
                // clean up and recreate the client
                client.close();
                client = null;
                client = buildClient();
            } finally {
                writeLock.unlock();
            }
        }
    }
}
Besides, the example will lose the event, because it is not resent. Some kind of loop plus a maximum retry count would probably do the trick:
int i = 0;
for (; i < maxRetry; ++i) {
    try {
        client.append(event);
        break;
    } catch (EventDeliveryException e) {
        // clean up and recreate the client
        client.close();
        client = null;
        client = RpcClientFactory.getDefaultInstance(hostname, port);
        // Use the following method to create a thrift client (instead of the above line):
        // this.client = RpcClientFactory.getThriftInstance(hostname, port);
    }
}
if (i == maxRetry) {
    logger.error("flume client is offline, losing events {}", event);
}
That's the idea, but I don't think that should be the user's task (i.e., ours); it should be an option in the client or the agent to store events that could not be processed due to such errors.
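For the question's third option (a dedicated worker fed through a queue), a rough sketch could look like the following. This is only an illustrative alternative, not official Flume guidance: a single thread owns the RpcClient, so no locking is needed, and the retry policy here (recreate the client and retry once) is just a stand-in:

import java.nio.charset.Charset;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

class FlumeSenderWorker implements Runnable {
    private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>(10000);
    private final String hostname;
    private final Integer port;
    private RpcClient client;

    FlumeSenderWorker(String hostname, Integer port) {
        this.hostname = hostname;
        this.port = port;
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
    }

    // Called from any producer thread; blocks if the buffer is full.
    public void sendDataToFlume(String data) throws InterruptedException {
        queue.put(EventBuilder.withBody(data, Charset.forName("UTF-8")));
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Event event = queue.take();
                try {
                    client.append(event);
                } catch (EventDeliveryException e) {
                    // Recreate the client and retry once; a real implementation would bound
                    // retries, back off, and spill unsent events somewhere durable.
                    client.close();
                    client = RpcClientFactory.getDefaultInstance(hostname, port);
                    try {
                        client.append(event);
                    } catch (EventDeliveryException retryFailure) {
                        // the event is lost here, just like in the documentation example
                    }
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            client.close();
        }
    }
}

Starting it is just new Thread(new FlumeSenderWorker("host", 41414), "flume-sender").start(); whether this is preferable to the lock-based version mostly depends on whether callers can tolerate blocking on a full queue.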

TFS addWorkItemSaveListener is not getting events

I am adding a TFS WorkItemSaveListener, but I am not getting any event when saving a work item.
public static void main(String[] args) {
    // Connecting to Project
    final TFSTeamProjectCollection collection = ConsoleSettings.connectToTFS();
    // Creating an object of listener
    WorkItemSaveListenerImpl listener = new WorkItemSaveListenerImpl();
    // Adding the listener
    collection.getWorkItemClient().getEventEngine().addWorkItemSaveListener(listener);
    for (;;) {
        // keeping the program alive
        try {
            Thread.sleep(10000);
        } catch (InterruptedException exception) {
            // TODO Auto-generated catch block
            exception.printStackTrace();
        }
    }
}
I'm only really guessing here, as I don't know the Java SDK, but is it possible that the addWorkItemSaveListener event is only triggered for work items changed by that particular work item client?
You may need to set up a SOAP subscription, or write a server plugin instead.
Here is C# to set up a SOAP subscription. Sorry, it's for the wrong event, but it may be enough to give you an idea:
TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(txtServerUrl.Text));
tpc.EnsureAuthenticated();
IEventService eventSrv = tpc.GetService(typeof(IEventService)) as IEventService;
DeliveryPreference delPref = new DeliveryPreference();
delPref.Address = "http://" + System.Environment.MachineName + ":8001/CheckInNotify";
delPref.Schedule = DeliverySchedule.Immediate;
delPref.Type = DeliveryType.Soap;
subscriptionId = eventSrv.SubscribeEvent(System.Environment.UserDomainName + "\\" + System.Environment.UserName, "CheckInNotify", "", delPref);
