Create JMS Queue using properties file - middleware

I am using this Python (WLST) script to create a JMS queue using a properties file. My properties file is named domain.properties, and the error I get is: WLSTException: Error occurred while performing connect: ServerUrl
from java.io import FileInputStream
from java.util import Properties
from javax.management import ObjectName
import jarray
import os

propInputStream = FileInputStream('domain.properties')
configProps = Properties()
configProps.load(propInputStream)
ServerUrl = configProps.get('server.url')
Username = configProps.get('username')
Password = configProps.get('password')
jmsServerName = configProps.get('jms.server.name')
systemModuleName = configProps.get('system.module.name')
queueSubDeploymentName = configProps.get('queue.sub.deployment.name')
queueName = configProps.get('queue.name')
queueJNDIName = configProps.get('queue.jndi.name')

connect('Username','Password', 'ServerUrl')
edit()
print "================== Queue ==================="
startEdit()
cd('/')
cd('/JMSSystemResources/'+systemModuleName+'/JMSResource/'+systemModuleName)
cmo.createQueue(queueName)
print 'Created a Queue !!'
cd('/JMSSystemResources/'+systemModuleName+'/JMSResource/'+systemModuleName+'/Queues/'+queueName)
cmo.setJNDIName(queueJNDIName)
cmo.setSubDeploymentName(queueSubDeploymentName)
cd('/SystemResources/'+systemModuleName+'/SubDeployments/'+queueSubDeploymentName)
set('Targets',jarray.array([ObjectName('com.bea:Name='+jmsServerName+',Type=JMSServer')], ObjectName))
print 'Targeted the Queue to the created subdeployment !!'
activate()
print "success"
cmd = "rm -f wlst.log"
os.system(cmd)
And finally I got the error:
WLSTException: Error occurred while performing connect: ServerUrl

connect('Username','Password', 'ServerUrl')
These parameters are static strings, not the variables read from the properties file. You should remove the single quotes:
connect(Username, Password, ServerUrl)
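For reference, the script expects domain.properties to define every key it reads. A minimal sketch of such a file (all values below are placeholders, not taken from the question):
server.url=t3://localhost:7001
username=weblogic
password=welcome1
jms.server.name=MyJMSServer
system.module.name=MySystemModule
queue.sub.deployment.name=MySubDeployment
queue.name=MyQueue
queue.jndi.name=jms/MyQueue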

Related

How do I resolve the "Decompressor is not installed for grpc-encoding" issue?

I'm getting this error when I call my gRPC Golang server from Dart:
Caught error: gRPC Error (code: 12, codeName: UNIMPLEMENTED, message: grpc: Decompressor is not installed for grpc-encoding "gzip", details: [], rawResponse: null, trailers: {})
I have read https://github.com/bradleyjkemp/grpc-tools/issues/19, and it doesn't appear to apply to my issue.
The server is running Go 1.19.2 on GCloud Ubuntu.
Dart is running 2.18.2 on macOS Monterey.
I have a Dart client calling a Go server. Both appear to be using GZIP for compression.
Dart proto:
syntax = "proto3";

option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";
option objc_class_prefix = "HLW";

package helloworld;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}
Go proto:
syntax = "proto3";

option go_package = "google.golang.org/grpc/examples/helloworld/helloworld";
option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";

package helloworld;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}
Dart client code:
import 'package:grpc/grpc.dart';
import 'package:helloworld/src/generated/helloworld.pbgrpc.dart';

Future<void> main(List<String> args) async {
  final channel = ClientChannel(
    'ps-dev1.savup.com',
    port: 54320,
    options: ChannelOptions(
      credentials: ChannelCredentials.insecure(),
      codecRegistry:
          CodecRegistry(codecs: const [GzipCodec(), IdentityCodec()]),
    ),
  );
  final stub = GreeterClient(channel);
  final name = args.isNotEmpty ? args[0] : 'world';

  try {
    final response = await stub.sayHello(
      HelloRequest()..name = name,
      options: CallOptions(compression: const GzipCodec()),
    );
    print('Greeter client received: ${response.message}');
  } catch (e) {
    print('Caught error: $e');
  }
  await channel.shutdown();
}
The Go gRPC server works fine with a Go gRPC client and BloomRPC.
I'm new to gRPC in general and very new to Dart.
Thanks in advance for any help resolving this issue.
The error you shared shows that your server doesn't support gzip compression.
The quickest fix is to not use gzip compression in the client's call options, by removing the line:
options: CallOptions(compression: const GzipCodec()),
from your Dart code.
The grpc-go library has an implementation of a gzip compression encoding in the package google.golang.org/grpc/encoding/gzip, but it's experimental, so it's likely not wise to use it in production; or at least you should pay close attention to it:
// Package gzip implements and registers the gzip compressor
// during the initialization.
//
// Experimental
//
// Notice: This package is EXPERIMENTAL and may be changed or removed in a
// later release.
If you want to use it in your server, you just need to import the package; there is no user-facing code in it:
import (
  _ "google.golang.org/grpc/encoding/gzip"
)
The grpc-go documentation about compression mentions the above package as an example of how to implement such a compressor.
So you may also want to copy the code to a more stable location and take responsibility for maintaining it yourself, until there is a stable, supported version of it.

Issue while using Groovy script in Jenkins

I am using a Groovy script to remove content from a properties file in a Jenkins job, via the Groovy plugin in the Build section. My code works fine when the value is hard-coded, but when I use a variable to pass the value, I don't get the desired result. I have tested my code in the IntelliJ editor and get the same result. Could you please help me understand what I am doing wrong?
This is working fine
InputStream input = new FileInputStream("C:\\AppianDeployment\\Application.properties")
Properties prop = new Properties()
String removeApps = "AP2"
prop.load(input)
def keyToRemove = "${removeApps}".toString()
*prop.remove("AP1")*
OutputStream output = new FileOutputStream("C:\\AppianDeployment\\Application.properties");
prop.store(output, null);
This is not working
InputStream input = new FileInputStream("C:\\AppianDeployment\\Application.properties")
Properties prop = new Properties()
String removeApps = "AP2"
prop.load(input)
def keyToRemove = "${removeApps}".toString()
prop.remove(${keyToRemove})
OutputStream output = new FileOutputStream("C:\\AppianDeployment\\Application.properties");
prop.store(output, null);
There's no such syntax in Groovy:
prop.remove(${keyToRemove})
Instead you should use either
prop.remove keyToRemove
or, for whatever reason,
prop.remove "${keyToRemove}".toString()

Import/Post CSV files into ServiceNow

We have a requirement for a CSV file to be pushed to the instance, imported and an incident created. I have created the import table and transformation map, and I've successfully tested them manually.
However, when I've attempted to follow the instructions from the ServiceNow documentation page "Post CSV files to Import Set", nothing happens. The screen goes blank after I get prompted for login credentials.
When I check the system logs and import logs all I see is the error "java.lang.NullPointerException".
My url is basically the following one: https://.service-now.com/sys_import.do?sysparm_import_set_tablename=&sysparm_transform_after_load=true&uploadfile=
Is there something I'm missing?
I do the same thing, but have the file come in via an email to my SN instance and process it using an inbound action.
var type = {};
type.schedule = 'u_imp_tmpl_u_term_phr_empl_mvs_ids'; // Display name for scheduled import -- eb9f2dae6f46a60051281ecbbb3ee4a5
type.table = 'u_imp_tmpl_u_term_phr_empl_mvs_ids';    // Import table name

gs.log('0. Process File Start');
if (type.schedule != '' && type.table != '') {
    gs.log('1. Setting up data source');
    current.name = type.schedule + ' ' + gs.nowDateTime(); // append date/time to name of data source for audit
    current.import_set_table_name = type.table;
    current.import_set_table_label = "";
    current.type = "File";
    current.format = "CSV"; // "Excel CSV (tab)";
    current.header_row = 1;
    current.file_retrieval_method = "Attachment";
    var myNewDataSource = current.insert();
    gs.log('2. Data source inserted with sys_id - ' + myNewDataSource);

    // point the scheduled import record to the new data source
    var gr = new GlideRecord('scheduled_import_set');
    gr.addQuery('name', type.schedule);
    gr.query();
    if (gr.next()) {
        gs.log('3. Found Scheduled Import definition');
        gr.data_source = myNewDataSource;
        gr.update();
        gs.log('4. Scheduled Import definition updated to point to data source just added');
        //Packages.com.snc.automation.TriggerSynchronizer.executeNow(gr);
        // Execute a scheduled script job
        SncTriggerSynchronizer.executeNow(gr);
    } else {
        // add error conditions to email somebody that this has occurred
        gs.log('5. ERROR - Unable to locate scheduled import definition. Please contact your system administrator');
    }
    gs.log('6. Inbound email processing complete');
    //gs.eventQueue("ep_server_processed",current);
    event.state = "stop_processing";
} else {
    gs.log('7. Inbound email processing skipped');
}

Groovy Script Send Multipart Request to Grails Web App for test

I'm trying to test my Grails web application by creating and sending a multipart request from a stand-alone Groovy test script that's built by Gradle, but I'm struggling:
I can't attach a custom Content-ID header.
I can't attach a file of random bytes created at runtime (I can attach an existing file, but I need many random files of varying sizes).
EDIT (thanks to Xeon):
My script is now sending a valid multipart request, but my Grails web app is not accepting any headers other than "Content-Type" for some reason.
Here's my code:
The Stand-Alone Test Script code:
void sendMultipartRequest(String url) {
    HTTPBuilder httpBuilder = new HTTPBuilder(url)
    httpBuilder.request(Method.POST) { req ->
        MultipartEntityBuilder entityBuilder = new MultipartEntityBuilder()
        entityBuilder.setBoundary("----boundary")
        entityBuilder.setMode(HttpMultipartMode.RFC6532)

        String randomString = myGenerateRandomStringMethod()
        FormBodyPart formBodyPart = new FormBodyPart(
            "SOME_NAME",
            new InputStreamBody(new ByteArrayInputStream(randomString.bytes), "attachment", "SOME_NAME")
        )
        formBodyPart.addField("Content-ID", "abc123")
        entityBuilder.addPart(formBodyPart)

        response.success = { resp ->
            println("Success with response ${resp.toString()}")
        }
        response.failure = { resp ->
            println("Failure with response ${resp.toString()}")
        }

        delegate.setHeaders(["Content-Type": "multipart/related; boundary=----boundary"])
        req.setEntity(entityBuilder.build())
    }
}
Grails web-app side in the controller for handling posts:
def submitFiles() {
    if (request instanceof MultipartHttpServletRequest) {
        HashMap<String, Byte[]> fileMap = extractMultipartFiles(request)
        someService.doStuffWith(fileMap)
    }
}

private HashMap<String, Byte[]> extractMultipartFiles(MultipartHttpServletRequest multipartRequest) {
    HashMap<String, Byte[]> files = new HashMap<>()
    for (element in multipartRequest.multiFileMap) {
        MultipartFile file = element.value.first()
        String contentId = multipartRequest.getMultipartHeaders(element.key).get("Content-ID")?.first()
        if (contentId) files.put(contentId, file.getBytes())
    }
    return files
}
Libraries I'm using:
ext {
    groovyVersion = "2.3.4"
    commonsLangVersion = "2.6"
    httpBuilderVersion = "0.7.1"
    httpmimeVersion = "4.3.4"
    junitVersion = "4.11"
}

dependencies {
    compile "org.codehaus.groovy:groovy-all:${groovyVersion}"
    compile "commons-lang:commons-lang:${commonsLangVersion}"
    compile "org.codehaus.groovy.modules.http-builder:http-builder:${httpBuilderVersion}"
    compile "org.apache.httpcomponents:httpmime:${httpmimeVersion}"
    testCompile group: 'junit', name: 'junit', version: "${junitVersion}"
}
You can always use some subclass of the ContentBody interface:
FormBodyPart(String name, ContentBody body)
For example, use InputStreamBody:
new FormBodyPart("name", new InputStreamBody(new RandomInputStream(...), ContentType.MULTIPART_FORM_DATA));
You can use the RandomInputStream class.
As for the headers, you could probably use HTTPBuilder$RequestConfigDelegate.setHeaders(Map headers), because it's set as the delegate of the inner closure.
I've used curl in the past to do this kind of testing:
curl -v -F "param1=1" -F "param2=99" -F "fileparam=@somefile.flv;type=video/x-flv" http://localhost:8080/someapp/sessions
where somefile.flv is in the current directory.

Best way to delete messages from SQS during development

During development, I'm generating a lot of bogus messages in my Amazon SQS queue. I was about to write a tiny app to delete all the messages (something I do frequently during development). Does anyone know of a tool to purge the queue?
If you don't want to write a script or delete your queue, you can change the queue configuration:
Right-click on the queue > Configure Queue.
Change the Message Retention Period to 1 minute (the minimum it can be set to).
Wait a while for all the messages to disappear.
I found that this works well for deleting all messages in a queue without deleting the queue itself.
As of December 2014, the SQS console has a Purge Queue option in the Queue Actions menu.
For anyone who has come here, looking for a way to delete SQS messages en masse in C#...
// C# console app which deletes all messages from a specified queue.
// AWS .NET library required.
using System;
using System.Net;
using System.Configuration;
using System.Collections.Specialized;
using System.IO;
using System.Linq;
using System.Text;
using Amazon;
using Amazon.SQS;
using Amazon.SQS.Model;
using System.Timers;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using System.Diagnostics;

namespace QueueDeleter
{
    class Program
    {
        public static System.Timers.Timer myTimer;
        static NameValueCollection appConfig = ConfigurationManager.AppSettings;
        static string accessKeyID = appConfig["AWSAccessKey"];
        static string secretAccessKeyID = appConfig["AWSSecretKey"];
        static private AmazonSQS sqs;
        static string myQueueUrl = "https://queue.amazonaws.com/1640634564530223/myQueueUrl";
        public static String messageReceiptHandle;

        public static void Main(string[] args)
        {
            sqs = AWSClientFactory.CreateAmazonSQSClient(accessKeyID, secretAccessKeyID);
            myTimer = new System.Timers.Timer();
            myTimer.Interval = 10;
            myTimer.Elapsed += new ElapsedEventHandler(checkQueue);
            myTimer.AutoReset = true;
            myTimer.Start();
            Console.Read();
        }

        static void checkQueue(object source, ElapsedEventArgs e)
        {
            myTimer.Stop();
            ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest();
            receiveMessageRequest.QueueUrl = myQueueUrl;
            ReceiveMessageResponse receiveMessageResponse = sqs.ReceiveMessage(receiveMessageRequest);
            if (receiveMessageResponse.IsSetReceiveMessageResult())
            {
                ReceiveMessageResult receiveMessageResult = receiveMessageResponse.ReceiveMessageResult;
                if (receiveMessageResult.Message.Count < 1)
                {
                    Console.WriteLine("Can't find any visible messages.");
                    myTimer.Start();
                    return;
                }
                foreach (Message message in receiveMessageResult.Message)
                {
                    Console.WriteLine("Printing received message.\n");
                    messageReceiptHandle = message.ReceiptHandle;
                    Console.WriteLine("Message Body:");
                    if (message.IsSetBody())
                    {
                        Console.WriteLine(" Body: {0}", message.Body);
                    }
                    sqs.DeleteMessage(new DeleteMessageRequest().WithQueueUrl(myQueueUrl).WithReceiptHandle(messageReceiptHandle));
                }
            }
            else
            {
                Console.WriteLine("No new messages.");
            }
            myTimer.Start();
        }
    }
}
Check the first item in the queue, then scroll down to the last item, hold Shift, and click on it. All of the messages in between will be selected.
I think the best way would be to delete the queue and create it again, just 2 requests.
I think the best way is changing the retention period to 1 minute, but here is Python code if someone needs it:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import boto.sqs
from boto.sqs.message import Message
import time
import os

startTime = program_start_time = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())

### Let's connect to SQS:
region = 'us-east-1'  # your queue's region
qcon = boto.sqs.connect_to_region(region, aws_access_key_id='xxx', aws_secret_access_key='xxx')
SHQueue = qcon.get_queue('SQS')
m = Message()

### Read messages from SQS and delete them
counter = 0
while counter < 1000:  # deletes 1000*10 items; change to "while True" if you want to delete all
    links = SHQueue.get_messages(10)
    for link in links:
        m = link
        SHQueue.delete_message(m)
    counter += 1

#### The End
print "\n\nTerminating...\n"
print "Start: ", program_start_time
print "End time: ", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
Option 1: boto's SQS module has a purge_queue method for Python:
purge_queue(queue)
Purge all messages in an SQS Queue.
Parameters: queue (A Queue object) – The SQS queue to be purged
Return type: bool
Returns: True if the command succeeded, False otherwise
Source: http://boto.readthedocs.org/en/latest/ref/sqs.html
Code that works for me:
conn = boto.sqs.connect_to_region('us-east-1',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
)
q = conn.create_queue("blah")

# add some messages here

# invoke the purge_queue method of the conn, and pass in the
# queue to purge
conn.purge_queue(q)
For me, it emptied the queue. However, Amazon SQS only lets you run this once every 60 seconds, so I had to use the secondary solution below:
Option 2: Do a purge by consuming all messages in a while loop and throwing them out:
all_messages = []
rs = self.queue.get_messages(10)
while len(rs) > 0:
    all_messages.extend(rs)
    rs = self.queue.get_messages(10)
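If you're on the newer boto3 library rather than the legacy boto used above, the same purge operation is available there as well; a minimal sketch (the region and queue URL are placeholders), still limited to one purge per 60 seconds:
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
# Purge removes all messages currently in the queue; it can take up to 60 seconds to complete.
sqs.purge_queue(QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue")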
If you have access to the AWS console, you can purge a queue using the web UI.
Steps:
Navigate to Services -> SQS.
Filter queues by your "QUEUE_NAME".
Right-click on your queue name -> Purge Queue.
This requests that the queue be cleared, and it should complete within 5 or 10 seconds or so.
To purge an SQS queue from the API, see:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_PurgeQueue.html
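The same API call is also exposed through the AWS CLI, if you have it installed and configured; a quick sketch (the queue URL is a placeholder):
aws sqs purge-queue --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue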
