I am trying to build a JSON request in SoapUI and post it to a test step. I build the request with the code below, but when I execute it, it throws a JsonException (text provided below). Any advice would be greatly appreciated. I have done this for over 60 services (so I've done it a thousand and one times) and all of them passed/worked; I am unable to pinpoint what the issue is here. Thanks!
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
def setReqPayload(pArrKeyValues) { // [objId, dirInd, selActId, actDt, coType, secId]
    def jsonPayload = '''
    {
        "objectId" : "",
        "actDate": "",
        "dirIndicator" : "",
        "selectActId" : "",
        "coInfo" : {"secId" : "", "coType" : ""}
    }
    '''
    // parse the request
    def jsonReq = new JsonSlurper().parseText(jsonPayload)
    jsonReq.objectId = pArrKeyValues[0]
    jsonReq.dirIndicator = pArrKeyValues[1]
    jsonReq.selectActId = pArrKeyValues[2]
    jsonReq.actDate = pArrKeyValues[3]
    jsonReq.coInfo.coType = pArrKeyValues[4]
    jsonReq.coInfo.secId = pArrKeyValues[5]
    log.info "REQUEST JSON SLURP: " + jsonReq
    return jsonReq
}
Exception:
ERROR:groovy.json.JsonException: expecting '}' or ',' but got current char ' ' with an int value of 160 The current character read is ' ' with an int value of 160
I have also used the code below to parse, but that throws a different kind of exception ("Not that kind of map") and does not let me set values on the keys.
// parse the request in LAX mode
import groovy.json.JsonParserType

def parser = new JsonSlurper().setType(JsonParserType.LAX)
// note: JsonOutput.toJson(...) returns a String, not a map, so keys cannot be set on jsonReq here
def jsonReq = JsonOutput.toJson(parser.parseText(jsonPayload))
You have non-breaking space character(s) (U+00A0) in your JSON. Unfortunately that is invalid; JSON only allows the regular space character as whitespace.
Using LAX mode was a good idea, but it does not seem to handle non-breaking spaces:
Use LAX if you want to enable relaxed JSON parsing, i.e., allow
comments, no quote strings, etc.
So if you cannot clean up your data at the source, you can filter it like this:
jsonPayload = jsonPayload.replace('\u00a0', '\u0020')
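The same failure mode is easy to reproduce outside SoapUI. A small sketch with Python's stdlib json module (the payload below is a made-up example seeded with U+00A0 between tokens):

```python
import json

# A payload seasoned with U+00A0 (non-breaking spaces) between tokens;
# it looks fine to the eye but is invalid to a strict JSON parser.
payload = '{\u00a0"objectId"\u00a0:\u00a0""\u00a0}'

try:
    json.loads(payload)
except json.JSONDecodeError as e:
    print("parse failed:", e)

# Swap every NBSP for a regular space and the payload parses fine.
clean = payload.replace('\u00a0', '\u0020')
print(json.loads(clean))  # {'objectId': ''}
```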
It looks like there is a trivial issue in your script. The script below:
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
def setReqPayload(pArrKeyValues) { // [objId, dirInd, selActId, actDt, coType, secId]
    def jsonPayload = '''
    {
        "objectId" : "",
        "actDate": "",
        "dirIndicator" : "",
        "selectActId" : "",
        "coInfo" : {"secId" : "", "coType" : ""}
    }
    '''
    // parse the request
    def jsonReq = new JsonSlurper().parseText(jsonPayload)
    jsonReq.objectId = pArrKeyValues[0]
    jsonReq.dirIndicator = pArrKeyValues[1]
    jsonReq.selectActId = pArrKeyValues[2]
    jsonReq.actDate = pArrKeyValues[3]
    jsonReq.coInfo.coType = pArrKeyValues[4]
    jsonReq.coInfo.secId = pArrKeyValues[5]
    println "REQUEST JSON SLURP: " + jsonReq
    return jsonReq
}

setReqPayload([1, 2, 3, 4, 5, 6])
produces the output below:
{actDate=4, coInfo={coType=5, secId=6}, dirIndicator=2, objectId=1, selectActId=3}
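For what it's worth, the same template-then-mutate flow can be mirrored in Python's stdlib json module, which can help sanity-check a payload outside SoapUI:

```python
import json

# Parse the empty template, assign the six values, serialize back out.
json_payload = '''
{
    "objectId": "",
    "actDate": "",
    "dirIndicator": "",
    "selectActId": "",
    "coInfo": {"secId": "", "coType": ""}
}
'''

req = json.loads(json_payload)
req["objectId"] = 1
req["dirIndicator"] = 2
req["selectActId"] = 3
req["actDate"] = 4
req["coInfo"]["coType"] = 5
req["coInfo"]["secId"] = 6
print(json.dumps(req))
```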
I've been using the Twitter API for a while and everything was working smoothly, until I decided to add pagination to my code so I could return more than 100 Tweets from the "search_recent_tweets" endpoint.
Ever since I added the pagination code, I get an empty dictionary and the following error: "'dict' object has no attribute 'meta'".
I haven't been able to solve it.
This is the function where I am calling the pagination:
## This function will return all recent Tweets that match a specific query.
def getRecentTweet(self, query: str, max: int):
    tweet_fields = [
        "id",
        "text",
        "attachments",
        "author_id",
        "context_annotations",
        "conversation_id",
        "created_at",
        "entities",
        "geo",
        "in_reply_to_user_id",
        "lang",
        # "non_public_metrics",
        # "organic_metrics",
        "possibly_sensitive",
        # "promoted_metrics",
        "public_metrics",
        "referenced_tweets",
        "reply_settings",
        "source",
        "withheld",
    ]
    expansions = [
        "author_id",
        "referenced_tweets.id",
        "referenced_tweets.id.author_id",
        "entities.mentions.username",
        "attachments.poll_ids",
        "attachments.media_keys",
        "in_reply_to_user_id",
        "geo.place_id",
    ]
    user_fields = [
        "id",
        "name",
        "username",
        "created_at",
        "description",
        "entities",
        "location",
        "pinned_tweet_id",
        "profile_image_url",
        "protected",
        "public_metrics",
        "url",
        "verified",
        "withheld",
    ]
    poll_fields = [
        "id",
        "options",
        "duration_minutes",
        "end_datetime",
        "voting_status",
    ]
    media_fields = [
        "url",
        "duration_ms",
        "height",
        "preview_image_url",
        "public_metrics",
        "alt_text",
        "variants",
    ]
    place_fields = [
        "full_name",
        "id",
        "contained_within",
        "country",
        "country_code",
        "geo",
        "name",
        "place_type",
    ]
    response = Paginator(
        self.__client.search_recent_tweets,
        query=query,
        max_results=max,
        user_auth=True,
        limit=15,
        start_time='2022-07-16T17:16:00Z',
        end_time='2022-07-18T17:00:00Z',
        tweet_fields=tweet_fields,
        user_fields=user_fields,
        expansions=expansions,
        poll_fields=poll_fields,
        place_fields=place_fields,
        media_fields=media_fields,
    )
    ## This for-loop can be omitted; it was added to inspect what is part of the response object.
    for page in response:
        print(50 * "-")
        print(page)
    return response
Note that this is the code for the client:
from tweepy import Client as clt
from .User.user import User
from .Lists.list import List
from .Space.space import Space
from .Tweets.tweets import Tweet
from .configuration import Configuration

class Client():
    def __init__(self, configuration: Configuration):
        self.__configuration = configuration
        self.__client = clt(bearer_token=configuration.bearer_token,
                            consumer_key=configuration.api_key,
                            consumer_secret=configuration.api_key_secret,
                            access_token=configuration.access_token,
                            access_token_secret=configuration.access_token_secret,
                            return_type=dict)
        self.__user = User(self.__client)
        self.__list = List(self.__client)
        self.__space = Space(self.__client)
        self.__tweet = Tweet(self.__client)

    @property
    def user(self):
        return self.__user

    @property
    def list(self):
        return self.__list

    @property
    def space(self):
        return self.__space

    @property
    def tweet(self):
        return self.__tweet
And this is the configuration file:
from dataclasses import dataclass

@dataclass
class Configuration():
    api_key: str
    api_key_secret: str
    access_token: str
    access_token_secret: str
    bearer_token: str
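A likely culprit, for what it's worth: the client above is built with return_type=dict, while tweepy's Paginator expects Response objects so it can read each page's .meta for the next pagination token. A minimal, Twitter-free sketch of that failure mode:

```python
# A plain dict standing in for the page tweepy's Paginator receives
# when the underlying client is configured with return_type=dict.
# (The shape below is hypothetical; no Twitter access is needed.)
page = {"data": [], "meta": {"next_token": "abc123"}}

try:
    page.meta  # Paginator reads this attribute to find the next_token
except AttributeError as e:
    print(e)  # 'dict' object has no attribute 'meta'
```

Dropping return_type=dict (so the client returns tweepy's default Response objects) would be the first thing to try.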
So I am trying to write a Jenkins job in Groovy to fetch some data.
After the third line, the data inside the variable answer would be something like:
[
{
"endIpAddress": "11.21.115.9",
"id": "blabla1",
"name": "blabla",
"resourceGroup": "stg",
"startIpAddress": "11.12.115.9",
"type": "blablafirewallRules"
},
{
"endIpAddress": "1.2.15.9",
"id": "blabla2",
"name": "function1-blabla",
"resourceGroup": "stg",
"startIpAddress": "1.2.15.9",
"type": "blablafirewallRules"
},
{
"endIpAddress": "7.7.7.7",
"id": "blabla2",
"name": "function2-blabla",
"resourceGroup": "stg",
"startIpAddress": "7.7.7.7",
"type": "blablafirewallRules"
},
.
.
.
.
]
What I'd like to do is build a list or a two-dimensional array that parses this JSON and holds the startIpAddress of all the items whose name contains "function". Based on this JSON, the data should be:
desiredData[0] = [function1-blabla, 1.2.15.9]
desiredData[1] = [function2-blabla, 7.7.7.7]
Until now I wasn't using JsonSlurper; I was manipulating the text and building the array by hand, which is a pretty silly thing to do, since parsing is kind of what JSON is all about, I guess.
import groovy.json.JsonSlurper
command = "az mysql server firewall-rule list --resource-group ${rgNameSrvr} --server-name ${BLA} --output json"
answer = azure.executeAzureCliCommand(command, "BLA")
def jsonSlurper = new JsonSlurper()
def json = jsonSlurper.parseText(answer)
def data = json.findAll{ it.name =~ /^function/ }.collectEntries{ [it.name, it.startIpAddress] }
The code above returns a map where key=name and value=ip.
If you want a two-dimensional array:
def data = json.findAll{ it.name =~ /^function/ }.collect{ [it.name, it.startIpAddress] }
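The same filter, sketched with Python's stdlib json module against a trimmed-down copy of the firewall-rule payload, in case that helps make the shape of the result concrete:

```python
import json

# Trimmed-down stand-in for the `az mysql server firewall-rule list` output.
answer = '''[
    {"name": "blabla",           "startIpAddress": "11.12.115.9"},
    {"name": "function1-blabla", "startIpAddress": "1.2.15.9"},
    {"name": "function2-blabla", "startIpAddress": "7.7.7.7"}
]'''

rules = json.loads(answer)
# Keep [name, startIpAddress] pairs for rules whose name contains "function".
desired = [[r["name"], r["startIpAddress"]] for r in rules if "function" in r["name"]]
print(desired)  # [['function1-blabla', '1.2.15.9'], ['function2-blabla', '7.7.7.7']]
```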
I'm trying to create an issue, create an issue link, and copy the attachments from the triggering issue, all at the same time, using ScriptRunner.
For now, the code below is able to create the issue and the attachment, but I cannot link the issue I created. Has someone dealt with this situation before?
import org.apache.http.entity.ContentType

def issueKey = issue.key
def result = get('/rest/api/2/issue/' + issueKey)
        .header('Content-Type', 'application/json')
        .asObject(Map)

def projectkey = "PILOTLV"
if (result.body.fields.customfield_10078.id == "10124") {
    projectkey = "SPCLRQ"
}
def issuetypekey = "10018"
def ticketno = result.body.fields.customfield_10060
if (result.body.fields.issuetype.id == "10015") {
    issuetypekey = "10017"
    ticketno = result.body.fields.customfield_10059
}
def description = result.body.fields.description
def summary = result.body.fields.summary
def sysname = result.body.fields.customfield_10078.id
logger.info(description)
logger.info(summary)
logger.info(sysname)
logger.info(ticketno)

// create issue
def createReq = Unirest.post("/rest/api/2/issue")
        .header("Content-Type", "application/json")
        .body([
            fields: [
                summary          : summary,
                description      : description,
                customfield_10078: [
                    id: sysname
                ],
                customfield_10060: ticketno,
                project          : [
                    key: projectkey
                ],
                issuetype        : [
                    id: issuetypekey
                ]
            ],
            update: [
                issuelinks: [
                    add: [
                        type        : [
                            name   : "Blocks",
                            inward : "is blocked by",
                            outward: "blocks"
                        ],
                        outwardIssue: [
                            key: issueKey
                        ]
                    ]
                ]
            ]
        ])
        .asObject(Map)
assert createReq.status >= 200 && createReq.status < 300
def clonedIssue = createReq.body

// copy attachments
if (issue.fields.attachment) {
    issue.fields.attachment.collect { attachment ->
        def url = attachment.content as String
        url = url.substring(url.indexOf("/secure"))
        def fileBody = Unirest.get("${url}").asBinary().body
        def resp = Unirest.post("/rest/api/2/issue/${clonedIssue.id}/attachments")
                .header("X-Atlassian-Token", "no-check")
                .field("file", fileBody, ContentType.create(attachment['mimeType'] as String), attachment['filename'] as String)
                .asObject(List)
        assert resp.status >= 200 && resp.status < 300
    }
}
And a minor question: I found that the attachment name on the new issue cannot display Chinese characters. It looks like I'm missing a library (see https://community.atlassian.com/t5/Jira-questions/rest-api-3-issue-issue-key-attachments-upload-file-with-a/qaq-p/1070389).
Simply put, you can't create and link an issue via the REST API at the same time. You have to create the issue first, then link it in a separate call.
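Since the link has to be made in a second call, here is a sketch of the request body for a follow-up POST to /rest/api/2/issueLink once the create call has returned. Both issue keys below are placeholders, and which key belongs on the inward vs. outward side follows the link type's inward/outward descriptions ("is blocked by" / "blocks"):

```python
import json

# Hypothetical payload for POST /rest/api/2/issueLink; in ScriptRunner this
# would be sent with the same Unirest.post helper used above.
link_payload = {
    "type": {"name": "Blocks"},
    "inwardIssue": {"key": "PILOTLV-1"},   # placeholder: the triggering issue
    "outwardIssue": {"key": "SPCLRQ-42"},  # placeholder: the newly created issue
}
print(json.dumps(link_payload, indent=2))
```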
I'm writing some rules and learning Starlark as I progress.
Assume I have my own provider:
ModularResources = provider(
    doc = "Modular resources",
    fields = {
        "artifactId": "Former Maven artifact id (don't ask me why)",
        "srcs": "List of labels (a glob(..) thing)",
    },
)

def _modular_resources_impl(ctx):
    return ModularResources(
        artifactId = ctx.attr.artifactId,
        srcs = ctx.attr.srcs,
    )

modular_resources = rule(
    implementation = _modular_resources_impl,
    attrs = {
        "artifactId": attr.string(
            mandatory = True,
        ),
        "srcs": attr.label_list(
            allow_files = True,
            mandatory = True,
        ),
    },
)
Then I have a generator rule which requires these:
some_generator = rule(
    attrs = {
        "deps": attr.label_list(
            providers = [ModularResources],
        ),
        ...
    },
    ...
)
In my implementation I discovered that I need to do a couple of unwraps to get the files:
def _get_files(deps):
    result = []
    for dep in deps:
        for target in dep[ModularResources].srcs:
            result += target.files.to_list()
    return result
Is there a more efficient way to perform the collection?
As to why I'm doing this, the generator actually needs a special list of files like this:
def _format_files(deps):
    formatted = ""
    for dep in deps:
        for target in dep[ModularResources].srcs:
            formatted += ",".join([dep[ModularResources].artifactId + ":" + f.path for f in target.files.to_list()])
    return formatted
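As an aside, because the Starlark above joins each target's list separately and concatenates the results, entries from different targets end up glued together without a comma between them. Collecting all pairs first and joining once avoids that. The logic, sketched in plain Python over hypothetical stand-in data:

```python
# Hypothetical stand-in data: each "dep" carries an artifactId and file paths.
deps = [
    {"artifactId": "a", "paths": ["some/path/x.whatever", "some/path/y.whatever"]},
    {"artifactId": "b", "paths": ["other/z.whatever"]},
]

# Collect every "artifactId:path" pair first, then join exactly once.
pairs = []
for dep in deps:
    for path in dep["paths"]:
        pairs.append(dep["artifactId"] + ":" + path)
formatted = ",".join(pairs)
print(formatted)  # a:some/path/x.whatever,a:some/path/y.whatever,b:other/z.whatever
```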
FWIW, here is an example how this is used:
a/BUILD:
modular_resources(
    name = "generator_resources",
    srcs = glob(
        ["some/path/**/*.whatever"],
    ),
    artifactId = "a",
    visibility = ["//visibility:public"],
)
b/BUILD:
some_generator(
    name = "...",
    deps = [
        "//a:generator_resources",
    ],
)
If you want to trade memory for better performance, maybe the operation can be parallelised more easily by Blaze if it is done in the provider instead (note the provider would then need a formatted_srcs field declared, and the values are read via ctx.attr / ctx.files):
def _modular_resources_impl(ctx):
    return ModularResources(
        artifactId = ctx.attr.artifactId,
        formatted_srcs = ",".join([ctx.attr.artifactId + ":" + f.path for f in ctx.files.srcs]),
    )
I'm trying to create a macro in Bazel that wraps java_test to run TestNG, but I'm running into trouble passing TestNG the file name.
So far I have:
load("@bazel_skylib//:lib.bzl", "paths")

def java_testng(file, deps = [], **kwargs):
    native.java_test(
        name = paths.split_extension(file)[0],
        srcs = [file],
        use_testrunner = False,
        main_class = 'org.testng.TestNG',
        deps = [
            "//third_party:org_testng_testng",
        ] + deps,
        args = [file],
        **kwargs
    )
However, args seems to point at a non-existent runfile.
Help with the correct value for args would be appreciated.
Here is a sample usage I would like:
java_testng(
    file = "SomeFakeTest.java",
    deps = [
        "//:resources",
        "//third_party:com_fasterxml_jackson_core_jackson_databind",
        "//third_party:org_assertj_assertj_core",
    ],
)
Here is the solution I came up with:
load("@bazel_skylib//:lib.bzl", "paths")

def java_testng(file, deps = [], size = "small", **kwargs):
    native.java_library(
        name = paths.split_extension(file)[0] + "-lib",
        deps = [
            "//third_party:org_testng_testng",
        ] + deps,
        srcs = [file],
    )
    native.java_test(
        name = paths.split_extension(file)[0],
        use_testrunner = False,
        main_class = 'org.testng.TestNG',
        runtime_deps = [
            "//third_party:org_testng_testng",
            paths.split_extension(file)[0] + "-lib",
        ],
        data = [file],
        size = size,
        args = ["-testclass $(location " + file + ")"],
        **kwargs
    )
I don't know why you used a macro; I managed to call TestNG without one.
See my solution below:
I create my program jar (using some annotation processors)
I create my test jar (using some annotation processors)
I call TestNG via java_test()
The one thing I didn't figure out: how to avoid hardcoding "libmy-model-test-lib.jar".
java_library(
    name = "my-model",
    srcs = glob(["src/main/java/**/*.java"]),
    resources = glob(["src/main/resources/**"]),
    deps = [
        "@commons_logging_jar//jar",
        ":lombok",
        ":mysema_query",
        ...
    ],
)

java_library(
    name = "my-model-test-lib",
    srcs = glob(["src/test/java/**/*.java"]),
    deps = [
        "@org_hamcrest_core_jar//jar",
        "@commons_logging_jar//jar",
        ":lombok",
        ":mysema_query",
        ...
        "@assertj_jar//jar",
        "@mockito_jar//jar",
        "@testng_jar//jar",
    ],
)

java_test(
    name = "AllTests",
    size = "small",
    runtime_deps = [
        ":my-model-test-lib",
        ":my-model",
        "@org_jboss_logging_jar//jar",
        "@org_objenesis_jar//jar",
        "@com_beust_jcommander//jar",
    ],
    use_testrunner = False,
    main_class = 'org.testng.TestNG',
    args = ['-testjar', 'libmy-model-test-lib.jar', '-verbose', '2'],
)

java_plugin(
    name = "lombok_plugin",
    processor_class = "lombok.launch.AnnotationProcessorHider$AnnotationProcessor",
    deps = ["@lombok_jar//jar"],
)

java_library(
    name = "lombok",
    exports = ["@lombok_jar//jar"],
    exported_plugins = [":lombok_plugin"],
)

java_plugin(
    name = "mysema_query_plugin",
    processor_class = "com.mysema.query.apt.jpa.JPAAnnotationProcessor",
    deps = [
        "@querydsl_apt_jar//jar",
        "@mysema_codegen_jar//jar",
        "@javax_persistence_jar//jar",
        "@querydsl_codegen_jar//jar",
        "@guava_jar//jar",
        "@querydsl_core_jar//jar",
        "@javax_inject_jar//jar",
    ],
)

java_library(
    name = "mysema_query",
    exports = ["@querydsl_apt_jar//jar"],
    exported_plugins = [":mysema_query_plugin"],
)

java_plugin(
    name = "mockito_plugin",
    processor_class = "",
    deps = ["@mockito_jar//jar"],
)

java_library(
    name = "mockito",
    exports = ["@mockito_jar//jar"],
    exported_plugins = [":mockito_plugin"],
)