I am getting a constant error while executing a Dataflow job:
BigQuery import job "dataflow_job_838656419" failed., : BigQuery creation of import job for table "TestTable" in dataset "TestDataSet" in project "TestProject" failed., : BigQuery execution failed., : HTTP transport error: Message: Invalid value for: String is not a valid value HTTP Code: 400
The error message does not give any specific reason for the Google Dataflow job failing continuously.
How can I find out what mistake I am making while executing the Google Dataflow job?
The issue is the incorrect use of the BigQuery API, which is case-sensitive with respect to field type. Please specify "STRING" as the field type in the schema that you're providing.
Please see https://cloud.google.com/bigquery/docs/reference/rest/v2/tables for more details.
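For illustration, here is a minimal sketch of a schema built with the BigQuery Java client model classes; the field name is hypothetical, and the relevant part is the upper-case "STRING" type:

import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Collections;

public class TestTableSchema {
    // Hypothetical field name; "String" produces the 400 above, "STRING" is accepted.
    static TableSchema schema() {
        return new TableSchema().setFields(Collections.singletonList(
                new TableFieldSchema().setName("someField").setType("STRING")));
    }
}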
I have disabled the use of crumbs in my Jenkins instance; for some users it works, for others it doesn't.
In order to trigger a job remotely I am using a token for 'Trigger builds remotely'. I have tried changing the token; I can see the API call we use reach the job, but it fails on the crumb part.
Since it is an AD user, one could argue that it is a local auth issue, but the same user can trigger another job on the same Jenkins instance, so I don't believe that is the case.
In the matrix-based security section I have ticked all the boxes, just to see if that is the cause; still nothing.
I've verified the job name, the token string, and its length.
In order to disable crumbs I run the following in the Script Console
(again, this works on another job, with the same user, and others as well):
import jenkins.model.Jenkins
def instance = Jenkins.instance
instance.setCrumbIssuer(null)
Yet I am facing this error when trying to trigger the job:
Error class: <class jenkins.JenkinsException>. Exception occurred: Error in request. Possibly authentication failed [401]: Unauthorized
<title>Error 401 Invalid password/token for user: USER_NAME</title>
URI: /crumbIssuer/api/json
STATUS: 401
MESSAGE: Invalid password/token for user: USER_NAME
SERVLET: Stapler
Can anyone explain why it is not working?
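For reference, the failing call boils down to fetching /crumbIssuer/api/json with the user's credentials. A minimal standalone check, with a hypothetical Jenkins URL, user name and API token, using Basic auth, might look like this; if it also returns 401, the problem is authentication rather than the crumb or job configuration:

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CrumbCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; replace with the real Jenkins URL, user and API token.
        String jenkinsUrl = "https://jenkins.example.com";
        String user = "USER_NAME";
        String apiToken = "API_TOKEN";

        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + apiToken).getBytes(StandardCharsets.UTF_8));

        // Same endpoint the failing client requests before triggering the job.
        HttpURLConnection conn = (HttpURLConnection)
                new URL(jenkinsUrl + "/crumbIssuer/api/json").openConnection();
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // 200 means the credentials are accepted; 401 reproduces the error above.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}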
I need to post a bytes message to a Solace queue using JMeter. I have tried the following approach (which may be incorrect):
Use the JMS Publisher sampler
Create a jndi.properties file and put it in jmeter/lib
jndi.properties
java.naming.factory.initial = com.solacesystems.jndi.SolJNDIInitialContextFactory
java.naming.provider.url = smf://<remote IP and port>
java.naming.security.principal=<username>
java.naming.security.credentials=<password>
Solace_JMS_VPN=<VPN Name>
In the JMS Publisher sampler (in the GUI):
Connection Factory = connectionFactory
Destination = (Queue Name)
Message Type (radio button): Bytes Message
Content encoding: RAW
In the text area: (the bytes message)
Note: I have used actual values for the IP/port/username/password/queue name/bytes message, which I cannot share. The sol-jms jar is available in the lib folder too.
I am getting this error:
Response message: javax.naming.NamingException: JNDI lookup failed - 503: Service Unavailable [Root exception is (null) com.solacesystems.jcsmp.JCSMPErrorResponseException: 503: Service Unavailable]
However, it works perfectly fine when done with Java Spring Boot; there I used properties files in place of JNDI.
It would be great if anyone could guide me. Please do not suggest ActiveMQ JNDI; I am specifically looking to post to a Solace queue (or create a connection to Solace appliances) through JMeter.
I don't think you should be putting your bytes message into the text area, as it accepts either plain text or an XStream object; consider providing your payload via binary file(s) instead.
If you're capable of sending the message using Java code, you should be able to replicate the same using:
JMeter's JSR223 Sampler with the Groovy language (Java syntax will work; see the sketch after this list)
Or the JUnit Request sampler if you need "strict" Java
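For example, here is a minimal sketch of the JMS code that could go into a JSR223 Sampler (the same calls work without the class wrapper in a Groovy script body, or inside a JUnit test). It assumes the sol-jms jar and its dependencies are in JMeter's lib folder; the host, VPN, credentials, queue name and payload are placeholders:

import com.solacesystems.jms.SolConnectionFactory;
import com.solacesystems.jms.SolJmsUtility;

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import java.nio.charset.StandardCharsets;

public class SolacePublisher {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; use your real host, VPN and credentials.
        SolConnectionFactory factory = SolJmsUtility.createConnectionFactory();
        factory.setHost("smf://<remote IP and port>");
        factory.setVPN("<VPN Name>");
        factory.setUsername("<username>");
        factory.setPassword("<password>");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Build and send a bytes message to the queue.
        Queue queue = session.createQueue("<Queue Name>");
        MessageProducer producer = session.createProducer(queue);
        BytesMessage message = session.createBytesMessage();
        message.writeBytes("<Byte message>".getBytes(StandardCharsets.UTF_8));
        producer.send(message);

        connection.close();
    }
}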
I set up Prometheus on my machine and tested metrics for the default endpoint on which Prometheus runs, i.e. localhost:9090. That worked fine. Now, after changing the target to an endpoint of a server running locally, I am getting an error and thus am not able to get any metrics for that endpoint.
New endpoint - http://0.0.0.0:8090/health
Error Message:
level=warn ts=2019-10-16T07:12:28.713Z caller=scrape.go:930 component="scrape manager" scrape_pool=prometheus target=http://0.0.0.0:8090/health msg="append failed" err="expected value after metric, got \"MNAME\""
I am attaching a screenshot of the prometheus.yml file to verify the configuration.
Are you sure your /health endpoint produces Prometheus metrics? Prometheus expects to scrape something that looks like this:
# HELP alertmanager_alerts How many alerts by state.
# TYPE alertmanager_alerts gauge
alertmanager_alerts{state="active"} 7
alertmanager_alerts{state="suppressed"} 0
# HELP alertmanager_alerts_invalid_total The total number of received alerts that were invalid.
# TYPE alertmanager_alerts_invalid_total counter
alertmanager_alerts_invalid_total{version="v1"} 0
alertmanager_alerts_invalid_total{version="v2"} 0
Is that similar to what you see if you open http://host:8090/health in your browser? Based on the error message you're seeing, I seriously doubt it.
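For comparison, here is a minimal sketch of a scrapeable endpoint using the Prometheus Java simpleclient; the port and counter name are arbitrary, and it assumes the simpleclient and simpleclient_httpserver dependencies are available:

import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class MetricsEndpoint {
    public static void main(String[] args) throws Exception {
        // Hypothetical counter; anything registered here is rendered in the
        // text exposition format shown above.
        Counter requests = Counter.build()
                .name("app_requests_total")
                .help("Total requests handled.")
                .register();
        requests.inc();

        // Serves the default registry at http://localhost:8090/metrics,
        // which is the kind of target Prometheus can scrape.
        new HTTPServer(8090);
    }
}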
I am getting a schema validation error when I host my swagger.json file on my local server that serves the swagger.html.
But when I put the file at
https://raw.githubusercontent.com/novastorm123/abapswaggerNW7.0/master/swagger_70.json
it works.
I don't get any console errors in either case.
I tried to disable the online validator error from Swagger.
I used
validatorUrl : null
But it is not working, and I am still getting the error at the bottom of the page.
On clicking the error I get the message:
{"schemaValidationMessages":[{"level":"error","message":"Can't read from file
/api/swagger.json"}]}
Whenever I paste my JSON into the online editor, it is accepted as valid JSON input.
Since a few days ago, I'm no longer able to submit my Dataflow jobs; they fail with the error below.
I tried to submit the simple WordCount job and it succeeded. Even with a very simplified version of my own job, everything is fine. But when I add more code (adding a GroupByKey transform), I'm no longer able to submit it.
Does anybody have any idea what this error means?
Thanks,
G
Exception in thread "main" java.lang.RuntimeException: Failed to create a workflow job: Invalid JSON payload received. Unknown token.
{ 8r W
^
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.run(DataflowPipelineRunner.java:219)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:96)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:47)
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:145)
at snippet.WordCount.main(WordCount.java:165)
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
{
"code" : 400,
"errors" : [ {
"domain" : "global",
"message" : "Invalid JSON payload received. Unknown token.\n\u001F \b\u0000\u0000\u0000\u0000\u0000\u0000\u0000 \t{ 8r\u0000 W\n^",
"reason" : "badRequest"
} ],
"message" : "Invalid JSON payload received. Unknown token.\n\u001F \b\u0000\u0000\u0000\u0000\u0000\u0000\u0000 \t{ 8r\u0000 W\n^",
"status" : "INVALID_ARGUMENT"
}
To debug this issue, we want to confirm that the request being made is well formed and find the invalid portion of the JSON payload. To do this we will:
Increase logging verbosity
Re-run the application and capture the logs
Find the relevant section within the logs representing the JSON payload
Validate the JSON payload
Increasing logging verbosity
By adding the following lines to your main before you construct your pipeline, you will tell the Java logger implementation to increase the verbosity for the "com.google.api" package. This in turn will log the HTTP request/responses to Google APIs.
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class MyDataflowProgram {
    public static void main(String[] args) {
        // Route all log records from the "com.google.api" package to the console
        // so that the HTTP requests/responses to Google APIs become visible.
        ConsoleHandler consoleHandler = new ConsoleHandler();
        consoleHandler.setLevel(Level.ALL);
        Logger googleApiLogger = Logger.getLogger("com.google.api");
        googleApiLogger.setLevel(Level.ALL);
        googleApiLogger.setUseParentHandlers(false);
        googleApiLogger.addHandler(consoleHandler);

        // ... Pipeline Construction ...
    }
}
Re-run the application and capture the logs
You will want to re-run your Dataflow application and capture the logs. How to do this depends on your development environment, i.e. which OS and/or IDE you use. For example, when using Eclipse the logs will appear within the Console window. Saving these logs will help you maintain a record of the issue.
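If you would rather not copy the logs out of the console by hand, one option is to also attach a java.util.logging FileHandler to the same logger; this is only a sketch, the file name is arbitrary, and it simply writes the same records the ConsoleHandler above prints:

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class MyDataflowProgramWithFileLog {
    public static void main(String[] args) throws IOException {
        // Same logger as in the previous snippet, but the records are also
        // written to a file so they can be kept as a record of the issue.
        Logger googleApiLogger = Logger.getLogger("com.google.api");
        googleApiLogger.setLevel(Level.ALL);

        FileHandler fileHandler = new FileHandler("dataflow-submission.log");
        fileHandler.setLevel(Level.ALL);
        fileHandler.setFormatter(new SimpleFormatter());
        googleApiLogger.addHandler(fileHandler);

        // ... Pipeline Construction ...
    }
}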
Find the relevant section within the logs representing the JSON payload
During re-execution of your Dataflow job, you will want to find the logs related to submission of the Dataflow job. These logs will contain the HTTP request followed by a response and will look like the following:
POST https://dataflow.googleapis.com/v1b3/projects/$GCP_PROJECT_NAME/jobs
Accept-Encoding: gzip
... Additional HTTP headers ...
... JSON request payload for creation ...
{"environment":{"clusterManagerApiService":"compute.googleapis.com","dataset":"bigquery.googleapis.com/cloud_dataflow","sdkPipelineOptions": ...
-------------- RESPONSE --------------
HTTP/1.1 200 OK
... Additional HTTP headers ...
... JSON response payload ...
You are interested in the request payload as the error you are getting indicates that it is the source of the problem.
Validate the JSON payload
There are many JSON validators that can be used, but I prefer http://jsonlint.com/ because of its simplicity. If you are able, please share your findings by updating the question; if you get stuck, feel free to send me a private message.
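If you prefer to check the captured payload programmatically instead of in the browser, here is a minimal sketch using Jackson's ObjectMapper (assuming jackson-databind is on the classpath; the payload string here is only a placeholder for what you find in the logs):

import com.fasterxml.jackson.databind.ObjectMapper;

public class PayloadCheck {
    public static void main(String[] args) throws Exception {
        // Replace with the JSON request payload captured from the logs.
        String payload = "{\"environment\":{\"clusterManagerApiService\":\"compute.googleapis.com\"}}";

        // readTree throws an exception pointing at the offending token
        // if the payload is not valid JSON; otherwise it prints the parsed tree.
        System.out.println(new ObjectMapper().readTree(payload));
    }
}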