We've suddenly started seeing time-outs on our API-based PR creation requests in Bitbucket. I'm able to create a PR in the UI, but any attempt to do so via API results in a 504 Gateway Time-out.
We're on the free tier of Bitbucket, so I cannot submit an issue to them... Are any other Bitbucket users here experiencing this?
Request:
Endpoint: https://api.bitbucket.org/2.0/repositories/{workspace}/{repo-slug}/pullrequests
{
"title": "Staging merge to Release Candidate",
"source": {
"branch": {
"name": "staging"
}
},
"destination": {
"branch": {
"name": "release-candidate"
}
}
}
Response:
<html>
<head>
<title>504 Gateway Time-out</title>
</head>
<body>
<center>
<h1>504 Gateway Time-out</h1>
</center>
</body>
</html>
The Bitbucket API endpoint https://api.bitbucket.org/2.0/repositories/{workspace}/{repo-slug}/pullrequests currently returns a 504 Gateway Time-out when there are no code changes between the two branches.
Source:
https://jira.atlassian.com/browse/BCLOUD-22009
Do you have changes between the branches staging and release-candidate?
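As a workaround until the bug is fixed, you can check whether the source branch has any commits the destination lacks before calling the PR endpoint. A minimal sketch (untested against the live API; the workspace/repo values are placeholders, and using the commits `exclude` parameter is my assumption about the simplest way to express the diff):

```python
API = "https://api.bitbucket.org/2.0/repositories"

def pr_payload(title, source_branch, dest_branch):
    # Same body as the request shown in the question.
    return {
        "title": title,
        "source": {"branch": {"name": source_branch}},
        "destination": {"branch": {"name": dest_branch}},
    }

def commits_ahead_url(workspace, repo_slug, source_branch, dest_branch):
    # Commits reachable from source but not from destination; an empty
    # page means there is nothing to merge, which is exactly the case
    # that triggers the 504 in the linked bug report.
    return (f"{API}/{workspace}/{repo_slug}/commits/{source_branch}"
            f"?exclude={dest_branch}&pagelen=1")
```

Only POST the payload when the commits query returns at least one result.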
I have a Dataflow template that I can use for a Dataflow job running as a service account of my choosing. I've actually used one of Google's provided samples: gs://dataflow-templates/latest/GCS_Text_to_BigQuery.
I now want to schedule this using Cloud Scheduler. I've set up my scheduler job like so:
When the scheduler job runs it errors with PERMISSION_DENIED:
{
"insertId": "1kw7uaqg3tnzbqu",
"jsonPayload": {
"#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished",
"url": "https://dataflow.googleapis.com/v1b3/projects/project-redacted/locations/europe-west2/templates:launch?gcsPath=gs%3A%2F%2Fdataflow-templates%2Flatest%2FGCS_Text_to_BigQuery",
"jobName": "projects/project-redacted/locations/europe-west2/jobs/aaa-schedule-dataflow-job",
"status": "PERMISSION_DENIED",
"targetType": "HTTP"
},
"httpRequest": {
"status": 403
},
"resource": {
"type": "cloud_scheduler_job",
"labels": {
"job_id": "aaa-schedule-dataflow-job",
"project_id": "project-redacted",
"location": "europe-west2"
}
},
"timestamp": "2021-12-16T16:41:17.349974291Z",
"severity": "ERROR",
"logName": "projects/project-redacted/logs/cloudscheduler.googleapis.com%2Fexecutions",
"receiveTimestamp": "2021-12-16T16:41:17.349974291Z"
}
I have no idea what permission is missing or what I need to grant in order to make this work and am hoping someone here can help me.
In order to reproduce the problem I have built a Terraform configuration that creates the Dataflow job from the template along with all of its prerequisites, and it executes successfully.
In that same Terraform configuration I have created a Cloud Scheduler job that purports to execute an identical Dataflow job, and it is that which fails with the error given above.
All this code is available at https://github.com/jamiet-msm/dataflow-scheduler-permission-problem/tree/6ef20824af0ec798634c146ee9073b4b40c965e0 and I have created a README that explains how to run it.
I figured it out: the service account needs to be granted roles/iam.serviceAccountUser on itself
resource "google_service_account_iam_member" "sa_may_act_as_itself" {
service_account_id = google_service_account.sa.name
role = "roles/iam.serviceAccountUser"
member = "serviceAccount:${google_service_account.sa.email}"
}
roles/dataflow.admin is also required; roles/dataflow.worker isn't enough. I assume that's because dataflow.jobs.create is needed, and that permission is not granted by roles/dataflow.worker (see https://cloud.google.com/dataflow/docs/concepts/access-control#roles for reference)
resource "google_project_iam_member" "df_admin" {
role = "roles/dataflow.admin"
member = "serviceAccount:${google_service_account.sa.email}"
}
Here is the commit with the required changes: https://github.com/jamiet-msm/dataflow-scheduler-permission-problem/commit/3fd7cabdf13d5465e01a928049f54b0bd486ed73
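For context, the templates:launch call that Cloud Scheduler makes carries a body along these lines; the serviceAccountEmail in the environment block is what triggers the iam.serviceAccounts.actAs check that roles/iam.serviceAccountUser satisfies. A sketch with placeholder names:

```python
def launch_body(job_name, service_account_email, temp_location, parameters):
    # Body for POST .../locations/{region}/templates:launch?gcsPath=...
    # service_account_email is the account the Dataflow job runs as;
    # launching a job as that account requires iam.serviceAccounts.actAs
    # on it, hence the roles/iam.serviceAccountUser grant above.
    return {
        "jobName": job_name,
        "parameters": parameters,
        "environment": {
            "serviceAccountEmail": service_account_email,
            "tempLocation": temp_location,
        },
    }
```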
In our project we use Firebase Cloud Messaging for push notifications, and we have encountered a problem with duplicated messages. Our process looks as follows:
Our client side runs on iOS devices and we use the following SDKs:
Xamarin.Firebase.iOS.CloudMessaging 3.1.2
Xamarin.Firebase.iOS.InstanceID 3.2.1
Xamarin.Firebase.iOS.Core 5.1.3
when the user logs in, the application requests a token
the application sends this token to the server, which subscribes it to a topic
Subscribe user to topic request:
POST https://iid.googleapis.com/iid/v1:batchAdd
request body
{
"to" : "/topics/test",
"registration_tokens" : ["..user_registration_token.."]
}
the server periodically sends notifications to the topics
Send notification to topic subscribers request:
POST https://fcm.googleapis.com/v1/projects/our_project_id/messages:send
request body
{
"message":
{
"topic":"test",
"notification":
{
"title":"test-6",
"body":"test-6"
}
}
}
when the user logs out of the application, the server unsubscribes the user's token from the topics
POST https://iid.googleapis.com/iid/v1:batchRemove
{
"to": "/topics/test",
"registration_tokens" : ["..user_registration_token.."]
}
But when the user logs in again and requests a brand-new token, the device still receives push notifications sent to the old token, so when we send notifications to the topic such users receive duplicate push notifications.
If we try to get information about the old token from the API method
GET https://iid.googleapis.com/iid/info/token.....
we get the response:
<HTML>
<HEAD>
<TITLE>Internal Server Error</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Internal Server Error</H1>
<H2>Error 500</H2>
</BODY>
</HTML>
Try adding ?details=true to your URI.
Be sure to use an Authorization key in your header.
The expected output is
{
"error": "No information found about this instance id." }
or
{
"application": "com.chrome.windows",
"subtype": "wp:http://localhost:8089/#xxx-xx-xx-xx-xx-x",
"scope": "*",
"authorizedEntity": "xxxx",
"rel": {
"topics": {
}
},
"platform": "BROWSER"
}
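The lookup described above can be sketched like this (again assuming legacy server-key auth; the token value is a placeholder):

```python
import urllib.request

def info_request(token, server_key):
    # GET with ?details=true so topic subscriptions are included
    # in the response instead of a bare error page.
    return urllib.request.Request(
        f"https://iid.googleapis.com/iid/info/{token}?details=true",
        headers={"Authorization": f"key={server_key}"},
    )
```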
I created a simple REST endpoint in Java using Jersey, as you can see in the following:
#Path("/study")
public class CreateRestEndpoint {
private static String endpoint;
#PUT
#Consumes("application/fhir+json")
#Produces(MediaType.APPLICATION_JSON)
public Response getPut(IBaseResource list){
System.out.println("UpdatedResource.: "+list);
return Response.status(200).build();
}
#PUT
public Response getTest(String str) {
System.out.printf(str);
return Response.status(200).build();
}
When I use Postman and send a PUT request to the jersey-servlet, everything is OK and the jersey-servlet gets the message immediately.
But I created the jersey-servlet to receive a message sent by the FHIR server (my FHIR server is running in Docker) via a Subscription resource. Actually, I'm trying to use the subscription mechanism to be notified when a List resource is updated:
{
"resourceType": "Subscription",
"id": "9",
"meta": {
"versionId": "2",
"lastUpdated": "2019-11-08T09:05:33.366+00:00",
"tag": [
{
"system": "http://hapifhir.io/fhir/StructureDefinition/subscription-matching-strategy",
"code": "IN_MEMORY",
"display": "In-memory"
}
]
},
"status": "active",
"reason": "Monitor Screening List",
"criteria": "List?code=http://miracum.org/fhir/CodeSystem/screening-list|screening-recommendations",
"channel": {
"type": "rest-hook",
"endpoint": "http://localhost:8080/notification/study",
"payload": "application/fhir+json"
}
}
When I change the List resources in FHIR, I expected a message to arrive at the jersey-servlet, but unfortunately I get the following error (when I set the endpoint to a test rest-hook like the webhook.site samples, I can get the message from the FHIR side):
fhir_1 | 2019-11-08 18:48:40.688 [subscription-delivery-rest-hook-9-13] INFO c.u.f.j.s.m.i.S.SUBS6 [SubscriptionDebugLogInterceptor.java:162] Delivery of resource List/4/_history/17 for subscription Subscription/9 to channel of type RESTHOOK - Failure: ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException: HTTP 404 Not Found
fhir_1 | Exception in thread "subscription-delivery-rest-hook-9-13" org.springframework.messaging.MessagingException: Failure handling subscription payload for subscription: Subscription/9; nested exception is ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException: HTTP 404 Not Found, failedMessage=ca.uhn.fhir.jpa.subscription.module.subscriber.ResourceDeliveryJsonMessage#330c0fdb[myPayload=ca.uhn.fhir.jpa.subscription.module.subscriber.ResourceDeliveryMessage#38a1c8a2[mySubscription=ca.uhn.fhir.jpa.subscription.module.CanonicalSubscription#1d55d025[myIdElement=Subscription/9,myStatus=ACTIVE,myCriteriaString=List?..........
..................................................
What is the problem? I tried a lot of different parameters, but found no solution.
I changed @Path to @Path("/study/List/{var}"), but I got the same failure again. Actually, my FHIR server is running in Docker, and the problem was probably inside Docker.
After setting the proxy in Docker, everything worked fine.
Conclusion: I had to change the @Path to @Path("/study/List/{var}") and set the proxy in Docker.
Has anyone successfully integrated Netflix Conductor with AWS SQS?
I have tried below steps but the workflow is not triggered.
Create SQS queue
Added AWS creds to environment
Registered tasks, workflows and the event listener below
{
"name": "sqs_event_listener",
"event": "sqs:name_of_sqs_queue",
"condition": "true",
"active": true,
"actions": [{
"action": "start_workflow",
"start_workflow": {
"name": "mywf"
}
}]
}
I know this is too late to help the original poster, but I'm adding a response to improve the hive mind of SO:
In your Conductor application.properties file, make sure you have the following values:
conductor.default-event-queue.type=sqs
conductor.event-queues.sqs.enabled=true
conductor.event-queues.sqs.authorized-accounts=(your AWS account number)
We need to update annotations-processor/awssqs-event-queue/src/main/java/com/netflix/conductor/SQSEventQueueConfiguration.java:
@Bean
AWSCredentialsProvider createAWSCredentialsProvider() {
    return new DefaultAWSCredentialsProviderChain();
}
With this configuration in Conductor, you can now restart your instance, and your event listener should receive events from the SQS queue.
For a full post with workflows and tasks sending and receiving SQS messages, check out: https://orkes.io/content/docs/how-tos/Tasks/SQS-event-task
On-premises TFS 2015 u2. I want to create an HTTP service hook subscription for a release creation event. As directed here and here, I'm sending a POST request to
http://tfs.mycompany.com:8080/tfs/MyCollection/_apis/hooks/subscriptions?api-version=1.0
with the following JSON:
{
"publisherId": "rm",
"eventType": "ms.vss-release.release-created-event",
"resourceVersion": "1.0-preview.1",
"consumerId": "webHooks",
"consumerActionId": "httpRequest",
"publisherInputs":
{
"projectId": "aaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
},
"consumerInputs":
{
"url": "http://someserver/somefolder/"
}
}
I get back the following error message:
{
"innerException": null,
"message": "No publisher could be found with id \"rm\".",
"typeName": "Microsoft.VisualStudio.Services.ServiceHooks.WebApi.PublisherNotFoundException, Microsoft.VisualStudio.Services.ServiceHooks.WebApi, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a",
"typeKey": "PublisherNotFoundException",
"errorCode": 0,
"eventId": 4501
}
And indeed, if you request a list of publishers, there's only one, with ID "tfs". There's no "rm" publisher there. Requesting the same subscription from the "tfs" publisher yields an "unknown event" error.
Do I have to enable that publisher somehow? Is it supported in on-prem TFS? If so, since which version?
Would it hurt Microsoft to annotate their TFS REST API docs with supported versions, like the rest of their API docs do?
A publisher is a service that publishes events to service hooks. In TFS 2015 Update 2, if you request a list of publishers, you will not get a publisher with the ID "rm". It is not supported in TFS 2015 for now.
Moreover, the resource version for rm should actually be 3.0:
"resourceVersion": "3.0-preview.1",