Google Cloud Tasks not dispatching HTTP requests - google-cloud-run

I'll start this by saying I'm new to using Google Cloud Tasks, so please forgive me if this is an obvious issue.
I've created a new Cloud Task queue using gcloud with the command:
gcloud tasks queues create default
I've then proceeded to add tasks to the queue from a Ruby on Rails application, and from the command line using this command:
gcloud tasks create-http-task --queue=default --url=https://google.com --method GET
I then see the tasks being added to the queue, but the HTTP requests are never made. The queue itself also says there are no "Tasks In Queue", even though the ones I've created are listed in the task list right below this message:
I've enabled logging with:
gcloud tasks queues update default --log-sampling-ratio=1.0
and can see the tasks being created in the logs, but there are no logs for the individual tasks.
The Cloud Run service I'm invoking has been made publicly accessible, and if I manually POST the task payload to the URL in the task, it works. I'm using GET against google.com as I assume it's reliably accessible.
Is anyone able to tell me what I'm doing wrong? This is the last item I need to sort out to wrap up our project's move to Google Cloud. Thank you!

In case anyone else runs into this, there's one more trick to enabling Google Cloud Tasks.
After making sure that App Engine is set up on your project, you also need to make sure that the application itself has not been disabled! It turns out the project I was on had used App Engine many years ago, and its only application had been disabled in the App Engine settings. Enabling it again made everything work as you'd expect.
You can find the setting by going to "App Engine", then "Settings", and checking the "Disable Application" setting.
Good luck to anyone reading this!
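For reference, the Rails side of the question above can enqueue an HTTP task with the google-cloud-tasks gem. This is a minimal sketch assuming the v2 gem API; "my-project", "us-central1", and the URL are placeholder values:

```ruby
# Sketch: enqueue an HTTP task from Ruby with the google-cloud-tasks gem.
# Project, location, and queue names below are placeholders.

# Build the Cloud Tasks payload for an HTTP-target task.
def build_http_task(url, method: :GET, body: nil)
  http_request = { http_method: method, url: url }
  http_request[:body] = body if body
  { http_request: http_request }
end

# Submit the task to the queue (needs GCP credentials at runtime).
def enqueue_task(task)
  require "google/cloud/tasks" # gem "google-cloud-tasks"
  client = Google::Cloud::Tasks.cloud_tasks
  parent = client.queue_path project:  "my-project",
                             location: "us-central1",
                             queue:    "default"
  client.create_task parent: parent, task: task
end
```

Note that `enqueue_task` only succeeds once the queue is actually dispatching, i.e. after the App Engine application is re-enabled as described above.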

Related

Send Docker Entrypoint logs to APP in realtime

I'm looking for ideas on how to send the Docker logs of each run to my application in real time. I want to build a feature similar to Netlify or Vercel, where all build logs are shown on the UI in real time, but for my Node application. Let me know if you have done this already or know how it can be achieved.
You can achieve this with Vercel and Log Drains.
Log Drains make it easy to collect logs from your deployments and forward them to archival, search, and alerting services by sending them via HTTPS, HTTP, TLS, or TCP as soon as a new log line is created.
At the time of writing, we support three types of Log Drains:
JSON
NDJSON
Syslog
Along with Log Drains, we are introducing two new open-source integrations with logging services for you to start using them today: LogDNA and Datadog.
Install the integration: https://vercel.com/integrations?category=logging
See the announcement blog post: https://vercel.com/blog/log-drains
Note that Vercel does not allow Docker deployments, but does support Serverless Functions.
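On the receiving side, an NDJSON drain delivers one JSON object per line, so a handler only needs to split and parse the body. A sketch (the field name "message" is illustrative, not Vercel's exact schema):

```ruby
require "json"

# Parse an NDJSON body (one JSON document per line) into an array
# of hashes, skipping blank lines.
def parse_ndjson(body)
  body.each_line
      .reject { |line| line.strip.empty? }
      .map { |line| JSON.parse(line) }
end
```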

How to schedule a rake tasks in Google Cloud?

I had a Rails application on Heroku, and to schedule some rake tasks I used an add-on called Scheduler. I had to change my application to Google Cloud and I do not know how to schedule the same rakes. Could someone help me?
Reference:
https://cloud.google.com/appengine/docs/flexible/ruby/scheduling-jobs-with-cron-yaml
This will allow you to set up cron scripts that call out to a web endpoint. My suggestion would be to add an API endpoint that can trigger the code you need run. If security is a concern, you can always put HTTP basic auth on the endpoint and pass the credentials along in the URL from the cron job.
If you wanted to get dirty with it, you could trigger the rake code from the controller itself, though I wouldn't recommend that approach as it's bad design; instead, just move the code you're executing in rake into the controller.
If the above approach doesn't fit your needs, the next best option would be to set up a Sidekiq instance and use that to schedule and run code from your codebase.
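The basic-auth guard from the first suggestion can be sketched as a plain check (in a real Rails controller you'd more likely use http_basic_authenticate_with; the credentials here are illustrative and should come from the environment in practice):

```ruby
require "base64"

CRON_USER = "cron".freeze
CRON_PASS = "s3cret".freeze # illustrative; read from ENV in practice

# Return true if the Authorization header carries the expected Basic
# credentials. A cron URL like https://user:pass@host/path arrives as
# exactly this header.
def cron_authorized?(authorization)
  return false unless authorization&.start_with?("Basic ")
  decoded = Base64.decode64(authorization.delete_prefix("Basic "))
  user, pass = decoded.split(":", 2)
  user == CRON_USER && pass == CRON_PASS
end
```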

Running code repeatedly in Ruby on Rails with Heroku for an indefinite period

I am attempting to build a web application with Ruby on Rails where users sign up to get an email alert when a certain event happens.
As such, I need to make an API call and then, based on the JSON response, send the alert; this API call needs to happen repeatedly and automatically, for an indefinite amount of time. I am also using Heroku at this time, if that needs to be taken into account.
Thanks for your help.
This sounds like a cron job in plain old Linux. Heroku offers this as an add-on called Scheduler. You have to define the task within lib/tasks/scheduler.rake.
For further information, read the Heroku docs for Scheduler here.
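As a sketch, the body of such a lib/tasks/scheduler.rake task can poll the API and send the alert. The URL, the "event_occurred" JSON field, and the mail delivery below are all assumptions for illustration:

```ruby
require "json"
require "net/http"
require "uri"

# Decide from the API's JSON response whether an alert is due.
# "event_occurred" is an assumed example field, not a real API.
def alert_needed?(payload)
  JSON.parse(payload)["event_occurred"] == true
end

# Placeholder for the real delivery (ActionMailer in a Rails app).
def send_alert_email
  puts "alert email sent"
end

# What a `task check_event: :environment do ... end` block in
# lib/tasks/scheduler.rake could call on each Scheduler run.
def check_and_alert(url)
  body = Net::HTTP.get(URI(url))
  send_alert_email if alert_needed?(body)
end
```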

How to generate the URL which shows the live logs in mesos for a job

I am developing a UI in which I need to show the live logs (stdout and stderr) of jobs running on a Mesos slave. I am trying to find a way to generate a URL that points to the Mesos logs for a job. Is there a way to do this? Basically, I need to know the slave ID, executor ID, master ID, etc. to generate the URL. Is there a way to find this information?
The sandbox URL is of the form http://$slave_url:5050/read.json?$work_dir/work/slaves/$slave_id/frameworks/$framework_id/executors/$executor_id/runs/$container_id/stdout, and you can even use the browse.json endpoint to browse around within the sandbox.
Alternatively, you can use the mesos tail $task_id CLI command to access these logs.
For more details, see the following mailing list thread: http://search-hadoop.com/m/RFt15skyLE/Accessing+stdout%252Fstderr+of+a+task+programmattically
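Assembled programmatically, the pattern above looks like this (a sketch that follows the URL shape verbatim; the IDs come from the master's state.json, and the values in any usage are made up):

```ruby
# Build the sandbox stdout URL from the IDs described above.
def sandbox_stdout_url(slave_url:, work_dir:, slave_id:, framework_id:,
                       executor_id:, container_id:)
  path = [work_dir, "work/slaves", slave_id, "frameworks", framework_id,
          "executors", executor_id, "runs", container_id, "stdout"].join("/")
  "#{slave_url}:5050/read.json?#{path}"
end
```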
How about using the reverse approach? You need to present live logs from stderr and stdout; how about storing them outside the Mesos slave, e.g. in Elasticsearch? You will get nearly live updates, old logs that remain available afterwards, and nice search options.
Since version 0.27.0, Mesos supports ContainerLogger. You can write your own implementation of ContainerLogger that pushes logs to a central log repository (Graylog, Logstash, etc.) and then expose them in your UI.
Mesos offers a REST interface where you can get the information you want. Visit http://<MESOS_MASTER_IP>:5050/help in your browser (using the default port) to check the query options you have (for example, you can get the information you need from http://<MESOS_MASTER_IP>:5050/master/state.json). Check this link to see an example of using it.
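For example, given a parsed /master/state.json document, the IDs needed for the sandbox URL can be looked up per task. A sketch (the field names follow the state.json schema; any sample document is minimal and made up):

```ruby
# Find the slave/framework/executor IDs for a given task ID in a
# parsed state.json hash. Returns nil if the task is not found.
def ids_for_task(state, task_id)
  state.fetch("frameworks", []).each do |framework|
    framework.fetch("tasks", []).each do |task|
      next unless task["id"] == task_id
      return { slave_id:     task["slave_id"],
               framework_id: framework["id"],
               executor_id:  task["executor_id"] }
    end
  end
  nil
end
```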

Msmq and WCF Service

I have created a WCF service using the NetMsmq binding, for which I created a private queue on my machine and ran the project. This works fine as such: my WCF service starts and reads messages from the queue in the debugging environment. Then I wanted to host the service in a Windows service, so I created a new project and a Windows installer as well (this service runs under the Local System account). I then tried installing the Windows service using the InstallUtil command from the command prompt. During installation, while the service host is opening, I get an exception saying:
There was an error opening the queue. Ensure that MSMQ is installed and running, the queue exists and has proper authorization to be read from. The inner exception may contain additional information.
Inner Exception System.ServiceModel.MsmqException: An error occurred while opening the queue:Access is denied. (-1072824283, 0xc00e0025). The message cannot be sent or received from the queue. Ensure that MSMQ is installed and running. Also ensure that the queue is available to open with the required access mode and authorization.
at System.ServiceModel.Channels.MsmqQueue.OpenQueue()
at System.ServiceModel.Channels.MsmqQueue.GetHandle()
at System.ServiceModel.Channels.MsmqQueue.SupportsAccessMode(String formatName, Int32 accessType, MsmqException& msmqException)
Could anyone suggest a possible solution for the above issue? Am I missing any permissions that need to be set on the queue or the Windows service? If so, could you suggest where these permissions should be added?
Tom Hollander had a great three-part blog series on using MSMQ from WCF - well worth checking out!
MSMQ, WCF and IIS: Getting them to play nice (Part 1)
MSMQ, WCF and IIS: Getting them to play nice (Part 2)
MSMQ, WCF and IIS: Getting them to play nice (Part 3)
Maybe you'll find the solution to your problem mentioned somewhere!
Yes, it looks like a permissions issue.
Right-click your private queue in Server Manager and select Properties. Go to the Security tab, and make sure the right permissions are in there for your Local System account.
This is also confirmed in Nicholas Allen's article: Diagnosing Common Queue Errors, where the author defines the error code 0xC00E0025 as a permissions problem.
I ran into the same problem; here is the solution.
Right-click "My Computer" --> Manage. In the Computer Management window, go to "Services and Applications" --> "Message Queuing" --> your queue, select your queue, and open its Properties. Add the user running your WCF application and give it full access. This should solve the issue.
It can simply be that the service can't find its queue.
The queue name must exactly match the endpoint address.
Example:
net.msmq://localhost/private/wf.listener_srv/service.svc
points to local queue
private$\wf.listener_srv\service.svc
If the queue name and endpoint agree with each other, then it is most likely that the credentials defined on the IIS pool don't grant access to the queue.
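The mapping in the example above can be sketched mechanically (shown in Ruby purely for illustration; in a real WCF deployment the NetMsmq binding performs this resolution itself):

```ruby
# Map a net.msmq endpoint address to the private queue path it must
# match, per the example above.
def msmq_private_queue_path(endpoint)
  relative = endpoint.sub(%r{\Anet\.msmq://[^/]+/private/}, "")
  'private$\\' + relative.tr("/", "\\")
end
```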
