Is there any way to automate purchase order receiving when the PO item is cross-referenced to a job in Infor SyteLine? - erp

Currently we are using Infor SyteLine ERP version 9.
Is there any way to automate the job material transaction process while receiving a PO when the PO item is cross-referenced to a job?
When I run the purchase order receiving process for items that are cross-referenced to a job, it navigates to the job material transaction form to process the transaction.

We can bypass the job material transaction processing section by modifying the receiving form scripts, but to automate the process we would need to create a stored procedure that bypasses it.

Related

Publish dataflow job status once completed onto Google Pub/Sub

Currently I am using a Flex Template to launch a job from a microservice. I am trying to find a better way (than polling the job) to get the Dataflow job status. Basically, I am trying to have the Dataflow job itself publish its status to Pub/Sub when it completes.
Can someone help me with this?
There is currently no way for a Dataflow job itself to send its status to a Pub/Sub topic.
Instead, you can create a logs export (sink) that sends your Dataflow logs to a Pub/Sub topic, with inclusion and exclusion filters, and then perform text searches on the Pub/Sub messages to deduce the status of your job. For example, you can create an inclusion filter on the "dataflow.googleapis.com%2Fjob-message" log and, among the received messages, one that contains a string like "Workflow failed." comes from a batch job that failed.
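As an illustration, a small subscriber on that topic can watch for the failure string. This is a minimal Python sketch using google-cloud-pubsub; the project and subscription names are placeholders, and it assumes the sink delivers each LogEntry as JSON in the message body:

# Minimal sketch: watch an export subscription for Dataflow job messages.
# PROJECT_ID and SUBSCRIPTION are placeholders; the sink is assumed to route
# the dataflow.googleapis.com%2Fjob-message log to the subscription's topic.
import json
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"
SUBSCRIPTION = "dataflow-job-messages-sub"

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)

def callback(message):
    # Each message body is an exported LogEntry serialized as JSON; the
    # human-readable job message is usually in textPayload.
    entry = json.loads(message.data.decode("utf-8"))
    text = entry.get("textPayload", "")
    if "Workflow failed." in text:
        labels = entry.get("resource", {}).get("labels", {})
        print("Batch job failed:", labels.get("job_id", "unknown job"))
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
streaming_pull.result()  # block and keep pulling messages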

Ordering messages in Pub/Sub on GCP

I am new to the Dataflow and Pub/Sub tools in GCP.
I need to migrate our current on-prem process to GCP.
The current process is as follows:
We have two types of data feeds
Full feed – an ad hoc job – the full XML is ~100 GB (a single, very complex XML containing the complete data set – an ETL job processes this XML and loads it into ~60 tables)
Separate ETL jobs exist to process the full feed. The ETL job processes the full feed and creates load-ready files, and all tables are truncated and re-loaded.
Delta feed – every 30 minutes we need to process delta files (XML files that contain only the changes from the last 30 minutes)
The source system pushes XML files every 30 minutes (more than one; each file has a timestamp). A scheduled ETL process picks up all the files produced by the source system, processes every XML file, and creates 3 load-ready files (insert, delete, and update) for each table.
Schedule – the ETL jobs are scheduled to run every 5 minutes; if the current run takes longer than 5 minutes, the next run is not triggered until the current one completes.
The order of file processing is very important (the ETL job takes care of this). All the files need to be processed in sequence.
At the end of the ETL process, the load-ready files are loaded into the tables (on the mainframe).
I was asked to propose a design to migrate this to GCP. We need both the full and delta processes in GCP as well, and my proposed solution should handle both feeds.
Initially I thought of the design below:
Pub/Sub -> Dataflow -> MySQL/BigQuery
Then I came to know that Pub/Sub does not guarantee processing files in sequence/order. After doing some research I learned that Google recently introduced an ordering key concept for Pub/Sub, which ensures messages are processed in order. The Google Cloud docs mention that this feature is in beta.
I have two questions:
Has anyone used the ordering key concept in Pub/Sub in a production environment? If yes, did you face any challenges while implementing it?
Is this design suitable for the above requirement, or is there a better solution in GCP?
Also, is there any alternative to Dataflow?
I also came to know that Pub/Sub can handle messages of at most 10 MB, and for us each XML is more than ~5 GB.
As @guillaume blaquiere mentioned, the beta launch phase of a product brings some restrictions, but they are mostly related to product support:
At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. The average beta phase lasts about six months.
Commonly, the Cloud Pub/Sub message ordering feature works as intended; if you come across something that needs the developers' attention, it is highly appreciated if you send a report via the Google Issue Tracker.
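For reference, here is a minimal Python sketch of publishing with an ordering key using google-cloud-pubsub; the project and topic names are placeholders, and given the 10 MB message limit it publishes a file reference rather than the XML itself:

# Minimal sketch: publish messages with an ordering key so Pub/Sub preserves
# order within that key. Ordering must also be enabled on the subscription
# (enable_message_ordering), and depending on the client version you may need
# a regional endpoint via client_options.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True)
)
topic_path = publisher.topic_path("my-project", "delta-feed")  # placeholders

# One ordering key per logical stream (e.g. per source system) keeps the
# 30-minute delta files in sequence.
for filename in ["delta_20240101_0000.xml", "delta_20240101_0030.xml"]:
    future = publisher.publish(
        topic_path,
        data=filename.encode("utf-8"),  # e.g. a GCS path, not the multi-GB XML itself
        ordering_key="source-system-A",
    )
    print(future.result())  # message ID once the publish succeeds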

Sidekiq multiple dependent jobs: when to complete or retry?

In my Rails application, I have a model called Report.
A Report has one or many chunks (called Chunk) that each generate a piece of content based on external service calls (APIs, etc.).
When a user requests a report, I use Sidekiq to queue the chunks' jobs so they run in the background, and I notify the user that we will email them the result once the report is generated.
Report uses a state machine to flag whether or not all the jobs have finished successfully. All the chunks must be completed before we flag the report as ready. If one fails, we need to either try again or give up at some point.
I defined the states as draft (default), working, and finished. The finished result is all the services' pieces combined together. Draft is when the chunks are still in the queue and none of them has started generating any content.
How would you tackle this situation with Sidekiq? How do you keep track (live) of which chunks' services are finished, working, or failed, so we can flag the report as finished or failed?
I'd like a way to periodically check the jobs to see where they stand, and change the state when they have all finished successfully, or flag it as failed if all the retries give up.
Thank you
We had a similar need in our application to determine when sidekiq jobs were finished during automated testing.
What we used is the sidekiq-status gem: https://github.com/utgarda/sidekiq-status
Here's the rough usage (the worker class needs to include Sidekiq::Status::Worker):
job_id = Job.perform_async()
You'd then pass the job ID to the place where it will check the status of the job:
Sidekiq::Status::status(job_id) #=> :queued, :working, :failed, :complete
Hope this helps.
This is a Sidekiq Pro feature called Batches.
https://github.com/mperham/sidekiq/wiki/Batches

Draining a Dataflow job and starting another one right after causes message duplication

I have a Dataflow job that is subscribed to messages from Pub/Sub:
p.apply("pubsub-topic-read", PubsubIO.readMessagesWithAttributes()
.fromSubscription(options.getPubSubSubscriptionName()).withIdAttribute("uuid"))
I see in the docs that there is no guarantee against duplication, and Beam suggests using withIdAttribute.
This works perfectly until I drain the existing job, wait for it to finish, and start another one; then I see millions of duplicate BigQuery records (my job writes Pub/Sub messages to BigQuery).
Any idea what I'm doing wrong?
I think you should be using the update feature instead of draining the pipeline and starting a new one. In the latter approach, state is not shared between the two pipelines, so Dataflow is not able to identify messages that were already delivered from Pub/Sub. With the update feature you should be able to continue your pipeline without duplicate messages.
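As an illustration, here is a hedged sketch of what the replacement launch could look like with the Beam Python SDK (the original snippet is Java, but the flags are the same); project, subscription, table, and job names are placeholders, and for --update to succeed the replacement job must keep compatible transform names or supply a transform name mapping:

# Minimal sketch: relaunch the streaming pipeline with --update so Dataflow
# replaces the running job in place and keeps its deduplication state.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-project",
    "--region=us-central1",
    "--streaming",
    "--update",                       # replace the running job instead of starting fresh
    "--job_name=pubsub-to-bigquery",  # must match the name of the job being updated
])

p = beam.Pipeline(options=options)
(p
 | "pubsub-topic-read" >> beam.io.ReadFromPubSub(
       subscription="projects/my-project/subscriptions/my-sub",
       with_attributes=True,
       id_label="uuid")               # same role as withIdAttribute("uuid") in the Java snippet
 | "to-row" >> beam.Map(lambda msg: {"uuid": msg.attributes.get("uuid"),
                                     "payload": msg.data.decode("utf-8")})
 | "write-bq" >> beam.io.WriteToBigQuery(
       "my-project:dataset.events",
       schema="uuid:STRING,payload:STRING",
       write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
p.run()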

Jenkins - monitoring the estimated time of builds

I would like to monitor the estimated time of all of my builds to catch the cases where this value is shown as 'N/A'.
In these cases the build gets stuck (probably due to network issues in my environment) and Jenkins won't start new builds for that job until the stuck one is killed manually.
What I am missing is how to get that data for each job, either from the API or another source.
I would appreciate any suggestions.
Thanks.
For each job, you can click "Trend" on the job run history table, and it will show you the currently executing progress along with a graph of "usual" execution times.
Using the API, you can go to http://jenkins/job/<your_job_name>/<build_number>/api/xml (or /json), and the information is under the <duration> and <estimatedDuration> fields.
Finally, there is a Jenkins Timeout Plugin that you can use to automatically take care of "stuck" builds.
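To automate the check, a small polling script against that JSON endpoint could look like the following. This is a minimal Python sketch using requests; the URL, credentials, and job names are placeholders, and it assumes an unknown estimate is reported as estimatedDuration = -1:

# Minimal sketch: poll the last build of each job and flag builds that are
# running with no duration estimate (reported as -1), i.e. potentially stuck.
import requests

JENKINS_URL = "http://jenkins"      # placeholder
AUTH = ("user", "api-token")        # placeholder: use a Jenkins API token
JOBS = ["build-app", "run-tests"]   # placeholder: jobs to monitor

for job in JOBS:
    build = requests.get(
        f"{JENKINS_URL}/job/{job}/lastBuild/api/json",
        auth=AUTH,
        timeout=10,
    ).json()
    estimated = build.get("estimatedDuration", -1)
    if build.get("building") and estimated < 0:
        print(f"{job} #{build['number']}: estimated time N/A - possibly stuck")
    else:
        print(f"{job} #{build['number']}: duration={build.get('duration')} ms, "
              f"estimated={estimated} ms")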
