We recently added the "Publish To Apple News" plug-in to our WordPress site and are getting a "Date_Not_Recent" error. After talking to the developers and checking their support pages, it looks like it is a time zone issue between the WP server (Oregon, US, with UTC timezone) and Apple News Publisher (which, to this day, Apple can't tell me where it is located; I assume California, but who knows for sure?).
The suggested solution is to sync those time zones together. My questions:
1- Has anyone had any issues rezoning their AWS servers?
2- If you have had this issue before, what did you do?
I am just trying not to fix one thing and break 10 other things in return; just being cautious. We all know that Murphy's law rules in IT.
Thanks in advance
We figured it out. It was an NTP issue.
We host on AWS, and the time sync packages (NTP) were a little out of sync, so we removed them and installed chrony instead, and that worked.
For anyone who is interested, here are the instructions (under configuring the Amazon time sync services):
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
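If you want to sanity-check the clock after switching to chrony, here is a small Python sketch (my own addition, not from the AWS instructions; it assumes the third-party ntplib package and queries the Amazon Time Sync Service at its link-local address, which is only reachable from inside EC2):

```python
# Sketch: report the local clock's offset from the Amazon Time Sync Service.
# Requires "pip install ntplib". Run on the EC2 instance itself; outside EC2,
# swap the address for a public server such as pool.ntp.org.
import ntplib

client = ntplib.NTPClient()
response = client.request("169.254.169.123", version=3, timeout=5)

print(f"Clock offset: {response.offset:.3f} seconds")
if abs(response.offset) > 1.0:
    print("Clock drift exceeds 1 second - worth investigating before publishing.")
```

If the offset stays near zero, the "Date_Not_Recent" rejections should stop.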
For the last few weeks our office has experienced intermittent and fairly crippling connection issues to our on-premises TFS 2015 (with Update 3). When it occurs, Visual Studio basically becomes useless, and TFS web pages either don't load at all, load only the header toolbars, or take several minutes to eventually open. Queries take minutes to run, and so on. Then suddenly all will be well and it works again perfectly.
There are no errors shown when this happens, either to the user or in the TFS application event logs. The system is not overloaded on any resource. I have tried various things: rebooting (obviously!), iisreset, clearing the cache on the app tier, and who knows what else at this point.
Are there any other logs I could be looking at or things I could try to diagnose?
Worth noting: users have all recently migrated to a new domain, but the TFS servers are still on the old domain. However, my office had migrated long before these issues occurred. Other offices that connect in have only recently migrated to the new domain.
System setup is VMware 6.0, TFS app tier with separate SQL data tier and analysis database.
According to your description: "There are no errors shown when this happens, either to the user or in the TFS application event logs."
I agree with Daniel in the comments: this kind of issue should not be related to the TFS server side. If the server were always slow, it would be a performance issue; however, you said that suddenly all will be well and it works again perfectly, so the most likely cause is a network-related issue. I suggest your team use a network analyzer tool to troubleshoot it.
For example, double-check the DNS-related areas. Your TFS server is on one domain and some users are on another. First make sure the domains trust each other, then check whether the slow performance is being caused by an authentication issue.
One fix is to have users log in using the full domain name. For example, if they are currently logging in with DEV\MyUserAccount, they should instead log in with DEV.COM\MyUserAccount.
It has something to do with how the TFS server looks up accounts when a short domain name is used: it prepends the name to all of the DNS suffixes, which ends up creating bad ones and causes delays because no valid domains are found.
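To confirm whether lookups are the bottleneck, you could time name resolution for the short name against the fully qualified name. A rough Python sketch (the host names below are placeholders for your own TFS server, not taken from your setup):

```python
# Sketch: compare DNS resolution time for a short name vs. a fully qualified name.
import socket
import time

def time_lookup(hostname: str) -> float:
    """Return how long a single forward lookup takes, in seconds."""
    start = time.perf_counter()
    try:
        socket.gethostbyname(hostname)
    except socket.gaierror as err:
        print(f"{hostname}: lookup failed ({err})")
    return time.perf_counter() - start

# Placeholder names - substitute your real TFS host and domain.
for name in ("tfsserver", "tfsserver.dev.com"):
    print(f"{name}: {time_lookup(name):.2f} s")
```

If the short name is consistently slower (or fails), the DNS suffix behaviour described above is a likely culprit.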
Besides that, regarding the performance issue, you could also take a look at this great answer from jessehouwing.
I have read a dozen articles on here and tried over 50 different syntax variations in Task Scheduler, plus .bat files with several more variations, and it still will not work. All of the articles' answers fail, along with everything else I have tried.
Here's the task: Monday through Friday at 4:55 AM, launch a browser and navigate, without user interaction, to westcoastswing.radio.net. It doesn't matter which browser. The task must run Monday through Friday only, must run whether anyone is logged in or not, and must wake the computer to run. Then, Monday through Friday at 6:15 AM, kill the browser.
Updating to a newer version of Windows is not an option at this time. A third-party product is only an option once it's determined that Microsoft is deliberately blocking this functionality. Using Windows Media Player and a specialized URL found via F12 might be an option, but I have spent an even greater amount of time trying to get that strategy to work, without success.
Thanks for any help or advice. Please don't mark this as a duplicate; I have tried the existing similar articles and their solutions do not work.
I found a method that works, but only because I use IE solely for this and nothing else:
Specify the full path to IE, in quotes, in the start program field.
Specify the URL, not in quotes, in the optional arguments field.
Run with the highest possible permissions.
Don't run it "hidden".
I don't know why that worked, or why out of all the combinations I tried before I didn't hit on that exact one, but sorry for posting a pointless question.
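For anyone who would rather drive this from a script, here's a rough alternative sketch (untested, and the iexplore.exe path and http:// scheme are assumptions on my part): Task Scheduler starts the script at 4:55 AM, the script opens the stream, waits out the 80-minute window, and then closes the browser.

```python
# Sketch: open the stream in a browser, wait until 6:15 AM, then close it.
# Intended to be launched by a 4:55 AM scheduled task (Mon-Fri).
import subprocess
import time

BROWSER = r"C:\Program Files\Internet Explorer\iexplore.exe"  # assumed install path
URL = "http://westcoastswing.radio.net"                        # scheme assumed
DURATION_SECONDS = 80 * 60                                     # 4:55 AM to 6:15 AM

proc = subprocess.Popen([BROWSER, URL])  # launch the browser pointed at the stream
time.sleep(DURATION_SECONDS)             # keep it playing for the whole window
proc.terminate()                         # close the browser we started
```

This only needs one scheduled task instead of a separate "kill" task at 6:15.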
I have been hosting a site on Heroku for a few months that is very soon to go into production.
Since I began with them, there have been at least three significant outages, one of which was the disastrous Amazon outage last month and another of which is a multi-hour outage happening today.
I believe in Heroku's vision and I think they are a great company, but I am faced with the ultimate problem: if they can't keep sites up and running, everything I like about them doesn't really matter.
Is Heroku a reliable provider to run a production site on Rails?
Are there any other providers I might look into that have a better reputation for reliability than Heroku?
In my opinion, downtime can happen with almost any provider. What you need to look at is how well or badly the host handles the downtime and the effort they make to keep customers updated about a resolution.
In my opinion Heroku is a great place to host your app. The advantages and the ease of deploying there make up for the recent (and rare) downtime, FOR ME.
I have been a user of Heroku with the Amazon RDS add-on for the past 7-8 months, and my conclusion is that there is nothing to appreciate about Heroku except their architecture. Here is why I think so:
Even though it was sold for $250 million+, they were still NOT using Amazon's multiple availability zones. Below is a link describing how SmugMug survived the Amazon crash by using that feature.
http://don.blogs.smugmug.com/2011/04/24/how-smugmug-survived-the-amazonpocalypse/
No phone support in the event of issues (not application issues, but Heroku's own); they have a lot to learn from Rackspace.
With the application I am hosting, people will starve if it goes down for a few hours on a Friday, never mind 60 hours of downtime.
I see intermittent deployment and connectivity issues. Please visit this link for confirmation:
http://status.heroku.com/
I know developers love it because they throw in a cheap web process called a 'dyno' for free.
So far Heroku does not offer multiple availability zone redundancy. If you want something more reliable than Heroku, you can create your own EC2 instances in multiple availability zones, as sketched below. Of course, this will require significantly more server upkeep, admin, and deployment time.
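For illustration only, spreading instances across availability zones with boto3 looks roughly like this (the AMI ID, key pair name, and zone names are placeholders, and you would still need a load balancer or failover mechanism in front of the instances):

```python
# Sketch: launch one instance in each of two availability zones.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

for zone in ("us-east-1a", "us-east-1b"):      # spread across two AZs
    ec2.create_instances(
        ImageId="ami-12345678",                 # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",                  # placeholder key pair
        Placement={"AvailabilityZone": zone},
    )
```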
I have found Heroku to be reliable. I highly recommend it for starting out and validating your idea. I believe that when you start your project you want to get it out quickly (to customers or to the public).
As mentioned in other answers, at some point you might need to switch over to EC2, as you might need zone redundancy, and it might actually become cheaper to run on EC2, especially if you already have an SA in the company.
No, it is not. As a customer I've experienced multiple critical outages. These things happen, and I get that. But what makes Heroku unreliable is their nearly non-existent support when things do go wrong. I would use caution when evaluating Heroku, or any provider for that matter, and really understand what you're paying for. Paying as much as I did for Heroku, I expected more.
As an example, one of their databases went offline early on a Sunday. I was made aware immediately, not by Heroku but by our customers and New Relic alerts. I contacted Heroku support just to get the ball rolling as I began to troubleshoot. 24 hours later I had literally no response from Heroku. I could not fork, follow, or take a snapshot of the database as they suggest (because they were experiencing issues), so I basically sat on my hands and waited, hoping that somebody would respond as I frantically attempted to recover somehow, someway.
Was this their fault? No, not at all. I should (and could) have done something to mitigate this failure. But for as much as I pay for their services each month, I expected something resembling a response to my critical issue.
Our app is hosted by Heroku and went down multiple times over the last 12 months.
Two times it was caused by one of the third-party apps that Heroku offers:
We used Zerigo (recommended by Heroku) for our DNS. This has caused our site to go down twice - one time it took over 12 hours to recover. This is absolutely crazy for something like DNS, so we have switched to a more reliable provider.
The Redistogo app went down once.
Heroku does bring some benefits, but be careful about the apps you select.
In my org I build simple SPA productivity apps, and I have been using Heroku to host them for the last year after migrating away from a physical box server to cloud VMs.
I've lost multiple days to Heroku outages that hinder development. Usually the running apps stay online and work, but when Heroku goes down you can't push updates or restart apps.
Let's also not forget the ridiculous times for scheduled maintenance (usually 2 PM EST, midweek... really?).
As of writing this, Heroku's logging system has been acting up (more or less down) for over 24 hours.
Thankfully my apps aren't mission-critical. While I like Heroku's ease of use, it's just not worth this much headache for what is nothing other than an AWS middleman.
That said, I'm moving over to just pure AWS EC2 instances.
We're attempting to generate payments in an Agresso 5.5 system. The mechanism we've been told to use is to write new payment data into table acrbatchinput where it will be picked up and processed by a regular job running in agrbibat.dll. We have code that worked on a previous version of Agresso but following the upgrade our payments get rejected by the agrbibat job. Sometimes it generates useful messages in the log, sometimes it doesn't, and working through failures without good information is becoming a bit of a slog.
Is there some documentation we're missing? In particular, it would be useful to have a full list of the validation rules the job uses so we can implement them ourselves rather than trying to infer them from the log. I can't find any - there's not a lot for acrbatchinput on Google. Does this list or some other documentation exist? Is agrbibat something easily decompilable, e.g. .NET?
Thanks. The test system we have is running against Oracle on Solaris with the Agresso jobs hosted on Windows. We have limited access to the Oracle and Agresso systems because (I think!) the same Oracle server is hosting the live payment system, but I could probably talk finance into giving us agrbibat.dll if that might help. We're unlikely to get enough access to their servers to debug it in place.
It turns out that our problem is partly because the new test system we've been given access to wasn't set up correctly, so we might be able to progress this without extra information - we're waiting on the financial team here for input.
However, we're still interested in acrbatchinput or agrbibat documentation or information. You've missed the bounty I set, but ticks, votes, and gratitude are still available.
I know this is an ancient question, but here's my response anyway for anyone else who finds it.
The only documentation is the usual Agresso help files from within the desktop client. Meaningful information is only gleaned through trial and error, however!
The required fields differ depending on whether a given record is a GL, AP/AR or tax transaction. (That much, at least, is explained in the help.)
In addition to using the log file, it's often helpful to look at GL07's report output for errors.
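To give a flavour of the mechanism, and only a flavour: the column names below are made-up placeholders, precisely because the real acrbatchinput schema and validation rules are what's undocumented. An insert from Python against the Oracle back end mentioned in the question might look roughly like this:

```python
# Hypothetical sketch only - column names are placeholders, not Agresso's real schema.
# Uses the cx_Oracle driver, since the test system runs against Oracle.
import cx_Oracle

conn = cx_Oracle.connect("agresso_user/secret@oracle-host/AGRPROD")  # placeholder DSN
cur = conn.cursor()

cur.execute(
    """
    INSERT INTO acrbatchinput (batch_id, client, account, amount, trans_date)
    VALUES (:batch_id, :client, :account, :amount, SYSDATE)
    """,
    {"batch_id": 1, "client": "XX", "account": "1000", "amount": 125.00},
)
conn.commit()  # the agrbibat job is then expected to pick up and validate the rows
```

Whether a row is accepted still comes down to the per-transaction-type rules above, so the GL07 output and the log remain the places to check.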
I've been struggling with this one for a couple of days now. My current Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped.
It never goes live, and I can never see the website as a result. The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the MVC DLL. I haven't even tried hooking up storage or a WorkerRole yet, and there is nothing really happening inside the Start method that I can see would crash.
I've really tried going back to basics to ensure nothing can complicate the process. The website launches without a problem on the Dev Fabric and, yes, it looks just like the standard "Home"/"About" MVC app - I just can't get it running in the cloud!
Funny thing is, a few days ago, this exact package worked on the staging area in the cloud, and I could even see it in the browser - but could never get it swapped over to production, so I deleted everything and started from scratch, and now I can't even get it running on staging...
Does anyone have any ideas on what I could do to diagnose this problem myself? Since logging it on the forums two days ago, there has been no improvement or feedback.
Any help appreciated,
Regards,
Rob G
Turns out there are a number of things that can cause this to happen. A full thread on the Microsoft forums goes through most of them and details my adventures in the arena.
http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/1482c1af-16e3-46ca-846e-14f511c35750
Hope this helps...
I think the best starting point is enabling remote desktop on all role instances.
It saves a lot of heartache wondering why the heck the diagnostics aren't logging anything.
By remoting in you can eyeball the event logs and find lots of reasons for Azure unhappiness.