Error: Failed to deploy COMPUTER_NAME :The Bootstrap service is not found on remote machine
Hello Stack Overflow users. Google yields no results...
I configured a Wonderware InTouch for System Platform runtime client on a remote computer according to best practices found in the README files and other documentation.
On the GR node in the ArchestrA IDE I created a platform, then a view engine, and then an instance of the InTouch app underneath that.
I successfully deployed the application to the remote computer and everything was working as expected.
Afterwards I had to change the remote computer name as per our policy because the computer would be used at another site.
First I undeployed everything, including the platform, from the remote runtime computer. Then I changed the remote computer's name, renamed all the objects in the IDE on the GR accordingly, and updated the host name in the platform object instance to match.
I then tried to deploy the platform again, without success; I only receive this error message:
"Error: Failed to deploy COMPUTER_NAME :The Bootstrap service is not found on remote machine"
I have tried:
- Restarting the computer.
- Changing the IP Address.
- Google and AVEVA Knowledge base.
- Platform removal tool (which cannot be accessed, since the platform does not display in the SMC).
The version of System Platform is 2014 R2 SP1.
When I look in the SMC log file, I can see the Bootstrap service starting on the remote runtime client.
I am not too keen on formatting the computer and starting over, so I would like to know whether there are files I need to delete, or anything else I can do, to fix this problem.
EDIT:
As mentioned in the comments, I have additionally tried:
- Uninstalling Wonderware completely, followed by a fresh install.
I had this problem recently. When you create a platform in the IDE, an ID is associated with it, and if you change the machine name and the platform name, deployment can still look up the old platform by that ID. To solve this, create a new platform and try to deploy it. System Platform can associate an old ID with the new platform, so keep creating platforms until you get an ID that has not been used yet, and deploy that one. The platform ID can be verified in the SMC: when deploying, leave the SMC open next to the IDE and note the ID of the platform being deployed.
I've been writing an EventHubs message processor that connects to EventHubs and processes the messages it receives. I've been developing in Visual Studio on Windows using .NET 6. Things work as expected on Windows; I can:
- Connect to EventHubs
- Receive messages
- Do the message processing I want
Great. I then wanted to scale my message processor horizontally and decided that I would Dockerize it, and since .NET 6 runs on Linux, I would cross-compile it for Linux and eventually deploy multiple instances of my message processor on Docker Desktop as a next step. I eventually want to stick it on Kubernetes to scale up by an order of magnitude or two.
It was easy to Dockerize my project in Visual Studio. I simply right-clicked the project and selected Add -> Docker Support. Visual Studio detected that I had Docker Desktop installed, generated all the config files I needed, and added an appropriate build configuration so that I could compile a binary, build a Docker image with it, and automatically deploy it to my local Docker Desktop instance.
The .NET 6 build also compiled without errors, which was great. However, when my container spins up, I get hit with the following runtime error:
System.Security.Authentication.AuthenticationException: The remote certificate is invalid because of errors in the certificate chain: PartialChain
and there is a stack trace (omitted here for brevity) stemming from something in the EventHubs processor library:
<...many layers...> at Azure.Messaging.EventHubs.Primitives.EventProcessor`1.RunProcessingAsync(CancellationToken cancellationToken)
I am correctly passing my EventHubs connection string to my container, so what I surmise is that my container is missing an SSL certificate or has a misconfigured one. I suspect Visual Studio helpfully and silently installed a development certificate when I developed my message processor on Windows, so that EventHubs connections "just work" in my development environment, but that certificate is not available to my container, since it isn't part of the build output.
I know I probably should be using Azure Key Vault or whatever secret-management service they provide, but how else can I resolve this SSL certificate issue as quickly and painlessly as possible? It would be nice if I could just keep my connection string in my appsettings.json. (It's fine. Toy project, only using Azure free credits anyway.)
The easiest way forward would be to register a handler that participates in certificate validation and can, if desired, override normal handling and force acceptance. This, of course, comes with the warning that you're bypassing standard security checks and may be putting your network and host in danger.
You don't mention which client you're using, but each takes a set of options in its constructor. The options for each type have a member named ConnectionOptions, which returns an EventHubConnectionOptions instance that allows you to register a CertificateValidationCallback.
The Event Hubs "Influencing SSL certificate validation" sample demonstrates how to use it. More information is also available in the .NET documentation for RemoteCertificateValidationCallback.
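For illustration, here is a minimal sketch of wiring that callback up, assuming you're using the EventProcessorClient from Azure.Messaging.EventHubs.Processor with a blob checkpoint store (which would match the EventProcessor stack trace above); every connection string and name below is a placeholder:

```csharp
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Storage.Blobs;

// All values below are placeholders -- substitute your own.
var eventHubsConnectionString = "<< EVENT HUBS CONNECTION STRING >>";
var eventHubName = "<< EVENT HUB NAME >>";
var storageConnectionString = "<< BLOB STORAGE CONNECTION STRING >>";
var checkpointContainerName = "<< CHECKPOINT CONTAINER >>";

var options = new EventProcessorClientOptions();

// WARNING: returning true accepts ANY certificate, bypassing TLS
// validation entirely.  Tolerable for a toy project; for anything
// real, inspect certificate, chain, and sslPolicyErrors instead.
options.ConnectionOptions.CertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) => true;

// The processor stores its checkpoints in a blob container.
var checkpointStore = new BlobContainerClient(
    storageConnectionString, checkpointContainerName);

var processor = new EventProcessorClient(
    checkpointStore,
    EventHubConsumerClient.DefaultConsumerGroupName,
    eventHubsConnectionString,
    eventHubName,
    options);

processor.ProcessEventAsync += args => Task.CompletedTask; // your processing here
processor.ProcessErrorAsync += args => Task.CompletedTask; // your error handling here

await processor.StartProcessingAsync();
```

The other client types expose the same ConnectionOptions member, so the pattern is identical whichever client you're using.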
I have downloaded and attempted to install the 1.0.6 version of the Neo4j Desktop (Graph Platform) under Windows 10. The silent installation happens quickly without any prompting about where to install or what to expect installation-wise. Upon installation the Neo4j Desktop icon is added to the Windows desktop and the Desktop automatically opens with the appearance that it is all ready to go.
I select the default MyProject and click the New Database panel. The Local/Remote dialog pops up, and I select Local. I accept the default database name, don't add an optional description, and click the Create button, at which point a red error dialog pops up saying:
`Database failed to create: Error: invalid central directory file header signature: 0xef42a50`
I get this result even if I create a new project and then attempt to create a new database in that new project. I get the same result if I change the default names of the proposed new databases. I can't attach to a remote new database because Neo4j does not appear to be running at this point, and I have been given no indication of where the installation even put anything.
When I ferret out the installation, I find it has silently been put in the typically hidden user subdirectories "../AppData/Local/Programs/neo4j-desktop" and "../AppData/Roaming/Neo4j Desktop".
In the Roaming "Neo4j Desktop" subdirectory, I find the log.log file, which contains a single warning line: `[2017-11-28 16:42:30:0451] [warn] auth is not initialised yet`.
Any insights/recommendations appreciated. I am a "retired" #CitizenScientist and am around most of the time to answer additional questions or to try any recommended fixes.
BTW, I have been using Neo4j Community Edition on my Windows dev box for years, mostly the manually installed version that provides support for running as a Windows service.
Also, my issue seems similar to this installation failure cited here: neo4j windows desktop: Unrecoverable authentication error.
I'm setting up a brand new system and decided to install TTU 15.10.04 (my old machine had TTU 14). When I run my apps, I get this error:
The 'TDOLEDB' provider is not registered on the local machine.
I used to get this error on earlier versions, but all I had to do was make sure my app was running in 32-bit mode. After checking everything multiple times and not being able to isolate the problem, I searched for the OLE DB installation folder on my new machine, but have not been able to find it. So I checked my old machine and found that it was installed here:
C:\Program Files (x86)\Teradata\Client\14.00\OLE DB Provider for Teradata
I have no equivalent folder on my new machine. The only thing I have is the OLE DB Access Module, but I am sure that's not it. I have concluded that I do not have the OLE DB provider installed at all, and I cannot seem to find out where to get it. It's as if it has disappeared. Any help would be great.
You can download the Teradata OLE DB provider for TTU 15 HERE.
Hope this helps
We are setting up Tridion 2011 SP1 CDS (.NET based) on one of our servers.
We are stuck at the step 'Installing Monitoring as a Windows Service'.
Even after running the batch file 'StartCDInstaller.bat' and following the procedure, we cannot locate the Tridion Monitoring Agent service among the Windows services.
Are we missing something?
Also, another question regarding the CDS: can we change the location of the config files (Deployer, Storage_conf, etc.) after installation? Or do we need to re-run the installer?
Update:
The same error occurs even after reinstalling the monitoring service.
The event log details: "The description for Event ID 100 from source TCDmonitor cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted."
You can change the location of the Tridion folder structure by setting a system environment variable called "Tridion_Home" (Control Panel -> System and Security -> System -> Advanced -> Environment Variables -> System variables) with a value such as "d:\tridion" (the path of the Tridion folder). You can test it afterwards by going to Run and typing %tridion_home%.
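If you prefer the command line, the same system-wide variable can be set with setx from an elevated prompt (note that setx only affects processes started afterwards, so restart whatever service or console you are testing from):

```
setx Tridion_Home "d:\tridion" /M
```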
If for some reason the Tridion Monitoring Service was not installed by the installer, you can do it manually.
- Locate the file cd_monitor.exe (it is on the installation CD in Content Delivery\roles\monitoring\windows).
- Copy it to a location where you keep your executables (e.g. the Tridion\bin folder).
- Start a command prompt as administrator and type 'cd_monitor -install'.
- Go to the services console and start the monitoring service.
I had a similar issue too: it was an x64 Content Delivery server with x64 Java installed. I manually installed the monitoring service, but it refused to start with:
The description for Event ID 100 from source TCDmonitor cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Could not load Java runtime libraries at
the message resource is present but the message is not found in the string/message table
The monitoring service requires 32-bit Java; as soon as that was installed, the service started with no problem.
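As a quick sanity check of which Java bitness is first on the PATH, you can run the following from a command prompt (a 64-bit JVM prints "64-Bit" in the VM line of the output; a 32-bit JVM does not):

```
java -version
```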
Hope that helps.
Scott
I'm using the ArtifactDeployer plugin to deploy the build job artifacts to a remote location (Windows share SMB).
However, Jenkins never manages to succeed, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ...
[ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ...
Build step '[ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
Local deployment works fine.
The Jenkins machine OS is Windows 7 32-bit Prof.
Jenkins is running as a service using a local system account.
I tried using another account, my own user account, but then the service failed to start (Windows error 1069: the service did not start due to a logon failure).
The Network Service account did run, but then Jenkins threw errors that it couldn't access the .NET framework.
When I try the remote copy manually, it works fine: I can create directories and write to them, from the same machine of course.
I tried two different remote references in Jenkins:
1) \\targetdirectory
2) I:\ - by mapping a drive letter to the remote directory in Windows
No success...
Any tips or suggestions? Thanks!
Update 15/02/2012:
Still no solution or workaround for this issue.
It's not only the plugin; I also hit this issue using "Execute Windows batch command".
I found a bug report that I want to share.
Solution
I found a solution. You have to grant access permission to the computer account in the domain instead of to the user of that machine. Seems very logical if you look back on it.
A second solution is to run the service using a domain user account. Above, I made the mistake of using the local user .\user instead of DOMAIN\user.
If you don't have a domain, the following will work for sure. This should work even if you have a domain.
Background Info:
You need your mapped drive to be mapped for the same account that the service is using AND to be available at the right time. Normally, mapped drives are mapped only for the logged-in user, at the time that they log in. Service user contexts don't get "logged in" per se: for example, if I map a drive as MyUser and the service runs as MyUser, the drive won't be available until I actually log in by typing in my password.

However, we can use a script to map the drive at startup (instead of at login) for a particular user. Jenkins normally runs as the Local System account, so if you don't want to change that, you'll need to run the script below as the SYSTEM user. If you don't want to grant this mapped drive to all services/processes that run as SYSTEM, you can instead create a specific user for Jenkins to run as, and run both the service and the script below as that user (this is probably more secure).
Solution Steps:
In ArtifactDeployer you want to deploy to a mapped network drive. In my case this is S:.
There is no special setup for permissions on the remote share. (In my case, a Windows Server 2008 share with a username and password that is used for mapping the drive.)
Write a batch file MapDrives.bat in a place that your chosen user (default: SYSTEM) has access to, with the following in it:
net use S: "\\server_name\share_name" password_here /USER:username_here /persistent:yes
Note that I am mapping to S: in that line.
Via Task Scheduler, create a task that runs as the same user as the service (default: SYSTEM), triggers At Startup, and, as its action, runs the batch file MapDrives.bat (a scriptable equivalent is sketched after these steps).
Reboot and it should work!
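If you'd rather script the Task Scheduler step than click through the UI, a schtasks one-liner can create an equivalent at-startup task; the path to MapDrives.bat here is a placeholder for wherever you saved it:

```
schtasks /create /tn "MapDrives" /tr "C:\Scripts\MapDrives.bat" /sc onstart /ru SYSTEM
```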
Citations:
After diving through many pages and many tests, I ultimately found the best suggestions here, which led me to the above solution.
https://stackoverflow.com/a/4763324/150794
Make sure your 'local system account' has access rights to the remote directory (including write access). Then use the notation:
\\targetdirectory
Mapping drive letters to remote directories applies only to the user account you are currently working with. The drive letter mapping will not be available to any other account.