ASP.NET PowerShell scripts don't run - asp.net-mvc

I run the script directly (Import-Module ActiveDirectory) and it does not work.
Error message:
Failed to generate proxies for remote module 'ActiveDirectory'. Files cannot be loaded because running scripts is disabled on this system. Provide a valid certificate with which to sign the files.
Please tell me the solution.
I've also tried executing Set-ExecutionPolicy Unrestricted in both the 32-bit and 64-bit consoles.
This inline script works:
var shell = PowerShell.Create();
shell.Commands.AddScript("New-Item -Path 'C:\\Distrib\\file.txt' -ItemType 'File'");
Invoking test.ps1 does not work:
using (PowerShell powerShellInstance = PowerShell.Create())
{
    powerShellInstance.AddCommand(AppDomain.CurrentDomain.BaseDirectory + "\\Powershell\\test.ps1");
}
test.ps1 contains:
New-Item -Path 'C:\Distrib\file.txt' -ItemType 'File'

This is not a PowerShell code issue; it is an environment setting.
If you are doing this in an enterprise, then your org has set the policy on your machine for a reason.
What JPBlanc suggests would work, but unless your org admins have given you the ability to change this setting at the machine level, you can't change it there anyway. You can still change it at the user or process level. There are several scopes at which the execution policy can be set, and this is well documented.
Set-ExecutionPolicy
    [-ExecutionPolicy] <ExecutionPolicy>
    [[-Scope] <ExecutionPolicyScope>]
    [-Force]
    [-WhatIf]
    [-Confirm]
    [<CommonParameters>]
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
Get-ExecutionPolicy -List
        Scope ExecutionPolicy
        ----- ---------------
MachinePolicy       Undefined
   UserPolicy       Undefined
      Process       Undefined
  CurrentUser    RemoteSigned
 LocalMachine    RemoteSigned
There is little to no reason to ever use Unrestricted for day-to-day PowerShell use, especially at the machine level. I always recommend CurrentUser or Process.
So, create a new PowerShell shortcut that starts up using the CurrentUser or Process scope with RemoteSigned (which is the current default in Windows 10, even at the machine level).
Even when calling this through code, use the CurrentUser or Process scope and RemoteSigned, as in the sketch below.
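A minimal sketch of that, assuming the hosting code can run a line of PowerShell before the script itself (the path to test.ps1 is just a placeholder):
# Process scope lasts only for the current process, persists nothing,
# and does not require admin rights.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force
# Script files can now run, e.g. the test.ps1 from the question.
& 'C:\path\to\Powershell\test.ps1'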

Related

Make PowerShell suggest commands like Linux bash (e.g. docker)

I have Windows 10 with WSL enabled and Docker for Windows installed.
When I type docker in PowerShell and hit Tab, it suggests the folders and files in my working directory (here, AndroidStudioProjects is a directory in my working directory).
On the other hand, when I type docker in WSL Ubuntu and hit Tab, it suggests the available docker commands themselves, which is my expected behavior.
I want PowerShell to suggest docker commands the way WSL Ubuntu does.
Presumably:
- docker on WSL comes with tab-completion for POSIX-compatible shells such as bash, installed via the shell's initialization files.
- No such support is provided for PowerShell, but there are third-party solutions - see below.
Installing PowerShell tab-completion for docker:
Install the DockerCompletion module from the PowerShell Gallery:
# Install the module in the scope of the current user.
Install-Module DockerCompletion -Scope CurrentUser
# Import the module into the session.
# Add this line to your $PROFILE file to make the tab-completion
# available in future sessions.
Import-Module DockerCompletion
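To make that import permanent, one option (a minimal sketch; it just appends the line to your profile script, creating the file first if needed) is:
if (-not (Test-Path $PROFILE)) { New-Item -Path $PROFILE -ItemType File -Force }
Add-Content -Path $PROFILE -Value 'Import-Module DockerCompletion'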
Installing PowerShell tab-completion for all supported programs (CLIs):
The posh-cli meta-module - whose repo is here - offers a convenient way to automatically install tab-completion support for all locally installed CLIs for which application-specific tab-completion modules are available:
# Install the meta-module in the scope of the current user.
Install-Module posh-cli -Scope CurrentUser
# This looks for locally installed CLIs for which tab-completion
# modules are available, installs them, and adds
# Import-Module commands to your $PROFILE file.
Install-TabCompletion
See the README for more information.

Cannot find path because it does not exist

PS D:\> cd gs:\
cd : Cannot find drive. A drive with the name 'gs' does not exist.
PS D:\> Get-GcsBucket
PS D:\> cd gs:\mybucket
Why can I not change drive to gs:\ before Get-GcsBucket?
PS gs:\mybucket> mkdir NewFolder
PS gs:\mybucket> cd .\NewFolder
cd : Cannot find path 'gs:\mybucket\NewFolder' because it does not exist.
PS gs:\mybucket> ls
Name       Size ContentType TimeCreated Updated
----       ---- ----------- ----------- -------
NewFolder
Why can I not change directory?
Why can I not change drive to gs:\ before Get-GcsBucket?
Unlike Cmdlets and Functions, Providers and the drives they add cannot be discovered until the module they are part of is imported into the current PowerShell session. This can be done explicitly with Import-Module, or implicitly by calling a Cmdlet or Function that is discoverable, such as Get-GcsBucket.
Why are Cmdlets discoverable but drives aren't? Because the module manifest lists the Cmdlets but has no entry for drives, and because the Cmdlet names are stored in assembly metadata (as attributes) that can be read without loading the assembly, while the drive comes directly from code that can only run after the assembly is loaded.
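As a sketch, either approach makes the gs: drive available up front (assuming the module name GoogleCloud, which matches the manifest mentioned in the next answer):
Import-Module GoogleCloud   # explicit: loading the module registers the gs: drive
Get-GcsBucket | Out-Null    # or implicit: the first cmdlet call loads the module
cd gs:\mybucket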
Why can I not change directory?
It looks like a bug, but I have not been able to reproduce it. If you can provide more information, I encourage you to submit an issue on the Google Cloud PowerShell issues page.
I'm going to guess this is a bug in the Cloud Tools for PowerShell module.
When you launch PowerShell, it loads a manifest file (GoogleCloud.psd1) which declares every cmdlet that the module contains. This allows PowerShell to delay loading the actual cmdlets assembly until it is needed, speeding up startup time considerably.
The actual list of which cmdlets are found in the module is determined as part of the build and release process. Some info here.
Anyway, that manifest does not declare the existence of the Cloud Storage PowerShell provider (the cd gs:\ bits), so PowerShell doesn't know it exists until after it loads the GoogleCloud PowerShell module, which happens after you invoke Get-GcsBucket (or, I assume, any cmdlet in the module) at least once.

PsExec is not recognized as an internal or external command

I have a job that needs to run a script on a remote computer. I'm doing so by using PsExec via "Execute Windows batch command":
C:\PsExec.exe \\computername -u username -p password -accepteula c:\xxx.exe
When I run the job I get the following error:
c:\PsExec.exe is not recognized as an internal or external command
Note: PsExec.exe is located under C:\
Any ideas?
First, define the psexec.exe path in the PATH environment variable, or else place psexec.exe in C:\Windows\System32\.
You can download PsExec as part of the PSTools package:
https://download.sysinternals.com/files/PSTools.zip
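A minimal PowerShell sketch of that setup (assumes PowerShell 5+ for Expand-Archive; C:\PSTools is just an example location):
# Download and extract the PSTools package, then add it to PATH
# for the current session only.
Invoke-WebRequest -Uri 'https://download.sysinternals.com/files/PSTools.zip' -OutFile "$env:TEMP\PSTools.zip"
Expand-Archive -Path "$env:TEMP\PSTools.zip" -DestinationPath 'C:\PSTools'
$env:Path += ';C:\PSTools'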
One possible explanation is the version of PsExec.exe: 32-bit or 64-bit.
If you have the 32-bit executable on a 64-bit machine, that command would indeed not be recognized; PsExec64.exe would be.
I can see the age of this question, and my answer may not be relevant to this topic since I was technically trying to solve a different problem, but maybe this will help other people who are stuck.
c:\PsExec.exe is not recognized as an internal or external command
I was trying to disable the Maintenance Configurator with PsExec (my problem is the never-ending maintenance bug) and kept running into the same error as the OP, BUT I got PsExec64 to run this command:
C:\PsExec64.exe -s schtasks /change /tn "\Microsoft\Windows\TaskScheduler\Maintenance Configurator" /DISABLE
by checking the "Run this program as an administrator" option under the Compatibility settings for PsExec64.exe.
I don't know if this has solved my problem yet, but I think the OP would have been able to run his process if he had done this. Dear OP, did you ever solve that?

RDP "ClientName" Environment Variable is null, if run as administrator

We have developed a Windows application and deployed it in a Terminal Server / Citrix environment.
We have used Environment.GetEnvironmentVariable("CLIENTNAME") to get the name of the client from which the RDP session is accessed.
If I run the application with normal privileges (double-clicking the application), then I get the correct value in the CLIENTNAME environment variable.
But when I run the same application with administrator privileges (right-click and Run as administrator), then the CLIENTNAME environment variable returns null.
Note: I wrote a small application that gets all the environment variables that exist in the virtual machine (RDP) using Environment.GetEnvironmentVariables(). The CLIENTNAME environment variable is shown only when the application is executed with normal privileges; the same variable is hidden when it is executed with administrator privileges.
Can anyone let us know why the CLIENTNAME environment variable is hidden with administrator privileges?
Regards,
Guru
This sounds like it might be your problem:
When connecting remotely with Remote Desktop Connection, the
environment variables CLIENTNAME and SESSIONNAME are added to each
process that is started.
If you set the Folder Option "Launch folder windows in a separate
process" and later launch an application from an additional Explorer
window, the application will not see these additional environment
variables.
To fix the issue:
If your application relies on these variables, remove the folder
option "Launch folder windows in a separate process".
MS Article: https://support.microsoft.com/en-us/kb/2509192
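Alternatively, you can sidestep the process environment block: the client name is also recorded under the per-session Volatile Environment registry key, which the following snippet reads directly: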
# Find the current process's session ID, then read CLIENTNAME from
# that session's Volatile Environment registry key.
$sessionID = (Get-Process -PID $pid).SessionID
$PC = (Get-ItemProperty -Path ("HKCU:\Volatile Environment\" + $sessionID) -Name "CLIENTNAME").CLIENTNAME

PowerShell ISE appears to hang with interactive commands

I've just downloaded PowerShell 2.0 and I'm using the ISE. In general I really like it, but I am looking for a workaround to a gotcha. There are a lot of legacy commands which are interactive; for example, xcopy will prompt the user by default if it is told to overwrite a file.
In the PowerShell ISE this appears to hang:
mkdir c:\tmp
cd c:\tmp
dir > tmp.txt
mkdir sub
xcopy .\tmp.txt sub # fine
xcopy .\tmp.txt sub # "hang" while it waits for a user response.
The second xcopy is prompting the user for permission to overwrite C:\tmp\sub\tmp.txt, but the prompt is not displayed in the ISE output window.
I can run this fine from cmd.exe but then what use is ISE? How do I know when I need which one?
In a nutshell, interactive console applications are not supported in the ISE (see the link below). As a workaround, you can "prevent" Copy-Item from overwriting a file by first checking whether the file exists using Test-Path.
http://blogs.msdn.com/powershell/archive/2009/02/04/console-application-non-support-in-the-ise.aspx
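A minimal sketch of that Test-Path guard, using the paths from the question:
# Copy only when the destination file doesn't exist yet, so no
# interactive overwrite prompt is ever triggered.
if (-not (Test-Path C:\tmp\sub\tmp.txt)) {
    Copy-Item -Path C:\tmp\tmp.txt -Destination C:\tmp\sub
}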
Why would you be using XCOPY from PowerShell ISE? Use Copy-Item instead:
Copy-Item -Path c:\tmp\tmp.txt -Destination c:\tmp\sub
It will overwrite any existing file without warning, unless the existing file is hidden, system, or read-only. If you want to overwrite those as well, you can add the -Force parameter.
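For example (the same Copy-Item call from above, with the switch added):
Copy-Item -Path c:\tmp\tmp.txt -Destination c:\tmp\sub -Force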
See the topic "Working with Files and Folders" in the PowerShell ISE help file for more info, or see all the commands at MSDN.
