I've got the following task:
Create a script (cmdlet) that outputs network adapter information and properties. It must include a -Name switch for immediately outputting info on a particular network adapter, and a -File switch for saving the info to a file. Also add usage instructions that are shown when it is called with the -? switch.
Thanks in advance.
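A minimal sketch of such a script, assuming Get-NetAdapter is available (Windows 8 / Server 2012 or later); the function name is mine, and the comment-based help is what makes the -? switch work:

function Get-AdapterInfo {
    <#
    .SYNOPSIS
    Outputs network adapter information and properties.
    Call Get-AdapterInfo -? to display this help.
    #>
    [CmdletBinding()]
    param(
        # Show only the adapter with this name; omit to list all adapters.
        [string]$Name,
        # Save the output to this file instead of writing it to the console.
        [string]$File
    )
    $adapters = if ($Name) { Get-NetAdapter -Name $Name } else { Get-NetAdapter }
    $info = $adapters | Format-List -Property * | Out-String
    if ($File) { $info | Out-File -FilePath $File } else { $info }
}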
PS D:\> cd gs:\
cd : Cannot find drive. A drive with the name 'gs' does not exist.
PS D:\> Get-GcsBucket
PS D:\> cd gs:\mybucket
Why can I not change drive to gs:\ before Get-GcsBucket?
PS gs:\mybucket> mkdir NewFolder
PS gs:\mybucket> cd .\NewFolder
cd : Cannot find path 'gs:\mybucket\NewFolder' because it does not exist.
PS gs:\mybucket> ls
Name       Size  ContentType  TimeCreated  Updated
----       ----  -----------  -----------  -------
NewFolder
Why can I not change directory?
Why can I not change drive to gs:\ before Get-GcsBucket?
Unlike Cmdlets and Functions, Providers and the drives they add cannot be discovered until the module they are part of is imported into the current PowerShell session. This can be done explicitly with Import-Module, or implicitly by calling a Cmdlet or Function that is discoverable, such as Get-GcsBucket.
Why are Cmdlets discoverable but drives aren't? Because the module manifest lists the Cmdlets, but does not have an entry for drives, and also because the Cmdlet names are stored in assembly metadata (as attributes) that can be read without loading the assembly, while the drive comes directly from code that can only be run after loading the assembly.
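In practice, either route makes the drive usable (a minimal illustration):

Import-Module GoogleCloud    # explicit: loads the assembly, which registers the gs:\ provider
Get-GcsBucket | Out-Null     # implicit: the first cmdlet call forces the module to load
cd gs:\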
Why can I not change directory?
It looks like a bug, but I have not been able to reproduce it. If you can provide more information, I encourage you to submit an issue on the Google Cloud PowerShell issues page.
I'm going to guess this is a bug in the Cloud Tools for PowerShell module.
When you launch PowerShell, it loads a manifest file (GoogleCloud.psd1) which provides a declaration for every cmdlet that the module contains. This allows PowerShell to delay loading the actual cmdlets assembly until it is actually needed, thereby speeding up startup time considerably.
The actual list of which cmdlets are found in the module is determined as part of the build and release process. Some info here.
Anyway, that manifest does not declare the existence of the Cloud Storage PowerShell provider (the cd gs:\ bits). So PowerShell doesn't know that it exists until after it loads the GoogleCloud PowerShell module, which happens after you invoke Get-GcsBucket (or, I assume, any cmdlet in the module) at least once.
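Illustratively (this excerpt is assumed, not the actual file contents), the manifest declares the cmdlets, but there is no comparable manifest key for drive providers:

# GoogleCloud.psd1 (illustrative excerpt)
CmdletsToExport = @('Get-GcsBucket', 'New-GcsBucket')  # declared cmdlets are discoverable up front
# No manifest entry exists for the PSDrive provider behind gs:\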
I have a Yocto project which builds fine and runs as expected (on my BBB). The image is configured to autostart an application and print its output to the console (serial via FTDI). What I am trying to do, in general, is to disable the autostarted application (already done) and run an interactive shell instead.
My question now is, just in general: what do I need to do to enable a serial console prompt for my Yocto image? Do I enable additional features in local.conf, or even MACHINE_FEATURES, or simply add a shell to IMAGE_INSTALL? I hope someone can tell me some details about that.
Appendix:
Here is my uEnv.txt:
bootpart=0:1
bootfile=zImage
console=ttyO0,115200n8
fdtaddr=0x88000000
fdtfile=zImage-${DTB_FILE}
loadaddr=0x82000000
mmcroot=/dev/mmcblk0p2 ro
mmcrootfstype=ext4 rootwait
optargs=consoleblank=0
mmcargs=setenv bootargs console=${console} ${optargs} root=${mmcroot} rootfstype=${mmcrootfstype}
loadfdt=run findfdtfile; load mmc ${bootpart} ${fdtaddr} ${bootdir}/${fdtfile}
loadimage=load mmc ${bootpart} ${loadaddr} ${bootdir}/${bootfile}
uenvcmd=if run loadfdt; then echo Loaded ${fdtfile}; if run loadimage; then run mmcargs; bootz ${loadaddr} - ${fdtaddr}; fi; fi;
From what I see here, it is already enabled.
I don't have the opportunity to build an image right now, but check that the UART is enabled in uEnv.txt. This is not a Yocto-specific problem, but a BBB one.
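If the bootloader side is indeed fine, the usual Yocto-side knob is SERIAL_CONSOLES, which makes the image spawn a login getty on the serial port; for the BBB's ttyO0 from the uEnv.txt above, that would be something like:

# local.conf or the machine .conf (format is "baud;device")
SERIAL_CONSOLES = "115200;ttyO0"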
I am new to Coverity Analysis. I need to add a stream in Coverity; how can I achieve this?
Below is my script:
-solution:'nameofsolution.sln' -targets:"Rebuild" -configuration:"Release" -platform:"x64" -coverityHost:"%system.CoverityHost%" -coverityPort:%system.CoverityPort% -coverityUser:"%system.CoverityUser%" -coverityPassword:"%system.CoverityPassword%" -coverityStream:"TEST" -coverityOutputDir:"%env.CoverityWorkFolder%" -triggerType:'%teamcity.build.triggeredBy%' %ForceCoverity%
Now, where and how can I add the stream "TEST" in Coverity? Thanks for your help!
cov-manage-im --host "<YOUR_HOST>" --user "<USER_NAME>" --password "<PASSWORD>" --mode streams --add --set "name:<STREAM_NAME>"
For future reference, pass the "--help" flag to cov-manage-im to see if it has what you need.
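For instance, to list existing streams and confirm the new one was created (assuming the --show action is available in your cov-manage-im version):

cov-manage-im --host "<YOUR_HOST>" --user "<USER_NAME>" --password "<PASSWORD>" --mode streams --show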
I am not sure whether your script is right, what your workflow is, or what kind of script it is.
To create a new stream, just navigate your browser to Coverity Connect and create one. Make sure you actually have permission to add streams to your project.
In Coverity Connect there is a Configuration option in the top-right corner. In it you can find the projects and streams that have already been created. Add a stream with + stream and give it a name; then it will be added.
You can use that stream in your script.
Example:
cov-commit-defects --dir /Users/admin/Coverity_intermediate_file_directory --host 192.178.196.125 --port 8080 --user admin --password admin123 --stream stream_name
Replace stream_name with the name of the stream you created in Coverity Connect.
I have mkdir commands in a batch file, but only admins have permission to create directories, so how do I pass credentials from a Jenkins job to the batch file?
mkdir \\%%S.domain.com\c$\Test
Select "Use secret text(s) or file(s)" and then add a binding.
Yes Daniel, it might be done using such utility tools, but my organization doesn't allow me to use third-party tools without approval. So we have configured the server with WinRM, which allows connecting to it remotely using credentials.
Just to add to @Marc's answer: use the secret text bindings as suggested to store and pass the username and password as environment variables.
Set the username variable to USERNAME and the password variable to PASSWORD, then in your batch file use the net use command like so:
net use "\\server\share" %PASSWORD% /user:%USERNAME%
REM whatever you need to do on that share, e.g. xcopy, mkdir
net use "\\server\share" /delete
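Putting that together with the mkdir from the question, a sketch of the whole batch step might look like this (servers.txt and the for loop are assumptions, since the question only shows the %%S variable):

REM servers.txt is an assumed list of host names, one per line
for /f %%S in (servers.txt) do (
    net use "\\%%S.domain.com\c$" %PASSWORD% /user:%USERNAME%
    mkdir \\%%S.domain.com\c$\Test
    net use "\\%%S.domain.com\c$" /delete
)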
I've just downloaded PowerShell 2.0 and I'm using the ISE. In general I really like it, but I am looking for a workaround for a gotcha. There are a lot of legacy commands which are interactive; for example, xcopy will by default prompt the user before overwriting a file.
In the PowerShell ISE, this appears to hang:
mkdir c:\tmp
cd c:\tmp
dir > tmp.txt
mkdir sub
xcopy .\tmp.txt sub # fine
xcopy .\tmp.txt sub # "hang" while it waits for a user response.
The second xcopy is prompting the user for permission to overwrite C:\tmp\sub\tmp.txt, but the prompt is not displayed in the ISE output window.
I can run this fine from cmd.exe but then what use is ISE? How do I know when I need which one?
In a nutshell, interactive console applications are not supported in the ISE (see the link below). As a workaround, you can prevent Copy-Item from overwriting a file by first checking whether the file exists using Test-Path.
http://blogs.msdn.com/powershell/archive/2009/02/04/console-application-non-support-in-the-ise.aspx
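A minimal sketch of that workaround, using the paths from the question:

# Only copy when the destination file does not exist yet, so no overwrite prompt is triggered
if (-not (Test-Path C:\tmp\sub\tmp.txt)) {
    Copy-Item -Path C:\tmp\tmp.txt -Destination C:\tmp\sub
}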
Why would you be using XCOPY from PowerShell ISE? Use Copy-Item instead:
Copy-Item -Path c:\tmp\tmp.txt -Destination c:\tmp\sub
It will overwrite any existing file without warning, unless the existing file is hidden, system, or read-only. If you want to overwrite those as well, you can add the -Force parameter.
See the topic "Working with Files and Folders" in the PowerShell ISE help file for more info, or see all the commands at MSDN.