I have a color scheme for enhanced editor in SAS 9.2.
How can I share this scheme with others?
Where can the scheme file be found?
Thanks!
The editor color scheme is stored in the SAS registry.
You can export and import registry entries to share the scheme definitions.
The registry has a SASHELP part and a SASUSER (user-defined) part. I haven't tried this myself, so I'm just guessing: depending on whether you modified the original color scheme or defined your own, it is stored in either the SASHELP or the SASUSER part. Accordingly, you may or may not need the USESASHELP option of PROC REGISTRY when exporting the definition.
Here's how you do it.
/* On the source machine: export the scheme definitions */
proc registry export="C:\eeditor_scheme.sasxreg"
              startat="CORE\EDITOR\SCHEMES" usesashelp;
run;

/* On the target machine: import the scheme definitions */
proc registry import="C:\eeditor_scheme.sasxreg";
run;
In any case, modifying the registry is a low-level intervention in the system, so I recommend you make a full backup of your registry before importing a registry file:
/* Back up the SASHELP part of the registry */
proc registry usesashelp export="C:\reg_backup_sashelp.sasxreg";
run;

/* Back up the SASUSER part of the registry */
proc registry export="C:\reg_backup_sasuser.sasxreg";
run;
In my Azure DevOps (ADO) build pipeline, I have a Secure File Download step. When we branch versions, we use PowerShell to do the heavy lifting of cloning build definitions and updating settings/info in the cloned pipeline.
One issue I've run into is that the Secure File Download step doesn't accept variables, and in the UI you can only select names of files that already exist, so we've had to manually update it after every new branch we create.
I've grabbed the definition task step in PowerShell (as $step) and was hoping I could set $step.inputs.fileInputs to a variable I assign to something like cert-$newVersion; however, it is currently set to a GUID.
Does anyone know if it is possible to get the GUID of secure files in ADO via the API, or have another solution?
Yes, such an API exists. You can use the following REST call:
GET https://dev.azure.com/{OrganizationName}/{ProjectName}/_apis/distributedtask/securefiles?api-version=6.1-preview.1
The response lists each secure file's name together with its id (the GUID), so you can look up the GUID by file name.
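For example, here is a minimal PowerShell sketch of that lookup; the organization and project names, the AZDO_PAT environment variable, and the cert-$newVersion file name are placeholders, not details from the original question:

# Build a Basic auth header from a personal access token (AZDO_PAT is a placeholder env var)
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($env:AZDO_PAT)"))
$headers = @{ Authorization = "Basic $token" }

# List all secure files in the project
$url = "https://dev.azure.com/MyOrg/MyProject/_apis/distributedtask/securefiles?api-version=6.1-preview.1"
$files = Invoke-RestMethod -Uri $url -Headers $headers

# Match on the file name to retrieve its GUID
$guid = ($files.value | Where-Object { $_.name -eq "cert-$newVersion" }).id

The returned $guid can then be written into the cloned task definition in place of the old value.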
I'm wondering how I can check that a docker image exists in a private registry (in eu.gcr.io), without pulling it.
I have a service, written in golang, which needs to check for the existence of a docker image in order to validate a config file passed to it by a user.
Pulling the image using the go docker client, as shown here, works. However, I don't want to pull down images just to check they exist, as they can be large.
I've tried using Client.ImageSearch, but this just searches for public images. The cloud.google.com/go package also doesn't seem to have anything for dealing with the Container Registry.
There's possibly this and the crane tool it contains, but I'm really struggling to figure out how it works. The documentation is... not great.
I'd like the solution to be host agnostic, and the only option I have found is to simply make a http request and use the logic from this answer.
Are there any docker or other packages able to do this in a cleaner way?
Just realised the lib I've been using has an unhelpfully named client method, DistributionInspect (link), which will just return the image digest and manifest if the image is found, so the image doesn't get pulled down.
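For reference, a minimal sketch of that call with the Docker Go SDK; the image reference and the credentials are assumptions (for eu.gcr.io, an OAuth2 access token is one common pattern), and the AuthConfig import path varies between SDK versions:

package main

import (
    "context"
    "encoding/base64"
    "encoding/json"
    "fmt"

    "github.com/docker/docker/api/types/registry"
    "github.com/docker/docker/client"
)

func main() {
    ctx := context.Background()

    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        panic(err)
    }

    // Registry credentials are passed base64-encoded; "oauth2accesstoken" plus a
    // gcloud access token is a common pattern for eu.gcr.io (assumption).
    auth, _ := json.Marshal(registry.AuthConfig{
        Username: "oauth2accesstoken",
        Password: "<access token>",
    })
    encodedAuth := base64.URLEncoding.EncodeToString(auth)

    // DistributionInspect fetches only the manifest/digest; no layers are pulled.
    insp, err := cli.DistributionInspect(ctx, "eu.gcr.io/my-project/my-image:latest", encodedAuth)
    if err != nil {
        fmt.Println("image not found (or not accessible):", err)
        return
    }
    fmt.Println("image exists, digest:", insp.Descriptor.Digest)
}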
I'm using the --workspace_status_command with stable status variables similarly to the Kubernetes test-infra usage.
I would like to expose the STABLE_* variables to custom Skylark rules. How should I do that?
rules_docker supports stamping from the workspace status files. It looks like it uses ctx.info_file and ctx.version_file to access them: https://github.com/bazelbuild/rules_docker/blob/4d8ec6570a5313fb0128e2354f2bc4323685282a/container/layer_tools.bzl#L83
They aren't in the published docs, but the Bazel source code seems to show that those are the right thing: https://github.com/bazelbuild/bazel/blob/0.12.0/src/main/java/com/google/devtools/build/lib/analysis/skylark/SkylarkRuleContext.java#L987-L1011
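To illustrate, here is a minimal Starlark sketch of a custom rule that extracts the STABLE_* variables; the rule and file names are made up, and it assumes the stable keys land in ctx.info_file (stable-status.txt) while volatile keys land in ctx.version_file:

# stamp.bzl (hypothetical)
def _stable_vars_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.run_shell(
        # stable-status.txt, populated by --workspace_status_command
        inputs = [ctx.info_file],
        outputs = [out],
        command = "grep '^STABLE_' {src} > {dst}".format(
            src = ctx.info_file.path,
            dst = out.path,
        ),
    )
    return [DefaultInfo(files = depset([out]))]

stable_vars = rule(implementation = _stable_vars_impl)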
A configuration file (.yml) is being used for a REST API developed with Dropwizard (0.9.2, the latest release). Most of the credentials needed by the API, such as the database password, secret key, etc., are stored in this configuration file.
We have implemented most of the things based on the items mentioned in the Dropwizard configuration reference.
The question is simple: how secure is it to store this information in a configuration file as plain text? If it isn't secure, what is the proper way of doing this?
Indeed, it's not secure, and it's even worse if the configuration file is committed to a public repository, or for that matter to any repository (version control). One approach I follow is to maintain a local copy of the config (.yml) file (never committed to any repository) that contains all the sensitive keys, details, etc., and to maintain another example config file in which the sensitive details are masked (dummy strings instead of the actual values). The example config can be committed to your repository, since its sensitive details are masked.
Whenever you run your code, locally or elsewhere, use the local config file. This way you don't risk exposing sensitive data in a repository. There is an overhead, though, in keeping your example config in sync with your local copy whenever you make modifications.
I was just looking for a solution to a similar issue: I wanted to avoid including the keystore password in the config file, and I finally found a solution.
Store only credential keys (placeholders) in the config file, and then use a substitutor to replace those keys with their actual values at startup. This does require a secure key-value service from which to fetch the values of the keys.
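As a sketch of that idea: Dropwizard ships a SubstitutingSourceProvider that expands placeholders in the .yml before it is parsed. The example below substitutes from environment variables via EnvironmentVariableSubstitutor; a lookup against a secure key-value service could be plugged in the same way, and the AppConfiguration class name is a placeholder:

import io.dropwizard.Application;
import io.dropwizard.configuration.EnvironmentVariableSubstitutor;
import io.dropwizard.configuration.SubstitutingSourceProvider;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;

// AppConfiguration is a placeholder for your own Configuration subclass
public class App extends Application<AppConfiguration> {
    @Override
    public void initialize(Bootstrap<AppConfiguration> bootstrap) {
        // Expand ${VAR} placeholders in the .yml from environment variables before parsing
        bootstrap.setConfigurationSourceProvider(
                new SubstitutingSourceProvider(
                        bootstrap.getConfigurationSourceProvider(),
                        new EnvironmentVariableSubstitutor(false))); // strict=false: leave unknown keys untouched
    }

    @Override
    public void run(AppConfiguration config, Environment environment) {
        // ...
    }
}

With that in place, the config file can hold something like password: ${DB_PASSWORD} instead of the plain-text secret.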
Jenkins artifact URLs allow abstracting the "last successful build", so that instead of
http://myjenkins.local/job/MyJob/38/artifact/build/MyJob-v1.0.1.zip
we can say
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-v1.0.1.zip
Is it possible to abstract this further? My artifacts have their version number in their filename, which can change from build to build. Ideally I'd like to have some kind of "alias" URL that looks like this:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-latest.zip
MyJob-latest.zip would then resolve to MyJob-v1.0.1.zip.
If Jenkins itself can't do this, perhaps there's a plugin?
Never seen such a plugin, but Jenkins already has similar functionality built in.
You can use /*zip*/filename.zip in your artifact path, where filename is anything you choose. It will take all matching artifacts and download them in a zip file (you may end up with a zip inside a zip if your artifact is already a zip file).
In your case, it will be:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/*zip*/MyJob-latest.zip
This will get you the contents of /artifact/build/ returned as a zipped archive named MyJob-latest.zip. Note that if you have more than just that zip file in that directory, the other files will be returned too.
You can use wildcards in the path. A single * for a regular wildcard, a double ** for skipping any number of preceding directories.
For example, to get any file that starts with MyJob, ends with .zip, and to look for it in any artifact directory, you could use:
/lastSuccessfulBuild/artifact/**/MyJob*.zip/*zip*/MyJob-latest.zip
Edit:
You cannot do something like this without some form of a container (a zip in this case). With the container, you are telling the system:
Get any possible [undetermined count] wildcard match and place into this container, then give me the container. This is logical and possible, as there is only one single container, whether it is empty or not.
But you cannot tell the system:
Give me a link to a specific single file, but I don't know which one or how many there are. The system can't guarantee that your wildcards will match one, more than one, or none. This is simply impossible from a logic perspective.
If you need it for some script automation, you can unzip the first level zip and be still left with your desired zipped artifact.
If you need to provide this link to someone else, you need an alternative solution.
Alternative 1:
After your build is complete, execute a post-build step that takes your artifact and renames it to MyJob-latest.zip, though you lose the versioning in the filename. You can also choose to copy instead of rename, but then you end up with double the space used for storing these artifacts.
Alternative 2 (recommended):
As a post-build action, upload the artifact to a central repository. It can be Artifactory, or even plain SVN. When you upload it, it is renamed MyJob-latest.zip and the previous one is overwritten. This way you have a static link that will always serve the latest artifact from the last successful build.
There is actually a plugin to assign aliases to builds you've run, and I have found it pretty handy: the Build Alias Setter Plugin.
You can use it for instance to assign an alias in the form of your own version number for a build, instead (or rather in addition) to the internal Jenkins-assigned build number.
I found that it is usually most practical to use it in conjunction with the EnvInject plugin (or your favorite variant): you would export an env variable (e.g. MY_VAR=xyz) whose value is the target version or moniker, and then use the form ${ENV,var="MY_VAR"} in the "Token Macro alias" config that the plugin provides in your job config.
You can also use it to assign aliases in the form of "lastSuccessful" if you have such a need, which allows you to distinguish between different types of successful (or other state) builds.
Wait, there's more! You can also use the /*zip*/ trick in conjunction with the alias setter.