I need to create two releases of my electron application:
A silently-installing exe where electron-updater is configured to update automatically in the background.
An msi where the installed application does not automatically update, but just alerts the user that a new version has been released.
One way for me to do this would be to copy some sort of config into the build directory before I run electron-builder for each of the two builds, and read this config in the application to identify how to handle electron-updater events.
Before I do that, I'm trying to work out whether that's the best approach. Ideally I'd have a variable that I could pass to electron-builder to toggle the electron-updater behaviour, but I don't think such a thing exists.
So the question really is:
Is it possible to use build-time variables with an electron application? If so, how?
A solution I came up with myself was to create a config.json file that I require-d into the js files I needed the data in.
Part of my build process for the different package types involved overwriting that config.json with the build-specific version in the build directory before everything was packaged.
This is not ideal, because it means I can't build all of the targets with one electron-builder command; but as it happens I couldn't build the msi on my Mac in any case, so I was already issuing separate commands.
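In case it helps, here's roughly what the main-process side of that looks like (the autoUpdate flag name is just illustrative):

    // main.js – decide updater behaviour from the build-specific config
    const { dialog } = require('electron');
    const { autoUpdater } = require('electron-updater');
    const config = require('./config.json'); // overwritten per package type before packaging

    if (config.autoUpdate) {
        // exe build: download and install updates silently in the background
        autoUpdater.checkForUpdatesAndNotify();
    } else {
        // msi build: only alert the user that a new version has been released
        autoUpdater.autoDownload = false;
        autoUpdater.on('update-available', () => {
            dialog.showMessageBox({ message: 'A new version has been released.' });
        });
        autoUpdater.checkForUpdates();
    }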
Is it possible to run bentoml build without importing the services.py file during the process?
I'm trying to put the bento build and containerize steps into our CI/CD server. Our model depends on some OS packages and some Python packages being installed. I thought I could run bentoml build to package just the model code and binaries that are present, and leave the dependency specification to the containerize step.
To my surprise, the bentoml build process tried to import the service file during packaging, and the build failed since I didn't have the dependencies installed on my CI/CD machine.
Can I prevent this import while building/packaging the model? Maybe I should skip bentoml containerize, create my bento container by hand, and just execute bentoml serve inside it.
I feel that needing to install the dependencies by hand duplicates the effort of specifying them in the bentofile.yaml and prevents the reproducibility of my environment.
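For reference, the kind of bentofile.yaml I mean looks roughly like this (the package names are placeholders):

    # bentofile.yaml – the dependency specification consumed at build/containerize time
    service: "service:svc"
    python:
      packages:
        - scikit-learn
        - pandas
    docker:
      system_packages:
        - libgomp1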
This is not currently possible. The community is working on an environment management feature, such that an environment with the necessary dependencies will be automatically created during build.
I have a Jenkins job that gets the code from version control and builds it (like a normal pipeline does). What I was doing is that, after building the project, I download the build and use FTP to transfer it to the client's server, where I unzip it and copy the whole build over. Because I copy the whole build, my application's downtime is very high. (I have to use FTP because, as a service provider, we have some limitations and can't change this policy.)
What I want is for Jenkins to know what changed while building, so that it creates a package containing only the changes, each file under the correct path. I could then download that package, copy it over, and apply it so that only what changed gets updated.
Is that possible? Is there any plugin that I can use?
This really depends on the build tool/language you are using to build your application. I don't think there is a generic Jenkins plugin.
Another idea would be to upload your package to a local Nexus server. After the next build, download it again and compare the files from the old and new builds. With this information you can create a patch package for your client's server.
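As a rough sketch of that compare step (nothing here is an existing plugin; the script name, layout, and paths are hypothetical), you could hash both build trees and copy anything new or changed into a patch folder:

    // make-patch.js – copy new/changed files from the new build into a patch
    // folder, preserving relative paths. Deleted files are not handled here.
    const fs = require('fs');
    const path = require('path');
    const crypto = require('crypto');

    const [oldDir, newDir, patchDir] = process.argv.slice(2);

    const hash = (file) =>
        crypto.createHash('sha256').update(fs.readFileSync(file)).digest('hex');

    // List every file under dir as a path relative to base
    function walk(dir, base = dir) {
        return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
            const full = path.join(dir, entry.name);
            return entry.isDirectory() ? walk(full, base) : [path.relative(base, full)];
        });
    }

    for (const rel of walk(newDir)) {
        const oldFile = path.join(oldDir, rel);
        // Include the file if it is new or its contents changed
        if (!fs.existsSync(oldFile) || hash(oldFile) !== hash(path.join(newDir, rel))) {
            const target = path.join(patchDir, rel);
            fs.mkdirSync(path.dirname(target), { recursive: true });
            fs.copyFileSync(path.join(newDir, rel), target);
        }
    }

You would run it as node make-patch.js <old-build> <new-build> <patch-dir>, and then only the patch folder needs to go over FTP.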
Good afternoon,
As I understand Jenkins, if I need to install a plugin, it fetches the plugin from the Jenkins Plugins site.
The problem I have is Jenkins is installed on a closed network, it cannot access the internet. Is there a way I can download all of the plugins, place them on a web server on my local LAN, and have Jenkins reach out and download plugins as necessary? I could download everything and install one plugin at a time, but that seems a little tedious.
You could follow some or all of the instructions for setting up an Artifactory mirror for the plugin repo.
It will need to be an http/https server, and you will find that many plugins have a multitude of dependencies.
The closed network problem:
You can take a cue from the Jenkins Docker install-plugins.sh approach ...
This script takes as input a list of plugins, and optionally versions (eg: $0 workflow-aggregator:2.6 pipeline-maven:3.6.5 job-dsl:1.70) and will download all the plugins and dependencies into a working directory.
Our approach is to create a file (under version control) and redirect that to the command line input (ie: install-plugins.sh $(< plugins.lst)).
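Concretely, reusing the example plugins from above, that looks something like:

    # plugins.lst – top-level plugins only, one per line, kept under version control
    workflow-aggregator:2.6
    pipeline-maven:3.6.5
    job-dsl:1.70

    # resolves and downloads these plugins plus all their dependencies
    ./install-plugins.sh $(< plugins.lst)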
You can download from wherever you do have internet access and then place the plugins on your network, manually copying them to your ${JENKINS_HOME}/plugins directory and restarting the instance.
The tedious list problem:
If you only specify top-level plugins (ie: what you need), every time you run the script it will resolve the latest dependencies. That makes for a short list, but the dependency set will shift whenever something gets updated at https://updates.jenkins.io. You can use a two-step approach to address this: use the short list to download the required plugins and dependencies, then store the generated explicit list for future reference or repeatability.
I am trying to deploy a sample project with TFS Release Management vNext. I tried a lot of things (for example: VS RM – vNext Template for On-Premise Target Server in Un-trusted Domain - although I am in a trusted domain) but am now totally lost. My vNext deployment tells me:
ROBOCOPY - ERROR 3 (0x00000003) Accessing Source Directory
\\rmServer\ReleaseManagementShare\15b27b05-d176-492d-b534-268af1845a36\2\ComponentName\
The system cannot find the path specified.
And this is true. The folder with the id does not exist.
Concrete questions:
Who is generating the id 15...36?
Who is creating this folder?
Why does it not exist and how can I change that? :)
In the TFS frontend build definition - what are the correct values for 'Artifact Type' and 'Artifact Name'?
Can somebody help out?
The ReleaseManagementShare folder is generally created by the installer when you set up the RM server -- or at least I recently observed that behavior in RM 2015 Update 1, I'm not sure if older versions did that. If it doesn't exist, you can create it yourself. Make sure your RM Server service account has read/write access to it. This folder typically isn't used.
The ReleaseManagementShare folder is only used if you're using a XAML build and have the build output set to go to Server instead of a file share. It may be used for the new build system as well when you choose to store your artifacts on the server, but I haven't tested that scenario. If you push your binaries to a file share, this folder is completely irrelevant. See this for more details:
https://blogs.msdn.microsoft.com/visualstudioalm/2014/11/11/whats-new-in-release-management-for-vs-2013-update-4/
Basically, there are two potential UNC shares involved:
One is for the build server. It puts binaries there, and the target servers reach out to that location to grab them.
The other is this ReleaseManagementShare. It comes into play when you don't have the share outlined in #1, and instead are storing your binaries in TFS. The target servers still need to get the binaries somehow, so the release management server will "stage" them in the ReleaseManagementShare so the target machines can grab them via the same mechanism they would use to grab them from the build artifact share.
The ID is just a random GUID.
I'm assuming you're using the new build system since you're asking about artifacts. For the Artifact Type, I know for a fact that File Share works. I'm not 100% certain that Server works, however.
The artifact name can be anything you want, but it's important to note that the component name that you define in RM server must match the artifact name, otherwise it will fail to find the binaries.
I'm using Jenkins v1.546, hosted on a Windows Server 2008 R2 SP1 machine.
I've set up a fairly simple job for building a Maven Java project. It polls the SCM with no schedule and picks up remote build triggers, requiring an authentication token. It uses Subversion and performs clean checkouts with svn update. Additionally, it has a post-build step that archives some build artifacts (i.e., the resulting WAR and WSDLs).
The issue I'm experiencing is that the builds it stores on the filesystem contain invalid characters in their filenames. This causes our automatic backup process to blow up, since it is unable to alter or remove the directories/files containing the '$'. I cannot move or delete those folders or files myself either, but if I rename one and remove the $, then things work fine. Oh, and if I try to follow one of these links with the $ in it, it doesn't resolve. None of the other jobs seem to do this - just my job, of course. Does anyone know why this may be occurring and what I can do to resolve it?
I've attached multiple screenshots that show the bad filename and my Jenkins job setup. I had to white out some company information. If I can provide any additional information to help troubleshoot this, just let me know.
Also, as an update, I did some additional research, looking through the changelogs for each released version of Jenkins since my version (latest is 1.557). I saw three possible issues in the changelogs that could be related, but it's hard for me to tell. I cannot simply upgrade our Jenkins to test out this theory, since I'll need to provide a reason for upgrading beyond a hunch.
https://issues.jenkins-ci.org/browse/JENKINS-21023
https://issues.jenkins-ci.org/browse/JENKINS-20534
https://issues.jenkins-ci.org/browse/JENKINS-21958
The $ is a perfectly valid character in a Windows directory name. You can manually make a folder with it, and delete it without any problems.
The com.company$moduleName syntax is used by Jenkins Maven-style jobs to separate the modules of your build. If you don't see this structure for other people's jobs, it is because they are either not building a Maven job, or they don't have multiple modules in a single job.
What is strange, though, is that these are symlinks (I don't see that in my environment). It is possible that the location referenced by the symlink was deleted but the link remains. In that case, you would not be able to navigate to that location through the link (which is what you are experiencing).
Is it possible that your backup software is deleting the target directories before deleting the links?
In any case, do a simple dir on the directory with the links to see what they link to, and then verify that those target locations exist. If they don't, you need to figure out who/what is deleting the links' targets.
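For example, from a command prompt (the job path here is hypothetical):

    dir C:\Jenkins\jobs\MyJob\builds

    rem each symlink is listed with its target in brackets; check the target exists, eg:
    dir C:\Jenkins\jobs\MyJob\builds\lastSuccessfulBuild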
Edit:
This seems to be more related to the issue that you are facing. Unfortunately, it's marked as "unresolved"
https://issues.jenkins-ci.org/browse/JENKINS-20725
The issue stems from the fact that the symlinks reference their targets with / instead of \.
My Maven plugin (not Maven version) is 2.6. See if upgrading your Maven plugin in Jenkins will help you. Also, I am running Maven 3.2.2 from the automatic installers. Try with that, as I don't see symlinks in my modules.