Electron: how to create a delta file

I use electron-builder to build my app and have successfully built the first version, which produces three outputs: foosetup.exe, foo-0.0.1-full.nupkg and RELEASES. Now I want to implement auto-update, and I have deployed a back-end service using electron-release-server.
The auto-updater needs a feedURL from which it fetches updates, but the problem is that I don't know what "updates" means exactly. Is it foo-0.0.1-full.nupkg, foo-0.0.1-delta.nupkg, or another file?
My second problem is that I don't know how to create the delta file. The only related option I can find in electron-builder is remoteReleases, described as "a URL to your existing updates. If given, these will be downloaded to create delta files." But what exactly does that URL mean? I found an example in which "remoteReleases": "https://github.com/user/repo", and that repo has releases with several extra files uploaded for each one, such as foosetup.exe, foo-xx-full.nupkg and RELEASES. I guess electron-builder fetches ${remoteReleases}/releases/download/some-version/xxx to download the files and then diffs the two versions to create the delta file, but I can't upload RELEASES when I create a release on GitHub; it reports that this file type is not supported.
Can anyone help? There are too few docs for a beginner to follow.

For electron-release-server, please take a look at the docs.
The delta file will be created automatically if you use electron-builder, but for this to work remoteReleases must be set to a valid (and reachable) URL, and there must be at least an empty file called RELEASES at that location. So for the very first build, just create an empty file and call it RELEASES.
On every future build a RELEASES file will be created for you. Throw all the generated files onto your release server (overwriting the existing RELEASES) and it'll be fine.
Attention: for electron-release-server you do not need the RELEASES file generated by electron-builder; electron-release-server will create one by itself.
To get started with auto-updates I'd recommend setting up a dead-simple release server locally, i.e.:
Create a directory and put an empty file called RELEASES in there.
Then start a simple web server pointing at that directory (e.g. cd into/your/dir && php -S 0.0.0.0:80).
Edit your package.json: "remoteReleases": "http://localhost" (see the config sketch after these steps).
Then build your installer: npm run dist
It should build successfully and you should see some GET requests on your local server.
Take the generated files and put them into the directory you created.
Now increment your version and start another build: npm run dist
You should see some GET requests again, and this time an additional delta file should be created.
Again, put all those files into the directory (or, for electron-release-server, upload the assets .nupkg, .exe and the delta file into a new release).
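For reference, here is a minimal sketch of what that package.json could look like. This assumes electron-builder's Squirrel.Windows target; depending on your electron-builder version the option may live under "squirrelWindows" (as shown here) or elsewhere, so verify the exact placement against the docs:
{
  "name": "foo",
  "version": "0.0.2",
  "build": {
    "win": {
      "target": "squirrel"
    },
    "squirrelWindows": {
      "remoteReleases": "http://localhost"
    }
  }
}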
Hope that helps. Feel free to comment if something is unclear.
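A note on your first question (what the feedURL points at): with electron-release-server the feed URL is typically the server's update endpoint, and Squirrel then downloads whatever the RELEASES file there lists (the full or the delta .nupkg). A minimal sketch for the main process, assuming electron-release-server's /update/:platform/:version route (verify the route against its docs; older Electron versions accept a plain string for setFeedURL, newer ones expect an options object):
// main process: point Electron's built-in autoUpdater at the release server
const { app, autoUpdater } = require('electron');

app.on('ready', () => {
  const server = 'https://your-release-server.example.com'; // hypothetical host
  autoUpdater.setFeedURL(`${server}/update/win32/${app.getVersion()}`);
  autoUpdater.checkForUpdates();
});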

Check out this sample app that I have created: https://github.com/electron-delta/electron-sample-app
It uses two npm packages:
@electron-delta/builder
@electron-delta/updater
More details: https://github.com/electron-delta/electron-delta#installation
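From my reading of that project's README, wiring the updater into the main process looks roughly like the sketch below; treat the constructor options and the boot() method as assumptions and verify them against the linked repository:
// hedged sketch of @electron-delta/updater usage, based on its README
const DeltaUpdater = require('@electron-delta/updater');

async function setupUpdater() {
  const deltaUpdater = new DeltaUpdater({
    logger: console, // the README uses electron-log; console is a stand-in
  });
  try {
    await deltaUpdater.boot();
  } catch (error) {
    console.error('Delta update failed:', error);
  }
}

setupUpdater();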

Related

How is electron-updater able to find my repository?

I have a repository that uses electron-updater for auto-updates. The weird thing is, it has no code whatsoever pointing to where the release updates are stored (I store them in GitHub releases), but somehow autoUpdater.checkForUpdatesAndNotify() still works. There is a github remote origin, but I doubt it's being used by electron-updater to find the repository. I don't use any GitHub token either.
The way I release an update:
Increase the version in package.json
Run electron-builder, producing an .AppImage
Create a new release draft in my repository's GitHub releases
Upload the .AppImage file to the draft's assets and set the draft's tag
Download the previous release, then open it
Voila! The update works. But how?
It's worth mentioning that if latest-linux.yml is missing from the latest release's assets, it throws a 404 error and refuses to update, despite knowing the latest version's tag.
Here's the repository I'm talking about: https://github.com/SnekNOTSnake/fresh-update/releases
Also, is this how people normally release their Electron apps? I tried the electron-builder --publish way, but it's troublesome compared to the manual steps above.
Thanks to Caramiriel in the comment section above for the enlightenment.
electron-updater knows where to find the repository from resources/app-update.yml inside the produced .AppImage file.
The app-update.yml file is produced by electron-builder using information from git remote get-url origin (if available).
I proved it by changing the origin's URL to https://github.com/SnekNOTSnake/tofu-tracker.git and building the AppImage, and (surprisingly enough) the repo value became tofu-tracker.
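For reference, an app-update.yml for a GitHub-hosted app typically contains something along these lines (key names as generated by electron-builder's GitHub provider; the exact set varies by version, so treat this as illustrative):
provider: github
owner: SnekNOTSnake
repo: fresh-update
updaterCacheDirName: fresh-update-updater
If you would rather not depend on the git remote, you can set owner and repo explicitly in the publish section of your electron-builder configuration.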

How do I use this config.yml file to run a web scraper that someone else built?

My end goal: I want to fetch data from a retail site on an hourly schedule to see whether a specific product is back in stock.
I tried using XPath in Python to scrape the site myself, but I'm not too familiar with it, and why reinvent the wheel if someone has already built a scraper? In this case, Diggernaut has a GitHub repo.
https://github.com/Diggernaut/configs/tree/master/bananarepublic.gap.com
I'm using the above GitHub repo to try to run a pre-existing web scraper on the Banana Republic retail site. All that's included in the folder is a config.yml file. I don't even know where to start... I'm not familiar with .yml files at all, and I barely know my way around a terminal (I can do basic ls, cd and brew install; otherwise, no idea).
Help! I have Docker and git installed (not that I know how to use Docker). I have a Mac, version 10.13.6 (High Sierra).
I'm not sure why you're looking at using Docker for this, as the config.yml is designed for use on Diggernaut.com, not as part of a Docker container deployment. In fact, as far as I can see, no Docker container for Diggernaut exists.
On the main GitHub config page for Diggernaut they list the following instructions:
All configs can be used with the Diggernaut service to retrieve product information.
Create a free account at Diggernaut.
Log in to your account.
Create a project with any name and description you want.
Open your new project by clicking it and create a new digger with any name.
You will then be offered three options; choose the one where you use the meta-language.
The config editor will open; simply copy and paste the config code and click the save button.
Switch the digger's mode from Debug to Active and then run it.
Wait for completion.
Download the data.
Schedule your runs if required.

finding lib directory during common test

My question is: how should my Erlang app reliably find a binary in its priv directory, not just in production when installed properly, but also during common test?
I realised today, when I added a Travis CI configuration to an old Erlang app and pushed it to GitHub, that the process by which it works locally for me is a little more fragile than I thought. The Travis CI build failed because it, not unreasonably, checked out my repo into a directory named after the repo, which is of the form erlang-APP. Locally, my app is in a directory called APP-VSN.
The result is that a call to code:lib_dir(APP) returns a correct result during the common test run locally, but if I rename my current directory to erlang-APP instead of APP-VSN (just APP works too), my local build fails exactly as it does on Travis CI, because code:lib_dir(APP) returns {error,bad_name}. The behaviour is as though .. were added to the library path for rebar ct.
Renaming my GitHub repo from erlang-APP to APP resolves the Travis CI build failure... but knowing that the tests only pass depending on the name of the directory the repo is checked out into doesn't sit right with me.
One way could be to use a soft link (either in the repo under version control, or created when initializing the tests) and make your Erlang code path go via the link, e.g. "./APP" -> "." or "./lib/APP" -> ".." (see the sketch below).
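As a concrete sketch of the "./lib/APP" -> ".." variant (the layout is an assumption; adjust the paths to your tree):
# from the root of the checkout, whatever that directory happens to be named:
mkdir -p lib
ln -s .. lib/APP        # lib/APP now points back at the checkout root
# then put the link's ebin on the code path for the test run, e.g.:
erl -pa lib/APP/ebin
Because the code path now reaches the app through a directory literally named APP, code:lib_dir(APP) should resolve regardless of what the checkout directory itself is called.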

iOS Google Tag Manager Integration: How to add multiple containers per App environment?

I completed the integration of the latest Google Tag Manager (v5) for iOS together with Firebase (https://developers.google.com/tag-manager/ios/v5/).
The big change here is that the default container file is not binary anymore, it is plain JSON.
The integration requires a folder (not a group!) named "container" inside your app workspace; the container file has to live in this folder. This raises my issue: we have two different GTM containers, one for the testing/development app and one for production.
With a single folder it is not possible for me to add a different container file and set target references.
I cannot create an additional folder, since GTM requires the folder at root level with the exact name "container".
Does anybody have an idea how this can be solved?
Thanks,
Fahim
You should be able to configure an Xcode "run script" build phase that clears the container directory and copies the correct container into place.
Sample run script (in case somebody has the same issue):
rm -vf ${SRCROOT}/root_folder/container/*
cp "${SRCROOT}/root_folder/target/test/GTM-XXXXX.json" "${SRCROOT}/root_folder/container/"
It is important that this copy step runs first within Build Phases; otherwise some of GTM's precompile steps will not recognize the container.
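If you want the script to pick the container per build configuration automatically, a variant along these lines should work (CONFIGURATION and SRCROOT are standard Xcode build settings; the folder layout and file names here are assumptions):
# choose the GTM container based on the current build configuration
if [ "${CONFIGURATION}" = "Release" ]; then
    CONTAINER="GTM-PROD.json"   # hypothetical production container
else
    CONTAINER="GTM-TEST.json"   # hypothetical test/development container
fi
rm -vf "${SRCROOT}/root_folder/container/"*
cp "${SRCROOT}/root_folder/target/${CONTAINER}" "${SRCROOT}/root_folder/container/"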

Trigger.io continuous development

I'd like to know if there is any way to develop continuously with Trigger.io, avoiding the forge build step for every file change I want to test in my browser or simulator.
I was faced with the same problem, and I've got a working solution that uses watchr to rebuild automatically each time I change a source file. If you are running the "web" version of your app, you can change a source file and go directly to your browser to see the effect fairly quickly, depending on how long the build takes.
Prerequisites: Ruby, watchr, the Unix watch utility, and a terminal.
gem install watchr
Create a new Ruby file telling watchr which files to monitor and what to do when it sees a change. I named my file my_watch.rb: https://gist.github.com/3153167
Open two terminals. Terminal 1 will run watchr and terminal 2 will run forge build ...
In terminal 1, run watchr my_watch.rb, making sure the path to my_watch.rb is correct and that you've edited my_watch.rb for your setup, so that the path inside watch(...) reflects the files to be watched. My example watches all files in the same directory as the my_watch.rb script (and beneath it). You can place my_watch.rb in the src folder of your Trigger.io app if you want to match my example and run watchr my_watch.rb directly from the src folder. Also note that the shell command and path in the block need to be updated to reflect your environment. Again, in my example my_watch.rb is inside src/, so when a change is detected we go up one directory and call forge build.
I tend to develop actively against the 'web' version of my app, so I can just open terminal 2 in my forge project directory and run forge run web. When I am testing in simulators and on devices, yes, I have to run forge build every time I want to see a change. However, I typically don't have to wait for forge build to finish, because watchr kicked off the build as soon as I made the change, and it happens pretty quickly.
I know this is not an ideal solution, but so far developing new features in the 'web' version first and then implementing them in the mobile versions has been very smooth for me. I've never needed to kill the 'web' version after a build, but I may just be lucky. As for running a build each time you want to test the mobile versions: if you are good with your keyboard shortcuts it really isn't bad at all. Xcode makes you build and run after source changes when creating native iOS apps, so I don't think Trigger.io is unique in requiring this build step.
I hope this helps and that my answer isn't too specific to me and my setup.
The build phase makes some changes to your source to enable the forge.* APIs - therefore, trying to just use the raw files in your src directory won't work.
You may be tempted to change files directly in the development directory, but this is a pretty bad idea: we delete those files with impunity when we need to!
We have plans on our medium-term roadmap to add a file-system watcher to start builds automatically when changes have occurred.
In the meantime, I just use forge build && forge run PLATFORM which tends to only take a few seconds...
While not perfect... this works for me:
Go into development/web.
Remove the generated source copy: rm src
Link to your root src instead, i.e. ln -s ../../src src
Copy the all.js from web/forge and add it to your index.html, e.g. <script src="all.js"></script>
Start nodemon web.js.
Open it in your browser.
Note: you will need to comment out the all.js script tag for non-web builds.
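Condensed into commands, that workflow looks roughly like this (paths assume a default Trigger.io project layout and are otherwise assumptions):
cd development/web
rm src                 # remove the generated copy of your sources
ln -s ../../src src    # symlink the real source tree in its place
cp forge/all.js .      # make all.js available to index.html
nodemon web.js         # run the web server, restarting it on changes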
