Same VSCode settings for the whole crew?

We want to have the same VSCode settings for the whole crew of developers. It would also be nice to have a one-line command to tear VSCode down and restart it from scratch with predefined settings and plugins, so that you do not have to worry about trying out plugins and getting back to a known state. Kind of Config-as-Code for VSCode.
I already found:
https://code.visualstudio.com/docs/editor/extension-gallery#_command-line-extension-management
https://github.com/microsoft/vscode-dev-containers
https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync&ssr=false#qna
https://github.com/gantsign/ansible-role-visual-studio-code-extensions
https://code.visualstudio.com/docs/remote/containers
https://github.com/gantsign/ansible-role-visual-studio-code
But none of these provides a good solution for me.
We are using Mac and Windows machines and develop most of the time locally (not remotely in the cloud or the like).
I imagine having a script like
.... projectname up
or
.... projectname reset
(or
.... projectname down)
to fetch or reset the settings and the latest plugins that have been configured for the project.
Do you have any ideas, or do you already use a similar solution?

After doing a lot of research and playing with Docker, Ansible and so on, it seems that, although I excluded it at first, the Settings Sync plugin from Shan Khan is the way to go. It has around 1 million installs!
The only dependency is a GitHub account to host your configs. That is what held me back at first, but it should not be much of a problem to get one for everyone on the team, or to connect it to something like a company GitHub account.
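If you still want the "projectname reset" feel on top of that, a thin wrapper around the VS Code CLI goes a long way. A minimal sketch, assuming a hypothetical extensions.txt in your project listing one extension ID per line (the file name and the shared settings path are my own placeholders; the code CLI flags are the documented ones):
#!/bin/sh
# reset.sh - wipe local extensions, then install the pinned list (sketch).
code --list-extensions | while read -r ext; do
  code --uninstall-extension "$ext"
done
while read -r ext; do
  code --install-extension "$ext"   # e.g. ms-python.python
done < extensions.txt
# Overwrite user settings with the project's copy (macOS path shown).
cp ./vscode/settings.json "$HOME/Library/Application Support/Code/User/settings.json"
Run once per machine to converge on the pinned state.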

To transfer the settings, copy the files settings.json and keybindings.json to your target machine(s). You can find those files here:
Win: ~\AppData\Roaming\Code\User
Mac: ~/Library/Application Support/Code/User/
Linux: ~/.config/Code/User
You can copy extensions from ~/.vscode/extensions on Linux/macOS or C:\Users\username\.vscode\extensions on Windows.
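On macOS, for example, that one-off copy boils down to something like this (a sketch; adjust the source paths to wherever you keep the shared files):
cp /path/to/shared/settings.json "$HOME/Library/Application Support/Code/User/"
cp /path/to/shared/keybindings.json "$HOME/Library/Application Support/Code/User/"
cp -R /path/to/shared/extensions/. "$HOME/.vscode/extensions/"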

FalcoGer's answer should explain how to copy the files in a way VS Code will pick them up. If you only need to copy the config files once, this solution would be fine.
If you need to "sync" these config files on a regular basis, I would advise creating a Git repository where all config files are stored.
When cloning the repo to local machines, you can symlink the files to the config destinations (see FalcoGer's answer). Then when you need to "sync", you only have to run git pull and restart VS Code to apply the changes.
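Sketched out, the clone-and-symlink setup could look like this on macOS (the repo URL and file layout are made up for the example):
git clone https://github.com/your-company/vscode-config.git ~/vscode-config
CODE_USER="$HOME/Library/Application Support/Code/User"
ln -sf ~/vscode-config/settings.json "$CODE_USER/settings.json"
ln -sf ~/vscode-config/keybindings.json "$CODE_USER/keybindings.json"
# Later, to "sync": pull and restart VS Code.
cd ~/vscode-config && git pull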
For your other, script-related question, you could create a CLI for this; Python would be the friendliest way to do it. You can find an example here.

Related

Configuring Packer with Neovim

I'm trying to switch over my current setup for Neovim (using Vim Plug) to Packer and I'm having trouble.
My Neovim is loaded from ~/.config/nvim/init.lua, which sources all of my plugin and other settings. They live mostly inside a ~/lua folder (the "Main Imports" section of my configuration), including my actual plug-plugins.lua file that references all of my plugins.
-- Main Imports
require("settings")
require("colors")
require("mappings")
require("functions")
require("autocommands")
require("plug-plugins")
...
Later in the same init.lua file, I'm sourcing plugin-specific settings for all of these plugins. To get my setup working currently, I'm installing everything with :PlugInstall and it works fine.
...
-- Plugin-specific settings
require("plugin-settings/fzf")
require("plugin-settings/fugitive")
require("plugin-settings/ultisnips")
require("plugin-settings/coc")
require("plugin-settings/treesitter")
require("plugin-settings/miscellaneous")
require("plugin-settings/toggle-terminal")
Installing Packer
The installation steps for Packer are pretty sparse, and merely state that you should clone the repository to somewhere in your "packpath", but I'm not really clear on what that means. When I'm inside Neovim and run :set packpath?, I get the following paths:
packpath=~/.config/nvim,/etc/xdg/nvim,~/.local/share/nvim/site,/usr/local/share/nvim/site,/usr/share/nvim/site,/usr/local/Cellar/neovim/HEAD-b74916c_1/share/nvim/runtime,/usr/local/Cellar/neovim/HEAD-b74916c_1/lib/nvim,/usr/share/nvim/site/after,/usr/local/share/nvim/site/after,~/.local/share/nvim/site/after,/etc/xdg/nvim/after,~/.config/nvim/after
This makes me think that I'm able to just clone the repository to ~/.config/nvim, which is the first path listed. I'm not really sure what to do next though, or if this is even right.
Can anyone help? What are the basic steps to getting Packer installed? (I'm on macOS 11.6.)
I recently moved from vim-plug to packer. As per the docs, when you git clone the repo, the path provided in the README for installation is ~/.local/share/nvim/site/pack/packer/start/packer.nvim. After a successful clone you can start using packer in your plugins.lua as below.
return require('packer').startup(function()
use 'wbthomason/packer.nvim'
end)
You can check the installation by running :PackerSync; this will fetch (git clone) the plugins into the packer path, which is ~/.local/share/nvim/site/pack/packer.
Hope this is what you are looking for.
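For reference, the clone command from the packer README looks like this on Linux/macOS (path as documented at the time of writing):
git clone --depth 1 https://github.com/wbthomason/packer.nvim \
  ~/.local/share/nvim/site/pack/packer/start/packer.nvim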
I had the exact same situation and it turned out to just be a naming conflict. I had named my local nvim config file lua/packer.lua and changing that fixed the issue.

How do I use this config.yml file to run a web scraper that someone else built?

My end goal: I want to fetch data from a retail site on an hourly schedule to see if a specific product is back in stock or not.
I tried using XPath in Python to scrape the site myself, but I'm not too familiar with it, and why reinvent the wheel if someone has already built a scraper? In this case, Diggernaut has a GitHub repo.
https://github.com/Diggernaut/configs/tree/master/bananarepublic.gap.com
I'm using the above GitHub repo to try and run a pre-existing web scraper on the Banana Republic retail site. All that's included in the folder is a config.yml file. I don't even know where to start to try and run it... I am not familiar with using .yml files at all and barely know my way around a terminal (I can do basic "ls" and "cd" and "brew install"; otherwise, no idea).
Help! I have Docker and Git installed (not that I know how to use Docker). I have a Mac running version 10.13.6 (High Sierra).
I'm not sure why you're looking at using Docker for this: the config.yml is designed for use on Diggernaut.com, not as part of a Docker container deployment. In fact, as far as I can see, no Docker container for Diggernaut exists.
On the main GitHub config page for Diggernaut they list the following instructions:
All configs can be used with the Diggernaut service to retrieve product information.
1. Create a free account at Diggernaut.
2. Log in to your account.
3. Create a project with any name and description you want.
4. Get into your new project by clicking it and create a new digger with any name.
5. You will then see 3 suggested options; use the one where you work with the meta-language.
6. The config editor will open; simply copy and paste the config code and click the save button.
7. Switch the digger's mode from Debug to Active, then run your digger.
8. Wait for completion.
9. Download the data.
10. Schedule your runs if required.

apache karaf clean start

I'm trying to clean-start Karaf on Windows using the clean option.
It does delete the data directory with the bundle cache, but it copies new bundles into the data directory from the system directory, not the local Maven repository. The system directory, however, has stale jars compared to the local Maven repository, which causes Karaf to start with stale bundles.
Is this a 'feature' of the clean option? Am I missing something? How can I start Karaf with the latest code from the Maven repo without dealing with the file system?
You can't, as the system directory is by default the one to use.
The clean option is meant to clear up bundles in an awkward state and is only rarely used. Sometimes, if you start and stop the Karaf container in quick succession, a bundle might end up in an incomplete state, and since bundle state is persisted, only a clean will help. Another way of cleaning is to delete the data folder.
So what you seem to be intending is to update certain bundles that are installed from the system folder. For that you need to install the newer version, because Karaf knows which versions are in the system folder; those bundles are defined in the framework feature, which is the basic feature used by Karaf itself.
If you have your own bundles in the system folder, there is no way of updating them in place, as they are regarded as boot features. In case you want to update them, you'll need to make sure those features aren't boot features anymore; after that, just install the newer versions of your bundles and uninstall the older ones. This can be done from the command shell.
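In the Karaf console, such an update could look roughly like this (the bundle name, ID, and Maven coordinates are placeholders for your own bundle):
bundle:list | grep my-bundle                        # find the ID of the stale bundle
bundle:uninstall 123                                # placeholder ID taken from the list
bundle:install -s mvn:com.example/my-bundle/1.1.0   # install and start the newer version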
One side note: it's usually best to ask these questions on the Karaf users mailing list; you'll get more people to answer your questions there.

Grails application that copies and unzippes files from remote server to another remote server using SSH

I'm new to Java/Grails/Groovy and have just begun creating simple apps.
I've been given a task to create a Grails app that:
1) shows a list of source zip files on a remote server that is available via FTP and SSH
2) shows a list of destination remote servers with predefined target folders that are available only via SSH
3) after choosing the source zip and destination server, copies the zip to the target server/folder and unzips it; a progress bar must be shown
4) performs some additional commands, such as ls or something like that
All configuration must be either in config files or in the database.
No information should be hardcoded in the app.
Please help me choose an approach, plugin, or framework.
Any help would be appreciated
I've used JSch a lot for SCP file transfer and remote exec over SSH, and it works very well. You could use it directly like you would in a Java app, by adding a dependency for the jar in BuildConfig.groovy:
compile 'com.jcraft:jsch:0.1.51'
but the most trivial Google search I could manage that included "Grails" and "SSH" tells me that there's this plugin which looks great, and this plugin which also looks great, and this blog post which looks great, and also this plugin which uses a different library but also looks great.
Those options cover the ssh and scp/sftp parts, and you can use the JDK support for Zip files, e.g. java.util.zip.ZipFile and the other related classes in that package, to unzip the files. The rest is pretty straightforward, but if you need more help ask more questions (one question per question).
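Stripped of the Grails layer, the flow the app has to automate is roughly the following, expressed as plain shell commands (the hosts and paths are placeholders); JSch's exec and sftp channels give you programmatic equivalents of each step:
# 1. Fetch the source zip from the server it lives on.
scp user@source-host:/exports/release.zip /tmp/release.zip
# 2. Push it to the chosen destination server and target folder.
scp /tmp/release.zip user@dest-host:/opt/target/
# 3. Unzip it remotely and run any additional commands (ls etc.) over SSH.
ssh user@dest-host 'cd /opt/target && unzip -o release.zip && ls -l'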

Trigger.io continuous development

I'd like to know if there is any way to develop continuously with Trigger.io and avoid the forge build step with every file change I want to test in my browser or simulator.
I was faced with the same problem and I've got a working solution that uses watchr and watch to automatically rebuild each time I make a change to a source file. If you are running a "web" version of your app you can make a change to a source file and go directly to your browser and see the effect of your changes fairly quickly depending on how long the build takes.
Prerequisites: Ruby, watchr, Unix 'watch', and a terminal.
gem install watchr.
Create a new Ruby file so watchr knows which files to monitor and what to do when it sees a change. I named my file 'my_watch.rb': https://gist.github.com/3153167
Open two terminals. Terminal 1 will run watchr and Terminal 2 will run 'forge build ...'.
In terminal 1, run 'watchr my_watch.rb', making sure the path to my_watch.rb is correct, and make sure you've edited my_watch.rb according to your setup so that the path inside watch(...) reflects the files to be watched. My example watches all files in the same directory (and beneath) as the my_watch.rb script. You can place my_watch.rb in the 'src' folder of your Trigger.io app if you want to match my example, and run watchr my_watch.rb directly from the src folder. Also note that the shell command and path in the block need to be updated to reflect your environment. Again, in my example 'my_watch.rb' is inside 'src/', so when a change is detected we go up one directory and call 'forge build'.
I tend to develop actively with the 'web' version of my app so I can just open terminal 2 to my forge project directory and 'forge run web'. When I am testing in simulators and on devices, yes I have to run forge build every time I want to see a change. However, I typically don't have to wait for forge build to finish because watchr kicked off the build as soon as I made a change and it happens pretty quickly.
I know this is not an ideal solution, but so far developing new features in the 'web' version first and then implementing them in the mobile versions has been very smooth for me. I've never needed to kill the 'web' version after a build, but I may just be lucky. As for running a build each time you want to test the mobile versions: if you are good with your keyboard shortcuts, it really isn't bad at all. Xcode makes you build and run after changes are made to source code when creating native iOS apps, so I don't think Trigger is unique in requiring this build step.
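If you would rather avoid the Ruby dependency, a crude shell loop can approximate the same watch-and-rebuild behaviour (a sketch; it assumes a POSIX shell and that forge is on your PATH):
#!/bin/sh
# Rebuild whenever anything under src/ changes (polls every 2 seconds).
last=""
while true; do
  current=$(find src -type f -exec ls -l {} + 2>/dev/null)
  if [ "$current" != "$last" ]; then
    last="$current"
    forge build
  fi
  sleep 2
done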
I hope this helps and that my answer isn't too specific to me and my setup.
The build phase makes some changes to your source to enable the forge.* APIs - therefore, trying to just use the raw files in your src directory won't work.
You may be tempted to change files directly in the development directory, but this is a pretty bad idea: we delete those files with impunity when we need to!
We have plans on our medium-term roadmap to add a file-system watcher to start builds automatically when changes have occurred.
In the meantime, I just use forge build && forge run PLATFORM which tends to only take a few seconds...
While not perfect, this works for me:
1. Go into development/web.
2. rm src
3. Link to your root src, i.e. ln -s ../../src src
4. Copy the all.js from web/forge and add it to your index.html, i.e. <script src="all.js"></script>
5. Start nodemon web.js.
6. Open in browser.
Note you will need to comment out the all.js script tag for non-web builds.
