I am trying to get Highcharts working with Node/Puppeteer. Edit: I simply cannot get Highcharts recognized by my Node script.
var Highcharts = require('highcharts');
var fs = require('fs')
var puppeteer = require('puppeteer')
console.log('Highcharts.version=' + Highcharts.version)
console.log('fs W_OK=' + fs.W_OK)
console.log('Puppeteer preferred revision=' + puppeteer._launcher._preferredRevision)
Output:
Highcharts.version=undefined
fs W_OK=2
Puppeteer preferred revision=624492
I had installed Highcharts via:
npm install -g highcharts
My issue with this simple script was not understanding how npm install works: installing with -g puts the package into the global node_modules, which require() does not search from project code, so the script could not resolve 'highcharts'. After installing it locally in the project folder, my script found the module and could read the version, although I had to use Highcharts().version instead of Highcharts.version.
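For reference, here is a minimal sketch of the working setup, assuming Highcharts has been installed locally into the project (npm install highcharts, without -g):
// Installed into the project's own node_modules so require() can resolve it:
//   npm install highcharts
var Highcharts = require('highcharts');
// In a plain Node script there is no browser window, so the export behaves as
// a function; calling it gives access to the version, as noted above.
console.log('Highcharts.version=' + Highcharts().version);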
Initially I had a problem with French dates in the antd calendar. I use Vite, so I installed antd-dayjs-vite-plugin to switch from Moment.js to Day.js. It worked well, but this morning the Vite build process started failing. I tried updating antd-dayjs-vite-plugin (it was on 1.1.4), and now I get the same error when I launch yarn dev, as you can see:
$ yarn dev
yarn run v1.22.15
$ vite
failed to load config from vite.config.ts
error when starting dev server:
TypeError: (0 , import_antd_dayjs_vite_plugin.default) is not a function [...]
Here is the code in vite.config.ts:
import reactRefresh from '@vitejs/plugin-react-refresh';
import antdDayjs from 'antd-dayjs-vite-plugin';
import { defineConfig } from 'vite';

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [reactRefresh(), antdDayjs()],
  server: {
    host: process.env.HOST || '127.0.0.1',
  },
  resolve: {
    alias: [{ find: '#', replacement: '/src' }],
  },
  define: {
    __APP_VERSION__: JSON.stringify(process.env.npm_package_version),
  },
  build: {
    commonjsOptions: {
      transformMixedEsModules: true,
    },
  },
});
The problem appears with both antd-dayjs-vite-plugin 1.1.4 and 1.2.2. I also tried updating Vite to 3.1 (it was on 2.5).
I don't understand it: the code seems to be exactly the same as the usage shown in the package's README.
Thanks in advance for your help. 🙏🏻
It seems that a default export is expected by Vite (I tried replacing the import statement with import { antdDayjs } from 'antd-dayjs-vite-plugin'; without success).
I was able to create a workaround using patch-package with the steps below (a sketch of the result follows the list):
1. Modify node_modules/antd-dayjs-vite-plugin/dist-node/index.js: at the very end of that file, add exports.default = antdDayjs;
2. Create a patch for antd-dayjs-vite-plugin.
3. Ensure you have the postinstall script (refer to the patch-package docs).
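Roughly, the patched file and the patch-package bookkeeping look like this (a sketch; the exact patch file name depends on the plugin version you have installed):
// node_modules/antd-dayjs-vite-plugin/dist-node/index.js
// ...the plugin's existing CommonJS output stays untouched...

// Added at the very end, so that Vite's ESM interop finds a default export
// when vite.config.ts does `import antdDayjs from 'antd-dayjs-vite-plugin'`:
exports.default = antdDayjs;

// Then record the change and make sure it is re-applied after every install:
//   npx patch-package antd-dayjs-vite-plugin
//   (this writes e.g. patches/antd-dayjs-vite-plugin+1.2.2.patch)
// and in package.json:
//   "scripts": { "postinstall": "patch-package" }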
I'm currently using Neovim 0.6.0, together with the following Neovim config: https://github.com/rafi/vim-config.
After installation, I created a Python program to test it and ran into the following problem:
treesitter/highlighter: Error executing lua: ...im/0.6.0/share/nvim/runtime/lua/vim/treesitter/query.lua:161: query: invalid node type at position 5622
I had a similar issue. I ran :TSUpdate in Neovim to update the Treesitter parsers, and the error message disappeared after relaunching.
I just solved it using :TSInstall vim.
Actually, run :checkhealth; the errors reported there will help you figure out what is missing.
Remember that in Treesitter, the 'c', 'help', 'lua', and 'vim' parsers are part of Neovim's core functionality. That means that if you are seeing this error, a surefire way to make sure they are all installed is to run:
:TSInstall c help lua vim
For me, adding cmake to ensure_installed in the Treesitter config section of .config/nvim/init.lua helped:
-- [[ Configure Treesitter ]]
-- See `:help nvim-treesitter`
require('nvim-treesitter.configs').setup {
  -- Add languages to be installed here that you want installed for treesitter
  ensure_installed = { 'c', 'cpp', 'go', 'lua', 'python', 'rust', 'typescript', 'help', 'cmake' },
  -- (the rest of the Treesitter setup is unchanged)
}
Configuration was based on https://github.com/nvim-lua/kickstart.nvim
I would like to use Puppeteer inside worker threads in my Electron app. When building the bundle, I use extraFiles to copy the worker code to Resources/bin. Unfortunately, it throws the exception "Cannot find module 'puppeteer'" at runtime. What I have already tried:
Importing Puppeteer normally:
const puppeteer = require('puppeteer');
Importing Puppeteer from app.asar.unpacked:
const path = require('path');
const puppeteerPath = path.resolve(
  process.resourcesPath,
  'app.asar.unpacked/node_modules/puppeteer/index.js'
);
const puppeteer = require(puppeteerPath);
Importing Puppeteer from app.asar:
const puppeteerPath = path.resolve(
  process.resourcesPath,
  'app.asar/node_modules/puppeteer/index.js'
);
const puppeteer = require(puppeteerPath);
Here is a repo that reproduces my case: https://github.com/alfredalfie123/test_worker
Could you please help me?
You need to copy all Puppeteer-related dependencies to the unpacked part of the asar archive (asarUnpack):
https://github.com/electron/electron/issues/18540#issuecomment-660679649
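For example, with electron-builder (which is where the question's extraFiles option comes from), this can be configured in package.json roughly as follows; the glob pattern is an assumption and you may also need to list Puppeteer's transitive dependencies:
"build": {
  "asarUnpack": [
    "node_modules/puppeteer/**/*"
  ]
}
The worker can then require Puppeteer from the app.asar.unpacked path, as in the second snippet of the question.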
I am trying to get my Faster R-CNN model into a Container Instance on ACI. For that I need my Docker image to have Python version 3.5.*. I specify that in my conda YAML file, but every time I spin up an instance and docker run -it *** /bin/bash into it, I see that it only has Python 3.6.7.
https://user-images.githubusercontent.com/21140767/50680590-82b20b80-1008-11e9-9bfe-4a0e71084ce0.png
How can I get my Docker image to have Python version 3.5.*? I already tried conda installing Python 3.5.2, but that didn't work: the resulting image still only had 3.6.7. (dfimage lets you see the Dockerfile from which the image was created: https://hub.docker.com/r/chenzj/dfimage/.)
https://user-images.githubusercontent.com/21140767/50680673-d6245980-1008-11e9-9d48-71a7c150d925.png
My yaml:
name: project_environment
dependencies:
  - python=3.5.2
  - pip:
      - matplotlib
      - opencv-python==3.4.3.18
      - azureml-core==1.0.6
      - numpy
      - cntk
      - cython
channels:
  - anaconda
Notebook cell:
from azureml.core.conda_dependencies import CondaDependencies

svmandss = CondaDependencies.create(python_version="3.5.2", pip_packages=[
    "matplotlib",
    "opencv-python==3.4.3.18",
    "azureml-core",
    "numpy",
    "cntk",
    "cython"])
svmandss.add_channel('anaconda')

with open("fasterrcnn.yml", "w") as f:
    f.write(svmandss.serialize_to_string())
Another notebook cell, with the ContainerImage specification:
image_config = ContainerImage.image_configuration(
    execution_script="score_fasterrcnn.py",
    runtime="python",
    conda_file="./fasterrcnn.yml",
    dependencies=listdir("utils"),
    docker_file="./Dockerfile")

service = Webservice.deploy_from_model(
    workspace=ws,
    name='faster-rcnn',
    deployment_config=aciconfig,
    models=[Model(workspace=ws, name='Faster-RCNN')],
    image_config=image_config)

service.wait_for_deployment(show_output=True)
Note
For better readability see my GitHub issue: (https://github.com/Azure/MachineLearningNotebooks/issues/163).
Currently, when deploying the web service, the Python version is fixed to what's in Azure ML's base image. We're investigating removing this limitation in the future.
Since this is one of the top Google answers when searching for "azureml python version" I'm posting the answer here. The documentation is not very clear when it comes to this, but the following will work:
from azureml.core import Workspace
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
ws = Workspace.from_config()
# This is the important part
conda_dep = CondaDependencies(conda_dependencies_file_path="pipeline/environment.yml")
aml_run_config = RunConfiguration(conda_dependencies=conda_dep)
# Define compute target - must be preconfigured in the workspace
compute_target = ws.compute_targets['my-azureml-target']
aml_run_config.target = compute_target
from azureml.pipeline.steps import PythonScriptStep
script_source_dir = "./pipeline"
step_1_script = "test.py"
step_1 = PythonScriptStep(
    script_name=step_1_script,
    source_directory=script_source_dir,
    compute_target=compute_target,
    runconfig=aml_run_config,
    allow_reuse=True
)
from azureml.pipeline.core import Pipeline
# Build the pipeline
pipeline1 = Pipeline(workspace=ws, steps=[step_1])
from azureml.core import Experiment
# Submit the pipeline to be run
pipeline_run1 = Experiment(ws, 'Test-pipeline').submit(pipeline1)
pipeline_run1.wait_for_completion(show_output=True)
This assumes the following directory structure:
root/
    create_pipeline.py
    pipeline/
        test.py
        environment.yml
where create_pipeline.py is the file above, test.py is the script you would like to run, and environment.yml is the conda environment file, including the Python version.
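For illustration, pipeline/environment.yml could look something like this (the package list is only an example; the important part is that the Python version is pinned in the environment file):
name: pipeline-env
dependencies:
  - python=3.6   # pin whichever version your code needs
  - pip:
      - azureml-core
      - numpy
channels:
  - anaconda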
I was able to change the Python version by registering the environment in Azure ML Workspace:
from azureml.core.environment import Environment, Workspace
environment = Environment.from_conda_specification(name='myenv', file_path='environment.yml')
environment.python.user_managed_dependencies = False
workspace = Workspace.from_config()
environment = environment.register(workspace=workspace)
env_build = environment.build(workspace=workspace)
Then, configure the endpoint for publishing as follows:
from azureml.core.model import InferenceConfig
environment = Environment.get(workspace=workspace, name='myenv')
inference_config = InferenceConfig(
    entry_script='inference.py',
    source_directory='.',
    environment=environment
)
This is using Azure ML SDK 1.29.0. Perhaps this has already been fixed and the original method works as well, but I didn't test that.
EDIT:
This is no longer an issue for me; I found another way to get my code to work with Python 3.6.7.
In my view it is still an issue, though: if I ever do need Python 3.5, there is currently no solution.
You can still post an answer if you would like.
I'm trying to create an easy workflow for package development and was hoping someone could point me in the right direction. In short, once a CSS file has been updated, I want to be able to run a command (php artisan vendor:publish --force) to automatically publish the files.
Can this be done with Elixir, and if so, could anyone point me in the right direction?
Regards
This was incredibly useful to get me on the right track, but I found a slightly simpler solution that I thought I would share:
var elixir = require('laravel-elixir');
var gulp = require("gulp");
var shell = require("gulp-shell");

elixir(function(mix) {
    mix.task('publish_assets', ['resources/assets/**/*.scss', 'resources/assets/**/*.js']);
});

gulp.task('publish_assets', shell.task([
    "php ../../../../artisan vendor:publish --tag=public --force"
]));
It watches all of the JS and Sass files for changes and runs the publish_assets task. Mine is set to only publish the "public"-tagged files. A quick note as well: you will need to install the gulp-shell utility (https://www.npmjs.com/package/gulp-shell) for either of these solutions.
If anyone is looking, this works nicely:
var gulp = require("gulp");
var shell = require("gulp-shell");
var elixir = require("laravel-elixir");
elixir.extend("publish", function() {
var baseDir = this.assetsDir;
gulp.task("publish_assets", function() {
gulp.src("").pipe( shell( [
"php ../../../artisan vendor:publish --force",
"cd ../../../ ; gulp"
]));
});
this.registerWatcher("publish_assets", [
baseDir + '**/*.scss',
baseDir + '**/*.js'
]);
return this.queueTask("publish_assets");
});