My Neovim setup throws an error I don't know how to fix - Lua

Hello, I set up my Neovim from 'https://github.com/craftzdog/dotfiles-public', but an error occurs and I don't know how to solve it. Please help me!
The error is: attempt to call field 'init_lsp_saga' (a nil value)
lspsaga.rc.lua
local status, saga = pcall(require, "lspsaga")
if (not status) then return end
saga.init_lsp_saga {
  server_filetype_map = {
    typescript = 'typescript'
  }
}
local opts = { noremap = true, silent = true }
vim.keymap.set('n', '<C-j>', '<Cmd>Lspsaga diagnostic_jump_next<CR>', opts)
vim.keymap.set('n', 'K', '<Cmd>Lspsaga hover_doc<CR>', opts)
vim.keymap.set('n', 'gd', '<Cmd>Lspsaga lsp_finder<CR>', opts)
-- vim.keymap.set('i', '<C-k>', '<Cmd>Lspsaga signature_help<CR>', opts)
vim.keymap.set('i', '<C-k>', '<cmd>lua vim.lsp.buf.signature_help()<CR>', opts)
vim.keymap.set('n', 'gp', '<Cmd>Lspsaga peek_definition<CR>', opts)
vim.keymap.set('n', 'gr', '<Cmd>Lspsaga rename<CR>', opts)
How can I fix this error?

Since you're using the latest version (newer than 0.2.3), which includes the changes from this PR: https://github.com/glepnir/lspsaga.nvim/pull/586 (see the specific change in init.lua), you need to call saga.setup instead of saga.init_lsp_saga and you should be good to go!
local status, saga = pcall(require, "lspsaga")
if (not status) then return end
saga.setup {
  server_filetype_map = {
    typescript = 'typescript'
  }
}
instead of
local status, saga = pcall(require, "lspsaga")
if (not status) then return end
saga.init_lsp_saga {
  server_filetype_map = {
    typescript = 'typescript'
  }
}

It seems lspsaga has a breaking change. I use this commit and it works well.
Currently lspsaga has a lot of refactoring changes going on, so I prefer to stay on the old version until it is stable.
"lspsaga.nvim": {
"commit": "b7b4777"
},
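If you manage plugins with packer.nvim, pinning to that commit could look roughly like the sketch below (packer is an assumption on my part; adapt it to whichever plugin manager the dotfiles actually use):
-- Hypothetical packer.nvim plugin spec pinning lspsaga to the pre-refactor commit
use {
  'glepnir/lspsaga.nvim',
  commit = 'b7b4777', -- older version that still exposes init_lsp_saga
}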

Related

How to get babel to work with new threejs versions

I am having trouble updating three.js to the new ES6 class version they introduced, because of problems with Babel.
I have the following code where I am extending Object3D:
import {
  Object3D,
} from "three";

type Props = {
  myProp: string
};

export default class MyBox extends Object3D {
  constructor(props: Props = {}) {
    super();
    console.log("HERE");
    this.init(props);
    console.log("Done");
  }

  init(props) {
    // Do stuff
  }
}
Now this works just fine in almost every case, except when I try to load it in an iOS WebView. In that case I drilled down and saw that my code is transpiled to:
function e() {
  var e,
    o = arguments.length > 0 && void 0 !== arguments[0] ? arguments[0] : {};
  return e = n.call(this) || this, console.log("HERE"), e.init(o), console.log("DOne"), e
Which on the iOS WebView throws an error saying:
TypeError: Cannot call a class constructor without |new|
Which to me means that since Object3D is a class, it cannot be called the way the transpiled version wants to call it. Here is my Babel config:
{
  "presets": [
    "@babel/preset-flow",
    ["@babel/preset-env", { "targets": ">1%" }],
    "@babel/react"
  ],
  "plugins": [
    "@babel/transform-runtime",
    "@babel/plugin-syntax-flow",
    "@babel/plugin-transform-flow-strip-types",
    "@babel/plugin-proposal-class-properties"
  ]
}
I have tried playing with the targets property and other packages, but have had no luck. My understanding is that three.js is not getting transpiled, whereas the rest of my code is.
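To illustrate what I think is happening (a minimal sketch with made-up names, not my actual code): my subclass is compiled down to an ES5-style constructor function, while Object3D stays a native ES6 class, and a native class constructor cannot be invoked via .call():
// Minimal sketch: native ES6 parent + ES5-style transpiled child (hypothetical names)
class Base {}                      // stands in for the untranspiled Object3D

function Child() {
  // roughly what the ES5 output does for `super()`
  return Base.call(this) || this;  // TypeError: class constructors can't be invoked without 'new'
}

new Child();                       // throws in any engine with native classes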
Edit: I was wrong about the cause; it was actually the Meteor build system misdetecting whether this was a legacy client or not.
The answer for me ended up being:
import { setMinimumBrowserVersions } from "meteor/modern-browsers";

setMinimumBrowserVersions(
  {
    "mobileSafariUI/WKWebView": 10,
  },
  "classes"
);

Parse Server returns invalid session token for public records, and only when calling cloud functions

OK, this is something that started showing up a few weeks ago. I have a Parse Server (version 2.3.3) running on an Ubuntu machine with a bunch of cloud functions, nothing fancy, just querying a specific class. All the data in that class is public, and most of the time everything works just fine. However, from time to time calling a cloud function starts to return an invalid session error (209), whether a user is logged in or not. What is even stranger is that when that happens, no one else can run the function either, and every user starts to get the same error.
The only way I can make it work again is by restarting the server. It also only happens with cloud functions and from the iOS app; I mention this because another part of the system calls functions from PHP, and those do not seem to trigger the problem.
2017-02-21T01:26:57.676Z - Failed running cloud function partnersv2 for user undefined with:
Input: {"k":"","searchType":"","category":"comida"}
Error: {"code":141,"message":{"code":209,"message":"invalid session token"}}
2017-02-21T01:26:57.669Z - invalid session token
2017-02-21T01:26:55.738Z - ParseError { code: 209, message: 'invalid session token' }
2017-02-21T01:26:55.737Z - Error generating response. ParseError {
code: 141,
message: ParseError { code: 209, message: 'invalid session token' } }
I have no idea why this is happening. I also don't think it is related to legacy sessions, because this server and the users are new; we started developing this a few months ago and it is not an app ported from the old service.
One thing we do a lot is delete sessions from the dashboard at will, since we are testing and developing; I'm not sure if that could be a reason. But when the user is undefined it shouldn't even try to use a session, I think, or maybe a user who was actually logged in is the culprit. Setting the server to VERBOSE doesn't reveal anything else either, just the params and the call, which don't look strange to me. I'm looking for someone who can point me in the right direction, maybe explain how sessions work. Thank you for any help.
EDIT 1:
This is the cloud function that is throwing the error:
Parse.Cloud.define('partnersv2', function (req, res) {
  var searchType = req.params.searchType,
      k = req.params.k,
      category = req.params.category,
      query;

  query = new Parse.Query('Partner');
  query.addDescending('is_open');
  query.equalTo('enabled', true);
  query.equalTo('category', category);
  query.select(['name', 'phone', 'delivery_eta', 'keys', 'price_range', 'is_new', 'cover', 'recommended', 'open_time', 'min_order', 'delivery_rank', 'logo', 'comingsoon', 'category', 'is_open']);

  if (searchType !== '' && searchType !== undefined && k !== '' && k !== undefined) {
    if (searchType === 'Tag') {
      query.equalTo('tags', k);
    } else {
      query.equalTo('name', k);
    }
  }

  query.limit(1000);
  query.find({
    success: function (results) {
      if (results.length > 0) {
        res.success(results);
      } else {
        res.error('404 not found');
      }
    },
    error: function (error) {
      res.error(error);
    }
  });
});
And here is a screenshot of the ACL column (image omitted).
Use Master Key
Parse.Cloud.define('partnersv2', function (req, res) {
  // var token = req.user.getSessionToken(); // getSessionToken() returns r:(yourkey)
  var searchType = req.params.searchType,
      k = req.params.k,
      category = req.params.category,
      query;

  query = new Parse.Query('Partner');
  query.addDescending('is_open');
  query.equalTo('enabled', true);
  query.equalTo('category', category);
  query.select(['name', 'phone', 'delivery_eta', 'keys', 'price_range', 'is_new', 'cover', 'recommended', 'open_time', 'min_order', 'delivery_rank', 'logo', 'comingsoon', 'category', 'is_open']);

  if (searchType !== '' && searchType !== undefined && k !== '' && k !== undefined) {
    if (searchType === 'Tag') {
      query.equalTo('tags', k);
    } else {
      query.equalTo('name', k);
    }
  }

  query.limit(1000);
  query.find({
    useMasterKey: true, // or: sessionToken: token
    success: function (results) {
      if (results.length > 0) {
        res.success(results);
      } else {
        res.error('404 not found');
      }
    },
    error: function (error) {
      res.error(error);
    }
  });
});

Action Cable issue: messages occasionally not delivered, or delivered to the wrong channel

I'm using Action Cable for a chat application with many different channels. Everything is set up with Redis and PostgreSQL (to save messages) on Heroku, and 95% of the time it works great in development and production.
Occasionally, though, messages that are sent do not show up. The message is sent and saved in the database, but it never appears on the front end unless I refresh. Or the message shows up in another channel it was not intended for. Again, everything is properly saved in the Postgres database; it just gets wonky on the front end. Somewhere, Action Cable seems to get confused.
This issue happens so rarely for me that it is very difficult to reproduce and debug properly, but I have a bunch of users regularly reporting it, and I'm struggling to figure out how to track it down.
Here is some of my code:
channels/pods_channel.rb
class PodsChannel < ApplicationCable::Channel
  def subscribed
    stream_from "pods_channel_#{params['pod_slug']}"
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def speak(data)
    # after_create_commit callback fires to create a job to broadcast message
    pod_message = PodMessage.create! pod_slug: data['pod_slug'], message_body: data['message'], user_id: data['user_id'], message_type: data['msg_type']
  end
end
javascripts/channels/channels.js
$(document).on("ready",function(){
var pod_slug = $("#pod_slug_value").val();
// hack to prevent double posting of messages
if (!App.pods || JSON.parse(App.pods.identifier).pod_slug != pod_slug){
App.pods = App.cable.subscriptions.create(
{ channel: 'PodsChannel', pod_slug: pod_slug },
{
received: function(data) {
if ( ($(".chat-stream").length) && data.pod_slug == pod_slug ){ // hack to prevent msgs going accross pods
//#CLEAN: this is a super hackish way of preventing double messages
if(!$("#msg_" + data.msg_id).length){
$(data.message).appendTo($(".chat-stream")).find('.msg-text a').attr('target', '_blank');
$("#msg_" + data.msg_id + " .msg-text").html(Autolinker.link($("#msg_" + data.msg_id + " .msg-text").html(), { mention: "sutra" }));
$(".chat-stream").scrollTop($(".chat-stream")[0].scrollHeight);
}
}
},
speak: function(message, pod_slug, user_id, msg_type) {
return this.perform('speak',{
message: message,
pod_slug: pod_slug,
user_id: user_id,
msg_type: msg_type,
});
}
});
};
if ( $(".chat-stream").length ) {
$(".chat-stream").scrollTop($(".chat-stream")[0].scrollHeight);
};
captureMessage();
});
function captureMessage(){
$(document).on('click', "#btn-submit-msg", {}, function(){
var raw_text = $("#msg-input-text").val();
if (raw_text != ""){
submitMessage(raw_text, "pod_message");
}
});
$(document).on('keydown', '#msg-input-text', {}, function(event) {
if (event.which == 13 && !event.shiftKey && !event.ctrlKey && !event.metaKey) {
event.preventDefault();
event.stopPropagation();
if (event.target.value != "") {
submitMessage(event.target.value, "pod_message")
}
}
});
}
function submitMessage(raw_text, msg_type){
var message = raw_text;
var pod_slug = $("#pod_slug_value").val();
var user_id = $("#current_user_id_value").val();
var msg_type = msg_type;
if (App.pods.consumer.connection.disconnected == false) {
App.pods.speak(message, pod_slug, user_id, msg_type);
if (msg_type != "attachment") {
$("#msg-input-text").val("");
}
}
}
models/pod_message.rb
class PodMessage < ApplicationRecord
  after_create_commit { MessageBroadcastJob.perform_now self }
end
jobs/message_broadcast_job.rb
class MessageBroadcastJob < ApplicationJob
  queue_as :default

  def perform(pod_message)
    stream_id = "pods_channel_#{pod_message.pod_slug}"
    ActionCable.server.broadcast stream_id, message: render_message(pod_message), msg_id: pod_message.id, pod_slug: pod_message.pod_slug
  end

  private

  def render_message(pod_message)
    renderer = ApplicationController.renderer.new
    renderer.render(partial: 'pod_messages/pod_message', locals: { pod_message: pod_message })
  end
end

Angular Rails Karma testing Controller with factory (with backend api calls) dependency

I have a projectFactory:
#app.factory "projectFactory", ['$http', ($http) ->
factory = {}
factory.loadProject = (projectId) ->
$http.get( endpoint(projectId) )
(endpoint is a method that generates the backend api url)
I then have a projectCtrl that is dependent on that factory:
@app.controller 'ProjectCtrl', ['$scope', '$routeParams', 'projectFactory', ($scope, $routeParams, projectFactory) ->
  $scope.projectId = $routeParams.projectId
  $scope.loadProject = (projectId) ->
    projectFactory.loadProject(projectId)
      .success((data) -> $scope.project = data.project)
]
I then have my project_control_spec test:
'use strict'

describe "ProjectCtrl", ->
  beforeEach module 'app'

  ProjectCtrl = {}
  $scope = {}
  projectFactory = {}

  beforeEach ->
    module($provide) ->
      $provide.factory "projectFactory", projectFactory
    module inject($controller, $rootScope) ->
      $scope = $rootScope.$new()
      ProjectCtrl = $controller 'ProjectCtrl', {
        $scope: $scope,
        $routeParams: { projectId: 1 },
      }

  it "should instantiate a PC", ->
    expect(ProjectCtrl).toBeDefined()

  it "should have access to the projectId via the routeParams", ->
    expect($scope.projectId).toEqual(1)

  it "should have access to projectFactory", ->
    expect($scope.projectFactory).toBeDefined()

  it "should create $scope.project when calling loadProject", ->
    expect($scope.project).toBeUndefined()
    expect($scope.loadProject(1)).toBe(1)
    expect($scope.project).toEqual({}) # a project object
I am getting the error ReferenceError: Can't find variable: $provide, when trying to require my projectFactory
You cannot inject $provide on the line module inject($controller, $rootScope, $provide) ->; it is also not used or needed in any case.
You should also test this case with $httpBackend. Check the first example.
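For instance, a rough sketch in plain JavaScript (the /api/projects/1 URL is only an assumption about what your endpoint() helper produces) of exercising loadProject against $httpBackend:
describe('ProjectCtrl with $httpBackend', function () {
  var $scope, $httpBackend;

  beforeEach(module('app'));

  beforeEach(inject(function ($controller, $rootScope, _$httpBackend_) {
    $scope = $rootScope.$new();
    $httpBackend = _$httpBackend_;
    // the real projectFactory from the app module is injected automatically
    $controller('ProjectCtrl', { $scope: $scope, $routeParams: { projectId: 1 } });
  }));

  afterEach(function () {
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });

  it('puts the loaded project on the scope', function () {
    $httpBackend.expectGET('/api/projects/1').respond({ project: { id: 1, name: 'demo' } });
    $scope.loadProject(1);
    $httpBackend.flush();
    expect($scope.project).toEqual({ id: 1, name: 'demo' });
  });
});
That way the real factory is exercised and only the HTTP layer is faked.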
What if you try this? (It is in JavaScript, sorry, I can't write Coffee yet.)
beforeEach(inject(function ($injector) {
  $provide = $injector.get('$provide');
  // ...
}));
I'm not familiar with CoffeeScript, but this is how I would do it in plain old JS:
var projectFactory = {};

beforeEach(function () {
  module('app', function ($provide) {
    $provide.factory('projectFactory', projectFactory);
  });
});
When I take some of your code and run it through a CoffeeScript-to-JavaScript compiler, I get the following result:
describe("ProjectCtrl", function() {
var $scope, ProjectCtrl, projectFactory;
beforeEach(module('app'));
ProjectCtrl = {};
$scope = {};
projectFactory = {};
beforeEach(function() {
module($provide)(function() {
$provide.factory("projectFactory", projectFactory);
});
});
});
Basically you're trying to load a second module called $provide, when what you actually want to do is open up a config block for the first module (app) and inject $provide into the configuration block.
If you were using angular.module for your actual implementation of app, it'd look something like this:
angular.module('app', []).config(function ($provide) {
  $provide.value('..', {});
  $provide.constant('...', {});
  /** and so on **/
});
Whereas in your specs when using angular-mocks, a config block gets set like this:
module('app', function ($provide) {
  $provide.value('..', {});
  /** and so on **/
});
I hope that helps. I'm not sure how you would actually write this in CoffeeScript, but I'm sure you can figure that out. When I ran the result of compiling your CoffeeScript to JS, I got the same error: there is no variable $provide.

Can I load custom jsm modules in bootstrap.js of a restartless add-on?

I'm trying to load a custom module in a restartless add-on, using the following:
chrome/content/modules/Test.jsm:
var EXPORTED_SYMBOLS = [ 'Test' ];
let Test = {};
chrome.manifest:
content test chrome/content/
bootstrap.js:
const Cu = Components.utils;
// Tried this first, but figured perhaps chrome directives aren't loaded here yet
// let test = Cu.import( 'chrome://test/modules/Test.jsm', {} ).Test;
function install() {
  let test = Cu.import( 'chrome://test/modules/Test.jsm', {} ).Test;
}

function uninstall() {
  let test = Cu.import( 'chrome://test/modules/Test.jsm', {} ).Test;
}

function startup() {
  let test = Cu.import( 'chrome://test/modules/Test.jsm', {} ).Test;
}

function shutdown() {
  let test = Cu.import( 'chrome://test/modules/Test.jsm', {} ).Test;
}
However, I get the following types of WARN messages (this one was for shutdown(), but basically identical for all functions and in the earlier attempt in the global scope):
1409229174591 addons.xpi WARN Exception running bootstrap method
shutdown on test#extensions.codifier.nl: [Exception... "Component
returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE)
[nsIXPCComponents_Utils.import]" nsresult: "0x80070057
(NS_ERROR_ILLEGAL_VALUE)" location: "JS frame ::
resource://gre/modules/addons/XPIProvider.jsm ->
file:///test/bootstrap.js :: shutdown :: line 21" data: no] Stack
trace: shutdown()#resource://gre/modules/addons/XPIProvider.jsm ->
file:///test/bootstrap.js:21 <
XPI_callBootstrapMethod()#resource://gre/modules/addons/XPIProvider.jsm:4232
<
XPI_updateAddonDisabledState()#resource://gre/modules/addons/XPIProvider.jsm:4347
<
AddonWrapper_userDisabledSetter()#resource://gre/modules/addons/XPIProvider.jsm:6647
< uninstall()#extensions.xml:1541 < oncommand()#about:addons:1 <
Are chrome.manifest directives not yet available in bootstrap.js? Or is what I am attempting some kind of security violation, perhaps? Or am I simply doing something trivially wrong?
What I was hoping to achieve, is that I could do something like the following:
chrome/content/modules/Test.jsm:
var EXPORTED_SYMBOLS = [ 'Test' ];
let Test = {
  install: function( data, reason ) {
  },
  /* etc */
  bootstrap: function( context ) {
    context.install = this.install;
    context.uninstall = this.uninstall;
    context.startup = this.startup;
    context.shutdown = this.shutdown;
  }
}
bootstrap.js:
const Cu = Components.utils;
Cu.import( 'chrome://test/modules/Test.jsm' );
Test.bootstrap( this );
Perhaps it's a bit over the top to begin with, but I just kind of like the idea of hiding implementations in modules and/or objects and keeping bootstrap.js super clean.
If you happen to have suggestions on how to achieve this by other means: I'm all ears.
Yes you can; your path is wrong, though.
Just do this:
let test = Cu.import( 'chrome://test/content/modules/Test.jsm', {} ).Test;
Notice the /content/.
You don't have to do the .Test unless you want the lowercase test to hold it. You can just do:
Cu.import( 'chrome://test/content/modules/Test.jsm');
and use it as Test.blah, where blah is whatever is in the JSM module.
This code can go anywhere, it does not have to be in the install function.
Make sure to unload the custom JSM modules, or else it can lead to zombie compartments, which is bad for memory. Read the last paragraph here: https://developer.mozilla.org/en-US/docs/Extensions/Common_causes_of_memory_leaks_in_extensions
More reading (optional): https://developer.mozilla.org/en-US/docs/Zombie_compartments
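A minimal sketch of what that unload could look like in bootstrap.js (assuming the same chrome://test/content/... path as above):
function shutdown(data, reason) {
  // drop any references to Test, then unload the module so its compartment can be collected
  Cu.unload('chrome://test/content/modules/Test.jsm');
}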
Beyond @Noitidart's answer, you don't have to use chrome.manifest and register a content package if your only concern is how to import your module.
function install(data, reason) {
  Components.utils.import(data.resourceURI.spec + "relative/path/to/your/module.jsm");
}
