Bower - Pinning to Major Versions Asks for Resolution

Why is Bower asking me for resolutions when everything is pinned with ^ to major versions? I imagine I'm doing something wrong in my releases on GitHub, because it only asks for resolutions for my repos and not for Polymer, which is required at two different minor versions.
I'm running bower install on this bower.json:
{
  "name": "test1",
  "homepage": "https://github.com/tylergraf/test1",
  "version": "0.0.2",
  "dependencies": {
    "test2": "git+https://github.com/tylergraf/test2#^0.0.3",
    "test3": "git+https://github.com/tylergraf/test3#^0.0.5"
  }
}
test2 bower.json looks like this:
{
  "name": "test2",
  "homepage": "https://github.com/tylergraf/test2",
  "version": "0.0.3",
  "dependencies": {
    "test3": "git+https://github.com/tylergraf/test3#^0.0.3",
    "polymer": "git+https://github.com/polymer/polymer#^1.7.0"
  }
}
test3 bower.json looks like this:
{
  "name": "test3",
  "homepage": "https://github.com/tylergraf/test3",
  "version": "0.0.5",
  "dependencies": {
    "test2": "git+https://github.com/tylergraf/test2#^0.0.2",
    "polymer": "git+https://github.com/polymer/polymer#^1.4.0"
  }
}
Here's my output:
Unable to find a suitable version for test2, please choose one by typing one of the numbers below:
1) test2#^0.0.2 which resolved to 0.0.2 and is required by test3#0.0.3, test3#0.0.5
2) test2#^0.0.3 which resolved to 0.0.3 and is required by test1
Prefix the choice with ! to persist it to bower.json
? Answer 2
Unable to find a suitable version for test3, please choose one by typing one of the numbers below:
1) test3#^0.0.3 which resolved to 0.0.3 and is required by test2#0.0.3
2) test3#^0.0.5 which resolved to 0.0.5 and is required by test1
Prefix the choice with ! to persist it to bower.json
? Answer 2

I dug into Bower's code and got down to semver. Major version zero is a special case in the semver spec: with a caret range, anything less than 0.1.0 will always resolve exactly to itself.
^0.0.1 always points to 0.0.1 and nothing else.
Here's an excerpt from a Node.js article:

CARET: MAJOR ZERO
Given Node.js community norms around the liberal usage of major version 0, the second significant difference between tilde and caret has been relatively controversial: the way it deals with versions below 1.0.0. While tilde has the same behaviour below 1.0.0 as it does above, caret treats a major version of 0 as a special case. A caret expands to two different ranges depending on whether you also have a minor version of 0 or not, as we'll see below:

MAJOR AND MINOR ZERO: ^0.0.Z → 0.0.Z
Using the caret for versions less than 0.1.0 offers no flexibility at all. Only the exact version specified will be valid. For example, ^0.0.3 will permit only exactly version 0.0.3.
The special case for 0.x with ^ is very counter-intuitive and rage-inducing.
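For anyone who wants to see this behaviour directly, here is a minimal sketch using the node-semver package (the same range logic Bower builds on); it assumes npm install semver and is written as a standalone script:

import * as semver from 'semver';

// Below 0.1.0 a caret range matches only the exact version it names,
// which is why test1's ^0.0.3 and test3's ^0.0.2 cannot be reconciled.
console.log(semver.satisfies('0.0.2', '^0.0.2')); // true  - exact match
console.log(semver.satisfies('0.0.3', '^0.0.2')); // false - no flexibility below 0.1.0

// From 0.1.0 upward the caret behaves as usual, which is why the two
// Polymer pins (^1.4.0 and ^1.7.0) can both be satisfied by a 1.7.x release.
console.log(semver.satisfies('1.7.1', '^1.4.0')); // true
console.log(semver.satisfies('1.7.1', '^1.7.0')); // true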

Related

Electron builder fails with: no 'object' file generated

I have a problem with electron-builder since upgrading to Electron 10.1.2. My build now fails at rebuild for keyboard-layout. The rebuild only fails for Windows, not Mac. I don't know where to open this issue so I am asking here :).
My setup:
angular: 9.0.7
electron: 10.1.2
electron-builder: 22.8.x
The problem started when I updated electron from 9.0.0 to 10.1.2. Nothing else changed.
The problem:
When I call electron-builder with the command electron-builder.cmd --x64 -p always -w, a rebuild of keyboard-layout is run as one of the steps:
> keyboard-layout#2.0.16 install C:\Users\<me>\<dir1>\<dir2>\dist\node_modules\keyboard-layout
> node-gyp rebuild
That fails with:
...
win_delay_load_hook.cc
c:\users\<me>\.electron-gyp\10.1.2\include\node\v8.h(5378): error C2220: warning treated as error - no 'object' file generated (compiling source file ..\src\keyboard-layout-manager-windows.cc) [C:\Users\<me>\<dir1>\<dir2>\dist\node_modules\keyboard-layout\build\keyboard-layout-manager.vcxproj]
c:\users\<me>\.electron-gyp\10.1.2\include\node\v8.h(5378): warning C4309: 'static_cast': truncation of constant value (compiling source file ..\src\keyboard-layout-manager-windows.cc) [C:\Users\<me>\<dir1>\<dir2>\dist\node_modules\keyboard-layout\build\keyboard-layout-manager.vcxproj]
c:\users\<me>\.electron-gyp\10.1.2\include\node\v8.h(5378): error C2220: warning treated as error - no 'object' file generated (compiling source file ..\src\keyboard-layout-manager.cc) [C:\Users\<me>\<dir1>\<dir2>\dist\node_modules\keyboard-layout\build\keyboard-layout-manager.vcxproj]
c:\users\<me>\.electron-gyp\10.1.2\include\node\v8.h(5378): warning C4309: 'static_cast': truncation of constant value (compiling source file ..\src\keyboard-layout-manager.cc) [C:\Users\<me>\<dir1>\<dir2>\dist\node_modules\keyboard-layout\build\keyboard-layout-manager.vcxproj]
Done Building Project "C:\Users\<me>\<dir1>\<dir2>\dist\node_modules\keyboard-layout\build\keyboard-layout-manager.vcxproj" (default targets) -- FAILED.
Done Building Project "C:\Users\<me>\<dir1>\<dir2>\dist\node_modules\keyboard-layout\build\binding.sln" (default targets) -- FAILED.
Build FAILED.
...
What I have tried that DID NOT help:
Change binding.gyp in node_modules/keyboard-layout to the following (changes marked with <---):
['OS=="win"', {
  "sources": [
    "src/keyboard-layout-manager-windows.cc",
  ],
  'msvs_settings': {
    'VCCLCompilerTool': {
      'ExceptionHandling': 1, # /EHsc
      'WarnAsError': 'false', # <--- I changed this from true to false
    },
  },
  'msvs_disabled_warnings': [
    4018, # signed/unsigned mismatch
    2220, # <--- I added this
    4244, # conversion from 'type1' to 'type2', possible loss of data
    4267, # conversion from 'size_t' to 'type', possible loss of data
    4302, # 'type cast': truncation from 'HKL' to 'UINT'
    4311, # 'type cast': pointer truncation from 'HKL' to 'UINT'
    4530, # C++ exception handler used, but unwind semantics are not enabled
    4506, # no definition for inline function
    4577, # 'noexcept' used with no exception handling mode specified
    4996, # function was declared deprecated
  ],
}], # OS=="win"
What I have tried that DID help:
Electron 10.x.y updated V8 to 8.5 (see the Electron 10.0.0 release notes), and looking at the line that causes the error (...\.electron-gyp\10.1.2\include\node\v8.h(5378)) I see this:
static constexpr size_t kMaxLength =
internal::kApiSystemPointerSize == 4
? internal::kSmiMaxValue
: static_cast<size_t>(uint64_t{1} << 32); <--- Line 5378
When I compare v8.h files from ...\.electron-gyp\10.1.2\include\node\v8.h and ...\.electron-gyp\9.0.0\include\node\v8.h, there is a change in this exact line.
Same line in old version:
static constexpr size_t kMaxLength = internal::kApiSystemPointerSize == 4
? internal::kSmiMaxValue
: 0xFFFFFFFF;
If I change static_cast<size_t>(uint64_t{1} << 32) to 0xFFFFFFFF, the build succeeds.
My understanding ends here.
Aren't the old and new lines theoretically the same? Doesn't 1 shifted left by 32 bits result in 0xFFFFFFFF?
What can I do to fix this issue and what could be the reason for this change?
Why is this problem only on Windows?
What I have tried that DID NOT help:
'WarnAsError': 'false' should do the trick; however the error was reported for two different files (..\src\keyboard-layout-manager.cc and ..\src\keyboard-layout-manager-windows.cc) so you'd have to modify the build rules for both of them.
Disabling the warning should help too, but it'd have to be warning 4309 (not 2220) that you need to disable. Again, you'd have to do that for both files (or just for the entire compilation).
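For example, the msvs section of the binding.gyp snippet quoted above would then look roughly like this (a sketch; the warning list is whatever keyboard-layout ships, the relevant changes are WarnAsError and 4309):

'msvs_settings': {
  'VCCLCompilerTool': {
    'ExceptionHandling': 1,  # /EHsc
    'WarnAsError': 'false',  # stop promoting warnings to errors
  },
},
'msvs_disabled_warnings': [
  4018, # signed/unsigned mismatch
  4309, # 'static_cast': truncation of constant value  <--- the warning actually raised
  # ... keep the remaining entries from the original list
],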
Aren't the old and new lines theoretically the same? Doesn't 1 shifted left by 32 bits result in 0xFFFFFFFF?
No: 1 << 32 == 0x100000000 == 0xFFFFFFFF + 1.
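To see what the compiler is objecting to, here is a small self-contained sketch; uint32_t stands in for a 32-bit size_t, which is an assumption about the configuration that triggers the warning:

#include <cstdint>
#include <cstdio>

// Mirrors the shape of the v8.h line: a 64-bit constant narrowed to 32 bits.
// 0x100000000 does not fit in 32 bits, so the cast yields 0 - that truncation
// is what MSVC's C4309 is warning about.
constexpr std::uint32_t kTruncated =
    static_cast<std::uint32_t>(std::uint64_t{1} << 32);

int main() {
    std::printf("uint64_t{1} << 32 = 0x%llx\n",
                static_cast<unsigned long long>(std::uint64_t{1} << 32));
    std::printf("narrowed to 32 bits = 0x%x\n", kTruncated);  // prints 0x0
    return 0;
}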
What can I do to fix this issue?
turning off 'WarnAsError' should help
turning off warning 4309 should help
reverting that one line in your local checkout should help
using Clang instead of MSVC should help
possibly using a different (newer?) version of MSVC would also help
and what could be the reason for this change?
V8 now allows TypedArrays with up to 2**32 elements, which is one more element than before.
Why is this problem only on Windows?
Because warnings are compiler-specific, and MSVC is only used on Windows.
The weird part is that you're seeing this error in the first place. You compile with --x64; if that does what it sounds like, you should be compiling a 64-bit build, where internal::kApiSystemPointerSize == 8 and size_t has 64 bits just like uint64_t, so in the expression static_cast<size_t>(uint64_t{1} << 32); nothing gets truncated.
Even if for whatever reason this build tried to create a 32-bit build of V8, then the other branch should be taken (internal::kApiSystemPointerSize == 4) and the compiler should be smart enough not to warn about a branch that's statically dead anyway.
At any rate, this seems like a compiler bug/limitation. So appropriate workarounds are to either update your compiler, or disable the erroneous warning.

Can't build OpenCV 3.2.0 (MinGW32)

I know... another one of these... But no one else's error is the same as mine, and I've been trying to build OpenCV with MinGW32 for days now.
When building OpenCV with MinGW, the command mingw32-make fails at some point while compiling sources\modules\ts\src\ts_gtest.cpp with the error pictured below:
I've tried following several tutorials, but none work cleanly, and this is the closest I've come to getting things to work.
What I did:
Installed Mingw and added C:\Mingw\bin\ to PATH environment variable.
Installed CMake and added it too to PATH.
Extracted OpenCV to C:\ and created folder C:\opencv\mingwBuild\
In CMake-GUI I define source folder as C:\opencv\sources\ and build folder as C:\opencv\mingwBuild\.
Hit Configure and select MinGW Makefiles, with 'Use default native compilers' (I have also specified compilers explicitly and the result is the same).
Hit Generate, which creates the Makefile.
I open C:\Mingw\msys\1.0\msys.bat to get a console with all variables loaded (I have also tried directly from a plain cmd.exe, given that PATH is set for MinGW, but I get the same compilation error). Then I navigate to C:\opencv\mingwBuild\ and run mingw32-make.
And that's where the error shows up after a while. Any ideas?
Turns out GTest was not compiling under MinGW for some reason.
As I don't intend to test my code (for now), I removed opencv_ts from the installation (by deselecting it in CMake, after configuring and before generating).
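From the command line, the equivalent is something along these lines (a sketch; BUILD_opencv_ts is the standard per-module CMake switch that the GUI checkbox toggles):

cd C:\opencv\mingwBuild
cmake -G "MinGW Makefiles" -DBUILD_opencv_ts=OFF ..\sources
mingw32-make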
Someone mentions, in the first link @Dan Masek refers to, that GTest has this issue with type conversion under MinGW. They say that you can edit ts_gtest.cpp to apply the correct conversion, according to the error message. That may be a solution if you need this module.
Another comment in @Dan Masek's second link mentions that GCC version 5 avoids the issue that version 4 has, so getting hold of such a distribution may also be a solution.
For me it seems to be fixed by applying this fix: https://github.com/msk-repo01/opencv/commit/9a1835ce6676836ce278d723da4ff55a8f900ff1
(Also see: https://github.com/opencv/opencv/issues/8105)
The fix basically replaces "_RTL_CRITICAL_SECTION" with "_CRITICAL_SECTION" for MinGW compilers in modules/ts/include/opencv2/ts/ts_gtest.h in the following way:
The lines
// assuming CRITICAL_SECTION is a typedef of _RTL_CRITICAL_SECTION.
// This assumption is verified by
// WindowsTypesTest.CRITICAL_SECTIONIs_RTL_CRITICAL_SECTION.
struct _RTL_CRITICAL_SECTION;
(around line 723 in OpenCV 3.2.0 release from Dec. 2016) are replaced by
# if GTEST_OS_WINDOWS_MINGW
// MinGW defined _CRITICAL_SECTION and _RTL_CRITICAL_SECTION as two
// separate (equivalent) structs, instead of using typedef
typedef struct _CRITICAL_SECTION GTEST_CRITICAL_SECTION;
# else
// assuming CRITICAL_SECTION is a typedef of _RTL_CRITICAL_SECTION.
// This assumption is verified by
// WindowsTypesTest.CRITICAL_SECTIONIs_RTL_CRITICAL_SECTION.
typedef struct _RTL_CRITICAL_SECTION GTEST_CRITICAL_SECTION;
# endif
and
_RTL_CRITICAL_SECTION* critical_section_;
is replaced by
GTEST_CRITICAL_SECTION* critical_section_;

Neo4j 3.1.0 apoc.load.csv trouble

I keep trying to run an apoc.load.csv procedure in the newest version of Neo4j 3.1.0, with APOC 3.1.0.3.
CALL apoc.periodic.iterate(
  'CALL apoc.load.csv("file:///data.csv", {sep:",", header:TRUE}) yield map',
  'with {map} as map
   MATCH (t:Tweet{id:toFloat(map.tweet_id)})
   SET t.clean_text = map.clean_text,
       t.positive_score = toInt(map.nb_positive),
       t.negative_score = toInt(map.nb_negative),
       t.sentiment_score = toInt(map.score)',
  {batchSize:5000, parallel:true})
Error: Failed to invoke procedure apoc.periodic.iterate: Caused by:
org.neo4j.graphdb.QueryExecutionException: Failed to invoke procedure
apoc.load.csv: Caused by: java.lang.RuntimeException: Import from
files not enabled, please set apoc.import.file.enabled=true in your
neo4j.conf
I have tried just running the apoc.load.csv piece and I still get the same error telling me to add the statement to my neo4j.conf file, which I have. I've even restarted my computer.
I was able to run this exact same statement successfully in Neo4j 3.0.6 with APOC 3.0.4.1, but it hasn't worked since I upgraded.
I think that this is likely a bug.
If you click on the 'star' in the browser, then under 'System' there is a link to 'Server Configuration'. Run that query to see what Neo4j thinks it has with respect to the setting.
Part of the output looks like:
{
  "isIs": "false ",
  "name": "apoc.export.file.enabled",
  "description": "Configuration attribute",
  "type": "java.lang.String",
  "isReadable": "true",
  "value": "true",
  "isWriteable": "false "
},
which indicates that the file import setting is there and correctly formatted.
The question, then, is why this isn't being honoured. This is as much as I've been able to determine while facing the same problem.
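For reference, the exact line the error message asks for, as it should appear in conf/neo4j.conf (Neo4j needs a full restart after editing it; reconnecting the browser alone is not enough):

# conf/neo4j.conf
apoc.import.file.enabled=true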

HHVM non-deterministic behaviour of the typechecker

I've noticed that calling hh_client does not always return the correct result. For example, I have the following pieces of code:
backend\ConvertMessage.hh:
<?hh // strict
namespace ApiBackend\ConvertMessage {
  enum Status: int {
    success = 0;
    // ... error codes
  };
  // ... some other classes
};
Elsewhere in the project:
throw new \SoapFault(
  'Server',
  \ApiBackend\ConvertMessage\Status::getNames()[$result->status]
);
Sometimes, after making some changes in the project, I get the following error message: Could not find static method getNames in type ApiBackend\ConvertMessage\Status (Typing[4090])
When I remove a semicolon after one of the closing curly brackets, hh_client stops displaying the error. But when I put the semicolon back in its place, the typechecker still gives me the No errors! message.
This is not the only file that causes this problem - it happens to all enums.
It seems to me that it is a problem with some cache in either hh_client or hh_server.
Thanks in advance for helping me solve this problem (and sorry if my English is not too good).
You are probably using an outdated version of HHVM. This problem sounds an awful lot like this race condition, which was fixed in HHVM 3.5.0 and newer (and was backported into the 3.3.3 LTS release). Notably, 3.4.x still had the bug.
What version of HHVM are you using?
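To check, and to rule out stale server state at the same time, something along these lines should work (this assumes the stock hhvm and hh_client command-line tools):

# Print the HHVM version to compare against 3.5.0 / 3.3.3 LTS
hhvm --version

# Stop the typechecker server and run a fresh check so it re-scans the project
hh_client stop
hh_client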

Mongoid and UTF-8 issues in a JRuby on Rails app

I'm taking a JSON string that's the result from polling the Foursquare venue API:
{
  "id"=>"4e404742c65b4ec27606deb4",
  "name"=>"Sarah's Cheesecake & Cafe",
  "contact"=>{
    "phone"=>"4134436678",
    "formattedPhone"=>"(413) 443-6678"
  },
  "location"=>{
    "address"=>"180 Elm St",
    "lat"=>42.44345873,
    "lng"=>-73.23804678,
    "distance"=>1063,
    "postalCode"=>"01201",
    "city"=>"Pittsfield",
    "state"=>"MA"
  },
  "categories"=>[
    {
      "id"=>"4bf58dd8d48988d16d941735",
      "name"=>"Café",
      "pluralName"=>"Cafés",
      "shortName"=>"Café",
      "icon"=>{
        "prefix"=>"https://foursquare.com/img/categories/food/cafe_",
        "sizes"=>[
          32,
          44,
          64,
          88,
          256
        ],
        "name"=>".png"
      },
      "primary"=>true
    }
  ],
  "verified"=>false,
  "stats"=>{
    "checkinsCount"=>7,
    "usersCount"=>5,
    "tipCount"=>0
  },
  "hereNow"=>{
    "count"=>0
  }
}
As you can tell, there are some non-ASCII characters in there, such as Cafés, and that's breaking my Mongoid-based model in this JRuby on Rails app. When trying to create an instance with MyModel.create, here's what I get.
jruby-1.6.5 :012 > FoursquareVenue.create(hash)
Java::JavaLang::NullPointerException:
from org.jruby.exceptions.RaiseException.<init>(RaiseException.java:101)
from org.jruby.Ruby.newRaiseException(Ruby.java:3348)
from org.jruby.Ruby.newEncodingCompatibilityError(Ruby.java:3323)
from org.jruby.RubyString.cat(RubyString.java:1285)
from org.jruby.RubyString.cat19(RubyString.java:1221)
from org.jruby.RubyHash$5.visit(RubyHash.java:727)
from org.jruby.RubyHash.visitAll(RubyHash.java:594)
from org.jruby.RubyHash.inspectHash(RubyHash.java:721)
from org.jruby.RubyHash.inspect(RubyHash.java:745)
from org.jruby.RubyHash$i$0$0$inspect.call(RubyHash$i$0$0$inspect.gen:65535)
from org.jruby.RubyClass.finvoke(RubyClass.java:632)
from org.jruby.javasupport.util.RuntimeHelpers.invoke(RuntimeHelpers.java:545)
from org.jruby.RubyBasicObject.callMethod(RubyBasicObject.java:353)
from org.jruby.RubyObject.inspect(RubyObject.java:408)
from org.jruby.RubyArray.inspectAry(RubyArray.java:1483)
from org.jruby.RubyArray.inspect(RubyArray.java:1509)
... 420 levels...
from org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:75)
from org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:190)
from org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:179)
from org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
from org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
from usr.local.rvm.rubies.jruby_minus_1_dot_6_dot_5.bin.jirb.__file__(/usr/local/rvm/rubies/jruby-1.6.5/bin/jirb:17)
from usr.local.rvm.rubies.jruby_minus_1_dot_6_dot_5.bin.jirb.load(/usr/local/rvm/rubies/jruby-1.6.5/bin/jirb)
from org.jruby.Ruby.runScript(Ruby.java:693)
from org.jruby.Ruby.runScript(Ruby.java:686)
from org.jruby.Ruby.runNormally(Ruby.java:593)
from org.jruby.Ruby.runFromMain(Ruby.java:442)
from org.jruby.Main.doRunFromMain(Main.java:321)
from org.jruby.Main.internalRun(Main.java:241)
from org.jruby.Main.run(Main.java:207)
from org.jruby.Main.run(Main.java:191)
from org.jruby.Main.main(Main.java:171)
If I strip out all the odd characters, everything works as expected and no exception is thrown. What's the proper way of handling this? Can I enable my Mongoid/MongoDB documents to work with UTF-8? Do I need to "asciify" them somehow first if that's not possible?
Could be an encoding bug in JRuby's 1.9 mode. Does the same thing happen when you run it in 1.8 mode? Either way, a stacktrace should be filed as a bug at http://bugs.jruby.org. Thanks!
gem install bson_ext might help.
Source: MongoDB, Ruby and UTF-8
If you are using Ubuntu, then you need to do some extra steps with the SpiderMonkey/MongoDB installation:
Most pre-built Javascript SpiderMonkey libraries do not have UTF-8
support compiled in; MongoDB requires this.
Source: Building for Linux
MongoDB and Mongoid handle UTF-8 properly. I was doing the same thing with the Foursquare API not long ago via the Quimby wrapper.
As a result, I would suspect the bug is closely related to the use of JRuby.
Have you set up JRuby to use UTF-8?
require 'jcode'
$KCODE = 'u'
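If you are running JRuby in 1.9 mode instead, $KCODE is ignored, so a rough sketch of the same idea looks like this; venue_hash is a hypothetical stand-in for the parsed Foursquare response:

# encoding: utf-8
# Re-tag strings that arrived as raw bytes before they reach Mongoid.
# Here we simulate a value that came in as ASCII-8BIT (binary).
venue_hash = { "name" => "Café".force_encoding("ASCII-8BIT") }

name = venue_hash["name"]
name = name.force_encoding("UTF-8") if name.encoding == Encoding::ASCII_8BIT
venue_hash["name"] = name

puts venue_hash["name"].encoding   # => UTF-8

# Starting the process with UTF-8 defaults can also help:
#   JAVA_OPTS="-Dfile.encoding=UTF-8" jruby -E UTF-8:UTF-8 script.rb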
