webpack2 cjs require log messages - webpack-2

I'm seeing log messages like the following when starting the webpack dev server with webpack 2:
[434] ./src/main/rootStore.js 1.86 kB {0} [built]
cjs require src/main/rootStore [392] ./src/main/index.jsx 15:17-46
I thought webpack 2 handled ES6 modules natively and didn't need any CommonJS conversion. Does anyone know what this 'cjs require' means?
Thanks.

How to debug headless pdf printing problems in chrome?

Note: this is not (directly) a question about how to print a PDF in Chrome; rather, it is a question about how to get more information when printing fails.
In short: I cannot solve a PDF-printing problem that occurs only for certain (presumably large) pages, and I could use some assistance in debugging the actual issue.
Background: I am using chromedriver (v83) and chromium-browser (v83) to print PDF files from webpages via Python Selenium. I am building a Docker image to contain the required dependencies for this. I have tried Debian (buster and stretch) as well as Alpine base images, but all of them eventually result in the same error when trying to print certain pages. The odd thing is that printing works for other (smaller) pages, but when many assets and pages are to be printed, it fails. I might add that this Docker image is eventually run inside a Kubernetes cluster, where I assigned it up to 4GB of RAM.
What code am I running?
This project was written for Python 3, so here are the relevant code fragments. Please note that I removed all error handling and the waiting for page loads to complete here.
import json

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.remote.command import Command

appState = {
    "recentDestinations": [
        {
            "id": "Save as PDF",
            "origin": "local"
        }
    ],
    "selectedDestinationId": "Save as PDF",
    "version": 2
}

def get_chrome_options(headless: bool, enable_logging: bool) -> Options:
    chrome_options = webdriver.ChromeOptions()
    profile = {'printing.print_preview_sticky_settings.appState': json.dumps(appState)}
    chrome_options.add_experimental_option('prefs', profile)
    if headless:
        chrome_options.add_argument('--headless')
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--window-size=1920,1080')
    chrome_options.add_argument('--disable-gpu')
    chrome_options.add_argument('--disable-web-security')
    chrome_options.add_argument('--allow-file-access-from-files')
    chrome_options.add_argument('--run-all-compositor-stages-before-draw')
    chrome_options.add_argument('--kiosk-printing')
    if enable_logging:
        chrome_options.add_argument('--enable-logging')
    return chrome_options

def print_the_page(url):
    # headless / enable_logging come from configuration elsewhere; that code was stripped here
    driver = webdriver.Chrome(chrome_options=get_chrome_options(headless, enable_logging))
    driver.execute(driver_command=Command.GET, params={'url': url})
    command_url = f"{driver.command_executor._url}/session/{driver.session_id}/chromium/send_command_and_get_result"
    response = driver.command_executor._request('POST', command_url, json.dumps({'cmd': 'Page.printToPDF', 'params': {}}))
Then what happens?
For some pages this fails, meaning the response contains this message:
{'status': 500, 'value': '{"value":{"error":"unknown error","message":"unknown error: unhandled inspector error: {\\"code\\":-32000,\\"message\\":\\"Printing failed\\"}\\n (Session info: headless chrome=83.0.4103.116)","stacktrace":""}}'}
[UPDATE]
I have managed to produce some more error output when using the --print-to-pdf option directly, which seems to hint at an "out-of-memory" issue here:
[0923/135406.102857:WARNING:discardable_shared_memory_manager.cc(194)] Less than 64MB of free space in temporary directory for shared memory files: 23
[0923/135406.110108:WARNING:dns_config_service_posix.cc(341)] Failed to read DnsConfig.
[0923/135406.180892:WARNING:dns_config_service_posix.cc(341)] Failed to read DnsConfig.
[0923/135406.613221:FATAL:memory.cc(38)] Out of memory. size=796176
Received signal 6
r8: 00007fa6f39dadc4 r9: 0000000000000000 r10: 0000000000000008 r11: 0000000000000246
r12: 0000557efd1b0660 r13: 0000000000000000 r14: 00007fa6f39db240 r15: 0000000000000043
di: 0000000000000002 si: 00007fa6f39dac90 bp: 00007fa6f39dac90 bx: 0000000000000000
dx: 0000000000000000 ax: 0000000000000000 cx: 00007fa6fd347a71 sp: 00007fa6f39dac88
ip: 00007fa6fd347a71 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[end of stack trace]
Calling _exit(1). Core file will not be generated.
[0923/135406.626313:ERROR:headless_shell.cc(399)] Abnormal renderer termination.
I will note here that I have been running this Docker container locally on my machine (which has more than enough RAM) as well as on a Kubernetes cluster, where the image requests 4GB of RAM. I also monitored the RAM usage and it didn't seem to be an issue - although that could be misleading if the usage spikes so quickly that Chrome fails before the spike ever shows up in the overall RAM monitoring.
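In case it matters, this is roughly how I would watch for such a short-lived spike from inside the container; a minimal sketch that assumes the cgroup v1 memory files Docker usually exposes (the paths and polling interval are just assumptions, adjust them to your setup):
import time

# Poll the container's cgroup memory counters at high frequency to catch
# short-lived spikes that a per-second dashboard would miss.
# Paths assume cgroup v1, the usual Docker layout; treat them as an assumption.
USAGE = "/sys/fs/cgroup/memory/memory.usage_in_bytes"
PEAK = "/sys/fs/cgroup/memory/memory.max_usage_in_bytes"  # kernel-tracked high-water mark

while True:
    with open(USAGE) as f_usage, open(PEAK) as f_peak:
        usage, peak = int(f_usage.read()), int(f_peak.read())
    print(f"current={usage / 1e6:.1f} MB  peak={peak / 1e6:.1f} MB")
    time.sleep(0.1)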
[UPDATE 2]
I have tried to use the --print-to-pdf option again, but I am seeing issues with that as well. The resources are loading, but the printing still fails.
[0923/144355.169080:ERROR:bus.cc(393)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
...
[0923/141758.393923:WARNING:dns_config_service_posix.cc(341)] Failed to read DnsConfig.
[0923/141758.401925:ERROR:zygote_host_impl_linux.cc(262)] Failed to adjust OOM score of renderer with pid 32: Permission denied (13)
[0923/141758.413475:ERROR:zygote_host_impl_linux.cc(262)] Failed to adjust OOM score of renderer with pid 36: Permission denied (13)
... loading all the resources ...
[0923/141824.611661:ERROR:print_render_frame_helper.cc(1889)] Printing failed.
[0923/141824.612439:ERROR:headless_shell.cc(562)] Print to PDF failed
What's the question(s)?
How can I get more information about why printing failed? Unfortunately, the "unknown error: unhandled inspector error" message hasn't given me any ideas about how to proceed.
Are there any additional flags to get more debug output from Chrome, or is there a log somewhere that I should be able to find?
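For reference, the only extra logging I currently know how to turn on is Chrome's own verbose log plus chromedriver's log file; a minimal sketch of how I would wire that up (the log path and verbosity level are just placeholders), in case someone can point me at something better:
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--enable-logging=stderr')  # send Chrome's own log to stderr
chrome_options.add_argument('--v=1')                    # bump Chrome's log verbosity

driver = webdriver.Chrome(
    chrome_options=chrome_options,
    # chromedriver writes its own verbose log to a separate file
    service_args=['--verbose', '--log-path=/tmp/chromedriver.log'],
)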
What else have I tried?
I initially ran this under Debian buster with the latest google-chrome and chromium binaries (v85). I then switched to an Alpine base image with chromium, hoping that this might change something, but it didn't.
I have experimented with setting up Xvfb ${DISPLAY} -screen ${SCREEN} ${RESOLUTION} & in Docker, but it didn't seem to have any effect either.
I have tried switching to the direct CLI google-chrome --print-to-pdf= option, but since the page requires login authentication, I could only get the login page to print, and the output also has some formatting issues.
I have been running this on my machine, outside of Docker, and was able to print as expected, but as soon as I put the same code inside a Docker container, it fails.
Unfortunately, I cannot share the page where this fails with you.
The relevant warning from your logs seems to be this:
[0923/135406.102857:WARNING:discardable_shared_memory_manager.cc(194)] Less than 64MB of free space in temporary directory for shared memory files: 23
The problem appears to stem from Docker's mounted /dev/shm being too small for Chromium to do things like you're trying to do.
I found a closed bug report against Chromium referencing this issue in certain limited environments such as AWS Lambda and Docker; it was fixed in Chromium v65 behind the command-line flag --disable-dev-shm-usage.
The last few comments reference another bug report (now closed) about this issue in chromium v83 where the command line flag was not working properly. It has been fixed in version 84 - per comment 28:
You can find the fix in current stable release of Chrome (version 84.0.4147.89 and above).
You've indicated you're using Chromium v83, so you'll need to update to at least version 84.0.4147.89 and then use the command-line flag --disable-dev-shm-usage.
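Applied to the get_chrome_options from your question, that would look something like the abbreviated sketch below (--disable-dev-shm-usage makes Chromium put its shared-memory files in /tmp instead of the small /dev/shm mount; alternatively you can enlarge /dev/shm itself, e.g. with docker run --shm-size=1g):
# Abbreviated version of the question's get_chrome_options with the flag added.
# Assumes Chromium/Chrome >= 84.0.4147.89, where the flag behaves correctly again.
from selenium import webdriver

def get_chrome_options(headless: bool, enable_logging: bool):
    chrome_options = webdriver.ChromeOptions()
    if headless:
        chrome_options.add_argument('--headless')
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--disable-dev-shm-usage')  # stop using the tiny /dev/shm in Docker
    chrome_options.add_argument('--kiosk-printing')
    if enable_logging:
        chrome_options.add_argument('--enable-logging')
    return chrome_options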

Exception while configuring ruby on rails app on IIS

I have a Windows Server 2012 R2 machine, and I downloaded and installed the Ruby Hosting Package from the Microsoft Web Platform Installer. When I try to run an existing website, I get the following error:
Error
Helicon Zoo module has caught up an error. Please see the details below.
Worker Status
The process was created
Windows error
The pipe has been ended. (ERROR CODE: 109)
Internal module error
message: Application backend read Error. type: ZooException file: App\Jobs\JobBase.cpp line: 531 version: 3.1.98.538
STDERR
[tid-5360004] Couldn't run bundler/setup: cannot load such file -- bundler/setup (String)
[tid-5360004] cannot load such file -- rack (LoadError)
C:/Zoo/Workers/ruby/lib/app.rb:84:in `eval'
C:/Ruby19/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
(eval):1:in `assure_rack'
C:/Zoo/Workers/ruby/lib/app.rb:84:in `eval'
C:/Zoo/Workers/ruby/lib/app.rb:84:in `assure_rack'
C:/Zoo/Workers/ruby/lib/app.rb:23:in `build_app'
C:/Zoo/Workers/ruby/lib/app.rb:16:in `initialize'
C:/Zoo/Workers/ruby/lib/worker.rb:4:in `new'
C:/Zoo/Workers/ruby/lib/worker.rb:4:in `initialize'
C:/Zoo/Workers/ruby/zoorack.rb:30:in `new'
C:/Zoo/Workers/ruby/zoorack.rb:30:in `<module:Zack>'
C:/Zoo/Workers/ruby/zoorack.rb:12:in `'
Any idea on how to fix this?
Thanks.

Google API Services - Could not parse PKey: no start line in production (heroku) only

ANSWER: The two environment variables were mixed up in Heroku. CERT pointed to the private key and PRIVATE_KEY pointed to the cert. Fixed that, and authorization now works.
Will mark this question answered when I can.
I am hooking into the Google Analytics Reporting API with a services account from my Rails 4 application. I have set up the authorization process and got it working in development. Upon deployment, I realized that using the .p12 key file was a problem - I either had to commit it to my repository or find a way around it. So I found a way around it, by breaking the key file into two .PEM files and storing the contents of those files as environment variables.
Works great in development (am using dotenv-rails to store environment variables there). But something must be happening to the variables on Heroku, because when I try to use my authorization method in production, I get the error ArgumentError: Could not parse PKey: no start line.
I've searched around for some clue about what's going on here - I've seen that related errors can happen due to version problems (someone downgraded from Ruby 2.2 to 1.9.8 and fixed a similar problem, although theirs just flat out didn't work anywhere, and mine works in development). Downgrading my Ruby version is not really an option. :/
Here's the related code:
def authorize_with_services
  begin
    self.client = Google::APIClient.new(application_name: 'Playground', application_version: '1')
    p12 = OpenSSL::PKCS12.create('notasecret', 'descriptor',
                                 OpenSSL::PKey.read(ENV['PRIVATE_KEY']),   # <-- problem spot
                                 OpenSSL::X509::Certificate.new(ENV['CERT']))
    p12_binary = p12.to_der
    self.client.authorization = Signet::OAuth2::Client.new(
      token_credential_uri: 'https://accounts.google.com/o/oauth2/token',
      audience: 'https://accounts.google.com/o/oauth2/token',
      scope: 'https://www.googleapis.com/auth/analytics.readonly',
      issuer: GA_EMAIL,
      signing_key: Google::APIClient::PKCS12.load_key(p12_binary, 'notasecret')
    ).tap { |auth| auth.fetch_access_token! }
  rescue Signet::AuthorizationError
    self.client = nil
  end
  self.client
end
And then the ENV variables as stored in .env:
PRIVATE_KEY: "Bag Attributes\n friendlyName: privatekey\n localKeyID: 54 69 6D 65 20 31 34 33 35 36 38 34 30 33 31 30 32 39\nKey Attributes: <No Attributes>\n-----BEGIN RSA PRIVATE KEY-----\nKEY\n-----END RSA PRIVATE KEY-----\n"
CERT: "Bag Attributes\nfriendlyName: privatekey\nlocalKeyID: 54 69 6D 65 20 31 34 33 35 36 38 34 30 33 31 30 32 39\nsubject=/CN=636085506886-096q9j1uotf9kp1tv4evhn2crip6dec8.apps.googleusercontent.com\nissuer=/CN=636085506886-096q9j1uotf9kp1tv4evhn2crip6dec8.apps.googleusercontent.com\n-----BEGIN CERTIFICATE-----\nCERT\n-----END CERTIFICATE-----\n"
And I've set them exactly the same in heroku config:set PRIVATE_KEY and heroku config:set CERT.
Edit
Here's the session in the Heroku console which leads to the error:
# 11:15:36 (ruby-2.1.5@yt-ga-playground) ~/projects/confreaks/yt-ga-playground (master)$ heroku run console
Running `console` attached to terminal... up, run.8357
Loading production environment (Rails 4.2.0)
irb(main):001:0> querier = AnalyticsQuerier.new
=> #<AnalyticsQuerier:0x007f81fc6f5358 @options={"ids"=>"ga:104082366", "start-date"=>"2015-01-01", "end-date"=>"2015-07-04", "metrics"=>"ga:totalEvents"}, @ids="ga:104082366", @start_date="2015-01-01", @end_date="2015-07-04", @metrics="ga:totalEvents">
irb(main):002:0> querier.authorize_with_services
ArgumentError: Could not parse PKey: no start line
from /app/app/services/analytics_querier.rb:56:in `read'
from /app/app/services/analytics_querier.rb:56:in `authorize_with_services'
from (irb):2
from /app/vendor/bundle/ruby/2.1.0/gems/railties-4.2.0/lib/rails/commands/console.rb:110:in `start'
from /app/vendor/bundle/ruby/2.1.0/gems/railties-4.2.0/lib/rails/commands/console.rb:9:in `start'
from /app/vendor/bundle/ruby/2.1.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:68:in `console'
from /app/vendor/bundle/ruby/2.1.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
from /app/vendor/bundle/ruby/2.1.0/gems/railties-4.2.0/lib/rails/commands.rb:17:in `<top (required)>'
from bin/rails:8:in `require'
from bin/rails:8:in `<main>'
And the equivalent session in my development console:
# 11:17:47 (ruby-2.1.5@yt-ga-playground) ~/projects/confreaks/yt-ga-playground (master)$ be rails c
Loading development environment (Rails 4.2.0)
2.1.5 :001 > querier = AnalyticsQuerier.new
=> #<AnalyticsQuerier:0x007fe97d380928 @options={"ids"=>"ga:104082366", "start-date"=>"2015-01-01", "end-date"=>"2015-07-04", "metrics"=>"ga:totalEvents"}, @ids="ga:104082366", @start_date="2015-01-01", @end_date="2015-07-04", @metrics="ga:totalEvents">
2.1.5 :002 > querier.authorize_with_services
=> #<Google::APIClient:0x007fe97d36bc80 @host="www.googleapis.com", @port=443, @discovery_path="/discovery/v1", @user_agent="Playground/1 google-api-ruby-client/0.8.2 Mac OS X/10.10.3\n (gzip)", etc.>
And I have a controller method which uses the AnalyticsQuerier. I pushed it to production when I thought that it would work. Here's how that plays out in the logs:
[...] app[web.1]: Started POST "/events" for 104.231.12.227 at 2015-07-04 15:14:11 +0000
[...] app[web.1]: Completed 500 Internal Server Error in 16ms
[...] app[web.1]: Processing by EventsController#create as HTML
[...] app[web.1]: Parameters: {"authenticity_token"=>"1fvzDI1n+AlTssDybGFHAyiG4M+kir2UDLsA4dZGiAJvrWXdK6OnlAf1wK0V+/dI4QXz3lxonmRw15XbFjpOAQ=="}
[...] app[web.1]: (0.8ms) SELECT COUNT(*) FROM "analytics_calls"
[...] app[web.1]:
[...] app[web.1]: ArgumentError (Could not parse PKey: no start line):
[...] app[web.1]: app/services/analytics_querier.rb:56:in `read'
[...] app[web.1]: app/services/analytics_querier.rb:56:in `authorize_with_services'
[...] app[web.1]: app/controllers/events_controller.rb:10:in `create'
2015-07-04T15:14:11.542398+00:00 app[web.1]:
[...] app[web.1]: app/models/analytics_call.rb:19:in `get_batch'
[...] app[web.1]:
[...] heroku[router]: at=info method=POST path="/events" host=desolate-fortress-9280.herokuapp.com request_id=3822f668-6b8c-4cb2-9764-c171fc0dc67b fwd="104.231.12.227" dyno=web.1 connect=3ms service=33ms status=500 bytes=1754
Edit 2 - Possible lead in the right direction?
So, as pointed out by Val Asensio, there's a problem with the SSH setup provided by Heroku (this is more or less unfamiliar territory for me, but I'll try to word things as clearly as possible, given my understanding needs some fleshing out here).
I've found a gem called heroku-buildpack-ssh, which definitely suggests that you need to do more than just load a private key into an env variable in order to use it properly. However, the readme specifies that the private key cannot have a passphrase in order to be accessed. It also seems to assume you're using a file named something like *_rsa, with no indication that this works for .pem files?
I found another discussion here about ssh tunneling from Heroku. I think this is the route to take. My understanding is that I need to be able to 'rebuild' the private key file using the env variable PRIVATE_KEY behind the scenes, and that I can write a script to do this when Heroku starts up.
However, my understanding here is so shaky, and the use-case described in the answer script is different enough that I'm not sure how to extrapolate how to go about building a solution using this kind of method.
What I want to know now is: can I build the .pem key file using a bash script that Heroku runs behind the scenes, and if so, how do I then make sure that that file is what is being read when my script runs OpenSSL::PKey.read? I guess I could also try to build the original .p12 file itself instead?
ALSO - big question: am I going to screw anything up so terribly that I'm left in a tangled knot of confused code and have to trash the whole application (it's a code spike application, but I still want to know if anyone can provide an answer here) if I start poking around with writing scripts that create files and manage permissions on those files in my production environment?
For non-Heroku deployments
Although I've not used this particular Google API, this seems like an SSH key problem on the server - the PKey error tells us that. The "no start line" part of the error leads me to think that a key part of the SSH setup is missing on the server side.
App side
Make sure you have this in your deploy.rb
ssh_options[:forward_agent] = true
Server side
The server-side SSH must be set up to access the Google API. Do you have a
~/.ssh/config file? Do you have the Google API SSH set up there?
# Google API
Host hostservingAPIname.com
User yourusername
IdentityFile ~/.ssh/svn_id_rsa
# Your app
Host yourappsdomain.com
User yourrailsappusername
IdentityFile ~/.ssh/id_rsa
ControlMaster auto
I'm not sure this info will be helpful, but since it doesn't seem like anyone with the definitive answer is going to chime in, I hope my suggestion moves you towards a solution.
Note: someone whose app is not hosted on Heroku may find this question and find this answer useful, so I'll leave it here.
I had the two environment variables mixed up.
That's it.
That's all it was.

Difficulty in sourcing tcl files from sharepoint

I have Tcl byte code on SharePoint with a URL like
https://share.abc.com/sites/abc/test.tcl
I want to source this file from another Tcl file residing on my machine.
I don't want to copy the file from SharePoint.
Can anyone help me out here?
The source command only reads from the filesystem, but that can be a virtual filesystem. Thus, you can use the tclvfs package to make it so that HTTP sites can be mounted within the process, and then you can read from that.
# Add in HTTPS support
package require http
package require tls
::http::register https 443 ::tls::socket
# Mount the site; the vfs::urltype package won't work as it doesn't support https
package require vfs::http
# Double quotes only because of Stack Overflow highlighting sucking
vfs::http::Mount "https://share.abc.com/" /https.share.abc.com
# Load and evaluate the file
source /https.share.abc.com/sites/abc/test.tcl
This all assumes that you don't need any username/password credentials. If you do, you need to set them as part of the mount:
vfs::http::Mount "https://theuser:thepassword#share.abc.com/" /https.share.abc.com
Note that this currently requires that you're using HTTP Basic Auth (over HTTPS). That's sufficiently secure for almost any reasonable use.
This is quite a large stack of stuff. You can do it in rather less if you are willing to do some more of the work yourself:
package require base64
package require http
package require tls
::http::register https 443 ::tls::socket
proc source_https {url username password} {
    set auth "Basic [base64::encode ${username}:${password}]"
    set headers [list Authorization $auth]
    set tok [http::geturl $url -headers $headers]
    if {[http::ncode $tok] != 200} {
        # Cheap and nasty version...
        set msg [http::code $tok]
        http::cleanup $tok
        error "Problem with fetch: $msg"
    }
    set script [http::data $tok]
    http::cleanup $tok
    # These next two commands are effectively what [source] does (apart from I/O)
    info script $url
    uplevel 1 $script
}
source_https "https://share.abc.com/sites/abc/test.tcl" AzureDiamond hunter2

CouchDB 1.3.1 on Centos 6.4

I compiled and installed CouchDB. It seems to work great, except when I use views on the database: then it just spins and nothing happens, the CPU load spikes to 100%, and it slowly eats all memory and starts swapping heavily, which in turn increases the CPU load even more.
I have tried both the js-1.70-12 that comes with CentOS 6.4 and building and installing my own js-1.85-1. All Erlang packages are installed from EPEL:
erlang-crypto-R14B-04.2.el6.x86_64
erlang-syntax_tools-R14B-04.2.el6.x86_64
erlang-mnesia-R14B-04.2.el6.x86_64
erlang-ssl-R14B-04.2.el6.x86_64
erlang-cosProperty-R14B-04.2.el6.x86_64
erlang-asn1-R14B-04.2.el6.x86_64
erlang-cosEventDomain-R14B-04.2.el6.x86_64
erlang-eunit-R14B-04.2.el6.x86_64
erlang-erl_docgen-R14B-04.2.el6.x86_64
erlang-toolbar-R14B-04.2.el6.x86_64
erlang-debugger-R14B-04.2.el6.x86_64
erlang-tools-R14B-04.2.el6.x86_64
erlang-typer-R14B-04.2.el6.x86_64
erlang-megaco-R14B-04.2.el6.x86_64
erlang-oauth-1.1.1-1.el6.x86_64
erlang-stdlib-R14B-04.2.el6.x86_64
erlang-hipe-R14B-04.2.el6.x86_64
erlang-kernel-R14B-04.2.el6.x86_64
erlang-runtime_tools-R14B-04.2.el6.x86_64
erlang-snmp-R14B-04.2.el6.x86_64
erlang-public_key-R14B-04.2.el6.x86_64
erlang-inets-R14B-04.2.el6.x86_64
erlang-ibrowse-2.2.0-4.el6.x86_64
erlang-cosEvent-R14B-04.2.el6.x86_64
erlang-cosNotification-R14B-04.2.el6.x86_64
erlang-edoc-R14B-04.2.el6.x86_64
erlang-otp_mibs-R14B-04.2.el6.x86_64
erlang-cosFileTransfer-R14B-04.2.el6.x86_64
erlang-cosTransactions-R14B-04.2.el6.x86_64
erlang-inviso-R14B-04.2.el6.x86_64
erlang-jinterface-R14B-04.2.el6.x86_64
erlang-erl_interface-R14B-04.2.el6.x86_64
erlang-diameter-R14B-04.2.el6.x86_64
erlang-gs-R14B-04.2.el6.x86_64
erlang-tv-R14B-04.2.el6.x86_64
erlang-appmon-R14B-04.2.el6.x86_64
erlang-odbc-R14B-04.2.el6.x86_64
erlang-wx-R14B-04.2.el6.x86_64
erlang-et-R14B-04.2.el6.x86_64
erlang-observer-R14B-04.2.el6.x86_64
erlang-sasl-R14B-04.2.el6.x86_64
erlang-dialyzer-R14B-04.2.el6.x86_64
erlang-common_test-R14B-04.2.el6.x86_64
erlang-os_mon-R14B-04.2.el6.x86_64
erlang-examples-R14B-04.2.el6.x86_64
erlang-compiler-R14B-04.2.el6.x86_64
erlang-erts-R14B-04.2.el6.x86_64
erlang-xmerl-R14B-04.2.el6.x86_64
erlang-orber-R14B-04.2.el6.x86_64
erlang-cosTime-R14B-04.2.el6.x86_64
erlang-ssh-R14B-04.2.el6.x86_64
erlang-docbuilder-R14B-04.2.el6.x86_64
erlang-percept-R14B-04.2.el6.x86_64
erlang-parsetools-R14B-04.2.el6.x86_64
erlang-ic-R14B-04.2.el6.x86_64
erlang-pman-R14B-04.2.el6.x86_64
erlang-webtool-R14B-04.2.el6.x86_64
erlang-test_server-R14B-04.2.el6.x86_64
erlang-reltool-R14B-04.2.el6.x86_64
erlang-R14B-04.2.el6.x86_64
erlang-mochiweb-1.4.1-5.el6.x86_64
Everything configures, makes, and installs as expected. You can dump data into the database, create documents, and all that. But I cannot run any view, temporary or not.
The only error I see in the logs looks like this one, and there are a lot of these errors:
[Sun, 18 Aug 2013 23:10:38 GMT] [error] [<0.124.0>] {error_report,<0.30.0>,
{<0.124.0>,crash_report,
[[{initial_call,
{mochiweb_socket_server,init,['Argument__1']}},
{pid,<0.124.0>},
{registered_name,[]},
{error_info,
{exit,eaddrinuse,
[{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}},
{ancestors,
[couch_secondary_services,couch_server_sup,
<0.31.0>]},
{messages,[]},
{links,[<0.93.0>]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,987},
{stack_size,24},
{reductions,459}],
[]]}}
But I have no idea what they mean.
Do I need to compile and install Erlang as well? All of the above packages, or just erlang?
Your compilation and installation look fine. The error (note the eaddrinuse in the traceback) means that some other process is already listening on the same address and port that your CouchDB is trying to bind. Check for other listening processes with the netstat -anp command, or change CouchDB's listen port to a different one.
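If netstat isn't available on your minimal install, a quick way to confirm the conflict is to try binding the port yourself; a small Python sketch (5984 is CouchDB's default port; adjust it if you changed bind_address or port in local.ini):
import socket

# Try to bind CouchDB's default port; failing with "Address already in use"
# confirms that some other process already holds it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("127.0.0.1", 5984))
    print("Port 5984 is free; the eaddrinuse must come from another address/port")
except OSError as err:
    print("Port 5984 is already in use:", err)
finally:
    s.close()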
