I am trying to insert 10000 records into Redis from Erlang using a gen_server. However, I get the following exception:
exception exit: {connection_error,{connection_error,eaddrnotavail}}
Note:
The port range on the Redis server is sufficient.
Redis is configured to accept 10000 connections at once.
I also tried using timer:sleep to rule out the possibility that the connections were getting full.
I am starting a connection, firing the query, and closing the connection immediately.
The call from the gen_server to Redis is synchronous.
I am using eredis as the client library.
I get this error after approximately 200 to 500 insertions into Redis.
Got it working :) Posting the answer so that it can help others...
The problem was the kernel's TIME_WAIT state.
Eredis uses gen_tcp, and as I was on a fast network and generating 10000 connections, many of them ended up in the TIME_WAIT state. Eredis sets reuseaddr to true, so although I closed the connection in my code, the OS still held the port in TIME_WAIT and Erlang was trying to connect from that port again.
Thanks for posting.
Posting some of my changes as well.
Change eredis.hrl from
-define(SOCKET_OPTS, [binary, {active, once}, {packet, raw}, {reuseaddr, true}]).
to
-define(SOCKET_OPTS, [binary, {active, once}, {packet, raw}, {reuseaddr, false}]).
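For reference, an alternative that sidesteps TIME_WAIT entirely is to reuse a single eredis connection for all inserts instead of opening and closing one per query. A minimal sketch (module name and key/value shapes are placeholders; it assumes the stock eredis:start_link/0, eredis:q/2 and eredis:stop/1 API):

-module(redis_writer).
-export([insert_all/1]).

%% Open one connection, reuse it for every insert, then close it.
insert_all(Records) ->
    {ok, Conn} = eredis:start_link(),                     %% defaults to 127.0.0.1:6379
    lists:foreach(
      fun({Key, Value}) ->
              {ok, <<"OK">>} = eredis:q(Conn, ["SET", Key, Value])
      end,
      Records),
    eredis:stop(Conn).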
To learn Erlang I am trying to implement a tiny web server based on gen_tcp. Unfortunately, my code seems to trigger some weird behaviour. To demonstrate the problem I have attached a minimised version of my implementation, which is sufficient to reproduce it. It just delivers a static 200 OK, no matter what the HTTP request was.
The problem arises when I run ab (Apache HTTP server benchmarking) against my web server (using the loopback interface). Without any concurrent requests (-c) everything runs just fine. However, if I use -c 8 or -c 16, the call to gen_tcp:accept/1 seems to fail on some sockets, as I see a number of request: closed lines in the shell.
What makes the whole story even weirder is that I see different behaviours on different operating systems:
OS X + Erlang/OTP 18: ab reports "Connection reset by peer" almost immediately after starting.
Debian + Erlang R15B01: All but two of the HTTP requests seem to run through. But then ab hangs for a few seconds and reports "The timeout specified has expired, Total of 4998 requests completed" when I run ab with -n 5000. Similarly, 14998 is reported when I run 15000 requests.
This one does not seem to be the problem. I am honestly quite lost and therefore appreciate any help! :) Thanks!
server(Port) ->
    Opt = [list, {active, false}, {reuseaddr, true}],
    case gen_tcp:listen(Port, Opt) of
        {ok, Listen} ->
            handler(Listen),
            gen_tcp:close(Listen),
            ok;
        {error, Error} ->
            io:format("init: ~w~n", [Error])
    end.

handler(Listen) ->
    case gen_tcp:accept(Listen) of
        {ok, Client} ->
            request(Client),
            handler(Listen);
        {error, Error} ->
            io:format("request: ~w~n", [Error])
    end.

request(Client) ->
    Recv = gen_tcp:recv(Client, 0),
    case Recv of
        {ok, _} ->
            Response = reply(),
            gen_tcp:send(Client, Response);
        {error, Error} ->
            io:format("request: ~w~n", [Error])
    end,
    gen_tcp:close(Client).

reply() ->
    "HTTP/1.0 200 OK\r\n" ++
    "Content-Length: 7\r\n\r\n"
    "static\n".
When you increase the number of concurrent requests sent with ab -c N, it will immediately open multiple TCP sockets to the server.
By default a socket opened with gen_tcp:listen/2 will support only five outstanding connection requests. Increase the number of outstanding connection requests with the {backlog, N} option to gen_tcp:listen/2.
I tested your code on OS X with ab and saw this resolve the problem with "Connection reset by peer".
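For example, a sketch against the listen options from the question (128 is just an illustrative value, not one taken from the answer):

Opt = [list, {active, false}, {reuseaddr, true}, {backlog, 128}],
{ok, Listen} = gen_tcp:listen(Port, Opt).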
I have a Cowboy websocket server. Many clients send messages over the websocket, and I need to do processing on each message. I could do that in websocket_handle; however, as it is realtime I would like to avoid that, and instead send the message to a global process where all the processing can be done.
As each Cowboy connection has its own process, how do I run a process to which every user can send messages and in which the processing can be done?
Just to clarify: each websocket connection will have its own Erlang process in Cowboy, so messages from different websocket clients will be processed in different processes.
If you need to move the processing out of the websocket, you can simply start a new handler/server process when your app starts (e.g. when you start Cowboy) that listens for processing commands and data. Sample processing code:
-module(my_processor).
-export([start/0]).

start() ->
    spawn(fun process_loop/0).

process_loop() ->
    receive
        {process_cmd, Data} ->
            process(Data)
    end,
    process_loop().

process(Data) ->
    %% Replace this stub with your actual processing.
    io:format("processing ~p~n", [Data]).
When you start it, also register the process with a global name. That way we can reference it from the websocket handlers later.
Pid = my_processor:start().
register(processor, Pid).
Now you can send the data from Cowboy's websocket_handle/3 function to the handling process:
websocket_handle(Data, Req, State) ->
...,
processor ! {process_cmd, Data},
...,
{ok,Req,State}.
Note that the my_processor process will handle the processing requests from all connections. If you want a separate process for each websocket connection, you could start my_processor in Cowboy's websocket_init/3 function, store the Pid of the my_processor process in the State parameter returned from websocket_init, and use that Pid instead of the processor global name.
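A minimal sketch of that per-connection variant (it assumes the same 3-arity websocket callbacks used above; keeping the Pid in a map-based State is just one possible choice):

websocket_init(_TransportName, Req, _Opts) ->
    %% Start a dedicated processor for this connection and keep its Pid in the state.
    Pid = my_processor:start(),
    {ok, Req, #{processor => Pid}}.

websocket_handle(Data, Req, #{processor := Pid} = State) ->
    %% Send to this connection's own processor instead of the globally registered one.
    Pid ! {process_cmd, Data},
    {ok, Req, State}.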
I'm using a RabbitMQ server with amq.
I am having a difficult problem: after leaving the server alone for about 10 minutes, the connection is lost.
What could be causing this?
If you look at the Erlang client documentation http://www.rabbitmq.com/erlang-client-user-guide.html you will see a section titled Connecting To A Broker.
It gives you a few different options that you can specify when setting up your connection to the RabbitMQ server. One of those options is the heartbeat; as you can see, the default is 0, so no heartbeat is specified.
I don't know the exact Erlang notation, but you will need to do something like:
{ok, Connection} = amqp_connection:start(#amqp_params_network{heartbeat = 5})
The heartbeat timeout is specified in seconds, so this would cause your consumer to send a heartbeat back to the server every 5 seconds.
Also take a look at this discussion: https://groups.google.com/forum/?fromgroups=#!topic/rabbitmq-discuss/u227xzvqOr8
The default connection timeout for the RabbitMQ connection factory is 600 seconds (at least in the Java client API), hence your 10 minutes. You can change this by passing your timeout of choice to the connection factory.
It is good practice to ensure your connection is released and recreated after a specific amount of time, to prevent eventual leaks and excessive resource usage. Your code should make sure it uses a valid connection that is not close to being timed out, and re-establishes a new connection for those that did time out. Overall, adopt a connection-pooling approach.
Java example:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost(this.serverName);
factory.setPort(this.serverPort);
factory.setUsername(this.userName);
factory.setPassword(this.userPassword);
factory.setConnectionTimeout( YOUR-TIMEOUT-IN-SECONDS );
Connection connection = factory.newConnection();
Hi, I am a newbie in Erlang but managed to create a simple TCP server which accepts clients in passive mode and displays messages.
I spawn a new process every time a new client connects to the server. Is there a way I could send a message to the client using the process which gets spawned when the client connects?
Here is the code.
-module(test).
-export([startserver/0, user/1]).    %% user/1 must be exported for spawn(?MODULE, user, [UserSocket])

startserver() ->
    {ok, ListenSocket} = gen_tcp:listen(1235, [binary, {active, false}]),
    connect(ListenSocket).

connect(ListenSocket) ->
    {ok, UserSocket} = gen_tcp:accept(ListenSocket),
    Pid = spawn(?MODULE, user, [UserSocket]),
    gen_tcp:controlling_process(UserSocket, Pid),
    connect(ListenSocket).

user(UserSocket) ->
    case gen_tcp:recv(UserSocket, 0) of
        {ok, _Binary} ->
            ok;                      %% Send basic message.
        {error, closed} ->
            ok                       %% Operation on close.
    end.
Can I have something like this:
Pid ! {"Some Message"}.
so that the message is sent to the socket associated with the process, with non-blocking I/O?
You could try this tutorial for writing a TCP server using OTP principles: http://learnyousomeerlang.com/buckets-of-sockets#sockserv-revisited
If you use a gen_server instead of your connect loop, you can store the Pids in the state. Then you can use gen_server:cast/2 to send a message to one of the Pids.
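A minimal sketch of that idea (module and function names are hypothetical; for brevity it stores the client sockets rather than the Pids and writes to them directly with gen_tcp:send/2):

-module(conn_registry).
-behaviour(gen_server).
-export([start_link/0, add_socket/1, send_to_all/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Called from the accept loop after gen_tcp:accept/1 succeeds.
add_socket(Socket) ->
    gen_server:cast(?MODULE, {add, Socket}).

%% Push a message to every connected client.
send_to_all(Message) ->
    gen_server:cast(?MODULE, {send_to_all, Message}).

init([]) ->
    {ok, []}.                        %% state: list of client sockets

handle_call(_Request, _From, Sockets) ->
    {reply, ok, Sockets}.

handle_cast({add, Socket}, Sockets) ->
    {noreply, [Socket | Sockets]};
handle_cast({send_to_all, Message}, Sockets) ->
    [gen_tcp:send(S, Message) || S <- Sockets],
    {noreply, Sockets}.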
The function for sending a message to the client from the controlling process is gen_tcp:send(Socket, Message). So, for example, if you wanted to send a one-off message on connection you could do this:
user(UserSocket) ->
    gen_tcp:send(UserSocket, "hello"),
    case gen_tcp:recv(UserSocket, 0) of
        {ok, _Binary} ->
            %% Send basic message.
            gen_tcp:send(UserSocket, "basic message");
        {error, closed} ->
            %% Operation on close.
            gen_tcp:send(UserSocket, "this socket is closing now")
    end.
I want to check my server connection to know whether it is available or not, so I can inform the user.
How do I send a packet or message to the server? (It's not a SQL server; it's a server that hosts some services.)
Thanks in advance.
With all the possibilities for firewalls blocking ICMP packets or specific ports, the only way to guarantee that a service is running is to do something that uses that service.
For instance, if it were a JDBC server, you could execute a non-destructive SQL query, such as select * from sysibm.sysdummy1 for DB2. If it's an HTTP server, you could create a GET packet for index.htm.
If you actually have control over the service, it's a simple matter to create a special sub-service to handle these requests (such as you send through a CHECK packet and get back an OKAY response).
That way, you avoid all the possible firewall issues and the test is a true end-to-end one. PINGs and traceroutes will be able to tell if you can get to the machine (firewalls permitting) but they won't tell you if your service is functioning.
Take this from someone who's had to battle the network gods in a corporate environment where machines are locked up as tight as the proverbial fishes ...
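As an illustration, a minimal Erlang sketch of such an end-to-end check (the CHECK/OKAY exchange, port, and timeouts are hypothetical placeholders for whatever sub-service you add):

%% Returns up if the service answers the CHECK request, otherwise {down, Reason}.
check_service(Host, Port) ->
    case gen_tcp:connect(Host, Port, [binary, {active, false}], 2000) of
        {ok, Socket} ->
            ok = gen_tcp:send(Socket, <<"CHECK\n">>),
            Result = case gen_tcp:recv(Socket, 0, 2000) of
                         {ok, <<"OKAY", _/binary>>} -> up;
                         Other -> {down, Other}
                     end,
            gen_tcp:close(Socket),
            Result;
        {error, Reason} ->
            {down, Reason}
    end.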
If you can open a port but don't want to use ping (I don't know why, but hey), you could use something like this:
import socket

host = ''
port = 55555
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(1)
while 1:
    try:
        clientsock, clientaddr = s.accept()
        clientsock.sendall('alive')
        clientsock.close()
    except:
        pass
which is nothing more than a simple Python socket server listening on port 55555 and returning alive.