I am getting an error on bind() with errno 34 (Result too large). Can anyone help?
void Connect(string address, unsigned short port) {
    memset(&server2, 0, sizeof(server2));
    server2.sin_family = AF_INET;
    server2.sin_addr.s_addr = inet_addr(address.c_str());
    server2.sin_port = htons(port);

    desc2 = ::socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (desc2 == -1) {
        cout << "Error in Socket()" << endl;
    }
    if ((::bind(desc2, (sockaddr*)&server2, sizeof(server2))) == -1) {
        cout << "Error in Bind() " << errno << endl;
    }
    if ((::connect(desc2, (sockaddr*)&server2, sizeof(server2))) > 0) {
        cout << "Error in Connect()" << endl;
    }
    cout << "YOU ARE CONNECTED TO " << address << " ON PORT " << port << endl;
}
PS: I got this error a year ago too. The problem was simple: I had written something wrong when initializing the socket address to connect to. But now I have no clue where I made the mistake.
I don't know what exactly caused your problem to produce that error.
You said you are writing a proxy server, so you are listening for incoming connections.
Try this:
server2.sin_addr.s_addr = 0;  // INADDR_ANY: accept connections on any local address
server2.sin_family = AF_INET;
server2.sin_port = htons(port);

desc2 = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

// Assigns the local name (port) to the socket.
bind(desc2, (sockaddr*)&server2, sizeof(server2));

// Puts the socket in listening mode, allowing 10 connection requests to queue.
listen(desc2, 10);

// Accepts the first connection request.
SOCKET accepted_socket = accept(desc2, NULL, NULL);
std::cout << "connection accepted!\n";
You will probably want to learn how to program asynchronously.
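As a first step in that direction, one common approach is to multiplex with select() so that accept() never blocks the whole program. A minimal POSIX-style sketch, assuming desc2 is the listening socket from the snippet above:

fd_set readfds;
for (;;) {
    FD_ZERO(&readfds);
    FD_SET(desc2, &readfds);
    struct timeval tv = {1, 0};                    // wake up at least once a second
    int ready = select(desc2 + 1, &readfds, NULL, NULL, &tv);
    if (ready > 0 && FD_ISSET(desc2, &readfds)) {
        SOCKET client = accept(desc2, NULL, NULL); // won't block: a request is queued
        // ... hand `client` off to a connection handler ...
    }
    // ... do other periodic work here while no connections are pending ...
}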
I don't know why you're getting 'Result too large' -- without seeing how server2 was declared and defined, it's impossible to know for sure.
But I do know that calling bind() and then connect() on the same socket with the same address will fail -- bind() is assigning the local address to the socket, and connect() connects to the remote address. Giving the same address for both ends of the socket can only end badly.
Almost no protocols require bind() before connect(). (The exceptions would involve the "ports lower than 1024 can only be opened by root, so we can trust this connection" style of authentication, which hasn't been used in ages. Think rlogin.)
bind() makes most sense immediately before a listen() call. Clients will attempt to contact a server on a 'well-known port', and bind() is the mechanism you use to assign that name.
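To make that concrete, here is a minimal sketch of the two patterns (error handling omitted; local_addr and remote_addr are placeholder sockaddr_in structures, not from the question):

// Client: no bind() -- the OS picks an ephemeral local port.
int c = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
connect(c, (sockaddr*)&remote_addr, sizeof(remote_addr));

// Server: bind() the well-known port, then listen() and accept().
int s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
bind(s, (sockaddr*)&local_addr, sizeof(local_addr));
listen(s, SOMAXCONN);
int conn = accept(s, NULL, NULL);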
I am trying to write a function that will break out of an epoll_wait().
I have:
void SocketSystem::epollBreakWait(int epoll)
{
    if (epoll == ERROR_CODE)
        return;

    int selfpipe[PIPE_PAIR];
    if (pipe(selfpipe) < 0)
        std::cout << "Error on self pipe." << std::endl;

    if (::epoll_ctl(epoll, EPOLL_CTL_ADD, selfpipe[0], NULL) == ERROR_CODE)
        std::cout << "Error breaking epoll." << std::endl;

    int temp = 0;
    ::write(selfpipe[1], &temp, sizeof(temp));
}
But when I run it, the call returns -1 and errno reports "Bad address".
Any thoughts?
How do you call epoll_wait itself? And I suppose you have to provide a non-null struct epoll_event to epoll_ctl.
Can you spell that out?
I tried passing a non-null epoll_event to epoll_ctl as you suggested, and that fixed the problem.
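For reference, a minimal sketch of that fix (using the same selfpipe and epoll variables as above): for EPOLL_CTL_ADD the kernel copies the event structure from user space, so passing NULL yields EFAULT ("Bad address").

struct epoll_event ev;
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN;         // wake up when the pipe becomes readable
ev.data.fd = selfpipe[0];    // lets the epoll_wait() loop recognize the wakeup fd
if (::epoll_ctl(epoll, EPOLL_CTL_ADD, selfpipe[0], &ev) == ERROR_CODE)
    std::cout << "Error breaking epoll." << std::endl;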
Trying to implement a jailbreak check for non-default open ports (e.g., 22/TCP SSH):
/**
 Checks for non-standard ports
 */
inline int isPortOpen(short port) __attribute__((always_inline));

- (BOOL)isPortOpen:(short)port
{
    struct sockaddr_in addr;
    int sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr)) {
        int result = connect(sock, (struct sockaddr *)&addr, sizeof(addr));
        // error :(
        int w00t = errno;
        NSLog(@"Error: %i", w00t);
        if (result == 0) {
            NSLog(@"FAILED JB CHECK -- non-standard port open!");
            return YES;
        }
        close(sock);
    }
    NSLog(@"PASSED JB CHECK -- non-standard ports closed.");
    return NO;
}
But connect() fails (result is -1) and the errno is 1 -- EPERM
https://developer.apple.com/library/ios/documentation/System/Conceptual/ManPages_iPhoneOS/man2/intro.2.html
1 EPERM Operation not permitted. An attempt was made to perform an operation limited to processes with appropriate privileges or to the owner of a file or other resources.
Looking at the manual for connect(), EPERM is not one of the possible return errors.
https://developer.apple.com/library/ios/documentation/System/Conceptual/ManPages_iPhoneOS/man2/connect.2.html
So, the only reason I can see for getting EPERM is the sandbox, but:
- I'm running on a jailbroken iPod (iOS 8.4)
- The container seatbelt profile allows outbound network connections by default
I'd like to understand what's happening. Note that I've tried connecting to the wireless interface as well, instead of loopback.
I wrote a server program and a client program in Qt.
The client program creates a TCP connection to localhost port 60600, and the server program listens on that port.
After the client creates a new connection, the server accepts the connection and sends packets to the client.
Normally, I create the TCPClient and TCPServer class objects in the main functions of the client and server programs, and everything works properly.
But I need to create the objects inside a thread, and create and start the thread in the main function.
When I move the code that creates the client or server objects into the run() function of a QThread, it runs, but the server and clients do not work properly.
The connect() call in the constructors of the server and client classes returns true, but the connection does not work and the slot function is never called.
Can anyone help me?
The server code is here:
main.cpp:
int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    ......
    PgThread* MyThread = new PgThread( d );
    MyThread->start();
    ......
    return app.exec();
}
PgThread.cpp:
PgThread::PgThread( int packetsizein )
{
    packetsize = packetsizein;
    //connect( this, SIGNAL(finished()), this, SLOT(finishedReq() ));
}

PgThread::~PgThread()
{
    delete server;
}

void PgThread::run()
{
    server = new tcpserver(packetsize);
}
tcpserver.cpp:
tcpserver::tcpserver(int packsize)
{
    packetSize = packsize;
    server = new QTcpServer(this);
    server->listen( QHostAddress::Any, 60600 );
    connect( server, SIGNAL(newConnection()), this, SLOT(newConnectionRequest()) );
}

// Does not get called when the tcpserver object is created in PgThread::run().
void tcpserver::newConnectionRequest()
{
    i = i + 1;
    QTcpSocket *clientConnection = server->nextPendingConnection();
    connect(clientConnection, SIGNAL(disconnected()), clientConnection, SLOT(deleteLater()));

    QByteArray block;
    char c = (char)i;
    block.append(c);
    for (int k = 1; k < packetSize; k++)
        block.append('A');

    clientConnection->write(block);
    clientConnection->flush();

    qint64 current = QDateTime::currentMSecsSinceEpoch();
    if (forDebug)
        cout << "start time: " << current << endl;
    else
        cout << i << "\n" << current << endl;

    clientConnection->disconnectFromHost();
}
I found the problem:
I must call exec() at the end of the run() function!
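For reference, a minimal sketch of the corrected method (assuming the classes above): exec() starts the thread's own event loop, and without an event loop in the thread that owns the QTcpServer, queued signals such as newConnection() are never delivered.

void PgThread::run()
{
    server = new tcpserver(packetsize); // the QTcpServer now lives in this thread
    exec();                             // run this thread's event loop so signals
                                        // like newConnection() get delivered
}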
I have a network application meant for a private LAN. I am doing my testing using loopback. When I test on the LAN, the socket creation order does not matter. If I test using loopback (127.0.0.1), then there is a socket creation ordering issue. Why is it different on loopback?
Here are more details...
There is one server and many client instances. The server is broadcasting data over UDP. The clients receive the data and process it.
I need the network layer not to care about the order in which the server or clients start. It is hard to administer process creation in my case. The application instances should be able to start on the network in any order and just see the data broadcast on the UDP port when it is sent.
But there is something in the way I am setting up my UDP sockets that forces an ordering. I must start the clients, THEN start the server. If I start the clients AFTER the server starts broadcasting, the client sockets do not receive the data. If I force a running server instance to tear down and rebuild its UDP socket, suddenly all the clients start receiving data.
There must be something wrong with how I am creating the socket. The client and server code use a shared function library to create the UDP socket, so the server is sending on m_fdOut and each instance of the client is receiving on m_fdIn.
What am I doing wrong here?
SOCKET m_fdIn;
SOCKET m_fdOut;

if ((m_fdIn = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0)
{
    WARNF("socket failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}
if ((m_fdOut = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0)
{
    WARNF("socket failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}

int sockopt = 1;
if (setsockopt(m_fdOut, SOL_SOCKET, SO_BROADCAST, (char *)&sockopt, sizeof(sockopt)) < 0)
{
    WARNF("setsockopt failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}

sockopt = readPreference<int>("SOL_RCVBUF", 512*1024);
if (setsockopt(m_fdIn, SOL_SOCKET, SO_RCVBUF, (char *)&sockopt, sizeof(sockopt)) < 0)
{
    WARNF("setsockopt failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}

sockopt = 1;
if (setsockopt(m_fdIn, SOL_SOCKET, SO_REUSEADDR, (char *)&sockopt, sizeof(sockopt)) < 0)
{
    WARNF("setsockopt failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}

sockopt = readPreference<int>("IP_MULTICAST_TTL", 32);
if (setsockopt(m_fdOut, IPPROTO_IP, IP_MULTICAST_TTL, (char *)&sockopt, sizeof(sockopt)) < 0)
{
    WARNF("setsockopt failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}

String destAddr = "255.255.255.255";
int portNumber = 1234;

int n1, n2, n3, n4;
if (sscanf(destAddr, "%d.%d.%d.%d", &n1, &n2, &n3, &n4) != 4)
{
    n1 = n2 = n3 = n4 = 255;
}
u_long bcastAddr = (n1 << 24) | (n2 << 16) | (n3 << 8) | n4;

struct sockaddr_in outAddr;
outAddr.sin_family = AF_INET;
outAddr.sin_port = htons(portNumber);
outAddr.sin_addr.s_addr = htonl(bcastAddr);

struct sockaddr_in in_name;
in_name.sin_family = AF_INET;
in_name.sin_addr.s_addr = INADDR_ANY;
in_name.sin_port = htons(portNumber);
if (bind(m_fdIn, (struct sockaddr *)&in_name, sizeof(in_name)) < 0)
{
    WARNF("bind failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}
So I did change the implementation from UDP broadcast to multicast. That seems to work on loopback, so multiple processes can share the port.
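For anyone hitting the same wall, here is a minimal sketch of that change on the receiving side (the group address 239.255.0.1 is just an example from the administratively scoped range, not from my original code):

// After binding m_fdIn to the shared port as before, join the multicast group.
struct ip_mreq mreq;
mreq.imr_multiaddr.s_addr = inet_addr("239.255.0.1"); // example group address
mreq.imr_interface.s_addr = INADDR_ANY;               // let the stack pick the interface
if (setsockopt(m_fdIn, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char *)&mreq, sizeof(mreq)) < 0)
{
    WARNF("setsockopt failed, winsock error %d\n", WSAGetLastError());
    exit(1);
}
// The sender simply uses the group address as the destination;
// the IP_MULTICAST_TTL option set above still applies.
outAddr.sin_addr.s_addr = inet_addr("239.255.0.1");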
I'm trying to implement a simple traceroute for iOS. Everything seems to work fine, except that when I run my application on the simulator or on the device, it finds only the first few (6-7) routers on the way, while the CLI traceroute finds all 14 routers.
const char *c = "www.gmail.com";
struct hostent *host_entry = gethostbyname(c);
char *ip_addr;
ip_addr = inet_ntoa(*((struct in_addr *)host_entry->h_addr_list[0]));

struct sockaddr_in destination, fromAddr;
int recv_sock;
int send_sock;

// Creating sockets
if ((recv_sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP)) < 0) // ICMP datagram socket for the replies
{
    NSLog(@"Could not create recv_sock.\n");
}
if ((send_sock = socket(AF_INET, SOCK_DGRAM, 0)) < 0) // UDP socket for the probes
{
    NSLog(@"Could not create send_sock.\n");
}

memset(&destination, 0, sizeof(destination));
destination.sin_family = AF_INET;
destination.sin_addr.s_addr = inet_addr(ip_addr);
destination.sin_port = htons(80);

struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 10000;
setsockopt(recv_sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));

char *cmsg = "GET / HTTP/1.1\r\n\r\n";
int max_ttl = 20;
int num_attempts = 5;
socklen_t n = sizeof(fromAddr);
char buf[100];

for (int ttl = 1; ttl <= max_ttl; ttl++) {
    memset(&fromAddr, 0, sizeof(fromAddr));
    if (setsockopt(send_sock, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl)) < 0)
        NSLog(@"error in setsockopt\n");
    for (int try = 0; try < num_attempts; try++) {
        if (sendto(send_sock, cmsg, sizeof(cmsg), 0,
                   (struct sockaddr *)&destination,
                   sizeof(destination)) != sizeof(cmsg))
            NSLog(@"error in sendto...\n");
        int res = 0;
        if ((res = recvfrom(recv_sock, buf, 100, 0, (struct sockaddr *)&fromAddr,
                            &n)) < 0) {
            NSLog(@"an error: %s; recvfrom returned %d\n", strerror(errno), res);
        } else {
            char display[16] = {0};
            inet_ntop(AF_INET, &fromAddr.sin_addr.s_addr, display, sizeof(display));
            NSLog(@"Received packet from %s for TTL=%d\n", display, ttl);
            break;
        }
    }
}
I have tried to bind the send socket but got the same results, and I can't use SOCK_RAW on iOS. I tried to run it on my Mac and got the same results. The error I get from recvfrom() is "Resource temporarily unavailable". Why is that? How can I fix it?
The EAGAIN error (which produces the "Resource temporarily unavailable" string) could be raised by the timeout of the receiving socket.
Since you set just 10000 microseconds as the read timeout (that's really short IMHO) with this line...
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 10000;
setsockopt(recv_sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));
...it's possible that the longer the way (I mean the number of routers you have to pass through), the more likely you are to run into this situation.
Try raising the timeout value and let us know if it gets better.
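For example (the two-second value here is just an arbitrary suggestion, not taken from your code):

struct timeval tv;
tv.tv_sec = 2;       // give distant hops up to two seconds to answer
tv.tv_usec = 0;
setsockopt(recv_sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));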
EDIT
I tried the source code under Linux and I noticed two kinds of problems:
1. As mentioned above: timeouts
2. Problems with port 80
I just raised the timeout and used a port different from 80 (in my case I sent the UDP messages to port 40000), and I got back all the hops, just like the traceroute command.
I'm not sure why this behaviour occurs. Maybe some kind of "possible malicious packet" alarm gets triggered by a router that then discards the packet.
FURTHER EDIT
Look at this link: man traceroute
In the List Of Available Methods section you can find many ways to achieve what you need. Your method is similar to the default one, which states:
Probe packets are udp datagrams with so-called "unlikely" destination ports. The "unlikely" port of the first probe is 33434, then for each next probe it is incremented by one. Since the ports are expected to be unused, the destination host normally returns "icmp unreach port" as a final response. (Nobody knows what happens when some application listens for such ports, though).
So, if you need to fully emulate the behaviour of the common Linux traceroute, you have to increase the destination port by 1 every time the TTL increases (or every time you can't get a response, IMHO), as sketched below.
MAYBE your command sometimes doesn't work on certain ports because a router is actually listening on them (as suggested by the Linux manual and underlined in bold by me).
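A minimal sketch of that port scheme, adapted to the loop from the question (variable names taken from the code above; 33434 is the conventional base port from the manual, and the receive side stays as before):

int port = 33434;                          // conventional "unlikely" base port
for (int ttl = 1; ttl <= max_ttl; ttl++) {
    if (setsockopt(send_sock, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl)) < 0)
        perror("setsockopt");
    destination.sin_port = htons(port++);  // a fresh destination port per probe
    sendto(send_sock, cmsg, strlen(cmsg), 0,
           (struct sockaddr *)&destination, sizeof(destination));
    // ... recvfrom() on recv_sock with the timeout as before ...
}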