According to the article "poll vs select vs event-based":
select() only uses (at maximum) three bits of data per file descriptor, while poll() typically uses 64 bits per file descriptor. In each syscall invoke poll() thus needs to copy a lot more over to kernel space. A small win for select().
Here is the implementation of fd_set (found in the advisory "multiple applications fd_set structure bitmap array index overflow"):
#ifndef FD_SETSIZE
#define FD_SETSIZE 1024
#endif
#define NBBY 8 /* number of bits in a byte */
typedef long fd_mask;
#define NFDBITS (sizeof (fd_mask) * NBBY) /* bits per mask */
#define howmany(x,y) (((x)+((y)-1))/(y))
typedef struct _types_fd_set {
    fd_mask fds_bits[howmany(FD_SETSIZE, NFDBITS)];
} _types_fd_set;
#define fd_set _types_fd_set
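To see this structure in use, here is a minimal sketch of a blocking select() call on one socket (sockfd is assumed to be an already-open descriptor; error handling is trimmed):

#include <sys/select.h>
#include <stdio.h>

/* Minimal sketch: wait until sockfd becomes readable.
 * Assumes sockfd is an already-open socket descriptor. */
int wait_readable(int sockfd)
{
    fd_set readfds;

    FD_ZERO(&readfds);          /* clear every bit in the bitmap         */
    FD_SET(sockfd, &readfds);   /* set the bit whose index equals sockfd */

    /* select() inspects descriptors 0 .. nfds-1, so pass sockfd + 1. */
    int ready = select(sockfd + 1, &readfds, NULL, NULL, NULL);
    if (ready < 0) {
        perror("select");
        return -1;
    }
    /* FD_ISSET tests the same bit to see whether it is still set on return. */
    return FD_ISSET(sockfd, &readfds);
}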
So, in the end, fd_set is just an array of longs. It is also written:
A call to FD_SET sets a bit to 1 using socket number as an index:
which means that if I have a socket with fd number 5, the element at index 5 will be selected and its first bit flipped from 0 to 1. Since select() uses 3 bits, I guess the other two bits are for sending and receiving. Is this correct? Why does select() use a long when it only needs 3 bits?
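For reference, the classic BSD-style FD_* macros (quoted from typical headers; the exact definitions vary between libcs) manipulate fds_bits like this:

/* Typical BSD-style macros; actual definitions differ slightly per libc. */
#define FD_SET(n, p)   ((p)->fds_bits[(n)/NFDBITS] |=  (1UL << ((n) % NFDBITS)))
#define FD_CLR(n, p)   ((p)->fds_bits[(n)/NFDBITS] &= ~(1UL << ((n) % NFDBITS)))
#define FD_ISSET(n, p) ((p)->fds_bits[(n)/NFDBITS] &   (1UL << ((n) % NFDBITS)))
#define FD_ZERO(p)     memset((p), 0, sizeof(*(p)))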
Also, as stated above, poll() uses 64 bits per file descriptor for its check. Why does poll() need to check every bit of the pollfd struct? Here is the pollfd struct:
struct pollfd {
    int fd;        // the socket descriptor
    short events;  // bitmap of events we're interested in
    short revents; // when poll() returns, bitmap of events that occurred
};
The struct totals 64 bits: a 32-bit int and two 16-bit shorts. I know the usual way of checking a bit flag is to use the AND (&) operator to mask out the irrelevant bits. Does that apply to this case?
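For context, here is a minimal sketch of how the revents bitmap is usually tested with & after a poll() call (again assuming sockfd is an already-open socket; error handling trimmed):

#include <poll.h>
#include <stdio.h>

/* Minimal sketch: wait for readability on sockfd using poll().
 * Assumes sockfd is an already-open socket descriptor. */
int wait_readable_poll(int sockfd)
{
    struct pollfd pfd;
    pfd.fd = sockfd;
    pfd.events = POLLIN;           /* bitmap of events we care about */
    pfd.revents = 0;

    int ready = poll(&pfd, 1, -1); /* timeout of -1 blocks indefinitely */
    if (ready < 0) {
        perror("poll");
        return -1;
    }
    /* AND the returned bitmap with the flag we want to test. */
    return (pfd.revents & POLLIN) != 0;
}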