


RE: unix domain socket with shared memory ?


>
> On Wed, Feb 06, 2002 at 03:17:27PM +0100, Ralf Habacker wrote:
> > Some guys may say, unix domain sockets are not implemented through
> > tcp connection, but I'm relative sure, that this is true:
>
> Huh?  Why are you "relative" sure?  Didn't you take a look into
> the Cygwin sources which would be the right place to learn how
> something's implemented?  net.cc is a good starting point.

Of course I have studied net.cc and fhandler_socket.cc, but while working with cygwin I ran into some
irritations, so I started writing "relative" sure. Now that you have confirmed my thoughts, I can write "I know"
instead. :-)

> I'm a bit surprised by your results, though.  Since AF_LOCAL
> and AF_INET are implemented the same way, they should be
> nearly equally fast^Wslow.  AF_LOCAL just has a bit of an
> overhead by some additional tests but the naked read() and
> write() calls should be nearly equivalent.
>
Part of the difference is caused by the choice of IP address in the tcp benchmark (localhost or a real IP
address). The tcp benchmark uses localhost, as you can see below, but even so there remains a gap between the
tcp (bw_tcp) and unix domain socket (bw_unix) benchmarks.

$ ./bw_tcp -s
starting tcp server

$ ./bw_tcp localhost
Socket bandwidth using localhost: 41.45 MB/sec

$ ./bw_tcp localhost
Socket bandwidth using localhost: 41.07 MB/sec

$ ./bw_tcp bramsche
Socket bandwidth using bramsche: 34.60 MB/sec

$ ./bw_tcp bramsche
Socket bandwidth using bramsche: 34.65 MB/sec

$ ./bw_unix
AF_UNIX sock stream bandwidth: 17.43 MB/sec

$ ./bw_unix
AF_UNIX sock stream bandwidth: 16.72 MB/sec
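
For reference, bw_unix essentially measures how fast one process can stream data to another over an
AF_UNIX socketpair. A minimal sketch of that kind of loop (not the actual benchmark source; buffer and
total transfer sizes are my own choices):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>

#define BUFSIZE 65536
#define TOTAL   (64L * 1024 * 1024)   /* move 64 MB in total */

int main (void)
{
  int sv[2];
  static char buf[BUFSIZE];
  struct timeval t0, t1;
  long moved = 0;

  if (socketpair (AF_UNIX, SOCK_STREAM, 0, sv) < 0)
    {
      perror ("socketpair");
      return 1;
    }

  if (fork () == 0)
    {
      /* Child: stream data as fast as possible until the
         parent exits and the pair is torn down.  */
      close (sv[0]);
      while (write (sv[1], buf, BUFSIZE) > 0)
        ;
      _exit (0);
    }
  close (sv[1]);

  gettimeofday (&t0, NULL);
  while (moved < TOTAL)
    {
      ssize_t n = read (sv[0], buf, BUFSIZE);  /* may return a short count */
      if (n <= 0)
        break;
      moved += n;
    }
  gettimeofday (&t1, NULL);

  double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
  printf ("AF_UNIX socketpair bandwidth: %.2f MB/sec\n",
          moved / secs / (1024.0 * 1024.0));
  return 0;
}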

One question: do unix domain sockets use the localhost address? net.cc:cygwin_socketpair() seems to bind first
to an IP address of zero (INADDR_ANY) and only later use the loopback address. Could this have an effect?
I have tried replacing the uses of INADDR_ANY with htonl (INADDR_LOOPBACK) but noticed no change.

cygwin_socketpair()
<snip>
  sock_in.sin_addr.s_addr = INADDR_ANY;
  if (bind (newsock, (struct sockaddr *) &sock_in, sizeof (sock_in)) < 0)
<snip>
  /* Force IP address to loopback */
  sock_in.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
  if (type == SOCK_DGRAM)
    sock_out.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
  /* Do a connect */
  if (connect (outsock, (struct sockaddr *) &sock_in,
<snip>
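
The change I tried amounts to this (a sketch, not the exact diff):

cygwin_socketpair()
<snip>
  /* Bind to loopback right away instead of INADDR_ANY */
  sock_in.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
  if (bind (newsock, (struct sockaddr *) &sock_in, sizeof (sock_in)) < 0)
<snip>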

The benchmark loops seem to be identical (except for the write from the forked server process in bw_unix).

... from a previously created strace of bw_unix
  127 1979351 [main] bw_unix 1788 fhandler_base::ready_for_read: read_ready 1, avail 1
 4749 1863604 [main] bw_unix 1876 _write: 65536 = write (4, 0xA012048, 65536)
  184 1863788 [main] bw_unix 1876 _write: write (4, 0xA012048, 65536)
 1966 1981317 [main] bw_unix 1788 _read: 32708 = read (3, 0xA012048, 65536), errno 0
  317 1981634 [main] bw_unix 1788 _read: read (3, 0xA012048, 65536) blocking, sigcatchers 0
  133 1981767 [main] bw_unix 1788 peek_socket: considering handle 0x210
  124 1981891 [main] bw_unix 1788 peek_socket: adding read fd_set /dev/streamsocket, fd 3
  176 1982067 [main] bw_unix 1788 peek_socket: WINSOCK_SELECT returned 1
  142 1982209 [main] bw_unix 1788 fhandler_base::ready_for_read: read_ready 1, avail 1
 1042 1983251 [main] bw_unix 1788 _read: 32708 = read (3, 0xA012048, 65536), errno 0
  307 1983558 [main] bw_unix 1788 _read: read (3, 0xA012048, 65536) blocking, sigcatchers 0
  132 1983690 [main] bw_unix 1788 peek_socket: considering handle 0x210
  121 1983811 [main] bw_unix 1788 peek_socket: adding read fd_set /dev/streamsocket, fd 3
  171 1983982 [main] bw_unix 1788 peek_socket: WINSOCK_SELECT returned 1
  127 1984109 [main] bw_unix 1788 fhandler_base::ready_for_read: read_ready 1, avail 1

... from a previously created strace of bw_tcp
  117 7226940 [main] bw_tcp 1792 fhandler_base::ready_for_read: read_ready 1, avail 1
 2573 7229513 [main] bw_tcp 1792 _read: 65416 = read (3, 0xA012048, 65536), errno 0
  315 7229828 [main] bw_tcp 1792 _read: read (3, 0xA012048, 65536) blocking, sigcatchers 0
  160 7229988 [main] bw_tcp 1792 peek_socket: considering handle 0x1F8
  113 7230101 [main] bw_tcp 1792 peek_socket: adding read fd_set /dev/tcp, fd 3
  165 7230266 [main] bw_tcp 1792 peek_socket: WINSOCK_SELECT returned 1
  117 7230383 [main] bw_tcp 1792 fhandler_base::ready_for_read: read_ready 1, avail 1
 2601 7232984 [main] bw_tcp 1792 _read: 65416 = read (3, 0xA012048, 65536), errno 0
  427 7233411 [main] bw_tcp 1792 _read: read (3, 0xA012048, 65536) blocking, sigcatchers 0
  128 7233539 [main] bw_tcp 1792 peek_socket: considering handle 0x1F8
  110 7233649 [main] bw_tcp 1792 peek_socket: adding read fd_set /dev/tcp, fd 3
  164 7233813 [main] bw_tcp 1792 peek_socket: WINSOCK_SELECT returned 1
  116 7233929 [main] bw_tcp 1792 fhandler_base::ready_for_read: read_ready 1, avail 1

If you look a little deeper you can see that the read() in the unix domain socket benchmark returns only 32708 bytes

 1966 1981317 [main] bw_unix 1788 _read: 32708 = read (3, 0xA012048, 65536), errno 0

while the read() in the tcp benchmark returns 65416

 2573 7229513 [main] bw_tcp 1792 _read: 65416 = read (3, 0xA012048, 65536), errno 0

and that may be a reason for the performance difference.

The main difference between the two benchmarks is the device used: /dev/streamsocket versus /dev/tcp.
But don't ask me why that matters; at this point I'm lost.
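
One thing I would check next is whether winsock simply gives the emulated AF_UNIX socket smaller buffers;
a smaller SO_RCVBUF would fit the ~32 KB reads nicely. A little test program along these lines (a sketch;
I haven't verified that this is the cause):

#include <stdio.h>
#include <sys/socket.h>

/* Print the buffer sizes winsock assigned to a socket descriptor. */
static void show_bufsizes (const char *tag, int fd)
{
  int val;
  socklen_t len;

  len = sizeof (val);
  if (getsockopt (fd, SOL_SOCKET, SO_RCVBUF, &val, &len) == 0)
    printf ("%s fd %d: SO_RCVBUF = %d\n", tag, fd, val);
  len = sizeof (val);
  if (getsockopt (fd, SOL_SOCKET, SO_SNDBUF, &val, &len) == 0)
    printf ("%s fd %d: SO_SNDBUF = %d\n", tag, fd, val);
}

int main (void)
{
  int sv[2];

  if (socketpair (AF_UNIX, SOCK_STREAM, 0, sv) < 0)
    {
      perror ("socketpair");
      return 1;
    }
  show_bufsizes ("AF_UNIX", sv[0]);
  show_bufsizes ("AF_UNIX", sv[1]);
  return 0;
}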

Do you have any idea?

BTW: if you are wondering about the 65416 in the second line above instead of the expected 65536: the value
becomes stable after a few (about 10) reads that still return the full buffer size of 65536 in the main
benchmark loop. Could this be a bug in the winsock code, or is it caused by timing differences, because not
all data has been sent early enough? (120 bytes are missing.)
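
In any case, a short read by itself is legal on a stream socket; code that needs a full buffer has to loop,
along these lines (a generic sketch, not the benchmark's code):

#include <errno.h>
#include <unistd.h>

/* Read exactly 'count' bytes unless EOF or a real error occurs.
   On stream sockets a single read () may legally return less
   than requested, so callers who need full buffers must loop. */
ssize_t readn (int fd, void *buf, size_t count)
{
  char *p = buf;
  size_t left = count;

  while (left > 0)
    {
      ssize_t n = read (fd, p, left);
      if (n < 0)
        {
          if (errno == EINTR)
            continue;          /* interrupted, just retry */
          return -1;           /* real error */
        }
      if (n == 0)
        break;                 /* EOF */
      p += n;
      left -= n;
    }
  return count - left;
}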

BTW2:
> I'm a bit surprised by your results, though.
I'm additionally surprised that the unix domain socket performance under cygwin is only 7% of the Linux
performance on the same hardware, while the tcp performance seems acceptable (64% of the Linux
performance). So my main target is to speed this up. :-)

Ralf





