Ray Racine
2008-05-19 02:17:54 UTC
I have been doing some load testing (lots of threads/tasks with lots of
socket I/O) on my Larceny WebServer.
Current test goes like this:
1. Client (browser, curl hammer scripts) makes an HTTP request to my
Larceny WebServer.
2. In handling the request, the Larceny server opens additional outbound
HTTP requests to Amazon, Google, Wikipedia, et al., snarfing content,
mashing it up, and returning the mashed content to the client.
3. I can hammer away. Stable, no memory leaks. Yee-ha!
4. Until I _interrupt_ a client mid-request, at which point Larceny
just exits. No message, no error, no muss, no fuss; it just exits back
to the command line.
I have traced it to here:
;; write(2)
;; int write( int fd, void *buf, int n )
(define unix-write (foreign-procedure "write" '(int boxed int) 'int))
When called, it never returns.
Of course, I'm doing _tons_ of unix-reads as well, and I'm playing many
more games on the read side: non-blocking I/O, EAGAIN, epolling. For
write I just do a simple write on the file descriptor without much
fanfare. Surprise, surprise: no problems on the reads, but there are on
the write.
Googling on socket writes, I find the following:
Signals
When writing onto a connection-oriented socket that has been shut down
(by the local or the remote end) SIGPIPE is sent to the writing process
and EPIPE is returned. The signal is not sent when the write call
specified the MSG_NOSIGNAL flag.
My theory is that when the client is interrupted, its socket is closed.
The Larceny server keeps writing to the socket fd, which raises the
SIGPIPE signal and causes the exit. I looked briefly at signals.c and
don't see SIGPIPE being handled or masked, but I didn't look very
hard :).
I have 2 options that I know of.
1) Use send() instead of write(). send() accepts flags that write()
does not; in particular the following flag:
MSG_NOSIGNAL
Requests not to send SIGPIPE on errors on stream oriented sockets when
the other end breaks the connection. The EPIPE error is still
returned.
2) Have the Larceny runtime handle/ignore the SIGPIPE signal (if it
does not already).