I don't know anything about Tomcat, so I will go by your description and the answer that you received to your
original question[^].
I don't know what all these threads are for, but I can guess. More than one thread per listener port should not be necessary. One thread can `listen` for new connections, `accept` them up to some limit, `poll` all of the resulting sockets for new packets, `recv` the packets from those sockets, and, with the help of an application callback, assemble them into properly framed messages that are finally queued as work items for application worker threads.
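The single-thread loop above can be sketched in Python, whose `selectors` module wraps the same `poll`-style readiness API. Newline-delimited framing and the `stop_after` limit are assumptions for this sketch, not anything from the original question; a real server would use the application's own framing callback and run until shut down.

```python
import queue
import selectors
import socket

def make_listener():
    # Bind to an ephemeral localhost port and start listening.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen()
    listener.setblocking(False)
    return listener

def serve_once(listener, work_queue, stop_after=1):
    """One thread does it all: accept, poll, recv, frame, enqueue.

    Frames are assumed newline-delimited. Complete messages are queued
    as work items for application worker threads; this loop never runs
    application code itself.
    """
    sel = selectors.DefaultSelector()
    sel.register(listener, selectors.EVENT_READ, data=None)
    buffers = {}            # per-connection partial-frame buffers
    delivered = 0
    while delivered < stop_after:
        for key, _events in sel.select(timeout=1):
            if key.data is None:
                # Listener is readable: accept the new connection.
                conn, _addr = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="conn")
                buffers[conn] = b""
            else:
                # Client socket is readable: recv and reframe.
                conn = key.fileobj
                chunk = conn.recv(4096)
                if not chunk:                 # peer closed the connection
                    sel.unregister(conn)
                    buffers.pop(conn, None)
                    conn.close()
                    continue
                buffers[conn] += chunk
                while b"\n" in buffers[conn]:  # emit each complete frame
                    msg, buffers[conn] = buffers[conn].split(b"\n", 1)
                    work_queue.put(msg)
                    delivered += 1
    sel.close()
    listener.close()
```

A worker thread would then simply `work_queue.get()` items; the I/O thread never blocks on application work.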
I can think of two reasons why there would be more than one thread per TCP service. First, the thread that does what I just described might not get enough CPU time when it contends with lots of other threads. So the game that gets played is to create more threads to get more CPU time. This is one of the joys of preemptive and priority scheduling, and I've written articles about why it is stupid. But it's what most systems do, so it wouldn't come as a surprise.
Second, application work may run directly off these threads. That is, the system doesn't have a queueing layer between the I/O layer and the application layer: the application doesn't provide its own threads, but is instead invoked by the threads that also service the sockets. This is usually a poor design, but it would also explain the existence of all these threads, especially if applications often perform blocking operations (e.g., disk I/O, or reading from or writing to a database).
This may give you some idea of what could be going on. If the I/O is non-blocking, 200 threads could well handle 300 clients even if the system works as described in the previous paragraph, provided that the application doesn't block the threads so often that none of them is available to handle incoming work.