Closes gh-19493

This is that pull request, for Jetty. I think this property is Jetty-specific and does not have a direct equivalent or current need in Tomcat or Undertow.

Approach

Create a JettyThreadPoolFactory interface that is used to set the backing thread pool on the Jetty[Servlet|Reactive]WebServerFactory.

This was needed because the thread pool's backing queue (which is where the max capacity is set) cannot be safely modified in a customizer: by the time the customizer is called, the server has already had its QueuedThreadPool and its backing queue implementation set during construction, and it is immutable after that.
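The construction-time constraint can be illustrated with plain JDK types (a hedged analogy: Jetty's QueuedThreadPool is the real class involved, but java.util.concurrent.ThreadPoolExecutor fixes its work queue at construction in the same way; the PoolFactory interface and the sizes below are illustrative, not the actual PR code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {

    // Hypothetical factory shape, analogous in spirit to the
    // JettyThreadPoolFactory proposed in this PR: the queue capacity
    // must be supplied when the pool is created, not afterwards.
    interface PoolFactory {
        ThreadPoolExecutor create(int minThreads, int maxThreads, int maxQueueCapacity);
    }

    public static void main(String[] args) {
        PoolFactory factory = (min, max, capacity) -> new ThreadPoolExecutor(
                min, max, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(capacity)); // queue is fixed here

        ThreadPoolExecutor pool = factory.create(8, 200, 100);

        // There is no setQueue(...) after construction: getQueue() returns
        // the instance chosen above, which is why a post-construction
        // customizer cannot bound it.
        System.out.println(pool.getQueue().remainingCapacity()); // 100
        pool.shutdown();
    }
}
```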

Comment From: snicoll

@bono007 the issue you've referenced has the following:

A pull request that adds this configuration option for Jetty, Tomcat, and Undertow would be welcome.

What makes you think there isn't an equivalent for Tomcat or Undertow? I think the whole discussion on that thread is that whatever mechanism we introduce should be introduced consistently.

Comment From: onobc

@snicoll

Sorry for the lack of explanation. Here was my thinking:

1) For Tomcat, the property already exists and is server.tomcat.accept-count.

2) For Undertow, I clearly did not understand the architecture last night at first look and thought it was bounded. I understand it a little better now, and here is what I have found:

Undertow's worker thread pool is unbounded, and its task queue is NOT currently configurable in XnioWorker.

```java
// org.xnio.XnioWorker#XnioWorker
this.taskQueue = new LinkedBlockingQueue();  // If only we could do a new LinkedBlockingQueue(SomeWorkerOptionForMaxCapacityHerePlease)
this.taskPool = new XnioWorker.TaskPool(
        threadCount, threadCount,
        (long) optionMap.get(Options.WORKER_TASK_KEEPALIVE, 60000), TimeUnit.MILLISECONDS,
        this.taskQueue,
        new XnioWorker.WorkerThreadFactory(threadGroup, optionMap, markThreadAsDaemon),
        new AbortPolicy());
```
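To show concretely what a bounded task queue would buy, here is a sketch using plain JDK types (the pool size and capacity are illustrative, not Undertow's values): a ThreadPoolExecutor over a bounded LinkedBlockingQueue with AbortPolicy — the same rejection policy XnioWorker already uses — rejects further submissions once the queue is full instead of queueing without limit.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static void main(String[] args) {
        CountDownLatch release = new CountDownLatch(1);
        // One worker thread, queue capacity 2, AbortPolicy on overflow.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(2),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocker); // occupies the single worker thread
        pool.execute(blocker); // queued (1/2)
        pool.execute(blocker); // queued (2/2)

        boolean rejected = false;
        try {
            pool.execute(blocker); // queue full -> rejected
        } catch (RejectedExecutionException ex) {
            rejected = true;
        }
        System.out.println(rejected); // true

        release.countDown();
        pool.shutdown();
    }
}
```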

There is an option on io.undertow.Undertow.Builder to set the worker but it accepts an XnioWorker - again w/o the ability to configure its internal task queue. They do provide a RequestLimitingHandler which could be used to make sure that no more than N concurrent requests can be handled at once - which would roughly translate to the same as bounding the internal queue.
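The effect RequestLimitingHandler provides can be sketched with a plain JDK Semaphore (a hedged analogy, not Undertow's implementation; the Limiter class and its names are made up for illustration): at most N callers proceed concurrently and the rest are turned away, which roughly corresponds to bounding the internal queue.

```java
import java.util.concurrent.Semaphore;

public class ConcurrencyLimitSketch {

    // Roughly what a request-limiting wrapper does: cap in-flight work.
    static final class Limiter {
        private final Semaphore permits;

        Limiter(int maxConcurrent) {
            this.permits = new Semaphore(maxConcurrent);
        }

        /** Runs the task if a permit is free; otherwise reports rejection. */
        boolean tryHandle(Runnable task) {
            if (!permits.tryAcquire()) {
                return false; // over the limit -> reject (RequestLimitingHandler
                              // can also queue up to a configured size)
            }
            try {
                task.run();
                return true;
            } finally {
                permits.release();
            }
        }
    }

    public static void main(String[] args) {
        Limiter limiter = new Limiter(2);
        // Simulate two requests already in flight by holding their permits.
        limiter.permits.acquireUninterruptibly(2);
        System.out.println(limiter.tryHandle(() -> { })); // false: over the limit
        limiter.permits.release(2);
        System.out.println(limiter.tryHandle(() -> { })); // true
    }
}
```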

Summary

Mechanisms to set the "max queue capacity" (which will need to be renamed something besides "queue capacity" at this point):

  • Jetty: use our custom thread pool (this PR)
  • Tomcat: use the existing property
  • Undertow: set the request limiting handler (TBD)

Do we want to consolidate this functionality into this common property, or handle Tomcat and Undertow w/ documentation? For Tomcat it's simple enough to set the other property. For Undertow, though, I could see doing a convenience default like I did for Jetty.

The underlying mechanisms vary enough that trying to abstract something on top of them beyond this common property does not seem like a big win. Tomcat has its accept count. Jetty has this custom JettyThreadPool. Undertow has handlers to simulate the max capacity. If they all had the concept of an exposed "thread pool" then I could see adding a "thread pool customizer", etc.

Proposal

  • Solve Jetty as I have done in this PR.
  • Tomcat - already done w/ its server.tomcat.accept-count
  • Add code to translate a server.undertow.max-concurrent-requests into a RequestLimitingHandler setup

Comment From: onobc

I have created a 2nd commit that does the same type of resource limiting for Undertow. However, it is a completely different mechanism in Undertow than in Jetty and Tomcat. In those servers, it's a thread pool with min/max thread count and a backing queue whose capacity we limit. In Undertow, however, we don't have access to the queue in the blocking thread pool that it uses. The recommended method is to install a RequestLimitingHandler to constrain the requests/queued-up callers. Additionally, I think it is only available in a Servlet environment (there is no DeploymentInfoCustomizer in UndertowReactiveWebServerFactory). I will look into this a bit more but wanted to get this code in for Undertow so reviewers could see the differences in implementation for each server.

The main takeaway for me when comparing the 3 servers and the 3 mechanisms to do this is that they are all different enough that I don't think it makes sense to try to unify them all under a single property. It would be artificial at best for Undertow specifically.

Summary

Jetty is solved w/ the previous commit and allows the following "dials":

server.jetty.max-threads=200
server.jetty.min-threads=8
server.jetty.max-queue-capacity=

Tomcat is already solved and allows the following "dials":

server.tomcat.max-threads=200
server.tomcat.min-spare-threads=8
server.tomcat.accept-count=100

Undertow is the 2nd commit and allows the following "dials":

server.undertow.io-threads
server.undertow.worker-threads
server.undertow.max-requests (NEW)
server.undertow.max-queue-capacity (NEW)

Comment From: wilkinsona

Thanks for the analysis, @bono007. In light of what you've said, and the differences in how this works in Undertow compared with Jetty and Tomcat, I think we should just tackle Jetty (for now at least) using server.jetty.* configuration rather than trying to do something across all three containers.

Comment From: snicoll

Thanks for the PR @bono007. I've reduced the amount of change by making sure a ThreadPool can be set without having to resort to a Servlet or Reactive implementation. That way, the existing customizer can be used to perform that job, so it's now streamlined in one place.