Description
This is closely related to #134
At present, multipart chunks within the HTTP response stream are capped at 8 KiB. Additionally, the maximum page size (after which an additional request must be made) is limited to 100 rows, which corresponds to roughly 2.5 MiB. Both limits are far too small, and it would be very helpful to raise the former by around two orders of magnitude and the latter by around one.
My home internet connection tests around 250 Mbps downstream. Ignoring HTTP overhead (which is fairly marginal), saturating my bandwidth would require receiving 12.5 pages per second, which works out to one page every 80 milliseconds. A quick glance at traceroute suggests that ICMP propagation alone gives me a floor of about 30 milliseconds per page from your server, and establishing an HTTPS connection (TCP plus TLS handshakes) adds considerably more latency than that, even if we pretend there is no per-page processing overhead on your server's side.
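To make the arithmetic above concrete, here is a quick back-of-the-envelope sketch. The 250 Mbps link speed and the ~25 KiB/row estimate (100 rows ≈ 2.5 MiB) are the round numbers from this issue, not measured values:

```python
# Back-of-the-envelope check: how fast must pages arrive to saturate the link?
LINK_MBPS = 250                 # measured downstream bandwidth (approximate)
PAGE_BYTES = 100 * 25 * 1024    # 100 rows at ~25 KiB each ≈ 2.5 MiB per page

page_bits = PAGE_BYTES * 8
pages_per_second = LINK_MBPS * 1_000_000 / page_bits
ms_per_page = 1000 / pages_per_second

print(f"{pages_per_second:.1f} pages/s")  # ~12.2 pages/s
print(f"{ms_per_page:.0f} ms per page")   # ~82 ms budget per page
```

Any per-page latency (round trips, handshakes, server-side processing) eats directly into that ~80 ms budget.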
I think a reasonable page limit would be around 25 MiB, which ballparks to limit=1000 or perhaps a little higher. These are still manageable page sizes from a server-resource standpoint, but they would relieve some of the pressure that connection latency puts on transfer rates as the bounding factor.
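For reference, the limit=1000 figure follows directly from the per-row size implied earlier (100 rows ≈ 2.5 MiB, so ~25 KiB/row; both are estimates, not exact server numbers):

```python
# Rough sizing for the proposed limit=1000 page cap.
ROW_BYTES = 25 * 1024   # ~25 KiB per row, inferred from 100 rows ≈ 2.5 MiB
LIMIT = 1000

page_mib = LIMIT * ROW_BYTES / (1024 * 1024)
print(f"limit={LIMIT} -> ~{page_mib:.1f} MiB per page")  # ~24.4 MiB
```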
Which brings us to chunk sizes… It would help considerably if the multipart chunks were larger than 8 KiB, simply because a meaningful amount of processing overhead surrounds each chunk (e.g. parsing the multipart boundary and headers that introduce the next chunk), which again cuts into transfer rates. Using our application as a benchmark, we've found that the ideal chunk size for local filesystem access is usually around 1 MiB. That is fairly large for an HTTP response, but it's probably closer to the overall optimum. A multipart chunk size around 256 KiB would be easily manageable on any device, and it would relieve much of the processing penalty.
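Since the per-chunk overhead is roughly fixed, the number of chunks per page is a decent proxy for that cost. A quick comparison, using the ~2.5 MiB page size discussed above:

```python
# Chunks per page at different multipart chunk sizes; each chunk carries a
# fixed parsing cost, so fewer chunks means less per-page CPU overhead.
PAGE_BYTES = 100 * 25 * 1024  # ~2.5 MiB page, per the numbers above

for chunk_kib in (8, 256, 1024):
    chunks = -(-PAGE_BYTES // (chunk_kib * 1024))  # ceiling division
    print(f"{chunk_kib:>4} KiB chunks -> {chunks} per page")
# 8 KiB  -> 313 chunks per page
# 256 KiB ->  10 chunks per page
# 1024 KiB ->  3 chunks per page
```

Going from 8 KiB to 256 KiB cuts the chunk-handling work per page by more than 30x.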