Memory leak #186

@soupi

Description

I spotted what I believe is a memory leak trying to benchmark a very simple Spock app.

Reproduction steps in this repo.

You may need to run `ulimit -n 4096` before starting `hello-spock` — the fact that the default file-descriptor limit gets exhausted is itself another indication that something is not being released.
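For reference, raising the soft file-descriptor limit for the current shell (and the server started from it) looks like this; 4096 is just a comfortable headroom value, and a server that closes its sockets promptly should not need it at this load:

```shell
# Raise the soft limit on open file descriptors before starting the
# server; leaked sockets/handles otherwise exhaust the common default
# (often 1024) quickly under a load generator.
ulimit -n 4096
ulimit -n   # print the limit now in effect
```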

Spock fares rather poorly compared to twain and scotty. Here are some numbers:

| Library | Get (/) | Params, query & header | Post JSON |
|---|---|---|---|
| Spock | 31,321.19 | 25,015.61 | 28,924.38 |
| scotty | 269,021.44 | 186,814.18 | 194,448.40 |
| twain | 306,501.25 | 230,075.55 | 227,636.17 |

When trying to figure out why that is I noticed very different behaviours between Spock and twain:

(Screenshot from 2023-06-29 00-02-36)

(Screenshot from 2023-06-29 00-04-32)

Here's a `+RTS -s` report for each of the two:
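For anyone reproducing this: a report like the ones below comes from building with RTS options enabled and passing `-s` at run time. A hypothetical invocation (the exact build setup lives in the reproduction repo; `-N12` matches the capability count visible in the reports):

```shell
# Enable the threaded runtime and RTS flags at build time,
# then run with GC statistics printed on exit.
ghc -threaded -rtsopts -O2 Main.hs -o hello-spock
./hello-spock +RTS -s -N12
```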
```
> Spock is running on port 3000
149,218,795,624 bytes allocated in the heap
  31,465,251,520 bytes copied during GC
     505,829,376 bytes maximum residency (198 sample(s))
      67,873,264 bytes maximum slop
            1557 MiB total memory in use (61 MiB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0     14970 colls, 14970 par   21.356s   6.710s     0.0004s    0.0044s
  Gen  1       198 colls,   197 par   39.219s   5.222s     0.0264s    0.0956s

  Parallel GC work balance: 87.24% (serial 0%, perfect 100%)

  TASKS: 123 (1 bound, 121 peak workers (122 total), using -N12)

  SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)

  INIT    time    0.003s  (  0.002s elapsed)
  MUT     time  164.764s  ( 23.123s elapsed)
  GC      time   60.574s  ( 11.932s elapsed)
  EXIT    time    0.065s  (  0.003s elapsed)
  Total   time  225.407s  ( 35.060s elapsed)

  Alloc rate    905,650,467 bytes per MUT second

  Productivity  73.1% of total user, 66.0% of total elapsed
```

------

```
> Running twain app at http://localhost:3000 (ctrl-c to quit)
101,783,300,920 bytes allocated in the heap
   2,880,638,200 bytes copied during GC
      19,310,688 bytes maximum residency (23 sample(s))
         872,128 bytes maximum slop
             104 MiB total memory in use (0 MiB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0      3931 colls,  3931 par    7.664s   2.089s     0.0005s    0.0017s
  Gen  1        23 colls,    22 par    0.128s   0.037s     0.0016s    0.0030s

  Parallel GC work balance: 72.41% (serial 0%, perfect 100%)

  TASKS: 26 (1 bound, 25 peak workers (25 total), using -N12)

  SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)

  INIT    time    0.003s  (  0.002s elapsed)
  MUT     time  167.343s  ( 32.921s elapsed)
  GC      time    7.792s  (  2.126s elapsed)
  EXIT    time    0.003s  (  0.001s elapsed)
  Total   time  175.141s  ( 35.051s elapsed)

  Alloc rate    608,230,568 bytes per MUT second

  Productivity  95.5% of total user, 93.9% of total elapsed
```

And a quick look at `hello-spock.prof` reveals that `Crypto.Random` is doing quite a bit of the work:

```
COST CENTRE        MODULE                         SRC                                             %time %alloc

supportedBackends  Crypto.Random.Entropy.Backend  Crypto/Random/Entropy/Backend.hs:(31,1)-(41,5)   40.6   52.0
```
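`supportedBackends` probes the system's entropy sources, which only needs to happen when a fresh generator is seeded. If Spock re-seeds a CPRNG on every request (an assumption based on this profile, not confirmed from Spock's source), the usual fix is to pay the entropy cost once at startup and step a cached generator afterwards. A base-only sketch of that seed-once pattern — `expensiveSeed` is a hypothetical stand-in for the entropy gathering, not cryptonite's real API:

```haskell
import Data.IORef
import Control.Monad (replicateM_)

-- Hypothetical stand-in for entropy gathering (the slow path that
-- supportedBackends sits on): count how often it runs, return a "seed".
expensiveSeed :: IORef Int -> IO Int
expensiveSeed calls = do
  modifyIORef' calls (+ 1)
  pure 42

-- Anti-pattern: re-seed on every one of 1000 simulated requests.
perRequestStyle :: IORef Int -> IO ()
perRequestStyle calls = replicateM_ 1000 (expensiveSeed calls)

-- Fix: seed once, then step a cached generator per request (cheap).
cachedStyle :: IORef Int -> IO ()
cachedStyle calls = do
  seed <- expensiveSeed calls
  gen  <- newIORef seed
  replicateM_ 1000 (modifyIORef' gen (+ 1))

main :: IO ()
main = do
  slow <- newIORef 0
  fast <- newIORef 0
  perRequestStyle slow
  cachedStyle fast
  s <- readIORef slow
  f <- readIORef fast
  putStrLn ("per-request seeds: " ++ show s ++ ", cached seeds: " ++ show f)
```

With cryptonite itself, the analogous change would be seeding one `ChaChaDRG` via `drgNew` at startup and stepping it with `randomBytesGenerate` per request, rather than seeding a fresh DRG each time.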
