
JOSS: evaluation / comparison #13

@BrunoLevy

JOSS reviewing / Suggestions for evaluation / comparison

Hello, here are a couple of suggestions for improving the performance evaluation in your JOSS submission.
Best wishes,
-- Bruno

The performance evaluation is incomplete: performance is compared only between Votess on the GPU and two CPU implementations (qhull and Voro++). New performance tests are needed:

  • comparison between Votess on CPU and state-of-the-art CPU implementations on a multicore machine (geogram, CGAL, and also this reference, which has a public implementation here).
  • comparison between Votess on GPU and Ray et al.'s approach (implementation available here).
  • the article mentions applications in cosmology. In cosmology, the distribution of points is highly heterogeneous: gravity tends to cluster the points in the same zones, and length scales can span several orders of magnitude. This has an enormous influence on the performance of iterative clipping methods (Ray et al.'s and Voro++'s); see Ray et al.'s article for a discussion. Performance therefore needs to be evaluated on point sets with very heterogeneous density (a minimal generator for such a point set is sketched after this list). On such point sets, parallel Bowyer-Watson-based CPU implementations (geogram, CGAL, hextreme) often beat GPU implementations, which suffer from cache problems.
  • comparison between Votess on GPU and state-of-the-art CPU implementations on a fast multicore machine (e.g., EPYC). Especially on cosmology problems, it is unclear which one will be faster (in my own experience, Bowyer-Watson-based implementations are faster when point density varies a lot).
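To make the heterogeneous-density benchmark concrete, here is a minimal sketch (not votess code; the cluster count and widths are illustrative assumptions, not taken from any cosmological simulation) of how one might generate a clustered point set whose local density spans several orders of magnitude, then time a Bowyer-Watson implementation on it versus a uniform set. CGAL's sequential Delaunay_triangulation_3 stands in here for any of the CPU implementations listed above; for the multicore comparisons, CGAL can also be instantiated with CGAL::Parallel_tag (requires TBB), but the sequential version keeps the sketch self-contained.

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <chrono>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

using K     = CGAL::Exact_predicates_inexact_constructions_kernel;
using Dt    = CGAL::Delaunay_triangulation_3<K>;
using Point = K::Point_3;

// Uniform points in the unit cube: the "easy" case for iterative clipping.
std::vector<Point> uniform_points(std::size_t n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<Point> pts;
    pts.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        pts.emplace_back(u(rng), u(rng), u(rng));
    return pts;
}

// Heterogeneous points: a few Gaussian clusters of very different widths
// inside the unit cube, mimicking gravitational clustering where length
// scales differ by orders of magnitude. Cluster count and widths are
// arbitrary choices for illustration.
std::vector<Point> clustered_points(std::size_t n, std::mt19937& rng) {
    std::uniform_real_distribution<double> center(0.1, 0.9);
    std::vector<Point> pts;
    pts.reserve(n);
    const int k = 8;  // number of clusters (assumed)
    for (int c = 0; c < k; ++c) {
        double cx = center(rng), cy = center(rng), cz = center(rng);
        // Widths decay from 1e-1 down to ~3e-5: ~3.5 orders of magnitude.
        double sigma = std::pow(10.0, -1.0 - 0.5 * c);
        std::normal_distribution<double> g(0.0, sigma);
        for (std::size_t i = 0; i < n / k; ++i)
            pts.emplace_back(cx + g(rng), cy + g(rng), cz + g(rng));
    }
    return pts;
}

// Time Delaunay construction; the range constructor spatially sorts input.
double time_delaunay(const std::vector<Point>& pts) {
    auto t0 = std::chrono::steady_clock::now();
    Dt dt(pts.begin(), pts.end());
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    std::mt19937 rng(42);
    const std::size_t n = 1'000'000;
    std::cout << "uniform:   " << time_delaunay(uniform_points(n, rng))   << " s\n";
    std::cout << "clustered: " << time_delaunay(clustered_points(n, rng)) << " s\n";
}
```

The same two generated point sets could be fed to votess, Voro++, geogram, and the GPU implementations, so the uniform/clustered timing ratio of each method can be compared directly.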
