Use LSF blaunch command in LSFJob to start the workers in a multitask job #672

Description

@tlst76

Hello, I am using Dask with an IBM LSF cluster. I noticed that the current LSFCluster implementation does not provide a way to start an LSF job across multiple hosts, which can be achieved with the LSF blaunch command.

Modification proposal

The current way of starting multiple workers in a job that has multiple tasks (the -n bsub option) is to set the --nworkers option of distributed.cli.dask_worker to the number of tasks. Would it be a better option to instead use the blaunch command provided by LSF to run a command on each requested task? That way the workers would be dispatched across the different hosts allocated by LSF, and the span[hosts=1] resource requirement would no longer be needed.
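For reference, a minimal sketch of how the current single-host behaviour can be inspected (the resource values below are illustrative, not part of this proposal):

from dask_jobqueue import LSFCluster

# Illustrative values: cores=4 requests four LSF tasks (bsub -n 4) and
# processes=4 becomes a single "--nworkers 4" dask_worker invocation,
# so all four workers must share one host.
cluster = LSFCluster(cores=4, processes=4, memory="8GB")

# The generated header includes the '#BSUB -R "span[hosts=1]"' line
# that the subclass below rewrites.
print(cluster.job_script())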

Here is a test snippet I made by subclassing the LSFJob class to try using blaunch:

from dask_jobqueue import LSFCluster
from dask_jobqueue.lsf import LSFJob


class CustomLSFJob(LSFJob):

    def __init__(self, scheduler=None, name=None, **kwargs):
        super().__init__(scheduler=scheduler, name=name, **kwargs)

        # Wrap the worker command in blaunch so LSF starts one worker per
        # task; $LSF_PM_TASKID makes each worker name unique.
        self._command_template = (
            f"blaunch '/path/to/python -m distributed.cli.dask_worker "
            f"{self.scheduler} --name {name}-$LSF_PM_TASKID --nthreads 1 "
            f"--memory-limit 1.86GiB --nworkers 1 --nanny --death-timeout 60'"
        )

        # Allow LSF to spread the job's tasks over multiple hosts instead
        # of forcing them all onto a single one.
        self.job_header = self.job_header.replace(
            '#BSUB -R "span[hosts=1]"',
            '#BSUB -R "span[hosts=-1]"',
        )


class CustomLSFCluster(LSFCluster):
    job_cls = CustomLSFJob

This code is not usable as is, but I think the required changes would be limited to the constructor of the LSFJob class.
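If this approach were adopted, usage could look like the following hypothetical sketch; the resource values are placeholders picked for illustration:

from dask.distributed import Client

# Hypothetical usage of the subclass above; all values are placeholders.
cluster = CustomLSFCluster(cores=4, processes=1, memory="8GB")
cluster.scale(jobs=2)  # two multi-task jobs; blaunch spreads the workers
client = Client(cluster)

Each job would still request its tasks via bsub -n, but blaunch would start one worker per task instead of one multi-worker dask_worker process pinned to a single host.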

Is this an interesting idea?
