We are interested in running Spark jobs using Cook on our DC/OS cluster.
There are two types of Spark jobs in our cluster:
- Batch jobs, which have heavy workloads but do not require fast responses.
- Interactive jobs, triggered by users, which have strict response-time requirements (on the order of seconds or less).
Expectation:
We would like to know whether it is possible to assign priorities to such jobs, so that interactive jobs take precedence. Once an interactive job is triggered, batch jobs should quickly free their resources (e.g., by being killed or preempted) so that the interactive job can be allocated the maximum resources needed to meet its response-time requirement.
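For example, if we understand Cook's REST API correctly, something like the following is what we have in mind (a minimal sketch assuming the /rawscheduler endpoint and its integer priority field; the URL, commands, and resource numbers are placeholders for our setup):

```python
import uuid

import requests

# Placeholder endpoint; in practice this would be our cluster's Cook URL.
COOK_URL = "http://cook.example.com:12321/rawscheduler"

def submit_job(command, cpus, mem_mb, priority):
    """Submit one job to Cook's REST API with an explicit priority.

    Assumption: higher priority values are scheduled (and preempt) first.
    """
    job = {
        "uuid": str(uuid.uuid4()),
        "command": command,
        "cpus": cpus,
        "mem": mem_mb,
        "priority": priority,
        "max_retries": 1,
    }
    resp = requests.post(COOK_URL, json={"jobs": [job]})
    resp.raise_for_status()
    return job["uuid"]

# Low-priority batch job vs. high-priority interactive job.
submit_job("spark-submit --class BatchETL batch.jar", cpus=4.0, mem_mb=8192, priority=10)
submit_job("spark-submit --class AdHocQuery query.jar", cpus=4.0, mem_mb=8192, priority=90)
```

What we could not figure out from the docs is whether the priority field alone causes running batch jobs to be preempted when a higher-priority job arrives, or whether extra configuration (e.g., of the rebalancer) is needed.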
It would be perfect if Cook supported dynamic resource allocation. By "dynamic resource allocation", we mean that the resource requirements of a job MUST NOT be fixed. For instance, instead of giving a job 3 CPUs and 5 GB of memory, it would be better to let the job take, say, 50% of the currently available resources (with the 50% of course being a configuration parameter that users can set as they wish), as sketched below.
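To make the request concrete, this is the kind of job spec we would like to be able to write (purely hypothetical; as far as we can tell Cook today only accepts fixed "cpus" and "mem" values, and the *_fraction fields below do not exist):

```python
import uuid

# Hypothetical job spec illustrating the feature request, not a real Cook API.
dynamic_job = {
    "uuid": str(uuid.uuid4()),
    "command": "spark-submit --class AdHocQuery query.jar",
    "cpus_fraction": 0.5,  # hypothetical field: 50% of currently free CPUs
    "mem_fraction": 0.5,   # hypothetical field: 50% of currently free memory
    "priority": 90,
}
```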
So far, we have not found any Cook documentation that mentions either capability.
Could you please enlighten us?