https://canonical-charmed-hpc.readthedocs-hosted.com/en/latest/howto/setup/deploy-slurm/#set-compute-nodes-to-idle documents using the resume action for slurmctld to bring nodes to IDLE following deployment. However, the nodes retain their "new" status and soon return to DOWN. This can be confirmed by checking the DownNodes entry in slurm.conf, which lists the nodes with reason "New node".
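For illustration, the residual entry looks something like the following in slurm.conf (hostnames are hypothetical; the exact Reason string is whatever the charm writes):

```
DownNodes=juju-abc123-1,juju-abc123-2 State=DOWN Reason="New node"
```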
Instead, running the node-configured action for slurmd brings the node to IDLE as expected and clears the "new" status (but has the drawback of needing to be run on each new compute node, one at a time).
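A sketch of the current workaround, looping over each slurmd unit (unit names are hypothetical; assumes Juju 3.x `juju run` syntax). The `echo` makes it a dry run; remove it to actually dispatch the action:

```shell
# Run the node-configured action once per slurmd unit to clear "new" status.
# Dry run: prints the commands instead of executing them.
for unit in slurmd/0 slurmd/1 slurmd/2; do
    echo juju run "$unit" node-configured
done
```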
We should either document the node-configured action or adjust the resume action so it works as documented.