CLUBB update and clubb_intr improvements #1441
base: cam_development

Conversation
… clubb_grid_dir = 1, no BFB (as expected) with -1
@huebleruwm - After reading through this text, I moved this PR to draft. Once it is ready for the SEs to review and process it for bringing into CAM, please move it out of draft.
…grid, and flipping sections have been consolidated to directly around the time stepping loop. Next is to push them inward until the only clubb grid calculations are done inside advance_clubb_core.
…s descending BFB (except wp3_ta but that's internal and doesn't matter) - this is even true with clubb_cloudtop_cooling, clubb_rainevap_turb, and do_clubb_mf.
…ending mode, even though there is no notion of ascending in it. Perhaps there is a bug in the flipper? Otherwise it is some internal interaction between clubb and silhs. It's frustrating, but I think it's time to give up and come back to it someday.
…oning yet. All changes should be BFB, but a handful of field outputs are different above top_lev because I made it zero everything above top_lev. Among the fields that differ: some (RTM_CLUBB RTP2_CLUBB THLM_CLUBB UP2_CLUBB WP2_CLUBB) were being initialized to a tolerance above top_lev, and others (THLP2_CLUBB RTP2_CLUBB UM_CLUBB VM_CLUBB) were never initialized or set above top_lev, so we were outputting random garbage.
…ootprint and data copying steps in clubb_intr, mainly by switching pbuf variables (which are mostly clubb_inouts) to have nzm or nzt dimensions, allowing them to be passed into clubb directly, making the copying/flipping step unnecessary. This was tested with a top_lev > 1, so it should be well tested. There were some above-top_lev interactions found, which seem erroneous, so I've marked them with a TODO and a little explanation.
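For illustration, a minimal sketch of the kind of pbuf redimensioning described in the commit above, using CAM's physics_buffer API. The field name, index variable, and the use of `nzm` as a module-level CLUBB momentum-grid extent are assumptions for the sketch, not the exact clubb_intr code:

```fortran
use shr_kind_mod,   only: r8 => shr_kind_r8
use ppgrid,         only: pcols
use physics_buffer, only: pbuf_add_field, pbuf_get_field, dtype_r8

integer           :: wp2_idx
real(r8), pointer :: wp2(:,:)

! Register the field on CLUBB's momentum grid (nzm) instead of pver/pverp,
! so the pointer can be handed to advance_clubb_core directly, with no
! copy or flip step in between.
call pbuf_add_field('WP2', 'global', dtype_r8, (/pcols, nzm/), wp2_idx)

! Later, inside the tendency routine:
call pbuf_get_field(pbuf, wp2_idx, wp2)
```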
Force-pushed from ed11ffc to b2b3232.
@adamrher I have done a number of ECT tests and it was pretty much as expected. I made a baseline from ESCOMP/cam_development commit …. To check the more "dangerous" changes to clubb_intr, I made another baseline from commit …. Other things I tested:
So I think the … Note: I also found a number of examples where clubb_intr is interacting above top_lev.
I ran some ECT tests with the performance options:
@huebleruwm I checked out this PR branch, and while the clubb_intr.F90 changes seem to be there, the .gitmodules are pointing to the clubb externals currently on cam_development: …. This should be pointing to a different tag/hash containing support for the descending/ascending options, no?
Yes. In hindsight, it would've made much more sense to update that with the right hash on each commit, rather than just including it in the commit comment. I've just been going to ….
I'm having trouble grabbing the clubb hashes. I am able to check out master from …. Does this hash pertain to a different tag than master?
They should be master, but I did just check and neither …
…3_ne30pg3_mt232, FHISTC_LTso, and comparing the GPU code to an intel baseline.
I have fixed up the GPU code now and confirmed correctness with an ECT test. Using a CPU baseline with intel, the GPU results with nvhpc (using top_lev > 1 and clubb/clubb_intr running on GPUs) match according to the ECT test. All the other tests I run seem to be passing too. ECT tests and ERP tests pass, even when changing the …
That's great to hear. My science validation runs are running, but may take a couple of days to finish. In the meantime I noticed …
cacraigucar left a comment
I took a quick look at the code (not a thorough code review at all). A couple of items jumped out at me.
```xml
<entry id="clubb_fill_holes_type" type="integer" category="pblrad"
       group="clubb_params_nl" valid_values="0,1,2,3,4,5,6" >
Selects which algorithm the fill_holes routine uses to correct
below threshold values in field solutions.
```
Could the different values be documented with their algorithm names? (This is the namelist documentation so it will be useful for folks wanting to change the values)
Definitely a good idea, added here.
```fortran
end do
end do
end if

!---- TODO: there seems to be an above top_lev interaction here that changes answers.
```
A number of "todo" comments are in this code. Please see if they need to be addressed or the comments can be removed
I told @huebleruwm I'd go through all these "todo" blocks; it's on my todo list.
@huebleruwm I started testing the performance-enhancing flags, and found that two of them are causing the model to bomb out when …. I'm doing all this work in the case directory: …, which is getting through the derecho queue at a good clip because it's set up as a 4-node job with …. I'm doing these experiments with CAM hash ….
I found a bug in the penta_lu_solver in descending mode the other day, and have since fixed it.
Are you still testing with an older version of CLUBB? The head of clubb_release/master should work with the head of this branch, and includes the penta solve bug fix. Or the commit mentioned above should work too. Could you retry with that newer commit please? The new …
Apologies, I missed the comment about fixing a bug in the penta solve. I re-ran with clubb_release/master and it ran successfully.
…the namelist files
I was debating this a little in my head. It felt alright to leave it hardcoded, since it's mainly a debugging/testing option now, but I have added it to the namelist in this commit.
@vlarson - just adding this comment to include Vince in this PR
I feel like we'll be working out bugs combining descending with different clubb options for some time, and so it might be useful to revert this at the namelist level. OTOH CLUBB+MF uses ascending ordering, and I don't think I want to support both options, but rather convert CLUBB+MF to descending in this PR (I presume you didn't do that). For that reason we may want to hard code it, but we could also add an endrun statement if someone tries to combine CLUBB+MF with ascending.
That makes sense. And you are correct, clubb_mf wasn't touched other than the vertical redimensioning (from …). I did notice that clubb_mf was written for ascending though, so I surrounded the call to clubb_mf with flips. Switching it to descending mode would still be nice though, because then we can just delete the flipping code.
@huebleruwm I reproduced the issue with …. To reproduce these results, my case command is: …
I've just made another commit that might improve performance meaningfully. I didn't find a single "culprit", but some of the things I fixed were things I did but never really polished. The slicing I lazily left in could explain why the mean times are similar but the maxes can be much larger (e.g. slicing costs only happen when pcols /= ncol). I also included a bunch of random other little improvements. @adamrher Could you do another timing run with the newest changes and compare them to the others? Hopefully it was just these little things causing the slowdown. I used the ECT test to make sure results match cam commit ….
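For context, a hypothetical illustration of the slicing cost mentioned above (the array and routine names are made up): passing a non-contiguous slice of a pbuf-shaped array forces compiler copy-in/copy-out around the call, which is only incurred when the chunk is not full.

```fortran
! wp2 is dimensioned (pcols, nzm). When ncol < pcols, the slice below is
! non-contiguous in memory, so the compiler builds a temporary before the
! call and copies it back after -- a cost paid only when ncol /= pcols.
call clubb_column_work( wp2(1:ncol, :) )
```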
Sure, I'll give it a go (edit -- derecho compute nodes are down today. I'll try it on our local cluster).
@huebleruwm I ran the head of your branch for two years, and I'm sorry to report that it didn't move the needle in the slightest. I ran it with more detailed timings: …
@huebleruwm I figured out the cause of ~half of the slowdown. It looks like you've moved the array flipping into the nadv loop. To elaborate a bit on the nested loops, the …. The impact of moving array flipping into the nadv loop: …
1a refers to the commit where you only updated the clubb external (CAM hash …). I'll start looking into the cause of the 4.8% slowdown when mm=6, starting by first running a longer two-year job to get more accurate timings for the mm=3 and mm=6 cases. As to whether you should move the flipping outside of …
That's what I was thinking too - it's for debugging rather than speed, and it's minimally invasive when it's directly around the advance_clubb_core call. I was on the side of removing the ascending code at first, but we already found 2 bugs (fill_holes and penta_lu_solve) that were only triggered when running descending mode in cam, and being able to rule out problems in ascending mode did make bug hunting much simpler. I ran some timing runs too, but with only 10 days, and found only a speedup in clubb_intr when running in descending mode (except 1a, since that functionality didn't exist yet). Spreadsheet here.
From 1a to 6a I found clubb_intr reducing in cost by ~29% though. I also saw advance_clubb_core increasing by ~5%, I think because of the added slicing in the call when pcols /= ncol, but that was expected, and it can go away when we stop using pbuf. The big discrepancy is definitely interesting though; I wonder if there's anything we're doing differently besides the run length. I remember you mentioning setting pecount to 1024, but was that for the timing runs? I used 512. Here's the bash script I made to do the runs, in case anything there jumps out: …

Turning on the performance options (new solvers and the fixed version of the new hole filler) reduced advance_clubb_core cost by ~12%, which was less than I expected, but I think the difference is that I was using estimates from testing with cheaper options, mainly clubb_l_diag_Lscale_from_tau = T + clubb_l_call_pdf_closure_twice = F. If you subtract the cost of these options, and the added cost from the (likely) slicing, then the speedup would be ~21%, which makes sense. So I think the performance options we do want to enable are giving the expected speedups.
We could put if statements around the l_ascending_grid blocks - if t=1 for the before block and if t=nadv for the after block?
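A minimal sketch of that suggestion, assuming the flips currently sit inside the nadv loop (flip_grid_arrays is a hypothetical helper standing in for the l_ascending_grid blocks):

```fortran
do t = 1, nadv
   ! Flip to ascending only before the first substep ...
   if (l_ascending_grid .and. t == 1) call flip_grid_arrays()

   ! call advance_clubb_core( ... )   ! actual call as in clubb_intr

   ! ... and flip back to descending only after the last substep.
   if (l_ascending_grid .and. t == nadv) call flip_grid_arrays()
end do
```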
Is this added slicing in 1a, or just 6a?
By big discrepancy, you mean that 4.7% slowdown I'm getting for 1a vs. 6a w/ ascending on? Could that be the added …? Would you be willing to move the timers, specifically the …?
One of the reasons the new ascending code is slower is that the flipping step used to be combined with the data copy step. When we copied the cam data structures (e.g. …) into the clubb arrays, the flip happened during that copy; now that the copy is gone, ascending mode needs a standalone flipping pass. We are also flipping more arrays now; some of the pdf_params need flipping too because SILHS is descending-only now. Some other things didn't (and don't) need flipping at all because they weren't (and aren't) used in clubb_intr, but they are flipped now just in case they get used in the future. The double flipping also adds cost, but I believe the only ways to reduce the number of times we flip would make the code more invasive and require more of it to work in ascending mode. For example, if we flip the arrays only once per nadv loop, then all the code inside the nadv loop (like …) would also have to work in ascending mode. So we could definitely reduce the cost in ascending mode by just not flipping certain things or by reducing the frequency of flips, but I vote we just don't worry about it since it's only a debugging option.
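To make the extra traversal concrete, here is an illustrative sketch contrasting the two patterns (the field names and the helper are hypothetical, not the actual clubb_intr code):

```fortran
subroutine copy_vs_flip(ncol, nz, rtm_cam, rtm_clubb)
   integer, parameter :: r8 = selected_real_kind(12)
   integer,  intent(in)    :: ncol, nz
   real(r8), intent(in)    :: rtm_cam(ncol, nz)
   real(r8), intent(inout) :: rtm_clubb(ncol, nz)
   integer  :: i, k
   real(r8) :: tmp

   ! Old: copy the CAM field into the CLUBB array and flip the vertical
   ! index in the same pass -- one traversal of the data.
   do k = 1, nz
      do i = 1, ncol
         rtm_clubb(i, nz-k+1) = rtm_cam(i, k)
      end do
   end do

   ! Now: the copy is gone (pbuf arrays are used directly), but ascending
   ! mode still needs a standalone in-place flip -- an extra traversal
   ! that descending mode skips entirely.
   do k = 1, nz/2
      do i = 1, ncol
         tmp                  = rtm_clubb(i, k)
         rtm_clubb(i, k)      = rtm_clubb(i, nz-k+1)
         rtm_clubb(i, nz-k+1) = tmp
      end do
   end do
end subroutine copy_vs_flip
```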
By discrepancy I meant why you saw a cost increase in descending mode, but I'm seeing a decrease. The last test you did in descending mode found it slower overall than 0a and 1a. But now that I think about it, that wasn't using the clubb_tend_cam timers directly, was it? I'm not sure why that would give such a different answer, but it might have something to do with it.
Oh excellent, maybe the difference was in the timers being looked at.
I think it would be nice to move the timers a little too. At least to have the …
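A sketch of the timer placement being discussed, using CAM's perf_mod interface (the timer name and surrounding code are illustrative):

```fortran
use perf_mod, only: t_startf, t_stopf

! Time only the CLUBB solver itself, so clubb_intr overhead (copying,
! flipping, pbuf handling) shows up separately in the timing output.
call t_startf('advance_clubb_core')
! call advance_clubb_core( ... )   ! actual call as in clubb_intr
call t_stopf('advance_clubb_core')
```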





There are two parts here: getting a new version of clubb in, and enabling the descending grid mode in clubb_intr. Both goals were split up over many commits. Fixes #1411
New CLUBB
The first commits are dedicated to getting a new version of CLUBB in. Because clubb_intr diverged between cam_development, which had significant changes to enable GPUization, and UWM's branch, which had redimensioning changes, the merging was done manually.

The first 6 commits here include changes that can be matched with a certain version of CLUBB release:
- `3d40d8c0e03a298ae3925564bc4db2f0df5e9437` works with clubb hash `1632cf12e67fc4719fa21f8e449b024a0e3b6df2`
- `d0a7f8cb` works with clubb hash `673beb05187d7b536c2f36968fc7f5e1b9d1167e`
- `430ad03f` works with clubb hash `dc302b95e4b71220b33aeaddb0afc68c9103555e`
- `dccb59eb` works with clubb hash `dc302b95`
- `703aca60ed1e0b6b24f2cd890c3a4497041d25b8` works with clubb hash `d5957b30`
- `4d9b1b8a528ca532d964c1799e1860e96e068a12` works with clubb hash `d5957b30`

(To use the clubb hash, go to `src/physics/clubb` and run the git checkout there.)

These commits all have to do with just getting a new version of clubb in, so we need to ensure that at least `4d9b1b8a528ca532d964c1799e1860e96e068a12` is working correctly. The later commits have more complicated changes to clubb_intr, so if we find any problems with this branch, we should test commit `4d9b1b8a528ca532d964c1799e1860e96e068a12` as a next step.

clubb_intr improvements
The next commits after getting new clubb in are increasingly ambitious changes aimed at simplifying clubb_intr and reducing its runtime cost.
- `e60848b4ec4df90a3060ffd7f664fab42e847509` introduces a parameter, `clubb_grid_dir`, in clubb_intr that controls which direction the grid is when calling advance_clubb_core. When using -O0 to compile, and setting `l_test_grid_generalization = .true.` in `src/clubb/src/CLUBB_core/model_flag.F90`, the results are BFB in either grid direction.
- `dddff494966bf2bf4341fa4a7526b2f8b0f3d16e` separates the flipping code from the copying code (copying cam-sized arrays to clubb-sized arrays). Should all be BFB.
- `055e53f70741531a58d0f7da788b824c76fef087` pushes the flipping code inward until it's directly around the call to advance_clubb_core, and the way of controlling the grid direction has been changed to a flag, `l_ascending_grid` (see the sketch below). This should all be BFB as well, and I tested with `clubb_cloudtop_cooling`, `clubb_rainevap_turb`, and `do_clubb_mf` all true to ensure BFBness in either ascending or descending mode. One caveat: the clubb_mf code assumes an ascending grid, so before calling it we flip the arrays, then flip the outputs to descending.
- `2fb2ba9bd6b1ec5e5c60039f90d0c1020663d0c9` is a pretty safe intermediate commit, mainly moving stuff around in preparation for redimensioning.
- `bded8a561131e4dbbccad293f14226e5e8c0e856` is some very safe dimensioning changes.
- `fce8e1b1b5e8d3e93232c2129be7671fae79db23` is the big one that redimensions most pbuf arrays and uses them instead of the clubb-sized local ones. This allows us to avoid the vast majority of the data copying, and to delete the local versions of the clubb arrays.

The rest are mainly safe and easy things, or commits to make the GPU code work.
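As referenced in the `055e53f7` item above, a minimal, self-contained sketch of the kind of vertical flip that brackets the advance_clubb_core call when `l_ascending_grid` is true (flip_vertical is a hypothetical helper, not the actual clubb_intr code):

```fortran
subroutine flip_vertical(ncol, nz, field)
   ! Reverse the vertical ordering of a (column, level) field in place,
   ! converting between CAM's top-down order and CLUBB's ascending order.
   integer, parameter :: r8 = selected_real_kind(12)
   integer,  intent(in)    :: ncol, nz
   real(r8), intent(inout) :: field(ncol, nz)

   field(:, 1:nz) = field(:, nz:1:-1)
end subroutine flip_vertical
```

With a helper like this, the ascending path reduces to flipping the inputs just before the advance_clubb_core call and the outputs just after it, while the descending path does no flips at all.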
Testing
I plan to run an ECT test to compare the cam_development head I started with to the head of this branch. Answers are expected to change slightly due to the ghost point removal in clubb, so I think it unlikely that this passes, but if it does that would be fantastic, and it might be the only testing we really need.

If that first ECT test doesn't pass, then I'll assume that the difference is due to the ghost point removal (or other small bit-changing commits in clubb over the past year), and rely on @adamrher to check whether the results are good.
The biggest concern is whether the answer changes are acceptable; the only real differences expected are from the new version of CLUBB. If the answers from this branch look bad, we should go back to commit `4d9b1b8a528ca532d964c1799e1860e96e068a12` and check the answers from that, since it only includes the new version of CLUBB and NO unnecessary clubb_intr changes. If the answers still look bad in that commit, then we have a few more we can step back through to try to figure out which commit introduces the differences. If the hypothetical problem is present in the first commit (`3d40d8c0e03a298ae3925564bc4db2f0df5e9437`), then the problem is harder, because that commit includes (pretty much only) the removal of the ghost point in clubb, which is expected to change answers, but hopefully not significantly.
4d9b1b8a528ca532d964c1799e1860e96e068a12) and the head of this branch. The changes between that commit and the head may be slightly bit changing without (-O0), but definitely shouldn't be answer changing. If this ECT test fails, then I've made a mistake in the changes meant to avoid the flipping/copying, and I'll have to step back through and figure out what bad thing I did.Some other tests I've ran along the way help confirm at least some of these changes:
e60848b4ec4df90a3060ffd7f664fab42e8475094d9b1b8a528ca532d964c1799e1860e96e068a12should be working on the GPU, but later commits are untestedNext steps
I left a number of things messy for now, such as comments and gptl timers. Once we confirm these changes, I'd like to go through and make some of those nicer as a final step.
Performance
In addition to the clubb_intr changes improving performance, we should use this as an opportunity to try out flags that should be significantly faster:

- `clubb_penta_solve_method = 2` uses our custom pentadiagonal matrix solvers, which should be significantly faster than lapack and should pass an ECT test
- `clubb_tridiag_solve_method = 2` uses our custom tridiagonal solver, which should also be faster and pass an ECT test
- `clubb_fill_holes_type = 4` uses a different hole-filling algorithm that in theory is just all-around better and faster than our current one; I suspect it will pass ECT, but I have yet to test it
- `clubb_l_call_pdf_closure_twice = .false.` will avoid a pdf_closure call (which is significant in cost) and reduce the memory footprint; I think it will have minimal effect (based on my visual analysis of what it affects in the code), but it is the most likely to break the ECT test
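For example, these could be tried together from user_nl_cam. This assumes all four flags are namelist-settable in the same way as the clubb_fill_holes_type entry shown earlier; exact availability depends on this PR's namelist definitions:

```
clubb_penta_solve_method       = 2
clubb_tridiag_solve_method     = 2
clubb_fill_holes_type          = 4
clubb_l_call_pdf_closure_twice = .false.
```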