Conversation

JoerivanEngelen left a comment

Cool! Jinja templates and tests look good. I have some comments.
Furthermore: I'm not entirely sure about the API yet. For the UTL-ATS, we create a separate package (see: AdaptiveTimeStepping). I think we should do the same for UTL-HPC to keep things consistent, even though UTL-HPC is such a trivial package to write.
This means a HighPerformanceComputing package, which has to be added to the simulation. The _write_hpc_package method can also be moved to this package. For user convenience, we can let the split method add this package to the generated simulation, so users don't have to remember to add an HPC package themselves.
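A minimal sketch of what this proposal could look like. All class and method names here (`HighPerformanceComputing`, the `Simulation` stand-in, `split`) are assumptions for illustration, not the actual imod-python API:

```python
from typing import Dict


class HighPerformanceComputing:
    """Sketch of the proposed package: holds the model-name -> MPI-rank mapping."""

    def __init__(self, model_to_rank: Dict[str, int]):
        self.model_to_rank = model_to_rank


class Simulation(dict):
    """Stand-in for a simulation object that stores packages by name."""


def split(simulation: Simulation, n_partitions: int) -> Simulation:
    # The split method attaches the HPC package itself,
    # so users cannot forget to add it.
    new_sim = Simulation(simulation)
    ranks = {f"gwf_{i}": i for i in range(n_partitions)}  # illustrative names
    new_sim["hpc"] = HighPerformanceComputing(ranks)
    return new_sim


split_sim = split(Simulation(), 2)
print(split_sim["hpc"].model_to_rank)  # {'gwf_0': 0, 'gwf_1': 1}
```

The point of the design is that the rank mapping lives in one package object, which `split` wires up automatically.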
```python
if mpi_rank < 0:
    raise ValueError(f"Negative MPI rank of {mpi_rank} is not allowed.")
else:
    mpi_rank = -1
```
I don't really like sentinel values to indicate that the mpi_rank is not set. It's clearer for other developers to set mpi_rank to None here to indicate it is not set.
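A sketch of the None-based alternative, assuming a hypothetical `validate_mpi_rank` helper (the name is illustrative, not from the PR):

```python
from typing import Optional


def validate_mpi_rank(mpi_rank: Optional[int]) -> Optional[int]:
    # None means "not set"; this avoids the -1 sentinel entirely.
    if mpi_rank is None:
        return None
    if mpi_rank < 0:
        raise ValueError(f"Negative MPI rank of {mpi_rank} is not allowed.")
    return mpi_rank


print(validate_mpi_rank(None))  # None
print(validate_mpi_rank(3))     # 3
```

With None, downstream code can test `mpi_rank is not None` instead of comparing against a magic number.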
```python
class PartitionInfo(NamedTuple):
    active_domain: GridDataArray
    id: int
    mpi_rank: int
```
See other comment on line R47 in this file
```diff
-    mpi_rank: int
+    mpi_rank: Optional[int]
```
```python
for id, model_dict in self._partition_id_to_models.items():
    mpi_rank = self.partition_info[id].mpi_rank
    if mpi_rank > 0:
```
In the code in common/utilities/partitioninfo.py you assume -1 means an inactive mpi_rank, but here you assume mpi_rank == 0 is inactive as well?
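With None as the "unset" value, the inconsistency disappears, since rank 0 is then unambiguously active. A small self-contained sketch (the `PartitionInfo` fields here are simplified for illustration):

```python
from typing import NamedTuple, Optional


class PartitionInfo(NamedTuple):
    id: int
    mpi_rank: Optional[int]  # None means "not assigned to any MPI process"


partition_info = [
    PartitionInfo(0, 0),     # rank 0: a valid, active rank
    PartitionInfo(1, 1),
    PartitionInfo(2, None),  # unset
]

active = []
for info in partition_info:
    # Explicit None check: rank 0 is not silently skipped.
    if info.mpi_rank is not None:
        active.append(info.id)

print(active)  # [0, 1]
```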


Fixes #1785
Description
Support for writing the HPC input file as defined in the options block of the simulation name file. By enabling this option, the user can connect submodels to the MPI process ranks.
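A hedged sketch of what the generated input might look like. The keywords follow the MODFLOW 6 HPC input format as I understand it; file names are illustrative and the exact syntax should be checked against the MODFLOW 6 documentation:

```
# mfsim.nam, options block
BEGIN OPTIONS
  HPC6 FILEIN sim.hpc
END OPTIONS

# sim.hpc: maps each submodel to an MPI process rank
BEGIN PARTITIONS
  gwf_0 0
  gwf_1 1
END PARTITIONS
```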
Checklist
- PR title starts with Issue #nr, e.g. Issue #737
- Ran pixi run generate-sbom and committed changes