Description
Recent Fortran language resources usually recommend defining a module of precision constants that is then reused throughout the code, for example:
```fortran
module precision
    integer, parameter :: sp = kind(1.0)
    integer, parameter :: dp = kind(1.0d0)
    integer, parameter :: wp = dp
end module
```
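A minimal usage sketch (assuming the module above): downstream code imports the working-precision kind and suffixes its literals with it:

```fortran
program demo
    use precision, only: wp
    implicit none
    real(wp) :: x
    x = 1.0_wp   ! the literal carries the working precision
    print *, x
end program demo
```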
Such modules are duplicated throughout libraries, which can lead to incompatibilities, e.g. if library 1 uses single precision as its default and library 2 uses double precision. The user is then faced with the problem of either adapting library 1, or making library 2 use the precision module from library 1.
Would it make sense to have some mechanism that gives fpm the "power" to enforce a certain default precision? Hopefully, in the long term, most Fortran programmers will default to using the constants from the stdlib precision module. This might not always be enough (e.g. when interfacing with C, or perhaps when using fpm on non-x86_64 architectures).
Activity
certik commented on Jul 16, 2020
Great question. I don't know; I am hoping we will encourage the community to use stdlib to get precision. Fpm has the "power" to do anything, but the question is how it would work and whether it makes sense. Like promoting single precision to double?
ivan-pi commented on Jul 17, 2020
Yes, the entire situation is kind of messy. I have seen some codes which rely on compiler flags such as `-fdefault-real-8` to automatically upgrade real literals such as `1.0` to double precision.

In the Fortran METIS interface I used the C preprocessor to allow users to select the precision depending on the version of METIS installed on their system.
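The approach looked roughly like the following (a hypothetical sketch, not the actual interface code; the macro name `REALTYPEWIDTH` mirrors the METIS build option of the same name):

```fortran
! Hypothetical sketch of preprocessor-selected precision.
! REALTYPEWIDTH is assumed to be passed by the build system,
! matching the width METIS was compiled with.
module metis_precision
#if REALTYPEWIDTH == 64
    integer, parameter :: rp = kind(1.0d0)
#else
    integer, parameter :: rp = kind(1.0)
#endif
end module metis_precision
```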
I think it would be good if we could establish some guidelines for package developers on whether such precision choices are the responsibility of the package developer or the package user, and whether they should be handled by 1) a preprocessor (C, fypp) plus the build system, or 2) the package manager.
ivan-pi commented on Sep 15, 2022
The Discourse Thread (https://fortran-lang.discourse.group/t/two-modules-with-the-same-name-in-one-program/4116) touched upon the issue of duplication of precision modules.
What are the existing practices for defining precision?

1. Hard-coded kind values such as `real(4)` or `real(8)`. This is not desirable and damages portability. The same goes for various legacy forms like `real*4`.
2. Kinds selected with the intrinsic `selected_real_kind`, or the named constants `real32`/`real64` from the intrinsic `iso_fortran_env` module.
3. Kind constants exported from a library module, such as NAG's `nag_rp`, `nag_wp`, and `nag_hp`; the precise widths are set by the provider depending on the target platform.

For most types of numerical work, I'd argue option 3 is preferred. If one knows the code should interoperate with Python, R, MATLAB, etc. on a specific platform, the precision can also be picked accordingly. In C or C++ we can match our choice of precision with a `typedef`.

To achieve a level of homogeneity and composability between packages which provide reusable libraries, I propose the following:
Each package provider should include in their project a module, such as
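One possible shape for such a module (a hypothetical sketch; the module names and the fallback kinds are assumptions, and `__FPM__` is the macro proposed below):

```fortran
module mypkg_precision
#ifdef __FPM__
    ! Under fpm, take the kinds from the fpm-provided module
    use fpm_precision, only: sp, dp, wp
#else
    ! Outside fpm, fall back to the package's own defaults
    integer, parameter :: sp = kind(1.0)
    integer, parameter :: dp = kind(1.0d0)
    integer, parameter :: wp = dp
#endif
end module mypkg_precision
```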
When fpm is the build system, it will pass the `-D__FPM__` definition. When fpm is not the build system, package maintainers can implement whatever precision scheme/model they want. A possible example of the precision file would be:

I imagine some package authors may prefer their own parameter names; in this case, they can over-ride them:
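A hypothetical sketch of such an over-ride, using Fortran's rename facility on the `use` statement (the module name `fpm_precision` and the constant names are assumptions):

```fortran
module mypkg_precision
    ! Re-export the fpm-provided kinds under this package's preferred names
    use fpm_precision, only: single => sp, double => dp, working => wp
    implicit none
    public :: single, double, working
end module mypkg_precision
```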
Further ideas/questions:

- fpm could generate an `fpm_precision` module instead. Package creators only shipping via fpm could simply `use fpm_precision` without defining their own module.
- What should the default behavior be when fpm is not driving the build (i.e. `-D__FPM__` is not set)?
- Some will see `stdlib_kinds` as "the" master precision module. I think it should be fpm instead. Either you join the fpm ecosystem and benefit from the homogeneous real kinds, or you keep doing your own thing and take responsibility for precision control yourself in all of your apps/packages/dependencies.

See also:
- `sp`, `dp`, `qp` kinds constants (stdlib#85)

ivan-pi commented on Sep 16, 2022
If relying on the preprocessor is not desirable (although J3 opened a forum on the preprocessor - https://mailman.j3-fortran.org/pipermail/j3/2022-August/013845.html), we could rely on "standard" filenames instead.
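For example (a purely hypothetical convention, not an existing fpm feature): a package could ship a fallback precision module in a file with an agreed-upon name, and fpm would substitute its own generated module of the same name at build time:

```fortran
! src/fpm_precision.f90 -- hypothetical standard filename; fpm would
! replace this fallback with a generated module of the same name.
module fpm_precision
    integer, parameter :: sp = kind(1.0)
    integer, parameter :: dp = kind(1.0d0)
    integer, parameter :: wp = dp
end module fpm_precision
```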