Commit b79ad53

committed
skip ci
1 parent 2471c14 commit b79ad53

File tree

3 files changed: 10 additions (+), 5 deletions (-)

0 Bytes
Binary file not shown.

docs/src/Half_1.md

Lines changed: 7 additions & 2 deletions
@@ -6,7 +6,8 @@ support for IEEE 16 bit floats (Float16). A second format
 significand (mantissa), thereby
 trading precision for range. In fact, the exponent field in BFloat is
 the same size (8 bits) as that for single precision (Float32). The
-significand, however, is only 8 bits. Which is less than that for
+significand, however, is only 8 bits. Compare this to the size
+of the significand fields for
 Float16 (11 bits) and single (24 bits). The size of the significand
 means that you can get in real trouble with half precision in either
 format.
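The bit counts in the hunk above can be checked directly in standard Julia; the following is a small illustrative sketch, not part of the package (BFloat16 itself is not in base Julia; it lives in the BFloat16s.jl package, so only the IEEE formats are shown here).

```julia
# Checking the format trade-offs from the text (IEEE 754 binary16 vs
# binary32; standard Julia only, nothing package-specific).
# Float16 has a 5 bit exponent and an 11 bit significand (10 stored
# bits plus the implicit bit); Float32 has 8 exponent bits and a
# 24 bit significand.
println(floatmax(Float16))  # largest Float16 is 65504: a tiny range
println(floatmax(Float32))  # about 3.4e38
println(eps(Float16))       # 2^-10: roughly 3 decimal digits
println(eps(Float32))       # 2^-23: roughly 7 decimal digits
```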
@@ -47,6 +48,9 @@ for double, single, and half precision, and the ratio of the half
 precision timings to double. The timings came from Julia 1.10.2
 running on an Apple M2 Pro with 8 performance cores.
 
+I am constantly playing with ```hlu!.jl``` and these timings will almost
+certainly be different if you try to duplicate them.
+
 ## Half Precision is Subtle
 
 Half precision is also difficult to use properly. The low precision can
@@ -84,7 +88,8 @@ julia> norm(b-A*z,Inf)
 julia> norm(z-xd,Inf)
 2.34975e-01
 ```
-So you get very poor, but unsurprising, results. While __MultiPrecisionArrays.jl__ supports half precision and I use it all the time, it is not something you would use in your own
+So you get very poor, but unsurprising, results. While __MultiPrecisionArrays.jl__ supports half precision and I use it all the time, it is not something you
+should use in your own
 work without looking at the literature and making certain you are prepared for strange results. Getting good results consistently from half precision is an active research area.
 
 So, it should not be a surprise that IR also struggles with half precision.
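The failure mode this hunk documents is easy to reproduce without the package at all. The following sketch (the matrix, size, and right-hand side are invented for illustration, not taken from the docs) factors and solves a system entirely in Float16 and compares against the double precision answer.

```julia
using LinearAlgebra

# Sketch: solve the same linear system in Float64 and in Float16.
# The test problem here is made up for illustration.
n = 128
A = I + 0.1 * rand(n, n)        # a well-conditioned test matrix
xd = ones(n)                    # exact solution
b = A * xd
zh = Float16.(A) \ Float16.(b)  # factor and solve in half precision
println(norm(Float64.(zh) - xd, Inf))  # only a few correct digits
println(norm(A \ b - xd, Inf))         # near double precision rounding
```

Even on a friendly matrix the half precision error sits many orders of magnitude above the double precision one, which is the point the docs are making.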

docs/src/index.md

Lines changed: 3 additions & 3 deletions
@@ -246,7 +246,7 @@ end
 ```
 
 The function ```mplu``` has two keyword arguments. The easy one to understand is ```TF``` which is the precision of the factorization. Julia has support for single (```Float32```) and half (```Float16```)
-precisions. If you set ```TF=Float16``` then low precision will be half. Don't do that unless you know what you're doing. Using half precision is a fast way to get incorrect results. Look at the section on [half precision](#half-Precision) in this Readme for a bit more bad news.
+precisions. If you set ```TF=Float16``` then low precision will be half. Don't do that unless you know what you're doing. Using half precision is a good way to get incorrect results. Look at the section on [half precision](#half-Precision) in this Readme for a bit more bad news.
 
 The other keyword argument is __onthefly__. That keyword controls how the triangular solvers from the factorization work. When you solve
@@ -279,8 +279,8 @@ and
 ```
 AF2=mplu(A)
 ```
-are very different. Typically ```lu``` makes a high precision copy of ```A```
-and
+are very different. Typically ```lu``` makes a high precision
+copy of ```A``` and
 factors that with ```lu!```. ```mplu```, on the other hand, uses ```A```
 as the high precision matrix in the multiprecision array structure and
 then makes a low precision copy to send to ```lu!```. Hence ```mplu```
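The storage scheme this hunk describes can be sketched in standard Julia. This is NOT the package's implementation, just a conceptual illustration with invented names and sizes: keep ```A``` itself as the high precision matrix, factor only a low precision copy, and use that factorization inside iterative refinement.

```julia
using LinearAlgebra

# Conceptual sketch of the mplu storage scheme described above
# (illustration only; the real package wraps this in a structure).
n = 64
A = I + 0.1 * rand(n, n)   # high precision matrix, kept as-is
b = rand(n)
AF = lu!(Float32.(A))      # factor the low precision copy; A untouched
x = zeros(n)
for _ in 1:5               # a few iterative refinement sweeps
    r = b - A * x          # residual in high precision
    x .+= AF \ Float32.(r) # correction from the low precision solve
end
println(norm(b - A * x, Inf))  # close to double precision rounding
```

Because only the copy is overwritten by ```lu!```, the high precision ```A``` stays available for the residual computation, which is what makes the refinement loop work.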
