docs/src/Half_1.md (+7 −2)
@@ -6,7 +6,8 @@ support for IEEE 16 bit floats (Float16). A second format
 significand (mantissa), thereby
 trading precision for range. In fact, the exponent field in BFloat is
 the same size (8 bits) as that for single precision (Float32). The
-significand, however, is only 8 bits. Which is less than that for
+significand, however, is only 8 bits. Compare this to the size
+of the significand fields for
 Float16 (11 bits) and single (24 bits). The size of the significand
 means that you can get in real trouble with half precision in either
 format.
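
To make the bit counts in this hunk concrete, here is a minimal sketch one could run in the Julia REPL. It assumes the BFloat16s.jl package, since BFloat is not a built-in Julia type; `precision` reports the significand width (including the implicit bit) and `floatmax` shows what the exponent width does to the range.

```julia
# Sketch: significand width and range of the three formats.
# Assumes BFloat16s.jl for the BFloat16 type (not in Base Julia).
using BFloat16s

precision(Float16)    # 11 significand bits
precision(BFloat16)   # 8 significand bits
precision(Float32)    # 24 significand bits

floatmax(Float16)     # Float16(6.55e4): 5 exponent bits, narrow range
floatmax(BFloat16)    # ~3.39e38: 8 exponent bits, same range as Float32
floatmax(Float32)     # 3.4028235f38
```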
@@ -47,6 +48,9 @@ for double, single, and half precision, and the ratio of the half
 precision timings to double. The timings came from Julia 1.10.2
 running on an Apple M2 Pro with 8 performance cores.
 
+I am constantly playing with ```hlu!.jl``` and these timings will almost
+certainly be different if you try to duplicate them.
+
 ## Half Precision is Subtle
 
 Half precision is also difficult to use properly. The low precision can
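
For anyone who wants to try to duplicate the timings anyway, here is a hedged sketch using BenchmarkTools. It assumes that `hlu!`, the half precision factorization that ```hlu!.jl``` implements, is exported by __MultiPrecisionArrays.jl__; the test matrix is an arbitrary stand-in for the one in the documentation.

```julia
# Sketch: ratio of half precision LU time to double precision LU time.
# Assumes MultiPrecisionArrays.jl exports hlu! (the factorization from
# hlu!.jl); the well-conditioned test matrix is an arbitrary stand-in.
using MultiPrecisionArrays, BenchmarkTools, LinearAlgebra, Random

Random.seed!(46)
N = 1024
A = I + rand(N, N) / N
A16 = Float16.(A)

t_half = @belapsed hlu!(B) setup = (B = copy($A16))
t_dble = @belapsed lu!(B) setup = (B = copy($A))
println("half/double timing ratio: ", t_half / t_dble)
```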
@@ -84,7 +88,8 @@ julia> norm(b-A*z,Inf)
 julia> norm(z-xd,Inf)
 2.34975e-01
 ```
-So you get very poor, but unsurprising, results. While __MultiPrecisionArrays.jl__ supports half precision and I use it all the time, it is not something you would use in your own
+So you get very poor, but unsurprising, results. While __MultiPrecisionArrays.jl__ supports half precision and I use it all the time, it is not something you
+should use in your own
 work without looking at the literature and making certain you are prepared for strange results. Getting good results consistently from half precision is an active research area.
 
 So, it should not be a surprise that IR also struggles with half precision.
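
The numbers quoted above come from the documentation's own example. A self-contained sketch of the same kind of experiment, with an arbitrary stand-in matrix, needs nothing beyond base Julia: factor and solve entirely in Float16 and the error comes out far larger than a double precision solve would give.

```julia
# Sketch: solve a moderately ill-conditioned system entirely in half
# precision and compare with the known solution. The matrix is an
# arbitrary example, not the one used in the documentation.
using LinearAlgebra, Random

Random.seed!(46)
n = 512
A = I + 800.0 * rand(n, n) / n   # test matrix with a largish condition number
xd = ones(n)                     # exact solution
b = A * xd

z = Float16.(A) \ Float16.(b)    # generic LU solve, all in Float16
norm(b - A * z, Inf)             # residual: much larger than eps(Float64)
norm(z - xd, Inf)                # error: poor, as in the example above
```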
docs/src/index.md (+3 −3)
@@ -246,7 +246,7 @@ end
 ```
 
 The function ```mplu``` has two keyword arguments. The easy one to understand is ```TF``` which is the precision of the factorization. Julia has support for single (```Float32```) and half (```Float16```)
-precisions. If you set ```TF=Float16``` then low precision will be half. Don't do that unless you know what you're doing. Using half precision is a fast way to get incorrect results. Look at the section on [half precision](#half-Precision) in this Readme for a bit more bad news.
+precisions. If you set ```TF=Float16``` then low precision will be half. Don't do that unless you know what you're doing. Using half precision is a good way to get incorrect results. Look at the section on [half precision](#half-Precision) in this Readme for a bit more bad news.
 
 The other keyword argument is __onthefly__. That keyword controls how the triangular solvers from the factorization work. When you solve
 
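
For reference, a hedged usage sketch of the ```TF``` keyword this hunk describes; ```mplu``` and the backslash solve come from __MultiPrecisionArrays.jl__, and the well-conditioned matrix is an arbitrary stand-in.

```julia
# Sketch: TF selects the factorization precision for mplu.
# The test matrix is an arbitrary, well-conditioned stand-in.
using MultiPrecisionArrays, LinearAlgebra

N = 100
A = I + 0.1 * rand(N, N)
b = rand(N)

AF = mplu(A)                  # default: factor in single (Float32)
x32 = AF \ b                  # solve with iterative refinement

AF16 = mplu(A; TF = Float16)  # factor in half precision: risky, see above
x16 = AF16 \ b
```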
@@ -279,8 +279,8 @@ and
 ```
 AF2=mplu(A)
 ```
-are very different. Typically ```lu``` makes a high precision copy of ```A```
-and
+are very different. Typically ```lu``` makes a high precision
+copy of ```A``` and
 factors that with ```lu!```. ```mplu```, on the other hand, uses ```A```
 as the high precision matrix in the multiprecision array structure and
 then makes a low precision copy to send to ```lu!```. Hence ```mplu```
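
A short sketch of the storage difference this hunk describes, under the assumption (as the docs say) that ```mplu``` keeps ```A``` itself as the high precision matrix rather than copying it:

```julia
# Sketch of the storage difference: lu copies A before factoring, so A
# is untouched; mplu (per the docs) stores A itself as the high
# precision matrix and makes only a low precision copy for lu!.
using MultiPrecisionArrays, LinearAlgebra

A = I + 0.1 * rand(50, 50)

AF1 = lu(A)    # factors a high precision copy of A
AF2 = mplu(A)  # wraps A and factors a Float32 copy instead
```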