
Óbuda University

PhD thesis

Automatic roundoff error analysis of numerical algorithms

by Attila Gáti

Supervisors: Prof. Dr. László Horváth, Prof. Dr. István Krómer

Applied Informatics Doctoral School

Budapest, 2013

Contents

1 Introduction
2 The original Miller-Spooner software and its limitations
3 Straight-line programs, computational graphs and automatic differentiation
4 Numerical stability measures
5 The optimization algorithms
  Comparative testing of the applied direct search methods
  Triangular matrix inversion
  Gaussian elimination
  Gauss-Jordan elimination with partial pivoting
6 The new operator overloading based approach of differentiation
7 The Miller Analyzer for Matlab
  Defining the numerical method to analyse by m-file programming
  The main m-function
  The input nodes
  The arithmetic nodes
  The output nodes
  Doing the analyses
  Creating a handle to the error analyser
  Setting the parameters of maximization
  Error analysis
8 Applications
  Gaussian elimination, an important example
  The analysis of the implicit LU and Huang methods
  The implicit LU method
  The Huang method
  An automatic error analysis of fast matrix multiplication procedures
  Numerical testing
9 Summary of the results

1 Introduction

The idea of automatic error analysis of algorithms and/or arithmetic expressions is as old as scientific computing itself and originates from Wilkinson (see Higham [39]), who also developed the theory of floating point error analysis [80], which is the basis of today's floating point arithmetic standards ([65], [62]). There are various forms of automatic error analysis, usually with partial solutions to the problem (see, e.g., Higham [39], [21]). The most impressive approach is interval analysis, although its use is limited by the technique itself (see, e.g., [59], [60], [61], [43], [44], [76], [39]).
Most of the scientific algorithms, however, are implemented in floating point arithmetic, and the users are interested in the numerical stability or robustness of these implementations. The theoretical investigations are usually difficult, require special knowledge, and do not always result in conclusions that are acceptable in practice (for the reasons of this, see, e.g., [79], [80], [11], [39], [21]). The common approach is to make a thorough testing on selected test problems and to draw conclusions upon the test results (see, e.g., [69], [70], [77]). Clearly, these conclusions depend on the lucky selection of the test problems and in any case require a huge amount of extra work. The idea of some kind of automatization arose quite early. The effective tool of linearizing the effects of roundoff errors has been widely applied since the middle of the 70s ([52], [53], [54], [55], [56], [57], [58], [73], [47], [72]).

Given a numerical method, the computed value is a function R(d, δ) (R : R^{n+m} → R^k), where d ∈ R^n is the input vector, and δ ∈ R^m is the vector of individual relative rounding errors of the m arithmetic operations (‖δ‖_∞ ≤ u, where u is the machine rounding unit). Considering the first-order Taylor expansion of R at δ = 0, we have for all j = 1, 2, ..., k

    R_j(d, δ) = R_j(d, 0) + Σ_{i=1}^{m} ∂R_j/∂δ_i (d, 0) · δ_i + o(‖δ‖)    (1)

From (1) an approximate bound for the forward error immediately follows:

    |R_j(d, δ) − R_j(d, 0)| ⪅ ω(d) · u    (j = 1, 2, ..., k)    (2)

where

    ω(d) = max_j Σ_{i=1}^{m} |∂R_j/∂δ_i (d, 0)|.    (3)

The need for computing the partial derivatives accurately and efficiently motivated the design of new automatic differentiation techniques. Since R is given as an algorithm, it is decomposed into basic arithmetic operations and elementary functions. By automatic differentiation we systematically apply the chain rule of calculus, according to the operand relations among the operations and elementary functions, to compute the derivatives of the composite function R.
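To make the differential error model concrete, the derivatives ∂R_j/∂δ_i can be propagated forward through a program automatically. The sketch below (Python, with invented names such as `Val` and `omega`; it is our illustration, not the thesis software) attaches to every intermediate value its first-order sensitivities to all rounding errors committed so far: each arithmetic operation commits one new error δ_k with fl(result) = result · (1 + δ_k).

```python
class Val:
    """A value together with its first-order sensitivities
    d(value)/d(delta_i) to every rounding error committed so far.
    Inputs are taken as exact; only arithmetic operations commit
    rounding errors, as in the model above."""
    n_ops = 0                       # rounding errors committed so far

    def __init__(self, v, sens=None):
        self.v = v                  # value at delta = 0
        self.sens = dict(sens or {})

    def _round(self, v, sens):
        # fl(result) = result * (1 + delta_k), so dR/ddelta_k = result
        k = Val.n_ops
        Val.n_ops += 1
        sens[k] = v
        return Val(v, sens)

    def __add__(self, o):
        s = {i: self.sens.get(i, 0.0) + o.sens.get(i, 0.0)
             for i in set(self.sens) | set(o.sens)}
        return self._round(self.v + o.v, s)

    def __mul__(self, o):
        s = {i: o.v * self.sens.get(i, 0.0) + self.v * o.sens.get(i, 0.0)
             for i in set(self.sens) | set(o.sens)}
        return self._round(self.v * o.v, s)

def omega(out):
    """First-order coefficient of the forward error bound (2)
    for a single output: sum_i |dR/ddelta_i|."""
    return sum(abs(c) for c in out.sens.values())

# R(d) = (a + b) * c: two operations, hence two rounding errors
a, b, c = Val(1.0), Val(2.0), Val(3.0)
y = (a + b) * c
# dR/ddelta_1 = dR/ddelta_2 = (a+b)*c = 9, so omega(y) = 18
```

For R(d) = (a+b)·c this reproduces the hand-computed coefficients ∂R/∂δ_1 = ∂R/∂δ_2 = (a+b)c, so the bound (2) reads |error| ⪅ 2|(a+b)c|·u.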
In contrast to numerical differentiation, automatic differentiation is free of truncation errors. On the other hand, it has the drawback that we cannot treat numerical methods as black boxes, because we need the structural knowledge of their decomposition into basic operations and elementary functions. The data structure commonly used to represent the structure of a numerical method is the computational graph. Let us consider a simple example from [53]:

    v = d · d;  w = d + v;  x = d − v;  y = w + x;  z = y · v;

with a single input d and a single output z. The corresponding computational graph is shown in Figure 1.

[Figure 1: A computational graph.]

Concerning the computational-graph-related error analysis, Chaitin-Chatelin and Frayssé [11] gave the following summary. The stability analysis of an algorithm x = G(y) with the implementation G_alg = G^(i) depends on the partial derivatives computed at the various nodes of the computational graph (see 2.6). The partial derivatives show how inaccuracies in data and rounding errors at each step are propagated through the computation. The analysis can be conducted in a forward (or bottom-up) mode or a backward (or top-down) mode. There have been several efforts to automate this derivation. One can cite the following: 1. Miller and Spooner (1978); 2. the B-analysis: Larson, Pasternak, and Wisniewski (1983), Larson and Sameh (1980); 3. the functional stability analysis: Rowan (1990); 4. the automatic differentiation (AD) techniques: Rall (1981), Griewank (1989), Iri (1991).

Miller developed his approach in several papers ([49], [50], [51], [52], [53], [54], [55], [56], [57], [58]). The basic idea of Miller's method is the following. Given a numerical algorithm to analyze, a number ω(d) is associated with each set d (d ∈ R^n) of input data. The function ω : R^n → R measures rounding error, i.e., ω(d) is large exactly when the algorithm applied to d produces results which are excessively sensitive to rounding errors.
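The five-assignment example above can be stored directly in this data structure: operationally, a computational graph is a topologically ordered list of nodes, each recording its operation and operand names. A minimal sketch (our own Python encoding, not the representation used by Miller's software):

```python
# The example from [53] recorded as a computational graph: a
# topologically ordered list of nodes (name, operation, operands).
graph = [
    ("v", "*", ("d", "d")),
    ("w", "+", ("d", "v")),
    ("x", "-", ("d", "v")),
    ("y", "+", ("w", "x")),
    ("z", "*", ("y", "v")),
]

def evaluate(graph, d):
    """Forward sweep: visit the nodes in order, so every operand
    is already computed when it is needed."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    val = {"d": d}
    for name, op, (a, b) in graph:
        val[name] = ops[op](val[a], val[b])
    return val

vals = evaluate(graph, 2.0)
# d = 2: v = 4, w = 6, x = -2, y = 4, z = 16
```

The same ordered node list is what automatic differentiation sweeps over, forward or backward, to accumulate the partial derivatives mentioned in the summary of Chaitin-Chatelin and Frayssé.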
A numerical maximizer is applied to search for large values of ω to provide information about the numerical properties of the algorithm. Finding a large value of ω can be interpreted as the given numerical algorithm suffering from a specific kind of instability. The software performs backward error analysis: the value ω(d) · u (where u is the machine rounding unit) can be interpreted as a first-order approximation of the upper bound for the backward error. The computation of the error measuring number is based on the partial derivatives of the output with respect to the input and the individual rounding errors. An automatic differentiation algorithm is used to provide the necessary derivatives.

The Miller algorithm was implemented in Fortran (actually in FORTRAN IV) by Webb Miller and David Spooner [56], [57]. More information on the use of the software by Miller and its theoretical background can be found in [52], [53], [54] and [56]. The software is in the ACM TOMS library with serial number 532 [57]. In the book of Miller and Wrathall [58], the potential of the software is clearly demonstrated through 14 case studies. The answers of the software in these cases are consistent with the well known formal analytical and experimental results. The program correctly shows the stability properties of algorithms such as the inversion of triangular matrices, the linear least-squares problem by solving the normal equations, Gaussian elimination without pivoting and with partial pivoting, the Gauss-Jordan elimination, the Gaussian elimination with iterative improvement, the Cholesky factorization, Cholesky factorization after rank-one modification, the classical and modified Gram-Schmidt methods, the application of normal equations and Householder reflections for the linear least squares problem, the rational QR method, downdating of the QR factorization, the characteristic polynomial computation by the Faddeev method, and symmetric matrix representations.
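The maximization step can be illustrated on a toy scale. Below, a deliberately simplified error measure for the two-operation sum (a+b)+c (its first-order rounding sensitivity divided by the result, a stand-in for Miller's ω rather than his actual definition) is driven to large values by a small coordinate direct search, itself only a toy substitute for the Rosenbrock search used by the original software:

```python
def omega(d):
    """Simplified error measure for s = (a+b)+c: first-order
    sensitivity to the two rounding errors, divided by the result.
    Large values mean cancellation. A stand-in for Miller's omega,
    not his actual definition."""
    a, b, c = d
    s1 = a + b                     # carries error coefficient |s1|
    s2 = s1 + c                    # carries error coefficient |s2|
    if s2 == 0.0:
        return float("inf")       # exact cancellation
    return (abs(s1) + abs(s2)) / abs(s2)

def direct_search(f, x0, step=1.0, shrink=0.5, iters=200):
    """Tiny coordinate direct-search maximizer: try +/- step along
    each coordinate, keep improvements, shrink the step when stuck.
    A toy substitute for the Rosenbrock method, not Miller's code."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy > fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink
    return x, fx

x, w = direct_search(omega, [1.0, 2.0, 0.5])
# the search drives a + b + c toward zero, so w becomes very large
```

The search steers a+b+c toward cancellation, where the measure blows up; this is exactly the "finding a large value of ω" diagnosis described above.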
Miller's approach was further developed for relative error analysis by Larson, Pasternak, and Wisniewski [47]. Larson et al. [47] say the following. To perform the error analysis, the algorithm being analyzed is represented as a directed graph with each numeric quantity being a node in this graph. Directed arcs lead from operands to their results. Each node's value is subject to error, called local relative error. This error affects those subsequent nodes that are computed using the contaminated node as an operand. All of the local relative errors together are used to produce a total relative error for each node. From the directed graph, a system of equations relating the local and total relative errors is generated. Our software is related to that of Miller [4] and Miller and Spooner [5], which analyzes the numerical stability of algorithms using absolute error analyses. In particular, the input formats are similar, and the minicompiler, a program for translating a FORTRAN-like language into the data format for the code [5], can be used to specify the computational graph of the algorithm being analyzed.

Rowan [72] considers the numerical algorithms as black boxes. The two main elements of his approach are a function that estimates a lower bound on the backward error, and a new optimization method (called subplex and based on the Nelder-Mead simplex method) that maximizes the function. A numerical method is considered unstable if the maximization gives a large backward error. The FORTRAN software requires two user-supplied Fortran subprograms; one implements the algorithm to be tested in single precision, and the other provides a more accurate solution, typically by executing the same algorithm in double precision. Bliss [5] and Bliss et al. [6] developed Fortran preprocessor tools for implementing the local relative error approach of Larson and Sameh. They also used some statistical techniques as well.
The structure of their instrumentation environment is shown in Figure 2, taken from Bliss et al. [6].

[Figure 2: The instrumentation environment of Bliss et al. [6]: a Cedar Fortran preprocessor with passes for performance, data flow and numerical quality (OP_COUNT, LGRAPH, PERTURB; cache studies, TOMS 594 (Larson), interval analysis, infinite precision arithmetic, qualitative stability, TOMS 532 (Miller)).]

Error analysis tools based on the differential error model have been developed not only for numerical but also for symbolic computing environments. Stoutemyer [73] was the first to use computer algebra for error-analysis software. He used the accumulated error expressions from the differential error model to determine a bound on the error and proposed analyzing the condition expressions to find points where instability is a problem. For other authors and approaches, see Mutrie et al. [63], Krämer [43], [44], or [39]. Finally, we have to mention the PRECISE toolbox of Chaitin-Chatelin et al. [11], [21], which embodies an entirely different strategy (somewhat similar to the naive testing, but much more sophisticated). "PRECISE allows experimentation about stability by a straightforward randomisation of selected data, then lets the computer produce a sample of perturbed solutions and associated residuals or a sample of perturbed spectra." ([11], p. 104).

Upon the basis of the corresponding literature, the Miller approach seems to be the most advanced, although Miller's method has several setbacks. The numerical method to be analyzed must be expressed in a special, greatly simplified Fortran-like language. We can construct for-loops and if-tests that are based on the values of integer expressions. There is no way of conversion between real and integer types, and no mixed expressions (containing both integer and real values) are allowed. Hence we can define only straight-line programs, i.e., programs in which the flow of control does not depend on the values of floating point variables.
To analyze methods with iterative loops and with branches on floating point values, the possible paths through any comparisons must be treated separately. This can be realized by constrained optimization: we confine the search for the maximum to those input vectors by which the required path of control is realized. The constraints can be specified through a user-defined subroutine. Higham [39] points out the special language and its restrictions as the greatest disadvantage of the software: "... yet the software has apparently not been widely used. This is probably largely due to the inability of the software to analyse algorithms expressed in Fortran, or any other standard language." Unfortunately, much the same can be said of the other developments that followed the Miller approach.

The aim of my research work is to improve Miller's method to a level that meets today's requirements. I improved and upgraded Miller's method in two main steps. The first step was an F77 version usable in a PC environment with standard F77 compilers such as the GNU and Watcom compilers [24], [25], [26]. This version was able to handle algorithms with a maximum of 3000 inputs, a maximum of 1000 outputs, and a correspondingly increased number of operations, while the original Miller program was able to handle a maximum of 30 inputs, 20 outputs and 300 operations [24], [25], [26]. However, even this new version used the simplified Fortran-like language of Miller, which is considered a major problem by Higham and others. Using recently available techniques such as automatic differentiation, object oriented programming and the widespread use of MATLAB, I have eliminated the above mentioned drawbacks of Miller's method by creating a Matlab interface. Applying
In our framework, we can de ne both straight-line programs and methods with iterative loops and arbitrary branches. Since the possible control paths are handled automatically, iterative methods and methods with pivoting techniques can also be analyzed in a convenient way. Miller originally used the direct search method of Rosenbrock for nding numerical instability. To improve the e ciency of maximizing, we added two more direct search methods [30], [28], [29], [35]: the well known simplex method of Nelder and Mead, and the so called multidirectional search method developed by Torczon [75]. In the thesis we present a signi cantly improved and partially reconstructed Miller method by designing and developing a new Matlab package for automatic roundo error analysis. Our software provides all the functionalities of the work by Miller and extends its applicability to such numerical algorithms that were complicated or even impossible to analyze with Miller s method before. Since the analyzed numerical algorithm can be given in the form of a Matlab m- le, our software is easy to use. We used the software package to examine the stability of some ABS methods [1], [2], namely the implicit LU methods and several variants of the Huang method [25]. The obtained computational results agreed with the already known facts about the numerical stability of the ABS algorithms. The program has shown that implicit LU is numerically unstable and that the modi ed Huang method has better stability properties than the original Huang method and the famous MGS (modi ed Gram-Schmidt) method (see [58], [2] or [36]). We also tested three fast matrix multiplication algorithms with the following results: 1. The classical Winograd scalar product based matrix multiplication algorithm of O (n 3 ) operation cost is highly unstable in accordance with the common belief, that has never been published. 2. 
Both the Strassen and Winograd recursive matrix multiplication algorithms of O(n^2.81) operation cost are numerically unstable. 3. The comparative testing indicates that the numerical stability of Strassen's algorithm is somewhat better than that of Winograd's. Upon the basis of our testing, we may think that the new software, called Miller Analyzer for Matlab, will be useful for numerical analysts and algorithm developers for analyzing the effects of rounding errors. The results of my research were published in the works [24], [25], [26], [27], [30], [28], [29], [31], [32], [33], [34], [35].

2 The original Miller-Spooner software and its limitations

Using the software, the numerical method to analyse must be expressed in a special and simplified Fortran-like language. The language allows the usage of integer and real types. The variables can be scalars, or one- or two-dimensional arrays of both types. We can construct for-loops and if-tests, but only ones that are not based on the values of real variables. This means that we can only define algorithms in which the flow of control does not depend on the value of floating point variables. To analyse methods that contain branching based on the value of real variables, the possible paths through comparisons have to be treated separately. This can be realized by constrained optimization: we constrain the search for the maximum to such input vectors by which the required path of control is realized. The constraints can be specified through a user-defined Fortran subroutine, called POSITV. The software package consists of three programs: a minicompiler and two error analyser programs. The minicompiler takes as data the numerical method to analyse, written in a special programming language, and produces as output a translation of that program into a series of assignment statements. The output is presented in a readable form for the user, and in a coded form for use as input to the error analyser programs.
One of the error analyser programs (i) is for deciding the numerical stability of a single algorithm, and the other (ii) is for comparing two algorithms. The input for the error-testing programs is arranged as follows: (1) the output of the minicompiler (for program (i), the compiled version of the algorithm to analyse; for program (ii), the compiled versions of the two algorithms to compare), (2) a list of initial data for the numerical maximizer, (3) the code of the chosen error-measuring value, (4) the target value for the maximizer. The programs return with the final set of data found by the maximizer routine and with the value of the chosen error-measuring number at this set of data. If program (i) diagnoses instability (the value of the error-measuring number at the final set of data exceeds the target value), then the condition number is also computed at the final set of data.

The minicompiler does its job in two phases: the compilation phase and the interpreter-like execution phase. In the compilation phase the program performs syntactic and semantic analysis, and generates an intermediate code (a coded representation of the algorithm to analyse) for the interpreter routine. The interpreter executes this intermediate code in a special way. All integer expressions are actually evaluated in order to perform the correct number of iterations of for-loops and to interpretively perform if-then tests. In contrast, no actual real arithmetic computation is done: the interpreter does not evaluate the floating point operations, only registers them in the form of assignment statements. Throughout, all real variables are treated symbolically as being the n-th input value, intermediate value, or real constant. The restrictions of the language ensure that the produced series of assignment statements, which is actually a representation of the algorithm's computational graph, is independent of the input data.
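The interpreter phase can be mimicked in a few lines: integer control flow is executed for real, while each floating point operation is merely registered as an assignment statement, which yields the straight-line representation of the computational graph. A Python sketch with invented names (the original is Fortran):

```python
class Sym:
    """Symbolic real value: arithmetic on Sym objects does no real
    computation, it only appends an assignment statement to a tape.
    A sketch of the interpreter's behaviour, not the actual code."""
    tape = []                      # the emitted straight-line program
    n = 0

    def __init__(self, name):
        self.name = name

    @classmethod
    def _emit(cls, op, a, b):
        cls.n += 1
        t = Sym("t%d" % cls.n)
        cls.tape.append("%s = %s %s %s" % (t.name, a.name, op, b.name))
        return t

    def __add__(self, other):
        return Sym._emit("+", self, other)

    def __mul__(self, other):
        return Sym._emit("*", self, other)

# Integer control flow is executed for real; the real arithmetic in
# the loop body is only recorded:
x = [Sym("x%d" % i) for i in range(3)]
s = x[0]
for i in range(1, 3):              # an actually evaluated integer loop
    s = s + x[i] * x[i]
print(Sym.tape)
# ['t1 = x1 * x1', 't2 = x0 + t1', 't3 = x2 * x2', 't4 = t2 + t3']
```

Because the loop bounds are integer expressions, the recorded tape, i.e. the computational graph, is the same for every floating point input, which is exactly the straight-line property the language restrictions guarantee.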
Based on the computational graph generated by the minicompiler, the error analyser programs compute the partial derivatives of the output of the analyzed numerical method with respect to the inputs and the individual rounding errors at every data set required by the maximizer routine. The differentiation is realized through graph algorithms, so the applied method is not numerical differentiation. The computation of the stability measuring function is based on these partial derivatives. The direct search method used for maximizing is the Rosenbrock method. The Miller-Spooner roundoff error
