Non-stationary Parallel Multisplitting Two-Stage Iterative Methods with Self-Adaptive Weighting Schemes

In this paper, we study non-stationary parallel multisplitting two-stage iterative methods with self-adaptive weighting matrices for solving a linear system whose coefficient matrix is symmetric positive definite. Two choices of self-adaptive weighting matrices are given; in particular, the nonnegativity requirement is eliminated. Moreover, we prove the convergence of the non-stationary parallel multisplitting two-stage iterative methods with self-adaptive weighting matrices. Finally, numerical comparisons of several self-adaptive non-stationary parallel multisplitting two-stage iterative methods are presented.

Received on 05 January 2013; accepted on 03 August 2013; published on 04 March 2014


Introduction
To solve a large sparse linear system of equations on multiprocessor systems,

Ax = b, A = (a_ij) ∈ R^{n×n} nonsingular and b ∈ R^n, (1)

O'Leary and White [14] first proposed parallel methods based on multisplittings of matrices in 1985. Subsequently, by combining these with two-stage iterative methods (see [2, 4, 10]), the multisplitting two-stage iterative methods [15] were proposed, and several basic convergence results were established there. In that scheme, each weighting matrix E_i ≥ 0 is diagonal and remains fixed, independent of the iteration index k.

This paper is an extended version of [22]. We have added a kind of self-adaptive weighting scheme to Algorithm 1, and have also proven the convergence of Algorithm 1 under this condition. In addition, we have added a numerical example and completely recalculated the numerical examples with higher precision and larger coefficient matrices. (Corresponding author: Chuan-Long Wang. Email: clwang218@126.com)
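The classical multisplitting step described above combines the results of m independent splittings through the weighting matrices E_i. As a minimal illustrative sketch (our own example, not taken from the paper: the matrix A, vector b, and the Jacobi-type splitting below are arbitrary choices), one step computes y_i = M_i^{-1}(N_i x + b) for each splitting and forms the weighted sum:

```python
import numpy as np

def multisplitting_step(x, b, splittings, weights):
    """One parallel multisplitting step: x_new = sum_i E_i M_i^{-1} (N_i x + b).

    splittings: list of (M_i, N_i) with A = M_i - N_i
    weights:    list of diagonal weighting matrices E_i with sum_i E_i = I
    """
    return sum(E @ np.linalg.solve(M, N @ x + b)
               for (M, N), E in zip(splittings, weights))

# Illustrative data: a small SPD matrix with a Jacobi-type splitting M = D
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M = np.diag(np.diag(A))
N = M - A
splittings = [(M, N), (M, N)]
weights = [0.5 * np.eye(3), 0.5 * np.eye(3)]  # fixed E_i, independent of k

x = np.zeros(3)
for _ in range(50):
    x = multisplitting_step(x, b, splittings, weights)
```

In a true parallel setting the m solves inside the sum would run on separate processors; here they are sequential for clarity.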
Later, many authors studied these methods for the cases that A is an M-matrix, an H-matrix, or a symmetric positive definite matrix. When A is an M-matrix or an H-matrix, many parallel multisplitting two-stage iterative methods (see [3, 5, 6, 12, 15, 17]) were presented, and the weighting matrices E_i, i = 1, 2, ..., m, were generalized (see [1, 11]) to iteration-dependent diagonal matrices E_i^(k); however, these weighting matrices were still preset as multi-parameter choices.
When A is a symmetric positive definite matrix, the analysis generally requires the assumption that the weighting matrices are multiples of the identity matrix, that is, E_i = α_i I, i = 1, 2, ..., m (see [8, 14]), but such results have little applicability to the analysis of parallel processing. In order to improve the weighting matrices, White [19, 20] and Wen [18] presented multisplittings with a very special structure; Chen [21] discussed asynchronous multisplittings; Cao [7] gave a nonstandard multisplitting; Migallón [13] proposed non-stationary multisplittings; and Wang and Bai [17] discussed non-stationary two-stage multisplittings, though the non-stationary multisplittings usually rely on a block splitting for parallel processing. Furthermore, as is well known, the weighting matrices play an important role in parallel multisplitting methods, but in all the above-mentioned methods they are determined in advance, with no way of knowing whether the choice is good or bad; this limits the efficiency of the parallel methods. Fortunately, Wang [23] has presented modified parallel multisplitting iterative methods that optimize the weighting matrices based on the sparsity of the coefficient matrix A. However, no one has yet studied how to choose optimal weighting matrices for parallel multisplitting two-stage iterative algorithms; we discuss this problem in the present paper.
Here, we still use scalar weighting matrices in the parallel multisplitting two-stage iterative method, but the parameters α_i^(k) are chosen by finding the optimal point in the hyperplane H_k; that is, they are the optimal parameters at the k-th iteration. In other words, the point generated by the optimal weighting matrices (6) may be the point in H_k closest to the solution of the linear system (1). Thus, we search for the optimal weighting matrices without the nonnegativity condition. In fact, the numerical examples in Section 4 show that the methods with the weighting matrices (6) are effective.
The paper is organized as follows. In Section 1, we give some notations and preliminaries. In Section 2, the non-stationary parallel multisplitting two-stage iterative methods with self-adaptive weighting schemes are put forward. In Section 3, the convergence of the new method is established. We provide numerical results in Section 4.
Here are some essential notations and preliminaries. R^{n×n} denotes the set of n × n real matrices; A^T denotes the transpose of A, and similarly x^T denotes the transpose of a vector x. A matrix A ∈ R^{n×n} is called symmetric positive definite (or semidefinite) if it is symmetric and, for all x ∈ R^n with x ≠ 0, it holds that x^T Ax > 0 (or x^T Ax ≥ 0). A = M − N is called a splitting of the matrix A if M ∈ R^{n×n} is nonsingular; the splitting is called a convergent splitting if ρ(M^{-1}N) < 1; a P-regular splitting of the symmetric positive definite matrix A if M^T + N is positive definite; and a symmetric positive definite splitting if N is symmetric positive semidefinite (see [6, 16]).
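These three splitting properties can be checked numerically. The following sketch (our own illustration; the helper names and the test matrix are our choices) classifies a splitting A = M − N according to the definitions above, using the Gauss-Seidel splitting M = D + L of a small SPD matrix as an example:

```python
import numpy as np

def is_spd(S, tol=1e-12):
    """Symmetric with all eigenvalues strictly positive."""
    return bool(np.allclose(S, S.T)
                and np.linalg.eigvalsh((S + S.T) / 2).min() > tol)

def splitting_properties(A, M):
    """Classify the splitting A = M - N using the definitions above."""
    N = M - A
    rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))  # rho(M^{-1} N)
    return {
        "convergent": bool(rho < 1),
        "P-regular": is_spd(M.T + N),                 # M^T + N positive definite
        "spd_splitting": bool(np.allclose(N, N.T)     # N symmetric positive
                              and np.linalg.eigvalsh(N).min() >= -1e-12),
    }

# Gauss-Seidel splitting of a small SPD matrix: M = lower triangle of A
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
props = splitting_properties(A, np.tril(A))
```

For a symmetric A, the Gauss-Seidel splitting satisfies M^T + N = D, so it is P-regular (and convergent), while N = −U is not symmetric, so it is not a symmetric positive definite splitting.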

Algorithms
In this section, we give the non-stationary parallel multisplitting two-stage iterative methods with self-adaptive weighting schemes.

Algorithm 1 (SMTS: the non-stationary parallel multisplitting two-stage iterative method with self-adaptive weighting schemes).
Step 0. Given the precision ϵ > 0 and the initial point x^(0), set k := 0.
For k = 0, 1, ..., until convergence.
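The detailed steps of Algorithm 1 are not recoverable from this text, but the two-stage structure it builds on can be sketched as follows (a hedged sketch under our own assumptions: processor i performs q(i, k) inner iterations with the splitting B_i = M_i − N_i on the outer equation B_i y = C_i x^(k) + b, and the candidates are then combined through the weighting matrices; all data below are illustrative):

```python
import numpy as np

def smts_step(x, b, outer, inner, q, weights):
    """One step of a non-stationary multisplitting two-stage iteration (sketch).

    outer:   list of (B_i, C_i) with A = B_i - C_i
    inner:   list of (M_i, N_i) with B_i = M_i - N_i
    q:       list of inner-iteration counts q(i, k)
    weights: list of weighting matrices E_i^(k) summing to I
    """
    candidates = []
    for (B, C), (M, N), qi in zip(outer, inner, q):
        y = x.copy()
        for _ in range(qi):                     # q(i, k) inner sweeps
            y = np.linalg.solve(M, N @ y + C @ x + b)
        candidates.append(y)
    return sum(E @ y for E, y in zip(weights, candidates))

# Illustrative data: A SPD, outer splitting B = A + I (so C = I is symmetric
# positive semidefinite), inner Jacobi splitting of B, fixed equal weights.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 0.0, 1.0])
B = A + np.eye(3)
M = np.diag(np.diag(B))
outer = [(B, np.eye(3)), (B, np.eye(3))]
inner = [(M, M - B), (M, M - B)]
weights = [0.5 * np.eye(3)] * 2

x = np.zeros(3)
for k in range(200):
    x = smts_step(x, b, outer, inner, [1 + k % 3, 2], weights)  # varying q(i, k)
```

The varying inner counts [1 + k % 3, 2] make the iteration non-stationary; in Algorithm 1 the weights would additionally be recomputed at every outer step.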

By introducing suitable matrices, we can rewrite the SMTS as a single iteration; a straightforward derivation then yields the corresponding iteration matrix. For the quadratic programming subproblems that define the weights, we have the following results (see [9]). If the direction vectors are linearly independent, the solution of the quadratic programming problem (10) is given in closed form; likewise, under the same linear independence assumption, the solution of the quadratic programming problem (11) is given in closed form.
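The closed-form solutions of (10) and (11) are not recoverable from this text, but the underlying computation can be sketched. Assuming the weights α minimize the A-norm of the error of x + Σ_i α_i d_i over the hyperplane spanned by linearly independent directions d_i = y_i − x (our reading of the model; minimizing (1/2) e^T A e with e = x + Dα − x* and A x* = b gives the normal equations below):

```python
import numpy as np

def optimal_weights(A, b, x, directions):
    """Solve (D^T A D) alpha = D^T (b - A x) for the A-norm-optimal weights.

    directions: list of linearly independent vectors d_i = y_i - x.
    The resulting alpha may have negative entries: nonnegativity is not imposed.
    """
    D = np.column_stack(directions)
    G = D.T @ A @ D                 # Gram matrix in the A-inner product
    rhs = D.T @ (b - A @ x)         # current residual projected on the d_i
    return np.linalg.solve(G, rhs)

# Sanity check (our own example): with directions spanning all of R^3,
# the optimal combination recovers the exact solution of A x = b.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
dirs = [np.eye(3)[:, i] for i in range(3)]
alpha = optimal_weights(A, b, x, dirs)
x_new = x + np.column_stack(dirs) @ alpha
```

Since A is symmetric positive definite and the d_i are linearly independent, the Gram matrix G is nonsingular, so the weights are uniquely determined.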

Convergence Analysis
In this section, we study the convergence theories for algorithm 1 with self-adaptive weighting matrices.
Lemma 3.0.3 ([11]). Assume that A is a symmetric positive definite matrix, and let A = M − N be a P-regular splitting. Then there exists a positive number r such that the corresponding bound holds.

Lemma 3.0.4 ([8]). Assume that A is a symmetric positive definite matrix, and let A = F − G be a P-regular splitting. Given m ≥ 1, there exists a unique induced splitting with the stated properties.

Lemma 3.0.5. Let A = B_i − C_i, i = 1, 2, ..., m, be symmetric positive definite splittings, and let B_i = M_i − N_i be P-regular splittings. If there exists a positive integer q such that the non-stationary iteration numbers satisfy q(i, k) ≤ q, then there exists a positive number r such that (22) holds.

Proof. We compute G(i, k) directly. From Lemma 3.0.4, there exists a unique P-regular splitting, and thereby, by the assumptions of Lemma 3.0.5, the splittings (26) are P-regular splittings. Thus, there exist positive numbers r(i, k). Because q(i, k) ≤ q, the counts q(i, k) = 1, 2, ..., q take at most q different values, so the splittings (26) comprise at most q different splittings, and likewise there are at most q different positive numbers r(i, k). Hence, there exists a positive number r such that (22) holds.
Theorem 3.0.6. Assume that A is a symmetric positive definite matrix. Let A = B_i − C_i, i = 1, 2, ..., m, be symmetric positive definite splittings, and let B_i = M_i − N_i be P-regular splittings. Suppose that the weighting matrices are chosen by (10). If there exists a positive integer q such that the non-stationary iteration numbers satisfy q(i, k) ≤ q, then {x^(k)} generated by Algorithm 1 converges to the unique solution of the linear system of equations (1).
Lemma 3.0.7. Assume that A is a nonsingular matrix, and let A = M − N be a convergent splitting. Then the associated matrix is symmetric positive definite if and only if the matrix A^T M + M^T A − A^T A is symmetric positive definite.

Proof. One direction follows from a direct computation with the matrix in question; on the other hand, the converse follows by an analogous argument.
Lemma 3.0.8. Assume that A is a nonsingular matrix. Let A = B_i − C_i, i = 1, 2, ..., m, be convergent splittings, and let B_i = M_i − N_i, i = 1, 2, ..., m, also be convergent splittings. Suppose that the induced splittings and the associated matrices are symmetric positive definite, and that there exists a positive integer q such that the non-stationary iteration numbers satisfy q(i, k) ≤ q. Then the conclusion holds.

Proof. We apply Lemma 3.0.7 to the splitting.

EAI European Alliance for Innovation

Theorem 3.0.9. Assume that A is a nonsingular matrix and that the weighting matrices are chosen by (11). If the induced splittings and the associated matrices are symmetric positive definite, then {x^(k)} generated by Algorithm 1 converges to the unique solution of the linear system of equations (1).
Proof. The model (11) is equivalent to the corresponding quadratic programming model. Thus, arguing as in Theorem 3.0.6 and using Lemma 3.0.8, the error sequence {ε^(k)} converges as well. Hence the theorem is proved.
Remark 3.0.10. The choice of the optimization model for the weighting matrices at the k-th iteration can be varied. Here, we only consider two schemes for optimizing the weighting matrices for a linear system. To obtain the self-adaptive weighting matrices, we need to solve a quadratic programming problem, but this may greatly decrease the number of iterations because of the inequality implied in Theorems 3.0.6 and 3.0.9. Furthermore, we can compute α in parallel as in (19) and (20).

Numerical Experiments
In this section, we give some preliminary computational results. We implement Algorithm 1 with three splittings (the Gauss-Seidel splitting, the relaxation splitting, and the upper Gauss-Seidel splitting) to solve the linear system (1).
The test PDE problem considered in this paper is posed on (x, y) ∈ Ω, where Ω = (0, 1) × (0, 1) is the unit square. In all cases, the initial vector x^(0) is set to zero, and the stopping criterion for Algorithm 1 is measured in ∥ • ∥_2, the L2-norm. In the following tables, IT stands for the number of iterations needed to satisfy the stopping criterion, and CPU stands for the parallel execution time of Algorithm 1; all timing results are reported in seconds. For the test problems, only the matrix A, which is constructed from a finite difference discretization of the given PDE (34), is of importance, so the right-hand side vector b is created artificially. Hence, the right-hand side function f(x, y) is not relevant in Examples 1 and 2.
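The exact stopping criterion is not recoverable from this text; a common residual-based rule in the L2-norm, used here as an assumption, is ∥b − A x^(k)∥_2 / ∥b∥_2 < ϵ. A minimal driver that counts IT under this rule (all names and data our own, with a Jacobi step standing in for Algorithm 1):

```python
import numpy as np

def run_until_converged(A, b, step, x0, eps=1e-10, max_it=10_000):
    """Drive a one-step iteration `step` with the assumed stopping rule
    ||b - A x||_2 / ||b||_2 < eps; return the iterate and the count IT."""
    x = x0
    for it in range(1, max_it + 1):
        x = step(x)
        if np.linalg.norm(b - A @ x) / np.linalg.norm(b) < eps:
            return x, it
    return x, max_it

# Illustrative use with a plain Jacobi step on a small SPD system
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([1.0, 1.0])
M = np.diag(np.diag(A))
jacobi = lambda x: np.linalg.solve(M, (M - A) @ x + b)
x, it = run_until_converged(A, b, jacobi, np.zeros(2), eps=1e-12)
```

The same driver applies unchanged to any one-step method, so IT counts for different weighting schemes are directly comparable.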
Example 1. This example considers the equation Ax = b obtained from the nine-point finite difference discretization of the given PDE (34), which determines the coefficient matrix A. In all our numerical experiments, three splittings of the matrix A are used, with the blocks chosen differently in Examples 1 and 2. Corresponding to the block D_i, L_i is a strictly block lower triangular matrix. The matrices M_i and N_i of Algorithm 1 are determined by the following three splitting methods.
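The precise forms of the three splittings are not recoverable from this text; the standard pointwise versions of the named splittings (Gauss-Seidel, relaxation/SOR, and upper Gauss-Seidel), which we assume here, can be sketched as follows:

```python
import numpy as np

def gauss_seidel_splitting(A):
    M = np.tril(A)                      # M = D + L (forward sweep)
    return M, M - A

def upper_gauss_seidel_splitting(A):
    M = np.triu(A)                      # M = D + U (backward sweep)
    return M, M - A

def relaxation_splitting(A, omega=1.2):
    D = np.diag(np.diag(A))
    M = D / omega + np.tril(A, -1)      # SOR-type M = D/omega + L
    return M, M - A

def spectral_radius(M, N):
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))

# For an SPD test matrix, all three splittings are convergent
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
rhos = [spectral_radius(*s(A)) for s in
        (gauss_seidel_splitting, upper_gauss_seidel_splitting,
         relaxation_splitting)]
```

For symmetric positive definite A, both Gauss-Seidel sweeps are P-regular, and the relaxation splitting is convergent for any 0 < ω < 2; the value ω = 1.2 above is an arbitrary illustrative choice.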
In Example 2, the coefficient matrix A itself contains more zero entries than the matrix of Example 1, so we choose a larger p. From Table 1 and Table 2 we see that the iteration counts and the CPU times of SMTS with (11) grow more rapidly with the problem size than those of SMTS with (10), but both are much smaller than those of the usual old algorithm with fixed weighting matrices. The reason is that the nonnegativity requirement on the weighting matrices is removed, so the range over which the optimal weighting matrices are sought is extended. The iteration counts and the CPU times of the old algorithm with (iii) are not stable because of the randomness involved, so we have chosen a smaller iteration number than for the old algorithms with (i) and (ii). The numerical experiments demonstrate the effectiveness of the self-adaptive strategy for the weighting matrices.

Example 2. For Example 1, the right-hand side vector b was chosen so that b = (1, 2, 3, ..., n)^T. This example considers the equation Ax = b obtained from the five-point finite difference discretization of the given PDE (34). The matrix A is constructed as in Example 1, but D_p and G_p differ from Example 1: here D_p is p × p and G_p = −I, and the right-hand side vector is chosen so that b = (1, 1, ..., 1)^T.
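The block structure described here (diagonal blocks D_p, off-diagonal blocks G_p = −I) matches the standard five-point Laplacian; assuming D_p = tridiag(−1, 4, −1), which is our reconstruction rather than a form given in this text, a minimal construction of A is:

```python
import numpy as np

def five_point_laplacian(p):
    """Block tridiagonal matrix of order p^2 with diagonal blocks
    D_p = tridiag(-1, 4, -1) and off-diagonal blocks G_p = -I,
    i.e. the five-point stencil on a p x p grid."""
    n = p * p
    A = np.zeros((n, n))
    for i in range(p):
        for j in range(p):
            r = i * p + j
            A[r, r] = 4.0
            if j > 0:     A[r, r - 1] = -1.0   # west neighbor (inside D_p)
            if j < p - 1: A[r, r + 1] = -1.0   # east neighbor (inside D_p)
            if i > 0:     A[r, r - p] = -1.0   # south neighbor (G_p = -I block)
            if i < p - 1: A[r, r + p] = -1.0   # north neighbor (G_p = -I block)
    return A

# Small instance: p = 3 gives a 9 x 9 symmetric positive definite matrix
A = five_point_laplacian(3)
```

This matrix is symmetric positive definite for every p, so the convergence theory of Section 3 applies to it.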

Table 1. Comparison of computational results for Example 1

Table 2. Comparison of computational results for Example 2