
Domain Decomposition Methods are an important class of methods for the parallel solution of large sparse linear systems arising from the discretization of Partial Differential Equations [16]. Decomposition methods solve a boundary value problem by dividing it into smaller boundary value problems on subdomains [41, 47]. The solution is obtained by an iterative process in which the solution between adjacent subdomains is coordinated. The subdomain problems are independent, which makes decomposition methods well suited to parallel computing systems [41, 47]. Domain decomposition methods are usually used as preconditioners for Krylov subspace iterative methods, such as the Conjugate Gradient method or the Generalized Minimal Residual method [44, 45, 53].

In overlapping domain decomposition methods, the subdomains overlap by more than just the interface. Overlapping decomposition methods include the optimized Schwarz method and the additive Schwarz method. Many decomposition methods can be written and analyzed as special cases of the abstract Schwarz method [41, 47]. In non-overlapping methods, the subdomains meet only at their interface. Primal methods, such as Balancing Domain Decomposition, enforce the continuity of the solution across the subdomain interface by representing the value of the solution on all neighboring subdomains by the same unknown. In dual methods, such as Finite Element Tearing and Interconnect (FETI) [22, 23], the continuity of the solution across the interface is imposed by Lagrange multipliers. Non-overlapping decomposition methods are also called iterative substructuring methods [15].

Mortar methods are discretization methods for partial differential equations that use separate discretizations on non-overlapping subdomains. The meshes on the subdomains do not match at the interface, and equality of the solution is imposed through Lagrange multipliers, chosen so as to preserve the accuracy of the solution. In practice, in the Finite Element method, continuity of the solution between non-matching subdomains is implemented with multi-point constraints [41, 47].

Recently, multi-projection methods based on semi-aggregation have been proposed [38, 39]. These techniques construct subdomains consisting of a part of the initial domain together with aggregates corresponding to the other subdomains [38, 39]. The aggregates are created using projectors and are used to improve the local solution corresponding to each subdomain [38, 39]. The advantage of these methods is that the number of iterations required to reach a prescribed accuracy decreases as the number of subdomains increases, in contrast to the behavior of classical decomposition methods such as Block Jacobi and the Restricted Additive Schwarz (RAS) method [41, 47]; a sketch illustrating this classical behavior is given below.
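To make the last point concrete, the following is a minimal, illustrative Python sketch of a one-level Block Jacobi iteration with exact subdomain solves. The function block_jacobi_solve, the contiguous partitioning, and the 1D Poisson test problem are illustrative choices of ours, not an implementation from the cited works; since there is no coarse correction, the iteration count can be seen to grow as the number of subdomains increases.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_jacobi_solve(A, b, num_blocks, tol=1e-6, max_iter=50000):
    # Split the unknowns into contiguous, non-overlapping blocks (subdomains).
    n = A.shape[0]
    bounds = np.linspace(0, n, num_blocks + 1, dtype=int)
    blocks = [slice(bounds[i], bounds[i + 1]) for i in range(num_blocks)]
    # Factor each diagonal block once; the local solves are mutually
    # independent, which is what makes the method parallelizable.
    local_solve = [spla.factorized(sp.csc_matrix(A[blk, blk])) for blk in blocks]
    x = np.zeros(n)
    for it in range(max_iter):
        r = b - A @ x                                # global residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        for blk, solve in zip(blocks, local_solve):
            x[blk] += solve(r[blk])                  # independent local corrections
    return x, max_iter

# 1D Poisson model problem: the iteration count grows with the subdomain count.
n = 256
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)
for p in (2, 4, 8):
    _, iters = block_jacobi_solve(A, b, num_blocks=p)
    print(p, 'subdomains:', iters, 'iterations')

On a typical run of this sketch, the number of iterations grows noticeably as the number of subdomains increases from 2 to 8. This is exactly the deterioration, caused by the purely local exchange of information, that the semi-aggregation-based multi-projection methods are designed to avoid.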
1.5 Parallel Systems

The need for greater discretization accuracy leads to sparse linear systems of very large order, which must be solved efficiently on large-scale parallel systems. The need for a significant improvement in solution speed is therefore pressing, since solving a physical problem, like many other scientific problems, may require solving millions of equations with millions of unknowns. Improving the computational performance of a solver is thus important. Improvements in speed are usually achieved either by designing more efficient algorithms or by increasing the computational power of the systems. However, designing ever more efficient algorithms is not always possible.

Increasing the computational power of systems can be achieved in two ways: by designing more powerful processing units or by adopting new computing system architectures. The construction of ever-improved processors is subject to limitations such as current technology and the laws of nature. Moreover, algorithms are often so complicated that hardware upgrades end up being pointless, yielding no significant reduction in the time needed to find the solution. It should also be noted that frequent hardware upgrades are neither an economical nor an easy solution. One answer to these problems is the use of parallel systems. With current technology, an increase in solution speed can be achieved by applying parallel processing techniques to the data of a problem. The idea behind parallel data processing is the partitioning of the data across different computing systems or different processors so that they can be processed simultaneously. In this way, the solution speed increases with the number of processing units. Algorithms that solve problems by parallel processing are called parallel algorithms. In practice, however, most algorithms contain parts that cannot be parallelized easily or profitably [10, 35, 42].

Parallel systems are widely used in many scientific fields. The need for more resources has led to the design of parallel systems composed of ever more processors. For the solution of large sparse linear systems with numerical algorithms, parallelism is especially important in large problems, as it can provide solutions within an acceptable time frame. Parallel systems provide tools and structures for implementing parallel algorithms and for exploiting multiple resources. Various parallel programming environments have been proposed; among the most common are OpenMP and the Message Passing Interface (MPI) [10, 35, 42].
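As a small illustration of the message-passing model mentioned above, the sketch below distributes a vector across MPI processes and combines partial dot products with a collective reduction, using the mpi4py Python bindings. The problem size and the partitioning scheme are arbitrary illustrative choices, not taken from the cited works.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # id of this process
size = comm.Get_size()      # total number of processes

n = 1_000_000               # global problem size (illustrative)
# Each rank builds only its own contiguous chunk of the data.
counts = [n // size + (1 if r < n % size else 0) for r in range(size)]
start = sum(counts[:rank])
local_x = np.arange(start, start + counts[rank], dtype=np.float64)

# Local computation proceeds independently on every process...
local_sum = np.dot(local_x, local_x)
# ...and a single collective call coordinates the partial results.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print("||x||^2 =", global_sum)

Run, for example, with mpiexec -n 4 python dot_product.py. Each process holds only its own slice of the data, so both memory and work are distributed across the processes, which is the basic mechanism by which parallel systems reduce solution time.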
