Toward Combining Fuzzy Graphs Based on Hedge Algebra

In this paper, we study fuzzy graph properties using combinatorial matrix theory over fuzzy linguistic matrices. We use hedge algebra and linguistic variables for combining and reasoning with words. We establish a limiting theorem in the matrix space, and we characterize the limit state space of a fuzzy graph with a fundamental theorem. This theorem decides whether the corresponding automata are finite.

Received on 14 July 2019; accepted on 18 August 2019; published on 20 August 2019


Introduction
In everyday life, people use natural language (NL) to analyze, reason, and finally make decisions. Computing with words (CWW) [5] is a mathematical approach to computational problems stated in an NL. CWW is based on fuzzy sets and fuzzy logic, introduced by L. A. Zadeh, and is an approximation method over the interval [0, 1]. In the linguistic domain, linguistic hedges play an important role in generating the set of values of a linguistic variable. A well-known application of fuzzy logic (FL) is the fuzzy cognitive map (FCM), introduced by B. Kosko [1], which combines fuzzy logic with neural networks. FCMs have many applications in both modeling and reasoning about fuzzy knowledge [3,4] over the interval [0, 1], but not with linguistic values; however, many applications cannot be modeled in a numerical domain [5], for example linguistic summarization problems [6]. To address this, we use an abstract algebra called hedge algebra (HA) as a tool for computing with words. The remainder of the paper presents preliminaries, modeling with words, and our combination theorems.

Preliminaries
This section presents basic concepts of HA and FCM used in the paper.

2.1 Hedge algebra
In this subsection, we review the HA background related to our research and give basic definitions. A hedge algebra was first specified as a 3-tuple HA = (X, H, ≤) in [7]. To simulate fuzzy knowledge more easily, [8] inserts two further components G and C into the tuple, giving HA = (X, G, C, H, ≤), where X is the term domain, G the set of primary generators, C the set of constants, H the set of hedges, and ≤ the semantic order on X.

2.2 Fuzzy cognitive map
A fuzzy cognitive map (FCM) is a feedback dynamical system for modeling fuzzy causal knowledge, introduced by B. Kosko [1]. An FCM consists of a set of nodes, which represent concepts, and a set of directed edges linking those nodes; the edges represent the causal relations between the concepts. Mathematically, an FCM is a 4-tuple (C, E, C(0), f), in which:
- C = {C_1, ..., C_n} is the set of N concepts forming the nodes of a graph;
- E is the connection matrix of causal weights between concepts;
- C(0) = [C_1(0), ..., C_n(0)] ∈ [0, 1]^N is the initial state vector;
- f is the recurring transformation (threshold) function applied at each iteration.

Example 2. Fig. 1 shows a medical problem from the expert domain of strokes and blood clotting, with concepts C = {blood stasis (stas), endothelial injury (inju), hypercoagulation factors (HCP and HCF)} [2]; its connection matrix is shown with Fig. 1. FCMs play a vital role in applications across scientific areas, including expert systems, robotics, medicine, education, information technology, and prediction [3,4].
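The FCM recurrence above can be sketched in a few lines. The following is a minimal illustration, not the model of Fig. 1: the 3-concept weight matrix, the sigmoid threshold, and the iteration count are all assumptions for the demo.

```python
import numpy as np

# Illustrative 3-concept FCM; weights are assumed, not taken from the paper.
E = np.array([
    [0.0, 0.6, 0.4],
    [0.5, 0.0, 0.7],
    [0.3, 0.2, 0.0],
])

def f(x):
    """Sigmoid threshold keeping activations inside [0, 1]."""
    return 1.0 / (1.0 + np.exp(-x))

def step(c, E):
    """One FCM iteration: C(t+1) = f(C(t) @ E)."""
    return f(c @ E)

c = np.array([1.0, 0.0, 0.0])  # initial state vector C(0)
for _ in range(20):            # iterate until the state settles
    c = step(c, E)
print(np.round(c, 3))
```

With a sigmoid threshold and moderate weights the iteration contracts to a fixed point, which is why a small fixed number of steps suffices here.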

3 Modeling with words
Following [10], a fuzzy model based on linguistic variables is constructed from the linguistic hedges of an HA.

Definition 3.1 (Linguistic lattice). With L as in Section 2.1 and {∧, ∨} the logical operators defined in [7,8], a linguistic lattice is the tuple (L, ∧, ∨).

Property 3.1. The following holds for L: 1. L is a linguistic bounded lattice.

A linguistic cognitive map (LCM) is a 4-tuple (C, E, C(0), f), in which:
- C = {C_1, ..., C_n} is the set of N concepts forming the nodes of a graph;
- E = (m_ij) is the adjacency matrix with entries in L;
- C(0) = [C_1^0, ..., C_n^0] ∈ L^N is the initial vector;
- f is the recurring transformation function.

Example 3. Fig. 2 shows a simple LCM. Let M = (m_ij) be the adjacency matrix of the LCM. The causal relation between c_i and c_j is m_ij; for example, with i = 1 and j = 2, the causal relation between c_1 and c_2 reads: "if c_1 is true then c_2 is M_true" is L_true. Equivalently, letting P = "if c_1 is true then c_2 is M_true" be a proposition, truth(P) = L_true.

Definition 3.3 [11]. An LCM is called complete if any two nodes are always joined by an edge (looping edges excluded).
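The lattice operators of Definition 3.1 can be sketched over a totally ordered term set. The term names and their order below are assumptions for illustration, not the exact domain L generated by the hedge algebra in the paper.

```python
# Assumed totally ordered linguistic term set (V = Very, L = Little, W = neutral).
TERMS = ["V_false", "false", "L_false", "W", "L_true", "true", "V_true"]
RANK = {t: i for i, t in enumerate(TERMS)}

def meet(a, b):
    """Linguistic AND (∧): the smaller term in the order."""
    return a if RANK[a] <= RANK[b] else b

def join(a, b):
    """Linguistic OR (∨): the larger term in the order."""
    return a if RANK[a] >= RANK[b] else b

print(meet("true", "L_true"))   # -> L_true
print(join("false", "V_true"))  # -> V_true
```

Because the term set is totally ordered, (L, ∧, ∨) is automatically a bounded lattice with bottom V_false and top V_true, matching Property 3.1.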

Combining LCM
In many learning algorithms that use the Hebbian rule [3,4], at each time t the weight of every edge is updated as ∆e_ij = f(Σ_k e_jk × c_k × c_j). Let M(n), 2 ≤ n ≤ N, be the set of all connection matrices on n vertices. The edge set is fuzzified using hedges. The following theorem determines the size of the set of connection matrices.
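The Hebbian update quoted above can be sketched as follows, written over numeric weights for clarity; the paper applies it to linguistic values. The clipping function f and the sample values are assumptions for the demo.

```python
# Sketch of the Hebbian-style edge update delta_e_ij = f(sum_k e_jk * c_k * c_j).
def delta_e(E, c, j, f=lambda x: max(-1.0, min(1.0, x))):
    """Compute the update for an edge into concept j, clipped by f."""
    return f(sum(E[j][k] * c[k] * c[j] for k in range(len(c))))

E = [[0.0, 0.4],
     [0.6, 0.0]]   # current edge weights e_jk (assumed)
c = [0.8, 0.5]     # current concept activations (assumed)
print(delta_e(E, c, j=1))  # 0.6 * 0.8 * 0.5 = 0.24
```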
We prove Theorem 4.1 by induction on the number of vertices n, 2 ≤ n ≤ N. The process has two steps. First, set n = 2 and check that M(2) satisfies the claim. Next, assuming the claim holds for M(n), we prove it for M(n + 1), as in the logical expression. Proof. Without loss of generality, we set n = 2 and induct on n; for n ≥ 3 the process is the same.
Applying the product rule completes the argument. QED.
By a counting method, it is also straightforward to prove Theorem 4.1 in the case of a complete LCM. Theorem 4.1 is important for counting the connection matrices. On the other hand, let LCM(N) be the set of all LCMs generated from N vertices. We want to know whether LCM(N) is finite or infinite; computing its size helps limit the search space in many cases.
Proof. It is straightforward to prove Theorem 4.2 by combinatorial counting:
• the N vertices, each with a fixed number of possible linguistic states, produce (·)^N state vectors;
• applying the result of Theorem 4.1 to the edges completes the count.
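The counting argument can be checked by brute-force enumeration for small n. The sketch below assumes edges take values from a finite linguistic label set and that a complete LCM without loops has 2·C(n,2) directed edges, so the number of connection matrices is |L|^(2·C(n,2)); the label set is an assumption for the demo.

```python
from itertools import product

def n_matrices_enumerated(n, labels):
    """Count connection matrices by enumerating a label for every directed edge."""
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]  # no loops
    return sum(1 for _ in product(labels, repeat=len(edges)))

def n_matrices_formula(n, labels):
    """Closed form: |labels| ** (2 * C(n, 2))."""
    pairs = n * (n - 1) // 2
    return len(labels) ** (2 * pairs)

L = ["false", "W", "true"]  # assumed small label set
for n in (2, 3):
    assert n_matrices_enumerated(n, L) == n_matrices_formula(n, L)
print(n_matrices_formula(3, L))  # -> 3 ** 6 = 729
```

Enumeration grows doubly fast, so the closed form is what makes the bound usable for larger N.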

Combining LCM
Linguistic cognitive maps allow a simple aggregation of knowledge obtained from experts. Such combination improves the reliability of the final model: LCM linguistic matrices are combined additively to form new LCMs.
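Combining expert matrices can be sketched element-wise over the linguistic order. The term set, the two expert matrices, and the choice of join (lattice maximum) as the aggregation operator are assumptions for illustration; the paper's additive combination over the lattice is only approximated here.

```python
# Assumed linguistic order for the demo.
TERMS = ["false", "L_false", "W", "L_true", "true"]
RANK = {t: i for i, t in enumerate(TERMS)}

def combine(M1, M2):
    """Element-wise join of two linguistic matrices (larger term wins)."""
    return [[max(a, b, key=RANK.get) for a, b in zip(r1, r2)]
            for r1, r2 in zip(M1, M2)]

M1 = [["W", "true"], ["L_false", "W"]]       # expert 1 (assumed)
M2 = [["L_true", "false"], ["true", "W"]]    # expert 2 (assumed)
print(combine(M1, M2))  # -> [['L_true', 'true'], ['true', 'W']]
```

Join is a natural choice when each expert's edge label is a lower bound on the causal strength; other aggregation operators over the lattice fit the same skeleton.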
For Fig. 4 and Fig. 5, the corresponding square matrices are combined additively as above.

Conclusions and future work
We have proved two important theorems on combining fuzzy graphs. The first theorem shows that the number of connection matrices is bounded by an expression of the form (·)^(2·C(N,2)). The second theorem bounds the whole state space, showing that the graph state space is finite and therefore the corresponding automata are finite. Our next study is as follows: